From YouTube: 2022-11-17 meeting
OpenTelemetry Prometheus WG
B
Okay, I think we can maybe get started. I will paste the meeting notes in the chat; feel free to add anything. Okay, so I think...
B
Feel free to add anything if you'd like. Yeah, welcome back, everybody. A couple of people added some agenda items. Today Dimitri created a benchmarking proposal PR in the new OpenTelemetry repo, so we can talk about that, particularly in the context of some changes that I think Felix had benchmarked independently of this before. Maybe we can talk about that some. Then we still just have the profiling OTEP with just the fields in there right now; hopefully we can start making some progress on adding more to those and figuring out which ones to add, and why. And then there are some items about involving Go runtime contributors and defining an overall architecture, which I guess we'll get to. So let's maybe start with the benchmarking proposal.

B
If you'd like to start, Dimitri: I guess, yeah, maybe first of all just give the context of how it came to be and what you've done in it so far.
C
Right, so the context is that we had this idea a while ago; we had a benchmarking proposal doc. Then at some point I made a skeleton for the repo, and after that we got our own OpenTelemetry profiling repo, where we can now put all of this code, and other code that we come up with later, too. So what I did is I created a PR for this official OpenTelemetry profiling repo, and I kind of extended it.
B
Yeah, I'll paste it in the chat. I think this is... oops.
C
So the main idea is this: we want to create some sort of format that would work well for everyone, and we want some more or less objective way of figuring out what a good format is. The best way we came up with is basically having this benchmark suite, with a bunch of source profiles that you pass through a reference implementation.

C
It will generate some sort of output, and we can measure how large the profiles we got on the other end are, how long it took to generate them, and maybe collect other metrics as well: memory usage and things of that nature. Hopefully, by using this approach, we'll come up with the best format. And it's important to come up with a format that's very efficient, because we decided somewhere that that's an important part of this.

C
Oh yeah, I thought I was sharing my screen, but maybe I'm not. It doesn't show up for some reason. Okay. So...
F
Cool. And when we talk about benchmarking and efficiency here, we're talking about the amount of information over the wire, and the amount of, say, CPU and RAM required to go from whatever internal formats we're using to this one; do I follow exactly?
C
Correct, yeah. What we came up with is having some sort of intermediary format that is as simple as possible and as close as possible to what the profilers generate, and we decided that the best format for that would be the collapsed format, where a stack trace is just a string, the number of samples is a string, and maybe there are timestamps, things like that.
C
We figured that would be as close as possible to what profilers generate. By not making this format complicated, we make sure that we're measuring the actual performance of the encoder, as opposed to the performance of the parser plus the encoder, if that makes sense.
F
Isn't that what we're measuring there, though? Maybe I've misunderstood you. We're measuring the performance of transforming from that format into the encoded format, whereas I guess what we really want to measure is the performance of, and the issues with, transforming from whatever all of our own internal formats are into that. I guess we're trying to measure two separate things: we're also trying to measure the other thing that you mentioned, which is obviously performance on the wire and so on.
C
So let me maybe give an overview of the PR, and of the code there and how it works, and hopefully that will make it a little more clear. Yeah.
A
Oh, I am.
All
right,
okay,
so
first
a
few
kind
of
changes
that
happened.
There's
now
support
for
collapsed
and
Pete
Prof
as
like
the
source
format
right.
So
there's
like
there's
a
source
format.
Let
me
get.
C
You can use the converter program to convert the source profiles into this intermediary format, and those profiles go into profiles/intermediary. Then, after you have your data set in profiles/intermediary, you can run cmd/report, and it will run various encoders, show you the results, and show you how well they perform against each other. Right now I've implemented some basic ones.
C
One just puts all the stack traces in JSON and encodes it as JSON; another one encodes it as pprof; and another one encodes it as pprof and also gzips it. Once you run it, you get results that look like this in the terminal, for each file. So this is a very basic one-line file.
C
The results are the following: pprof gzip is not performing particularly well here, but that's probably a function of the file being a very extreme case. This file I added is an example from an average Go application, so I think these are much more interesting results. You can see that with JSON...
C
...you get a pretty large file; with pprof you get a smaller file; and with pprof gzip the file size is significantly smaller. It comes at the cost of the encoding process being a little longer, but you could argue that that's worth it. So that's roughly how you use it.
C
So you put profiles in profiles/source, you run the convert function, it puts them in profiles/intermediary, then you run cmd/report and it runs this report. And you can add your own encoders.
C
Let me maybe show that part real quick. They have to satisfy an interface; the JSON one will be pretty straightforward. So they have to satisfy this interface: it has to have a name, just for reporting purposes.
C
It has to support this append function, which will take a sample (stack trace, value, etc.) and do something with it. And the final function is called serialize. It takes...
C
...the internal state of the encoder and encodes it as a byte blob, I suppose. So as long as your encoder satisfies this interface, you can add new ones. All right, I'm going to stop for a second. Felix?
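The encoder interface described above (a name for reporting, an append function taking a sample, and a serialize function returning a byte blob) could look roughly like this in Go, with a trivial JSON encoder as the simplest implementation. The exact names and signatures here are guesses for illustration, not the PR's actual code:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Encoder is a sketch of the interface described in the meeting.
type Encoder interface {
	// Name identifies the encoder in the report output.
	Name() string
	// Append feeds one sample (a stack trace plus its value).
	Append(stack []string, value int64)
	// Serialize returns the encoder's internal state as a byte blob.
	Serialize() ([]byte, error)
}

type sample struct {
	Stack []string `json:"stack"`
	Value int64    `json:"value"`
}

// jsonEncoder accumulates samples and marshals them as JSON.
type jsonEncoder struct {
	samples []sample
}

func (e *jsonEncoder) Name() string { return "json" }

func (e *jsonEncoder) Append(stack []string, value int64) {
	e.samples = append(e.samples, sample{Stack: stack, Value: value})
}

func (e *jsonEncoder) Serialize() ([]byte, error) {
	return json.Marshal(e.samples)
}

func main() {
	var enc Encoder = &jsonEncoder{}
	enc.Append([]string{"main", "handler", "encode"}, 42)
	b, err := enc.Serialize()
	if err != nil {
		panic(err)
	}
	fmt.Println(string(b))
}
```

A pprof or pprof-gzip encoder would implement the same three methods, which is what lets the report compare them side by side.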
F
So I guess what we're benchmarking here, really, is, for a diverse array of profiling traces gathered from different types of languages and so on and so forth...
F
...really, what we're benchmarking here is whether a JSON representation, a pprof representation, or some other encoding from that will be smaller, and how long it will take to do the encoding. I guess the other question, though, for any vendor considering the format, would be: what's the cost of going not from the intermediary format, but from what they have now, to each of these formats? Because none of us would actually go to the intermediate format first and then to one of these. So, I guess, just to get some context on this.
F
One goal is to figure out, based on our last conversation, whether our current format is pprof-compatible, in the sense of: could we take what we do now and convert it in some manner into a pprof representation? The follow-up issue on that is: if it's a yes, then what's the cost of doing that in terms of CPU and RAM, and what's the impact on data over the wire? And if it's a no, what adjustments would we have to make to our format to be compatible with the pprof format?

F
What I'm struggling with is, I'm trying to figure out, about this benchmarking: (a) is that what everybody else is doing as well, and (b) if so, does this benchmarking framework help us with that or not?
A
I have some thoughts that I was going to ask as a separate question, but it kind of fits in. If we define a new profiling format, I think measuring the final output size is a very interesting thing, and something that this framework and the code you've written would be super useful for.
A
In terms of measuring the actual conversion time, I share the concerns that Sean has. I'll throw another wrench into this by saying we're kind of implicitly assuming that we write these conversion functions in Go or C, or something that we can run there easily. For example, we at Datadog have pprof encoding in pure JavaScript, so that will be a little awkward to fit in here. So I would say, for now, focusing more on the final wire size than on the transformation and its cost would probably be more productive.
C
Yeah, all fair points. The way I would frame it is this:
C
Maybe I'm mistaken, but the overall goal, where we want to take this, is for profilers to eventually adopt this format and generate profiling data in the format that we come up with, right? Right now we all have this problem of having to convert from, let's say, JFR to pprof to something else, but I'm hoping that in the future...
C
...the profilers will just generate their output in the format that we come up with. If that's the case, that's why we picked the intermediary format to be as close as possible to, I don't know, the in-memory representation of stacks and samples, so that we are measuring something that's relevant.
F
If you recall, when we discussed the format that we use: we have this kind of stateful thing where we actually don't always represent the full stack on the client side; we'll hash it, and after that we'll send the hash. So for us, it actually wouldn't be the case that we would generate this format on the host.
F
We
would
what
we're
trying
to
figure
out
basically
is:
can
we
come
up
with
a
format
which
will
work
both
in
our
in
that
case,
we're
doing
this
hashing
and
then
sending
out
that
and
then
also
in
the
case
of
what
you
guys
are
doing,
where
you
actually
have
the
full
representation
and
so
I
think
that's
probably
where
we've
diverged
and
why
I
was
confused
about
like
why
we
would
like
run
an
experiment
with
the
endometrial
format.
Does
that
does
that
make
sense,
yeah.
C
Yeah, that makes sense, and I think that's a very good point. I'm obviously a little less familiar with the hash representation, as opposed to the string representation.
C
One thing I'll say is that these benchmarks are definitely able to help us analyze that as well: we could use hashes instead of strings, to test it and see how things line up when we use hashes versus strings. But yeah, I see your point.
F
Even if we were to represent hashes in the intermediate format, for us, I actually think we're a question behind you, in the sense of what we're trying to figure out; and it's really worth clarifying if we're the only ones in this scenario, where we have a format right now. We need to establish first: can we actually convert it into a pprof format, and what are the consequences of that? Sorry, Christos, I think I cut you off there. Christos is also from our team, and he's one of the people looking into this. Did you want to add something there?
E
Yeah, so I was going to say, based on the discussion so far, there is an implicit assumption that we're talking about a stateless format. Everybody's talking about the serialization format right now, but from our point of view we're also talking about the protocol, and it's the protocol that gives us the wins.
E
In this case, by which I mean: we cache certain data on the backend, and we never send it again. With the benchmark infrastructure that Dimitri has shown, that's not going to be straightforward unless we hack something together; we would basically have to reproduce our protocol in a way that is benchmarkable. It's not going to be easy, and I'm also not sure what we're going to learn from it.
E
So first of all, my question would be: is everybody on the same page in that they want to pursue a stateless format? Because it's possible that we could make changes to the format in order to support backend data caching, and that hasn't been addressed; the performance implications of that are big.
F
Based on our last meeting, my impression was (I'm not sure who proposed it) that the idea was: okay, there's this format out there, pprof, as a baseline; let's all go away and see if we can take our stateless or stateful protocols and figure out if we can work from that baseline. Is everybody else on a similar page?
B
Yeah. One thing I would ask about, just taking a step back to the idea, or goal, of the benchmarking suite: I know you said it would be hard to hack it, or design it in a way where you could also benchmark stateful versus stateless things. The hope, from my understanding of the benchmarking suite, was that we'd be able to sort of objectively...
B
...have somewhat objective metrics on what the better route is. For example, let's just say there was some way to do it in a stateful way with this benchmarking suite, and the metrics look way better for that than they do the stateless way; then maybe the conversation about stateful versus stateless becomes easier, because it's like, well, look how much more efficient it is if it's stateless versus stateful, or vice versa.
B
Whatever; I don't know. I just wanted to say that's what I was hoping: that there would be some way to get some kind of idea of that within this benchmarking suite. But that just might not be the case; I don't know.
D
My impression is that we could use pprof labels to put in the hashes, instead of, you know, putting in the entire mappings, for example. And, giving a first review of the PR by Dimitri, I also think that we may be able to write an encoder with that interface that is similar to what we do in our current implementation: hashing the stack trace and just sending the hash...
D
Some
additional
metadata
so
I
think
it's
doable
I
mean
both
things
are
doable
so
using
the
Dimitri
proposal,
The,
Benchmark
implementation
and
coming
up
with
something
that
resembles
what
we
do
in
our
protocol
and
testing
out
people
of
the
same.
The
same
time.
F
I
guess
Francisco
just
on
that
one
point:
maybe
crystals
you
can
jump
in
on
this,
but
isn't
the
issue
going
to
be
that
we're
if
we're
trying
to
Model
A
the
protocol,
we're
going
to
have
to
have
something
we're
going
to
have
to
model
like
the
like
a
number
of
back
and
forth
of
like
not
just
the
encoding
of
a
single
message,
but
we'd
have
to
model
like
a
profiling
session,
for
example,
in
order
to
get
an
idea
of
what
the
savings
are
over
time
and
then
I'm
not
sure
if
the,
if
the
pure
is
designed
to
model
like
a
not
just
a
wire
form,
but
a
network
protocol
which
I
guess
is
what
we're
describing
like
crystals
did
you
have
any
thoughts
on
them?
E
So, most of the savings for us come from skipping traces that come up very frequently. The simplest example for me to give you: imagine that we have 200 identical traces. In this case our protocol would only send the trace once and skip all the rest; that's it. Now, with a stateless protocol, you will send the trace every time, so that would be different.
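The stateful saving described here (send a repeated stack once, then refer to it by hash) can be sketched in a few lines of Go. All names, message shapes, and the hashing choice are illustrative, not the actual protocol:

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"strings"
)

// message is what a stateful client would put on the wire: the full
// stack on first occurrence, or just its hash for repeats.
type message struct {
	Hash  string
	Stack []string // nil when the backend already knows this hash
}

// sender remembers which stack hashes the backend has already seen.
type sender struct {
	seen map[string]bool
}

func (s *sender) send(stack []string) message {
	h := sha256.Sum256([]byte(strings.Join(stack, ";")))
	key := hex.EncodeToString(h[:8])
	if s.seen[key] {
		// Backend cached this stack earlier; resend only the hash.
		return message{Hash: key}
	}
	s.seen[key] = true
	return message{Hash: key, Stack: stack}
}

func main() {
	s := &sender{seen: map[string]bool{}}
	stack := []string{"main", "handler", "encode"}
	first := s.send(stack)
	second := s.send(stack)
	// The 200-identical-traces example: only the first carries frames.
	fmt.Println(len(first.Stack), len(second.Stack))
}
```

With 200 identical traces, a stateless encoder serializes the frames 200 times, while this sender serializes them once and sends 199 short hash-only messages, which is where the wire savings come from.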
E
So then, in order for me to model that with the existing benchmarking infrastructure, I would have to create input samples that reflect that; there would be a lot of redundancy in the input sample set. And I'm not sure right now how we would tell the protocol under test that it can assume a trace has already been passed and will not be resent. I guess it's doable from a theoretical perspective, just thinking about it right now. I also have to say I haven't really looked at the format that you guys discussed in detail.
E
So
I'm
not
sure
at
this
point.
If
we
can
actually
represent
our
data
with
the
existing
paper
of
format,
I
mean
Francisco
mentioned
labels,
but
that's
not
the
the
only
thing.
We
also
have
like
a
little
briefly,
and
there
are
other
things
that
do
not
quite
match
up,
for
instance,
the
timing,
and
we
do
something
called
high
resolution
sampling.
E
Basically,
each
sample
has
its
own
timestamp
and
I'm,
not
sure
if
that's
compatible
with.
What's
there.
A
One
sort
I
have
is
that
I
I
can
sort
of
see
how
your
stateful
protocol
should
be
translatable
into
sort
of
an
encoding
session,
which
is
basically
everything
that
would
be
an
RPC
call
where
you
send
up
a
stack,
trace
and
get
a
hash
for
it
and
stuff.
That
would
just
be
messages
in
the
Stream
and
I
think
what
you'll
find
is.
A
It
will
really
depend
for
how
long
the
recorded
session
lasts,
if
it's
very
short
and
then
there's
probably
not
much
efficiency
difference
between
what
you're
doing
and
what
pproof
is
doing,
but
since
P
profits
generally
used
for
patching
up
like
in
minutes
worth
of
profiling
data
or
something
I
think
yeah
in
that
minute
window.
A
Maybe
it
looks
similar,
but
then,
if
you
compare
a
minute
of
pre-prov
to
maybe
24
hour
sessions
or
longer
that
you
have,
then
that's
probably
where
the
efficiency
comes
into
I,
think
sort
of
having
the
inputs
be
not
even
fair,
but
realistic
will
be
a
big
aspect
of
getting
useful
data
out
of
this.
In
addition
to
solving
sort
of
the
sort
of
yeah
multi,
multiple
RPC
calls
that
might
be
going
on
when
you're
transmitting
your
data.
D
Yeah. Actually, my only comment that deviates from the current state of Dimitri's PR is that, instead of running as a binary, we could reuse the Go benchmarking tool. And if we want to go even one step further, we could have literally a gRPC server that starts with the tests, and the benchmarking suite would actually make the RPCs into the fake server, the local server. But I would not...
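Reusing the standard Go benchmarking machinery, as suggested, could look like the sketch below; `testing.Benchmark` gives you the same `b.N` calibration and per-op numbers as `go test -bench` without needing a test binary. The encoder being measured here is a stand-in, not one of the PR's encoders:

```go
package main

import (
	"encoding/json"
	"fmt"
	"testing"
)

// encodeJSON stands in for one of the report's encoders; the real
// suite would benchmark each Encoder implementation the same way.
func encodeJSON(stacks [][]string) ([]byte, error) {
	return json.Marshal(stacks)
}

func main() {
	stacks := [][]string{{"main", "handler", "encode"}}
	// testing.Benchmark runs the closure with increasing b.N until the
	// timing is stable, exactly like the `go test` bench runner.
	res := testing.Benchmark(func(b *testing.B) {
		b.ReportAllocs()
		for i := 0; i < b.N; i++ {
			if _, err := encodeJSON(stacks); err != nil {
				b.Fatal(err)
			}
		}
	})
	fmt.Println(res.N > 0, res.NsPerOp() > 0)
}
```

This keeps the measurement harness standard while the encoders stay pluggable; the gRPC-server variant discussed next would wrap the same encoders behind RPC calls instead.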
D
I think, even with a stateless or stateful protocol, we could today, for example, pack things into two different messages, serialize and send both of them, and use this benchmark implementation at least to come up with some ideas about the overall status. Obviously, the biggest challenge for us specifically is that our input is not a file that works as an intermediate representation to come up with our output, because all of that is in-memory structures, but...
C
Yeah
I
would
say
on
the
topic
of
I,
totally
agree
with
you
on
the
you
know,
using
the
go
Benchmark
function,
I
think
that
makes
a
lot
of
sense.
The
grpc
part
I
I
kind
of
understood
last
like
to
me
that
seems
like
it
would
go
against
the
goal
of
kind
of
like
making
the
the
part
that
we
profile
isolated.
The
part
that
we
yeah
I
guess
the
part
that
we
measure
isolated,
but
maybe
I'm
missing,
something
like
to
me.
D
My gut feeling is that, with a server, we could be looking at more granular metrics, because I suppose in the end it's going to be part of an OTLP-based protocol, and therefore gRPC, for the OTLP collector, is going to be the de facto standard, no? So putting that into the benchmarks is not a deviation from what's going to be the end result. But I may be mistaken.
F
I think, from our side, it sounds like we have some homework to do; we'll get together and see if we can do this. One thing I wanted to check on is...
F
I
can
tell
it's
kind
of
right
now,
there's
at
least
three
teams,
or
maybe
four
teams
represented
here.
What
I
want
to
make
sure
just
check
on
is
like
how
many
different
profilers
do.
We
have
kind
of
represent
in
this
discussion
right
now,
because
what
I'm
worried
about
is
a
little
bit
that
a
small
group
of
us
will
kind
of
accidentally
monopolize
kind
of
the
the
discussion
around
it
like
so
there's
elastic
or
optimized,
there's
Periscope,
there's
datadog
cool.
F
And
is
the
expectation
that
like
so,
it
will
be
like
all
of
us,
are
going
to
try
and
and
try,
and
do
this
like
figure
out
if
our
formats
will
be
compatible.
B
I mean, I guess one thing I would say is: the idea here is that, eventually, if we're going to go around... we've all talked about how pprof is already pretty good. I think the biggest question a lot of people will have, if we were to come up with a different format, is:
B
Why
don't
we
just
use
P
prop,
which
is
already
like
somewhat
standardized
a
lot
of
people
use
it
already.
People
are
familiar
with
it,
and
so
I
think
you
know
yeah
part
of
this
is
that
you
know
if
we
are
going
to
make
some
recommendation
I
think
we
have
to
be
able
to
say
that,
like
you
know
on
some,
you
know
Dimension,
that
whatever
we
come
up
with
is
better
in
some
way
in
some
substantial
enough
way
to
like
warrant
us
creating
a
you
know,
format
that
we
then
try
to.
B
You
know,
give
people
to
use
and
to
make
you
know
the
standard,
and
so
you
know
that's
like
a
big.
You
know
a
big
if
kind
of
thing,
and
so
you
know
a
lot
of
us.
Yeah
do
have
custom,
implementations,
custom
formats
or
ideas
for
ways.
You
know
and
that's
you
know
part
of
why
I
had
mentioned
Felix
earlier.
You
know
ways
that
we
would
improve
P
Prof
if
we
could,
but
you
know,
key
profit
is
obviously
not
in
our
control.
So
it's
like
you
know.
B
If
there
are
things
that
it,
you
know,
we
all
have
custom
format
or
the
ones
who
have
custom
formats
have
custom
formats
for
a
reason
or
are
interested
in
custom
formats
for
a
reason-
and
you
know,
presumably,
if
you
know,
if
we
can,
you
know
clearly
explain
those
reasons
and
explain
why
you
know
adding
timestamps
or
adding
a
specific
field
for
like
linking
to
traces,
or
you
know
things
that
don't
exist
in
P
Prof.
You
know
it's
things
that
you
might
be
able
to
do
in
pre-prof,
but
don't
exist.
B
You
know
in
possibly
the
most
efficient
form
in
people
off
like
if
we
can
use
this
to
sort
of
explain
why
some
you
know
representation
is
better
than
that.
I
think
that's
kind
of
the
end
goal,
otherwise
right,
yeah
I
think
that's
kind
of
I.
From
my
perspective,
I
think
that
seems
to
be
the
most
like
logical
end
goal
of
this.
How
we
get
there
is,
you
know,
kind
of
a
different.
B
You
know
yeah,
oh
yeah,
also
someone
just
mentioned
yeah,
like
also
why
we
couldn't
just
use
the
like
Hotel,
like
events
format,
for
example.
That
was
one
thing
that
a
tigrant
had
mentioned
early
on
was
like
you
know.
If
you're
going
to
propose
this,
you
know
you
should
also
be
able
to
show
that
it's
better
than
using
what
you
know.
Otel
would
otherwise
recommend
no
I,
don't
know,
I,
think
Felix
and
Sean
both
had
something
now.
A
I have something that might take longer. Sean, if you have a quick reply, maybe you can go first.
F
I guess my question really was: right now, are we the only team that has a query or concern that there might be overhead going from what we have right now to pprof?
A
Client-side, it is significant. So we should really talk about whether or not we can, or should, try to move towards an architecture where we standardize on a single format, or whether we should consider an architecture that can support multiple formats, especially the industry standards (the big ones being pprof and JFR), really well, and then make sure that one of those two, maybe pprof, is the lowest common denominator between everything we do. But we'd sort of have to have a way to actually send multiple different formats through OTel to backends, without converting, if the backends support them. And I actually have a proposal for that, which we could skip to if there's interest.
F
One question on JFR: sorry, I don't do any Java stuff. Is JFR just a file format, or is it also a network protocol?
G
Sorry to jump in; I think I can answer that, I've done a bunch of JFR. It's both: it's a file format, but it's also an internal subsystem of the JVM. And in more contemporary JVMs, like later Java 11 and 17, you can, similar to how .NET does it, register callbacks that get hit with an event. So you don't necessarily have a file format; you can do streaming style as well with JFR. And the file format is not well specified, I will say.
A
Yeah,
it's
similar
to
the
the
go
exit,
runtime
execution,
Tracer
file
formats,
but
it's
much
more
complex
than
that
which
actually
brings
me
to
another
point
that
I
added
to
gender
and
that's
just
something
you
can
quickly
mention.
We
should
really
talk
to
runtime
maintainers,
because
currently
our
vision
document
has
this
item
of
saying,
hey.
We
should
petition
runtime
maintainers
to
adopt
whatever
we're
coming
up
with
and
we
haven't
really
talked
to
them
yet,
which
might
be
a
good
idea
before
we
get
too
deep
into
some
direction.
A
So
I
have
reached
out
to
the
go
runtime
maintainers
and
somebody
might
show
up
to
our
next
meeting
where
we
could
talk
a
little
bit
about.
What's
their
future
plans
and
ideas,
are
they
might
actually
plan
to
have
some
big
plans
of
maybe
changing
the
runtime
execution
Tracer
format
to
something
they
will
call
Google
sorry
go
flight
recorder,
so
they
might
actually
be
moving
in
a
similar
direction
to
JFR.
But
we
can
ask
these
questions
and
and
find
find
out
what's
going
on
there,
which
I
think
we
should
do
so.
A
We
could
put
that
on
next
items.
Next
next
time's
agenda
and
yeah,
if
there's
more
questions,
I
can
stop
here,
but
otherwise
I
could
share
this
proposal.
Real,
quick,
it's
actually
pretty
small.
That
I
was
thinking
about.
A
Okay, so there's a bunch of text here, but I think the picture says it best. The architecture that I think could work for us in OTel is one where you have different applications, or profilers, that are standalone host profilers, that send data in different formats to the Collector, and the Collector then converts stuff, if needed, so that the backend understands it. For this scheme to work out, we sort of need to specify at least one thing that the backends understand.
A
So
we
have
at
least
one
way
to
convert
to
the
lowest
format
that
that
we
we
sort
of
agree
all
the
backends
should
support
if
they
want
to
be
Hotel
profiling
back
end
and
P.
Prof
seems
to
be
a
good
candidate
for
that,
because
it's
well
specified,
and
it
is
definitely
enough
to
represent
a
CPU
profile
which
a
lot
of
us
are
interested
in.
It
doesn't
have
all
the
bells
and
whistles
but
yeah.
Basically,
the
way
it
would
work.
A
The
back
ends
would
kind
of
advertise
which
formants
they
support,
so
they
would
be
like
hey
I'm,
a
backend
that
only
speaks
P,
Prof
or
I'm
a
beckons
that
speaks
people,
often
JFR
and
I'm,
a
beckon
that
speaks
a
custom
format
and
P
profit.
The
custom
format
could
be
whatever
elastic
is
doing
or
or
other
people
are
doing,
and
then
basically,
the
hotel
collector
would
propagate
that
information
on
which
formats
are
available
to
the
clients
and
the
client
will
pick
its
preferred
format
out
of
that.
A
So
the
client
will
either
send
people
for
JFR
when
it's
available.
The
hotel
collector
can
also
add
formats
to
the
list
of
available
formats
if
it
has
converters.
So,
for
example,
if
you
don't
have
any
back
end
here,
speaking
JFR,
but
the
hotel
collector
knows
how
to
convert
JFR
to
P
Prof,
then
it
can
advertise,
hey,
I,
know
JFR,
so
you
can
send
me
JFR
I'll
forward
it
as
P
Prof
I
specified
the
details
of
that
a
little
bit
more
in
the
in
the
proposal
here.
A
I
think
I
went
over
it
a
little
too
quickly
and
I
have
lots
of
examples
here.
How
that
would
work
out
in
practice
if
we
actually
did
that
for
scenarios
that
we
have
in
mind?
F
I guess the obvious question is: on the OTel Collector, we're talking about a matrix of converters, right? Like, if you're going to be able to convert from every format to every other format, is that...
A
The
no
no
no
I
think
you
would
only
convert
from
every
format
to
P
Prof,
because
that
would
be
the
one
that
every
backend
would
need
to
understand.
It's
a
minimum
and
if
we
do
design
our
own
format
an
Hotel
format,
then
that
would
also
be
one
of
the
output
formats.
But
basically
the
hotel
collector
should
take
a
lot
of
input
formats
and
have
converters
to
what
we
consider
to
be
viable
output
formats.
That
backends
should
support,
which
I
think
would
be
people
for
our
and
or
our
own
format.
A
If you currently have a standalone profiler, for example, and you want to support this scheme, you could teach that standalone profiler how to emit pprof. But you would only have to do that if you're in an environment where the backend only speaks pprof, because you'll get that information. If you're talking to the Elastic backend, you'll be told: hey, your backend understands your custom format, so you can send that up directly; there will be no conversion, no changes needed. So you'd only have to do that...
A
If
the
backend
tells
you
like,
hey
I,
don't
have
any
buddy
speaking
this
custom
format
here
so
yeah
this
guy
could
emit
P
Prof
or
the
hotel
collector
could
implement
the
conversion,
which
might
also
help
with
the
statefulness
right
now,
because
it
could
buffer
up
some
RPC
calls
and
then
convert
that
into
P
Prof,
which
might
be
a
good
place
to
do
that.
A
The
the
benefits
for
otel
users
would
basically
be
like
it
doesn't
matter
which
companies
or
open
source
profiler
they
are
using.
They
can
always
switch
back-ends.
They
might
sometimes
lose
some
details
of
the
data
because
in
the
worst
case
there
are
profilers
data
will
be
converted
to
P
Prof
along
the
way,
which
will
probably,
in
some
cases,
remove
the
resolution
if
there
were
timestamps
or
other
things,
but
you
will
still
be
able
to
render
basic
profiles
and
you
can
export
to
multiple
backends
at
the
same
time
as
well
as
a
user.
G
So, something that's come up quite a few times in the past is this idea of stitching profile-based telemetry to other signal types, specifically spans. pprof, I think, doesn't have accommodations for that built in; I think JFR doesn't either.
A
Yes,
so
I
think
I
have
this
here
on
the
proposal.
Racing
signal
correlation
needs
to
be
standardized
in
P
Prof.
You
would
do
that
with
labels,
so
we
would
Define,
so
you
can
attach
a
label
to
any
sample
or
sectors
that
you
have
and
you
can
say,
span
ID
and
then
you
give
suspend
ID.
Trace
ID
says
Trace
ID.
So
you
can
it's
like
tagging
for
the
stack
choices.
Okay,.
A
And we could also, for what it's worth, layer our other features on pprof as part of what we standardize, like how we want pprof payloads to look. We could, for example, do something not so efficient but say: hey, if you have to send timestamps through pprof, you can also use labels for that. We know that this has drawbacks; my proposal has a lot of detail on it from the pprof project, but it would be one way to not lose too much detail of the data.
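One way this timestamp-via-labels idea could look: pprof's proto also carries numeric labels (`NumLabel`), so each sample could be tagged with its wall-clock time. The `timestamp_ns` key and nanosecond unit below are assumptions for illustration, not anything pprof standardizes.

```go
package main

import "fmt"

// timedSample loosely mirrors a pprof sample carrying numeric labels
// (NumLabel in the proto).
type timedSample struct {
	stack     []string
	numLabels map[string][]int64
}

// withTimestamp tags a sample with a hypothetical "timestamp_ns" label.
// The drawback mentioned above: every sample now carries a distinct
// label, so samples with identical stacks can no longer be merged,
// which is why this is "not so efficient".
func withTimestamp(stack []string, ns int64) timedSample {
	return timedSample{
		stack:     stack,
		numLabels: map[string][]int64{"timestamp_ns": {ns}},
	}
}

func main() {
	s := withTimestamp([]string{"main", "work"}, 1668700000000000000)
	fmt.Println(s.numLabels["timestamp_ns"][0])
}
```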
A
G
One more question: so the implication is that backends are kind of required to support, or the minimum requirement is, pprof? Yeah, that kind of implies the existence of a translator inside the collector between JFR and pprof.
G
C
G
C
A
This is also an area where we have a lot of people on my team who know JFR really deeply, so they would be very helpful if we have to expand this and basically build a good package. It would be some work that the OTel group would have to put together somehow, but then we would all be able to consume data from JFR applications, of which there are a lot out there.
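The core of such a collector-side JFR-to-pprof translator is folding individual sampled events into aggregated (stack, count) samples. The sketch below assumes JFR events have already been parsed (the `jfrEvent` struct is a stand-in, since real JFR parsing is far more involved), and it shows the lossy step the discussion mentions: per-event timestamps are dropped when events collapse into counts.

```go
package main

import (
	"fmt"
	"strings"
)

// jfrEvent is a stand-in for a parsed JFR ExecutionSample event.
type jfrEvent struct {
	stack []string // innermost frame first
}

// aggregate folds individual events into pprof-style samples keyed by
// stack trace: identical stacks merge into one sample with a count.
func aggregate(events []jfrEvent) map[string]int64 {
	counts := make(map[string]int64)
	for _, e := range events {
		counts[strings.Join(e.stack, ";")]++
	}
	return counts
}

func main() {
	events := []jfrEvent{
		{stack: []string{"main", "parse"}},
		{stack: []string{"main", "parse"}},
		{stack: []string{"main", "render"}},
	}
	fmt.Println(aggregate(events)["main;parse"]) // prints 2
}
```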
G
Cool, yeah. The support for Java 8 is such a bummer there, because of the file format, like just having to deal with actual files. Such a bummer.
B
I think this looks pretty cool, though; this seems pretty promising. Perhaps we dig a little deeper and discuss it in more detail next week. It seems like it may solve some of the concerns that the Elastic people had as well. Yeah, I don't know, nice work, that's pretty cool.
A
D
D
In this slide, is the benchmarking PR going to be measuring basically what the OTLP collector would do internally, am I right?
A
C
But I'm also, like: if we are going in this direction, the benchmarking almost seems kind of irrelevant, because we would be measuring, you know, a couple of extra... I don't know, like how well pprof wraps JFR, or, sorry, how well protobuf wraps JFR and the pprof format, I guess, right? So I feel like it won't be as useful, which is fine, I'm just, yeah, kind of...
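To make the "measuring how well protobuf wraps JFR" point concrete: wrapping an existing payload in a protobuf `bytes` field costs only a tag byte plus a varint length prefix, so there is very little overhead left for a benchmark to detect. The sketch below computes that envelope overhead by hand (assuming a field number below 16, which fits in one tag byte).

```go
package main

import "fmt"

// wrapOverhead returns the number of bytes a protobuf envelope adds
// around an opaque payload of the given length: one tag byte plus one
// varint byte per 7 bits of the length.
func wrapOverhead(payloadLen int) int {
	overhead := 1 // tag byte, assuming field number < 16
	for l := payloadLen; ; l >>= 7 {
		overhead++ // one varint byte per 7 bits of length
		if l < 0x80 {
			break
		}
	}
	return overhead
}

func main() {
	// A 1 MB JFR or pprof payload picks up only 4 bytes of framing:
	fmt.Println(wrapOverhead(1_000_000)) // prints 4
}
```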
B
B
What do other people think? You know, like Dimitri said, it's fine: if we feel like it won't be particularly useful, or if it's not clear how it would be useful at this point, then maybe we wait. But otherwise, if we want to go ahead and try to get it merged or whatever, we would have to have somebody volunteer to merge that. Sean?
F
This actually is related to something I added to the agenda. For us, as I said, in order to understand if the benchmarking infrastructure is useful to us, and then to learn from it, we need to go and do the effort of figuring out: okay, can we convert our format into the pprof one...
F
...and then actually use the thing. That will obviously take some amount of engineering time, and my question is: at the moment we have our bi-weekly cadence, where we suggest things and then we go and work on them.
F
But in a way, what we're moving from now is discussions, where our goal was to produce a doc, which I think works, to, in a way, managing a distributed software engineering project, where we have to find time to do things and different teams have to run stuff. And I'm wondering what people's thoughts are. For example, for next week: if we were to review Felix's doc and then also figure out whether we can use the benchmarking format and whether it works for us, that's a fair amount of work.
F
I'm not sure if we're going to have the time or the resources between now and then to do it, and I'm just wondering how other teams are thinking about how we should manage the cadence of actually doing practical work related to this. Do you have any thoughts on that? One option is to simply slow down the cadence at which we expect results, but we should probably start setting expectations around them as well.
B
Yeah, I mean, it obviously depends on how much time, resources, and effort people are willing to put in. We all have day jobs, so it's not something we want to add a ton of stuff on top of. Ideally, we're moving towards something that will eventually be worth the effort that we're putting in, and so, yeah.
B
It's obviously a balance of wanting to move forward, not just talking in circles week after week, and making some sort of progress. So, like in the example that you just gave, I don't think we necessarily need to do both.
B
You know, it sounds like there is the potential that, if we go the route of what Felix has proposed in the doc, maybe the benchmarking becomes less useful. So maybe we just focus on that.
B
I don't know if we necessarily have to have a hard and fast rule about how much work we do each week, but as long as we have some sort of action item to move the discussion forward in some meaningful way from week to week, that's my thought. I don't know if anyone else has opinions.
A
I have a quick thought. I internally manage a few groups of people who share a repository together but are on different teams, and I try to make sure the health of that repository is good. We have monthly meetings where we discuss things that everybody agrees should be done, but then nobody gets around to doing them. What we found works pretty well for that is to always put names to people at the end of the meeting, like: hey,
A
this person is going to try to do this before next time; this person has these action items. And then, as a first step in the next meeting, we review those action items. It exerts some gentle pressure: when people make a promise to do something, we have a record of it, we go over it, and it gets a little awkward when your item gets carried through the meetings like three times, so you will either say, oh yeah, realistically I'm not going to do that, or you will finally finish it.
B
Cool. I would say, for this: how do we feel about at least committing to reviewing Felix's doc? Does that seem fair to everyone?
B
So, you know, I don't know if anybody wants to put their name to it now, and, you know...
A
F
E
B
So we'll say at least one person from each team.
B
Yeah, and I guess I don't want to volunteer people, so feel free to add your name there if you're willing to. And thanks again for making it, Felix; that looks pretty cool. I guess, I don't know, did we get to everything? So, just to be clear: we will pause on reviewing the benchmarking, is that... so we'll just leave the PR there for now?
B
E
B
Cool
then
other
than
that
I
think
that
we
yeah
I
think
that's
everything.
If
anybody
else
has
anything
else,
Speak
now
or
I
guess
hold
your
piece
for
two
weeks.