From YouTube: 2023-01-26 meeting
OpenTelemetry Prometheus WG
B
Yeah, I think we can probably go ahead and get started. I think some people are still trickling in, but we can probably go ahead and get started. All right — well, welcome back, everyone.
B
Last time we talked was before we had spoken with the Collector SIG, and so a lot of the conversation today is going to be around, I guess, first of all, just a recap of what we talked about with the Collector SIG, and then, based on some of their feedback, what we're thinking about for next steps moving forward. I think it'll be pretty straightforward.
B
Today I can give a quick overview — or in the notes there's a link to a Slack message where I went into a lot of detail — but basically Felix did a great job of presenting all the slides that we had. You can also see those if you go down to the last meeting's notes. I thought it went pretty well; I'm curious to hear your thoughts as well, Felix. But yeah, I would say...
B
The TL;DR of the feedback on the presentation, I should say, was that Tigran felt we need some benchmarks to prove that a stateful versus a stateless protocol is the way to go. He saw a lot of the logic behind it, but when it really came down to it...
B
It was basically: if we're going to add a level of complexity to the Collector that deals with something stateful, there are a lot of other issues that people brought up that come along with that. One of the ones mentioned was load balancing — some people felt that might be a big issue — amongst other things. So he just wanted to dig a little bit deeper.
B
He looked at the doc that the Elastic folks made and felt that even within its logic there was kind of a jump to the need for a stateful protocol, and he felt it's possible that an equally well-designed stateless protocol could be just as effective. So that was the high-level takeaway.
D
Yeah, I also came away with a positive impression. I think the Collector folks are generally open to the ideas that we've been noodling over so far. And I think it's not stateless versus stateful — right now our hope is to propose having it both ways.
D
Right — you could send something in a stateless format, meaning JFR and pprof, and a stateful format would be something more similar to the Prodfiler protocol, which obviously makes sense if you do whole-host profiling. And I think the benchmarks that were asked for are specifically around proving that something stateful has...
D
...big benefits on bandwidth. I think that's the primary concern of the OTel folks — I think that came up in that conversation as well. They may have different ideas than we do of what "a lot of bandwidth" is: the OTel tracing stuff can already send quite a lot, so what we would consider a lot of data, they're like, "oh yeah, we do that all the time with traces." That might be something to keep in mind while we talk to them. I don't actually quite know...
D
...if there's an order of magnitude of difference here or not, but we should explore it. And yeah — figuring out, if we want something stateful, how we can convince them numerically with some experiments is something we should work out. I think there are also arguments that go beyond that. There's one really good argument for something stateful that I hadn't thought of so far, but that I think is a killer argument, which is: what if you do not have the symbols on the host for something that you want to profile?
D
So if OTel wants profiling capabilities for programs running on hosts where the symbols are not available, then I think there needs to be a way to send at least the symbols separately from the profiling events. But showing some data, especially on how much efficiency can be gained by also hashing the stack traces and not sending the program counters over and over again — maybe that's something that still needs to be justified, even if we have a good justification for why symbols should be sent separately.
E
Just a minor point to add here: besides the symbolization, there were other points that were never really brought up in the SIG meeting. I think a lot of the people who provided perspective don't quite have our use case in mind. For example, having a stateless protocol also has CPU-usage implications, in that you're doing additional processing to do the unwinding.
E
You could ameliorate that with some extra caching, but that then has memory implications, because you're caching a lot more data on the host — unlike with a stateful protocol, where the only thing you're caching is the hashes themselves. And especially for the on-host symbolization that we do for interpreted languages like Python, the processing would be quite significant if we had to move towards a stateless model there. So my assumption right now is that we'd also see some significant CPU performance hits from that.
F
Yeah, thank you. No, it's Pete — I should update that; it always defaults to that. My middle name is Pete. Really fantastic job — I'm impressed with the thoughtfulness and effort that went into the presentation, and the summary here was really spot on. My curiosity right now: I don't think I realized previously that there was this eventually-consistent notion about the Elastic Prodfiler protocol, and I personally have this—
F
I like that a lot. I feel like it answers a lot of the questions — about load balancing in particular — and I think it just unhooks a whole host of worries. It doesn't answer every single worry about statefulness, but to me it answers the worst of them. I don't think we surfaced that it was eventually consistent; I'd like to beat that drum. I think it would be helpful, and sometimes I feel like that...
F
On the benchmarks — I don't have high hopes that they're going to look like a dead ringer for going stateful, but I think the group — in this room, we've got an intuition that the compression and the efficiencies are maybe still a good idea. I guess I like it; I'm just saying it might be useful to mitigate some of the concerns with the eventually-consistent idea. That's my biggest intuition at the moment.
B
Yeah — I think maybe it was Felix who mentioned on the call that, fundamentally, we were in general agreement that it's bound to be better to at least have the option to go the stateful route. But they were pretty adamant about us finding some way to show that, so I don't know.
B
Maybe we can think a little bit about that. I know we had started a benchmarking repo before — I hope there's stuff in there that we might be able to reuse, or something. I don't know — Sean?
G
It's also a lot of cover for them: to at least have asked for a benchmark and gotten one is reasonable. I think it's up to us to come up with what we consider to be a reasonable benchmark. I know there was the effort we'd started before, but if I recall, Christos — had you thought about this a bit, or perhaps it was Florian — there was something extra that we wanted to do beyond that?
E
Well, we decided to postpone it in the short term, in order for everybody to get a better understanding of the issue, but also because it would require significant work on our part as far as getting representative benchmarks going. We could do this any number of ways, right? We could write a simple proof of concept that's a really high-level model of the actual agent, and so on.
E
But ideally we should be fair to everyone and make a best effort in the benchmarking: we should introduce caching layers when implementing a stateless protocol, and we should do it realistically, with caching in the picture. I think that's what the subgroup meant by focusing on the comparison between stateless and stateful. That will take time, at least as far as the on-host symbolization is concerned, in that it would operate in a completely different way.
E
So we looked at the work and said: okay, if we can postpone it in the short term, let's do that, and then in the future we can always go back to it.
F
Sometimes it may not be a conclusive thing, and I guess I'm just wondering if the eventually-consistent stuff helps succinctly with some of the concerns, because it makes—
E
—sense. So, regarding the eventual-consistency point: I think this is a good point, because maybe I missed it during the SIG presentation. A lot of people see the word "stateful" and their mind immediately jumps to connotations — for instance, that you cannot afford to lose any messages, and if you do, the whole state is gone; now you have desynchronization, and that's a big problem. Which it is — but all of those issues are not really issues in our case because, as we said, our model is eventually consistent.
E
Data is periodically resent. If we end up losing messages, that's not really an issue in the short term, as long as that information is eventually received on the backend. That's how we deal with it, and this hasn't been an issue at all, even though we do lose messages.
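The eventually-consistent behavior described here can be sketched in a few lines. Everything below — the message shapes, the resend period, the hash choice — is an illustrative assumption, not the actual Prodfiler wire format: the agent normally sends only a hash for a stack trace it has already reported, but periodically re-sends the full data, so a backend that lost earlier messages eventually converges.

```python
import hashlib

# Sketch of the eventually-consistent scheme described above (all names and
# the resend period are illustrative assumptions, not the real protocol).

RESEND_EVERY = 5  # re-send full data every N reports ("data is periodically resent")

def trace_hash(frames):
    return hashlib.sha1("\n".join(frames).encode()).hexdigest()

class Agent:
    def __init__(self):
        self.sent = {}  # hash -> reports since the last full send

    def report(self, frames):
        h = trace_hash(frames)
        age = self.sent.get(h)
        if age is None or age >= RESEND_EVERY:
            self.sent[h] = 0
            return {"hash": h, "frames": frames}   # full message
        self.sent[h] = age + 1
        return {"hash": h}                         # cheap hash-only message

class Backend:
    def __init__(self):
        self.known = {}

    def receive(self, msg):
        # A hash-only message for an unknown hash is tolerable: the full
        # trace arrives on a later periodic resend.
        if "frames" in msg:
            self.known[msg["hash"]] = msg["frames"]

agent, backend = Agent(), Backend()
frames = ["main", "handle_request", "parse"]
msgs = [agent.report(frames) for _ in range(8)]
for m in msgs[1:]:          # simulate losing the very first (full) message
    backend.receive(m)
assert trace_hash(frames) in backend.known  # state recovered via resend
```

The point of the sketch is the failure mode: losing any single message never loses state permanently, because the periodic full resend restores it.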
D
I have two follow-up thoughts on this. On the benchmarking: ideally we would have something representative, but if that's a lot of work — and I can imagine it being a lot of work — having something artificial as a starting point is not a bad place to start discussions. With something artificial you can feed some assumptions in, and then, if we don't agree on the assumptions, or someone challenges them, we can try to fill it in with more realistic data. But maybe the first step doesn't have to be...
D
...the fully realistic analysis. The second point is a question on what you said about eventual consistency. I can see it working out well for symbols; I would be a little concerned for stack traces, in particular CPU samples, where even within the same line being executed you have several instructions, and depending on where the stack trace gets unwound asynchronously, you could end up on a lot of different program counters. So in some cases you might just never see the exact same stack trace again — especially the leaf.
G
If you imagine what the end use of the tool is, the idea is to be able to figure out which instructions are actually consuming the most CPU over time. I guess the intuition is: if you literally never see that instruction again, then the fact that you're losing it probably isn't important.
G
Now, there may be use cases where you actually do want to gather every single one — it may be unacceptable to ever lose a stack trace — but that's not the model our tool was built around.
D
Yeah, I think that makes a lot of sense for the resource-utilization optimization use case for profiling. But of course all of us also have the incident use case in mind, where a program starts misbehaving and code paths that are normally cold become hot, and if you lose that data, that'd be unfortunate. But I guess a lot of bad things need to happen at once: you need a loss of messages from your stateful protocol and an incident at the same time.
G
So in terms of the benchmark: I agree with Felix that if it's a huge amount of effort to get a real apples-to-apples comparison, then a better place to start might be with something more synthetic. But I would guess we could probably still make a pretty good effort at keeping things somewhat realistic. Dimitri, yours was the benchmarking framework we were looking at previously?
G
I don't know — Christos and Florian, did you have a chance to look at that at the time, or did we just put it aside?
B
I think we kind of paused it so we could figure out exactly what it needed to do, so I don't know if it's perfectly suited for this particular task. Maybe we can dig back into it. I don't know, Dima, if you have any thoughts?
I
Yeah, I think we could do some basic comparisons with it, but it's going to be hard. I agree with Felix that something is better than nothing. But then my next thought is: okay, we're probably going to have some benchmark where some profiles are encoded with symbols in them and some profiles are encoded without symbols, and obviously the ones without symbols will...
I
...take less space, be processed faster by the CPU, and take less memory. And then the next question will be: okay, how do we account for that? I don't have an answer to that, and that's my biggest concern with it right now — if anyone has thoughts on that.
G
Christos, in our infrastructure — would it be possible to use, say, the Phoronix benchmarks or something: spin up a process, gather data from it for 30 minutes, and then we'd have a record of the bandwidth we've consumed from that, and we'd also have the messages.
G
Could we then simply recreate the data that would have been sent if we weren't doing hashing of stack traces — essentially, every time we see a hash, insert the entire stack trace and the symbols at that point — and then just compare the amount of data between both options? Is that feasible?
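The replay idea proposed here can be sketched directly: take a recorded message stream and, wherever a message carries only a hash, substitute the full stack trace and symbols that hash refers to, then compare total encoded sizes. The message shapes and JSON encoding below are illustrative assumptions, not the actual agent protocol:

```python
import json

# Illustrative recorded stream: the first message for a trace carries frames
# and symbols; later ones carry only the hash (the stateful scheme).
stream = [
    {"hash": "abc123", "frames": ["main", "loop", "work"], "syms": {"work": "libfoo.so"}},
    {"hash": "abc123", "count": 3},
    {"hash": "abc123", "count": 5},
]

# Index the full payloads by hash so hash-only messages can be re-inflated.
full = {m["hash"]: m for m in stream if "frames" in m}

def inflate(msg):
    """What the same sample would look like in a stateless protocol."""
    if "frames" in msg:
        return msg
    out = dict(full[msg["hash"]])  # re-send frames and symbols every time
    out["count"] = msg.get("count", 1)
    return out

size = lambda msgs: sum(len(json.dumps(m)) for m in msgs)
stateful_bytes = size(stream)
stateless_bytes = size([inflate(m) for m in stream])
assert stateless_bytes > stateful_bytes
print(stateful_bytes, stateless_bytes)
```

Run against a real 30-minute capture, this gives exactly the comparison being asked for without implementing a second protocol.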
G
Okay, so that's probably not a bad first place for us to go. In terms of the lack of realism, and the lack of, shall we say, apples-to-apples with a real stateless implementation — would you have any concerns, or what concerns come to mind?
E
So, you could do this the naive way, where you just fill in the data without doing additional work to minimize the space taken. But going one step beyond that, you would also change the output format itself so that the data is stored in a more compression-friendly way — removing redundancies.
E
That would take additional time — it isn't as simple as just filling up every message with the missing data. But I do think we should do it, because if we don't, we're not making a best-effort comparison: eventually compression will kick in in both cases, for both the stateful and the stateless protocol, and the data needs to be arranged in a compression-friendly way in both so that you have a meaningful comparison.
G
Just out of curiosity, on the gRPC messages: is compression applied per message, or is it just a look-ahead of some number of bytes? Essentially, what I'm asking is: what is the LZ dictionary computed over — do you actually get the entire message? Really, what I'm getting at is whether the compression dictionary solves that problem for us without us having to manually do some sort of string-interning thing.
E
So compression is per message — the entire message is compressed at once. You would expect the compression dictionary to ameliorate that to a significant extent, but that's not the case in my tests: with our own internal dictionary we always get better performance, in some cases a lot better. So it's always worth it.
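The question above is easy to explore with a toy experiment: compare a naive encoding that repeats full frame strings per sample against an interned encoding (string table plus integer indices), before and after general-purpose compression. zlib stands in here for gRPC's per-message gzip, and the data is entirely made up; whether interning still wins *after* compression depends on the data (the claim above is that it did in E's tests), so the sketch only asserts the uncompressed effect and prints the rest:

```python
import json
import zlib

# Toy comparison: does per-message compression remove the redundancy of
# repeated stack traces, or is manual string interning still worth doing?
# Data and message shapes are made up for illustration.

frames = [f"com.example.Service.veryLongMethodName{i}" for i in range(40)]
samples = [frames[i % 35: i % 35 + 5] for i in range(2000)]  # repetitive traces

# Naive encoding: every sample repeats the full frame strings.
plain = json.dumps(samples).encode()

# Interned encoding: a string table plus integer indices per sample.
table = sorted({f for s in samples for f in s})
idx = {f: i for i, f in enumerate(table)}
interned = json.dumps({"table": table,
                       "samples": [[idx[f] for f in s] for s in samples]}).encode()

assert len(interned) < len(plain)              # interning shrinks the raw payload
assert len(zlib.compress(plain)) < len(plain)  # compression also helps the naive form
print("plain   :", len(plain), "->", len(zlib.compress(plain)))
print("interned:", len(interned), "->", len(zlib.compress(interned)))
```

Comparing the two compressed sizes on real captures is the cheap way to settle whether the dictionary "solves the problem for us."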
D
Yeah, one thing that's still kind of vague is what baseline we're comparing against. We could take pprof versus something stateful, but another option is JFR versus something stateful, and another option is some much simpler thing than pprof that just logically contains a similar amount of data. Because, yeah, the baseline — I don't know.
D
pprof is probably a reasonable baseline, but at the end of the day it's also a bit weird to use it as one, because — at least from my point of view — we're not trying to show that the stateful protocol should be used in all cases. Well, maybe instead of pprof, so maybe that's reasonable; for JFR we're still thinking about sending the JFR through as-is. So yeah, maybe pprof would be the baseline, unless anybody else sees an issue with that.
B
Yeah, I think that makes sense. So maybe I missed it, but it seems like we have to start by getting some metric that we're comparing. What is the simplest way we can put that into a repo somewhere, where someone can just see some initial numbers of, like...
B
...this is how much space symbols are even taking up to begin with — something along those lines. I guess I'm still not clear. Obviously there's a whole bunch of stuff we have to figure out down the road, but it seems like there should be a kind of clear first step for how we start down that path.
D
Maybe — this may be a stupid idea, but it might be good enough: just take some pprof that's been recorded and shows some workload, then take a time period, let's say 24 hours, and assume the pprof is being sent once per minute and it's always the same pprof. That's actually not too unreasonable, because if you've got a steady-state workload, your CPU profiles over one minute will kind of look the same and contain the same data.
D
Certainly
the
same
symbols,
probably
most
of
the
same
stack,
traces
right
and
then
the
the
Computing
stateful
encoding
of
that
would
basically
just
takes
that
stream
of
pre-prov
data
and
try
to
write
it
out
in
the
stateful
way.
D
B
I don't know if you all have played with it, but there's that OpenTelemetry demo that's meant for demoing various OpenTelemetry-related stuff. Maybe that would be a good...
B
You
know
thing
to
get
people
to
get
that
P
Prof
profile
that
you're
mentioning
from
people
be
I,
guess
somewhat
familiar
with
it.
I
imagine
it
has
a
all
right,
I
think
it
does
have
a
reasonable
amount
of
like
load
that
you
can
add
to
it.
What
do
you
think
of
that.
G
That's definitely one option. The other thing I mentioned earlier is the Phoronix benchmarks — you can pick from, say, a Java benchmark, an Apache benchmark, a bunch of others, and they tend to be pretty high CPU load. I'm not familiar with the OTel demo ones, so I don't really have a particularly strong opinion either way, but there's no reason we can't do a few of them.
G
The
reason
that
I'm
thinking
of
the
phonics
one
is
because
what
I'm
imagining
is
going
to
be
there's
going
to
be
some
scenarios
where
you're
profiling,
a
program
and
the
benefit
of
statefulness
is
going
to
be
much
higher
just
because
of
the
verbosity
of
Stack
traces
like
if
you
have
a
Java
program,
for
instance,
you
know
with
the
verbosity
of
just
Java
stack
traces
and
their
depth
you're
going
to
benefit
more
in
that
case
than
if
you
had
say
a
like
a
c
program
where
you're
actually
sending
the
symbols
after
the
fact
so
I
guess,
it's
probably
worth
having
a
representation
of
the
worst
case.
B
Can you add a link to that in the notes?
F
Okay, I'll jump in. I think this is actually feeling like a super productive discussion at the moment. Just to say, about the eventual direction: I think symbols-after-the-fact is a degree of freedom that exists in some cases but not others, and for the cases where it does exist, we should probably cover both bases, as I believe in practice you will find people wanting to do it both ways.
F
You
know
if
people
who
are
very
sensitive
to
CPU
use
or
are
that
all
the
different
resources
be
willing
to
do
it
the
efficient
way.
I,
don't
think
that
speaks
to
the
entirety
of
the
use
cases.
B
I see what you mean. Okay.
G
In terms of next steps, do we want to do both? It may be interesting to see what they turn up differently — what Felix proposed and also what Christos was proposing. So I guess, on our side — Christos, correct me if I'm wrong here —
G
How
much
do
you
think
it'll
be
like
a
couple
of
weeks
of
effort
like
two
or
three
weeks
in
order
to
get
a
somewhat
reasonable
comparison
of
State,
full
and
statefulness
like
just
essentially
doing
that,
like
Gathering
of
a
trace
and
then
reinserting
traces
in
a
place
of
where
we've
hashed
values,
I
think
you
might
be
muted.
E
Yes — two to three weeks at least to get the initial benchmark, without going all the way into making it as good as we can; we can certainly get data there. In terms of making something that's compared to pprof, that would be a lot more work, because then we'd have to convert our data into the pprof format. So it's a lot easier for us to come up with a benchmark that's specific to Elastic, and then maybe Felix or someone else...
D
Yeah, I wanted to say that personally I don't care about the approach, and I'd be grateful to anybody who puts time towards figuring something out. Personally, I'm going to be able to spend more time on things related to integrating JFRs and pprofs into OTel than on pushing for the stateful protocol — even so, I like it and would like to see it happen.
D
But
I
can't
make
like
a
lot
of
time
on
my
end,
so
I
would
probably
not
be
able
to
do
the
people
of
based
Benchmark
I'm.
Also.
Another
thought
here
is
we're
discussing
all
these
nuances
and
how
it
could
go
wrong
in
this
case
in
that
case,
but
I
think
if
we
get
within
an
order
of
magnitude
of
like
the
ballpark
of
the
difference,
that's
already
very
good
and
if,
if
the
result
is
like,
oh,
this
is
a
hundred
times
more
effective
bandwidth.
What
wise
than
people
then!
D
Even
if
we're
off
by
an
order
of
magnitude,
even
if
it's
10
times
more
effective,
then
it's
going
to
be
very
easy.
So
I
think,
once
we
have
some
results
depending
on
how
close
they
are
to
the
Baseline,
whatever
we
consider
the
Baseline
I
think
that
will
inform
on
how
much
more
precise
we
will
have
to
get
so
I
think
something
very
rough.
Initially,
that's
also
less
work
for
anybody
who
wants
to
do
the
work
is
probably
the
best
starting
point,
but
again
I
don't
have
strong
opinions.
B
D
I don't know if anybody will end up doing that, because it sounds like the Elastic folks would be less likely to. But the idea would be to take one pprof, assume it covers a one-minute time period, and then take some arbitrary time period — 24 hours, a whole month, whatever — and assume that the pprof-based solution would be re-sending this file during that time period once per minute. That's a simple multiplication to get the total bandwidth.
D
You
don't
actually
even
have
to
run
an
experiment
and
for
the
yeah
yeah
stateful
case.
You
would
probably
assume
that
all
the
stack
bases
contains
in
the
P
Prof
are
probably
spread
out
evenly
over
the
time
period.
That's
a
profile
was
recorded
for
I
know
that
I
guess
you
can
make
more
assumptions.
Elastic
still
had
some
level
of
aggregation
window
like
a
few
seconds
where
stack
creases
were
counted
up
if
they're
the
same
and
then
sent
out.
D
That's
an
assumption
you
can
put
in
there,
but
yeah,
basically
assume
that
your
stream
is
like
the
P
Pros
repeating
once
per
minute,
and
then
you
evenly
space
out
the
events
and
encodes
them
in
the
stateful
protocol
and
and
see
how
much
data
that
results
in
that
would
be
one
approach,
but
again
I
don't
know.
If
anybody
here
will
end
up
trying
that
I
think
it's
worth
capturing
it
as
an
idea.
If
the
hotel
folks
came
back
to
us
and
they're
like
we
want
to
see
it
done
in
a
different
way,.
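The napkin math proposed here fits in a few lines. Every input below is a made-up placeholder (profile size, send interval, the assumed share of each pprof that is repeated symbol/stack-trace data), not a measurement — the point is only the shape of the calculation:

```python
# Napkin-math sketch of the baseline-vs-stateful bandwidth estimate described
# above. All inputs are placeholder assumptions, not measured values.

pprof_bytes = 250_000        # assumed size of one recorded pprof
sends_per_day = 24 * 60      # the same profile re-sent once per minute
repeated_fraction = 0.8      # assumed share of each pprof that is symbols /
                             # stack traces already sent before (pure guess)

# Baseline: the pprof-based solution re-sends the whole file every minute.
baseline = pprof_bytes * sends_per_day

# Stateful: the first send is full; later sends replace the repeated content
# with small hash references (approximated here as free).
stateful = pprof_bytes + (sends_per_day - 1) * pprof_bytes * (1 - repeated_fraction)

print(f"baseline : {baseline / 1e6:.1f} MB/day")
print(f"stateful : {stateful / 1e6:.1f} MB/day")
print(f"ratio    : {baseline / stateful:.1f}x")
assert baseline > stateful
```

Swapping in a real pprof size and a measured repeated fraction turns this from a toy into the first number the Collector SIG asked for, without writing any protocol code.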
B
You
know
sending
it
periodically
that
part's
easy
calculating.
How
much
just
the
you
know:
I
guess
that
that
then
becomes
like
the
Baseline
right
of
like
sending
the
same
or
basically
the
same
profile
over
and
over
and
over
again
and
so
and
so
yeah.
So
then
the
yeah
I
see
what
you're
saying
so
then
the
stateful
piece
is
the
part
that
is
hard
there.
B
D
Yeah — I think if you draw it on a graph, what you'll end up with is a picture where initially the two lines go up with the same slope, but then the stateful one kind of drops off and adopts a different slope and just stays below the pprof one.
D
Often
we're
trying
to
figure
out
how
how
far
these
two
lines
kind
of
are
from
each
other
in
practice,
which
essentially
is
asking
the
question
like
how
many
bytes
does
it
take
to
encode
a
message
for
stack
Trace
that
has
already
been
seen
like?
How
much
does
it
cost
to
sense?
A
hash
plus,
maybe
I,
don't
know
the
count
of
how
many
times
the
Ash
has
been
seen.
Yeah.
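The question just asked has a back-of-the-envelope answer. The encodings below are illustrative choices (an 8-byte hash plus a 4-byte count, versus newline-joined frame strings), not a real wire format:

```python
import struct

# Rough cost of re-sending frames versus sending "hash + count" for a stack
# trace the backend has already seen. Encodings are illustrative assumptions.

frames = ["com.example.Handler.handle", "com.example.Router.route", "main"]

full_msg = "\n".join(frames).encode()          # stateless: frames every time
hash_msg = struct.pack("<QI", 0xDEADBEEF, 42)  # stateful: 8-byte hash + count

print(len(full_msg), "bytes vs", len(hash_msg), "bytes")
assert len(hash_msg) == 12
assert len(hash_msg) < len(full_msg)
```

For deep Java-style traces the full encoding grows with every frame, while the hash message stays a constant dozen-or-so bytes, which is where the second slope in the graph comes from.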
B
If it were the most perfect situation — like you said, you've already sent all this stuff — then you'd in effect be calculating how much of that pprof profile can be saved and encoded elsewhere. I don't know if that's easier; it might not be the most realistic, but maybe that's somewhere to start: saying, if we could get rid of this, it's possible we could get rid of this much of the pprof that we're sending every single time. Maybe just give that calculation to start with, without necessarily having the implementation of it.
B
I think something that would be doable for sure by the next meeting would be — I mean, I haven't looked at this other test suite that you mentioned, Sean, but I feel like the OTel demo is one I have played with before, and I know they have Java stuff and Go stuff. I think it might work, at least for this experiment; it might be worth trying.
B
Imagine
all
the
otel
people
are
familiar
with
it
too,
and
so
I
think
it
might
go
over.
Well
if
we
use
that
one
but
I'd
be
willing
to
like
play
around
with
it
and
see
you
know,
yeah
I,
don't
know
I
guess
it
sounds
like
nobody
wants
to
like
particularly
claim
this
at
this
moment,
but
I
think
at
least
those
first
two
two
points
would
be
doable
and
you
know
by
the
next
meeting
we
can
maybe
just
like
have
a
p
prop
file
to
look
at
and
talk
about
and
analyze
or
something.
G
So I guess for the profiling comparison of our protocol versus, shall we say, an imagined stateless one — we'll handle that. But as Christos mentioned, that might take a couple of weeks, maybe two or three, just because there's an amount of work in it that needs to be done.
G
But
I
think
Felix
is
right,
like
the
the
people
of
one
I
think,
once
we've
captured
the
the
original
P
Prof
like
just
to
extrapolate,
the
amount
of
data
for
that
and
the
other
one
is
won't
actually
require
writing
any
code.
B
Okay,
yeah
I,
don't
know
demo.
What
do
you
think
of
thing
of
that?
I'll,
probably
enlist
in
your
help.
I
I
A
B
Each spectrum — or each range within the spectrum, yeah. I actually have to run to go catch a flight, but I'd be willing to take a crack at the kind of napkin-math version of the pprof experiment and try to report back at the next meeting.
G
If you have any questions on the implications for the stateful side of things, just drop a question into the Slack and we're more than willing to help out.
D
Okay. And yeah, I guess: is there more we should discuss on the benchmarking and stateful stuff, or should we move on to the next agenda item, which is next steps for pprof and JFR?
D
Yeah, okay, then I can speak to the next item — I added it to the list. I think there's work to be done to talk to the Collector folks again: Bogdan was kind of unsure why the Collector is even needed if all we want to do is send pprofs and JFRs through it. So I think some of it is conversations that need to happen to understand the concerns...
D
A
little
bit
more
write
that
down
address
that
the
other
thing
is
I,
think
it's
maybe
starting
to
make
sense
to
think
about
how
both
shave,
r,
p,
Prof,
but
also
oprof
could
be
represented
in
otlp.
So
how
could
we
change
the
existing
protocol?
D
Buffer
format
to
carry
our
payloads
in
there
in
I
would
be
willing
to
kind
of
experiment
with
that
a
little
bit
and
kind
of
make
a
dummy
pull
request
on
the
otlp
repository
to
show
what
the
payloads
could
look
like
to
carry
all
our
three
like
substick
notes,
whether
they
really
become
sub
signals
or
something
else
in
the
yeah
in
the
new
profiling
signal.
I
can
kind
of
share
what
I've
been
noodling
with
around.
On
my
computer
today,
I
haven't
pushed
it
up
somewhere,
but
I'll
just
share
that
for
a
second.
D
So
people
can
give
me
some
quick
feedback
if
this
is
going
in
a
stupid,
Direction
and
then
I
can
adapt
before
I
put
more
work
in.
So
let
me
show
my
screen
here
where's
my
terminal.
This
one
is
a
terminal.
D
How
is
the
font
size,
probably
a
little
small?
Can
people
see
that.
D
Cool
so
yeah.
This
is
basically
the
open,
Telemetry
Proto
profiling
profile,
dot
Proto.
Maybe
that's
actually
not
the
best
starting
point.
There's
also
another
Proto
file
that
they
have
for
the
collector
and
that's
basically
the
service
that
would
be
exposed
the
the
grpc
service,
so
that
one
I
think
we're
probably
best
off.
D
If
we
just
kind
of
model
it
like
the
way
they've
modeled
it
for
tracing
logs
and
metrics,
where
there's
basically
like
a
single
export
method
being
exposed,
and
then
it
has
like
an
export
profiling
service
request
in
there
and
a
response,
and
then
a
yeah
request
is
basically
a
repeated
list
of
resource
profiles
that
term
resource
profiles.
I'll
get
to
that's
like
Hotel
speak
for
a
profile
that
comes
from
a
resource,
and
so
that's
like
a
resource
profile
here.
I've
basically
modeled
this
very
much
to
how
the
traces
look.
D
But
my
understanding
is
that,
like
you,
resources
like
a
machine
or
a
container
or
something
so
you
have
like
maybe
text
that
apply
to
that
entity.
That
is
sending
data,
and
then
inside
of
that
you
have
another
layer
called
scope
profiles
which
I
actually
not
quite
sure.
What's
the
difference
between
a
scope
and
a
resources,
I'll
have
to
dig
deeper
on
that.
Maybe
somebody
who
knows,
but
I
found
this
nesting
to
be
pers,
like
everywhere
in
hotel.
D
So
anyway,
you
finally
get
to
the
profiles
themselves,
once
you
sort
of
like
have
the
higher
level
metadata
on
them
and
so
for
the
profiles.
This
is
not
done
yet,
but
I'm
syncing,
your
profile,
ID,
is
probably
pretty
useful.
So
if
you
like
send
message
multiple
times
and
you
want
things
to
be
item
potent
that
could
be
the
item
potency
key,
that
tells
you,
if
you've
seen
a
message
already.
D
It's
not
an
end.
Time
is
sort
of
like
what
time
range
is
being
covered
by
the
profiling
data
being
transmitted
here.
That's
hopefully
uncontroversial,
then
yeah.
The
attributes
are
something
they
have
on
all
the
signals.
So
we
probably
want
some
key
value
pairs
that
you
put
on
on
the
profile
itself
and
then
finally,
I
think
this
is
where
it
gets
interesting
as
I'm
thinking
of
having
a
field.
D
Well, there's going to be three fields, one called pprof, one called jfr and one called oprof, but you can have only one of these; that's how you denote that in the protocol buffer thing. And then basically, if you have a pprof profile, then the main content is a payload, which is just the bytes of the pprof.
D
But then you could probably put more stuff in there. I haven't gotten to that yet, but: how the different profile types, or sample types as they're called in pprof, are named inside of the pprof, which one of these is, like, a CPU profile and which one of these is, like, an allocation profile. So I think we should probably define these profile types and how to find them inside of these blobs that are being sent, because there's multiple ways to put them in there. Similarly for JFR, and then for oprof.
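The one-of-three-fields idea described here could be sketched like this (a hedged illustration only; `Profile`, the field names, and `KeyValue` — which the OTLP common protos do define — are assumptions about how this might be written, not the actual draft):

```proto
// Hypothetical sketch of the Profile message being discussed:
// exactly one of the three payload formats may be set.
message Profile {
  bytes profile_id = 1;              // idempotency key
  fixed64 start_time_unix_nano = 2;  // time range covered
  fixed64 end_time_unix_nano = 3;
  repeated KeyValue attributes = 4;  // key-value pairs, as on other signals

  oneof payload {
    PprofProfile pprof = 5;
    JfrProfile jfr = 6;
    OprofProfile oprof = 7;
  }
}

message PprofProfile {
  // The raw pprof bytes.
  bytes payload = 1;
  // Metadata describing which sample types inside the pprof are,
  // e.g., the CPU profile vs. the allocation profile.
  repeated SampleTypeInfo sample_types = 2;
}
```

A `oneof` is the standard protobuf way to express "exactly one of these fields", which matches the "you can have only one of these" constraint mentioned above.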
D
This is very big right now, but I'm thinking basically we could put a batch of messages that the client is trying to send inside of this oprof message, and then there could be different types of events, right? There could be one for a stack trace, one for a symbol, and so on, and you could put them into this oprof message. And to me that seems like a potentially good way to have all the three streams of data in, like, one protocol buffer message that's called Profile. So that's the direction.
D
G
I'm not that familiar with the OTel protocols, so this may be a dumb question. I'm imagining how we would port our existing wire protocol to this, and obviously we have different kinds of RPC calls. And in this, I guess the idea is that, instead of having different RPC calls, you would essentially encode the RPC call, like, yeah.
D
That seems to be the case. So I looked at the protocol buffer definitions for logs, metrics and traces, and each one of them is, like, a gRPC service that has a single Export method, and then inside of the export message they batch up multiple spans or logs or metrics or whatever. They do not generally have multiple export methods for different types of things that are being exported. Now, I'm not saying that's necessarily a good idea, but I think the more we keep to the patterns of what they're doing
D
there, the less controversy we'll stir. But if we have good reasons to propose multiple RPC methods, then I don't think that's, like, a no-go. But I don't know what sort of disadvantage there would be to putting multiple messages that logically represent multiple RPC calls together, versus making separate RPC calls; it seems potato, potahto to me.
I
It all looks good to me. One thing I was thinking about is listing all the different types of profiles. It kind of sounds like you want to have basically predefined types of profiles, and, you know, the user will have to choose, or I guess it will probably be hidden from users and be somewhere within a library. I'm thinking that's probably good, but is there any way we could kind of prepare for the inevitable, which is, you know, that some other profile types will come? I wonder what you think about dealing with those types of situations.
D
Yeah, that's a great question. I would like it if we could sneakily kind of enable sending opaque blobs through there, where it's like you don't know what's going to be in there, or, like, the pprofs can certainly contain more profiles than what's being advertised in that list. So I think, yeah, pushing additional information through this channel
D
that is not standardized yet should be possible. But I guess, like, in the long run, the whole OTel story is going to work out best the more of this stuff actually becomes specified, and the more compatible it becomes. So I think, over time, hopefully the metadata description of what's inside of this pprof really captures everything interesting in there. Especially, one reason why I even want to do this to begin with is signal correlation, which I think is something that I care about and OTel cares about. So there needs to be a way to say how you put span IDs, for example, inside of your pprof: what is the name of the label, and how did you encode that? And I think we either standardize it and say you have to do it this way, or we have to provide some metadata for it. The latter is a little bit more flexible, but yeah.
G
Felix, just to make sure I understand what you mean: when you talk about the difference between, say, standardization using metadata versus just standardization, what is the difference there?
D
One thing that OTel could be doing is saying: hey, if you want to put a CPU profile in a pprof, the sample type (that's what pprof calls it internally) has to be named "CPU profile" or something, and if you call it anything else, it will not be picked up as a CPU profile. That works well in a world where OTel gets to dictate everything, but in a world where there are already runtimes emitting pprofs, like the Go runtime, you will have a hard time doing that, unless you're willing to either force a runtime to change, or force the people who integrate with that runtime to take the pprof, parse it, change the sample type name and re-encode it, which would be very unfortunate. So yeah, I think it's not really a choice.
D
I think for something like a sample type name, you probably need to make that flexible and put it as metadata in the format. For something like the label: right now, I don't think anybody in the industry uses labels in a standardized way. Like, we at Datadog have a way to name the label for putting span IDs into our profiles, but we could change it; it's not a big deal for us to change that.
D
If we just want to say, like, that's the name of the label for pprof. But then again, making it metadata is also very easy, so that might just be the smoothest way of doing it.
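The flexible-metadata option discussed here could be sketched as a small descriptor the sender attaches next to the raw pprof bytes (all names here are hypothetical illustrations, not an agreed design):

```proto
// Hypothetical sketch: instead of OTel dictating names inside the
// pprof, the sender describes what it used, e.g. for span-ID
// correlation, so existing runtimes' pprofs need no re-encoding.
message PprofMetadata {
  // Name of the pprof sample type that carries CPU samples
  // (runtimes disagree on naming, so this stays flexible).
  string cpu_sample_type = 1;
  // Name of the pprof label under which span IDs were encoded.
  string span_id_label = 2;
}
```

The trade-off named in the conversation: hard-coding the names is simpler for consumers, while metadata like the above avoids forcing runtimes, or their integrators, to rewrite pprofs.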
C
So I think pprof is an interesting one, because it's already protobuf. So that's a case of putting one protobuf thing inside another protobuf thing: you could inline the schema, instead of saying this is an opaque bunch of bytes.
H
C
H
D
C
That affects benchmarking, because when we're tunneling stuff, are we putting in stuff that's pre-compressed, using potentially some optimized compression algorithm, or are we putting in stuff that is deliberately uncompressed, to keep the CPU down, trusting that OTLP will compress it?
D
Yeah, I don't have an answer yet, but it's a question we need to answer and figure out. What was your first question again? Sorry.
A
C
It happens to be using the same description language as the envelope. Is this the case with pprof, the schema, or do we want to say this is an opaque bunch of bytes?
D
Yeah, that's a great question. We could certainly inline the schema, and that would work pretty well in the protocol buffer ecosystem. I don't actually know if you could then put the payload into the protocol buffer message without having to decode it first and re-encode it; I don't know if you can just say, here's my blob, and it happens to be this protocol.
D
True, it would make it more collector-friendly for that use case. There are other ways of doing that too: I've seen this done in OTel, I haven't studied it yet, but many of the messages have a schema URL, where they basically say, hey, this follows this schema over there, and
A
D
C
So do that the generic way: instead of having, as now, only three different fields (so basically an enum that decides what the compressed payload is), you could have a generic payload that carries a schema and a bunch of bytes. So the receiver immediately knows: look at the schema, and, you know, here's a predefined list of schemas that we know how to handle, and if it's some other schema, well, maybe we know how to handle that, maybe we don't.
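The generic alternative suggested here could be sketched like this (again a hedged illustration; the message and field names are assumptions):

```proto
// Hypothetical sketch of the generic payload idea: one payload
// field plus a schema identifier, instead of a fixed set of
// per-format fields (pprof / jfr / oprof).
message ProfilePayload {
  // Identifies the encoding of `data`, e.g. a schema URL.
  // Receivers keep a list of schemas they know how to handle
  // and can pass unknown ones through opaquely.
  string schema_url = 1;
  bytes data = 2;
}
```

Compared to the `oneof` approach, this makes new profile formats possible without a proto change, at the cost of losing compile-time knowledge of what the payload contains.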
D
C
D
H
D
J
Well, the JFR format is internally self-describing, so you need to either have the URI just to define it as JFR, and then the parser or the consumer should know how to handle JFR, how to actually start parsing and infer all the internal structure of JFR from the byte stream. Or there's what we are using normally: there are, like, magic bytes at the beginning of the JFR stream, which are used in the current parser.
D
Yeah, anyway, I think that's an action item. I'll noodle over this a little bit more and basically put up a draft pull request that I'll send out in the Slack channel, and we can talk more about the sort of problems that were just discovered. And we had one more item on the agenda, which is the OTLP Arrow columnar compression idea. I don't think we have enough time for that today; we can just move that to the agenda for next time. But yeah.
G
I guess, just action items for the future: we'll look into the stateful versus stateless version of our protocol, but, as I said, that'll probably take a few weeks. And I guess, yeah, Dimitri, you're going to take a stab at gathering the pprof thing and then doing the napkin math around that.
I
Yeah, I'll take a look at the pprof experiment. Cool.
D
Okay, I'll put that in the meeting notes in a second. Any other things?
D
No? Then, yeah, I guess we're right on time. Have a good local time, everybody. See you next time.