From YouTube: 2022-10-06 meeting
OpenTelemetry Prometheus WG
A
Hey Ryan, sorry to disturb you. I requested access on the document that you are editing right now.
B
Are you planning on going to KubeCon in a couple of weeks?
C
No, unfortunately not. Where is it?
A
Nope, it should be embedded into the link. If you do have to enter a password, I think it's 7777; it's the number seven, either five times or seven times. Okay.
B
I posted in the channel the meeting notes and a very, very rough draft. I don't even necessarily want to call it an OTEP yet; it's mostly just what we talked about last time, so feel free to check it out. Yeah, I think we can probably go ahead and get started.
A
All right, I added the first item on the agenda. I have to drop off in a minute because I have a conflicting call, but the OTel governance election is going to take place in about two weeks.
A
Anyone who has, I think, more than 20 contributions on GitHub in the last year, which includes comments and issues and things, will automatically be added as a voter. However, this is a fairly new SIG, and I think a lot of our work has been constrained to docs.
A
If you are interested in voting and don't show up on the voter roll, which I linked to in the meeting notes, there are instructions within the page that that links to for how you can register through a form, saying something like: I'm on the profiling group, most of my work is in docs, I didn't get picked up automatically, please add me as a voter. So if you're interested in voting and didn't get picked up automatically, feel free to use that and you'll be able to vote.
B
All right, awesome, thanks. Also, if anybody else has anything they want to add to the agenda, feel free to do so.
B
It may be a shorter meeting today. I could only think of a couple of things; I think honestly the biggest outstanding work we need to do is the actual work of writing down what we talked about last week and getting a document that's closer to an OTEP. So I wanted to recap what we talked about last week a little bit and see if there's any more discussion people wanted to pitch in, particularly around timestamps and symbols, which were the main two conversations, and then we can go from there, as well as updates on the benchmarking stuff.
B
So, let's see. With that, let's go ahead and I will share my screen.
B
Yeah, all right. So basically I took what we talked about last week, and also added in a little bit of stuff from a long time ago, when a lot of people shared their different custom formats and what they were doing, and compiled it into here, as mentioned last week.
B
Eventually, the idea is that we want to come up with basically a document that talks about all the different fields that we think should be part of profiles. So, just to recap where we were last week: it seemed like we had general consensus that pprof is close, that it has a lot of what we need, but that there are certain limitations of pprof. One of those is timestamps; another is dealing with, I guess, statefulness. We probably need a better word than that, but it was the best I could come up with for now; more specifically, it's about dealing with symbols. Those were the two main things we talked about.
B
I don't know if anybody has anything to add, having had a couple of weeks to think about it. Just to summarize, there are sort of two use cases: timestamps for individual events, and then for aggregated events. It seemed like the general consensus was that people wanted the option to do either with this format, or both. A quote I pulled out from the last meeting was: optionally having timestamps for each event and then figuring out what it looks like from there. So I'm curious, does anybody have anything they want to add on that topic? Felix, yeah.
C
One important thing missing here is the ability to connect with tracing IDs. pprof can do that with labels, but it's difficult to do efficiently, because it leads to duplication of the location lists and blows up the size a little bit.
B
That is a great point. I did add it literally just this morning, because that was another thing we had talked about in one of the earlier meetings: having an explicit way to link either spans or traces to profiles.
B
I linked a comment that Tigran had made a while back about this. This is the comment itself; if you want to see the context, feel free to click here. But basically, for metrics and logs this is becoming more of a common pattern. We have it for logs and metrics, so it's something we should keep in mind here as well.
B
Yeah, I think that's a great point. That was one thing as I was thinking about the last meeting, when we were talking about symbols. My understanding is you can do it with pprof, but I guess I'm more talking about the idea of which things we explicitly leave room for versus which things we simply allow people to use, like the labels feature that's already there. So basically what I'm trying to say is there's kind of a fork in the road here that I wanted to clarify.
B
I don't believe Florian is here, but it was something that Florian said. I think he said something along the lines of: the ideal situation is being able to have a separate stream for symbols and then some sort of hash that lets you link back, so that if you send symbols separately, there is some way to link back to them. He's not here, so it might be hard to discuss.
D
If it would be helpful, I can chime in on this. Essentially, what we do in our profiler is we send out hashes of traces, and we send out the actual traces, which have frame IDs. The thing is that you need to add symbols from two different sources, meaning if you're on the host profiling Java, for example, you need to send out the symbols that you're pulling out of the JVM at that moment.
D
But at the same time, if you're looking at native code, you may never have the symbols that you want to symbolize with on the production machines, which is the usual scenario for C and C++. So you need some way to do the symbolization on the back end. So we just expose a second stream of data, where either the agent or some external source can send symbols associated with a given frame ID to the back end.
D
So in the Java case, you've got the host sending out the stack traces in the sample and the stack frame IDs, and then the associated symbols for every stack frame, and it keeps some amount of memory of what it has recently sent, so you don't send symbols all the time. Otherwise we're back to the issue of very deep Java stack traces being actually voluminous in data.
D
And so you essentially have two separate streams: the profiling data, and then a symbolization stream, which also has the advantage that the symbolization stream need not necessarily originate from the same machine as the profiling.
B
It sounds like there are kind of two ways of getting symbols: you could either leave room for them in the format itself, or leave room to link back, so that if symbols were sent separately, you would be able to link them to this format. Obviously we're coming up with a definition of what a profile looks like, so maybe as part of this effort we also find a way, and I think we talked about this very early on in one of the first meetings, that if you do want to send the symbols separately, there are some other ways to optimize that. So I wanted to discuss that further here: what happens if you include symbols within the format itself?
C
So pprof originally, and if Alex is here from the pprof side, correct me if I'm saying nonsense, but I think originally pprof did not include symbols; you had to have the binary, and the lookup was done when you looked at a profile together with the binary. They only added symbols later on, or made them always included, at least for the Go project. That's my recollection.
D
That's because of the memory layout, because you've got address space...
F
What I mean is...
D
The reason why we ended up doing what we do is because we had seen scenarios where a shared object would be loaded, would be part of a stack frame, would be unloaded again, and then remapped at a different address shortly thereafter. Sending a stream of updates to the total virtual address space seemed like a messy thing for us to do, if that makes any sense.
C
I see. The one thought I wanted to mention from the previous meeting, since you weren't there, Thomas: I brought up the point that if we completely separate the symbols and the stack traces from each other into two separate streams, we lose a little bit of the ecosystem and tooling that people can easily build, like with pprof's format, where various people have built tools around it. I think it's a nice property to be able to put it all into one file.
D
The
I
think
that
yeah
I
think
that's
at
that
point.
The
question
becomes
whether
it
is
better
to
then
try
to
evolve
the
P
performance
to
suit
our
like
I
I
would
rather
evolve
the
P
performer
than
bend.
What
we're
doing
to
to
fit
into
a
farmer
that
exists.
If
that
makes
any
sense
because
of
the.
B
I guess that's kind of why I thought this would be good to bring up here, to see if there's some way we can reconcile them. I'm curious why you say it's better to make it work with pprof, because from my perspective, the idea is that if we are building it from scratch and we can support both use cases, maybe that's a lot easier than breaking pprof. So I'm curious how you'd choose between the two of those. Or honestly, regardless of the pprof side of things, whether you think there's some way we could adapt this new format that we're at least brainstorming about.
B
If
there's
some
way,
we
could
support
the
use
case
that
you
mentioned,
but
also
you
know,
make
it
so
that,
like
what
Felix
was
saying
that
you
can
still,
you
know
include
everything
in
one
file.
Perhaps.
D
Let me think for a second... well, no, actually, it's fine, because even if we don't have the mappings and we just have the executable IDs and offsets, we can create a fake mapping for pprof to consume. Never mind, okay, so that's all good. I guess what I'm aiming at is that as long as we have an easy way to transform whatever we do into pprof, we're not losing the tooling. So then we're free to do whatever we want, including the two data streams, I think.
B
Yeah, definitely, I think that makes a lot of sense. Anyone have any other thoughts on that? So, I think what would be helpful: we started creating a doc, and I know you already kind of talked about your custom format and how you're using things. Maybe over the next couple of weeks, if you could write it out somewhat, so that if we did want to include it in this document, we could at least entertain the idea of finding a way to make this work; I think that might be helpful. I'm just trying to think of how we go from here.
B
Okay, yeah, that sounds good. We could probably think about it some and get back to you there. That's kind of what I was saying at the beginning: I think the biggest next step here is just clearly writing out how this is different from, or better than, pprof; what is the actual value-add here, if any. I think that's definitely a question we will have to answer if there is to be a new format. And then also, once we do start benchmarking things, ideally it would be better in some substantial way that would make it worthwhile to have a different format, versus, like you said, fitting everything into what already exists.
D
I had one question on what pprof does in situations where you've got just-in-time compiled code, where the address of a given function changes over time. Like, if I...
C
I think the main use case for that is Java, and with Java I think you just always deal with JFRs and not with pprof. I don't know if there are other JIT-compiled languages; PHP?
F
Usually something else would be used as the raw format, and then when the pprof profile is created, it's created already symbolized. That makes sense, because I think the JIT question is: if you put virtual addresses into the pprof proto itself, then you need the timestamp to resolve the virtual address to the JIT-compiled code, because code can be re-JITted, and that means the address is not a unique identifier.
F
I don't remember dealing with this case in the context of using pprof. For example, with Linux perf you can collect perf data, and perf also supports creating JIT map files which can be used to symbolize. But in that case, usually the tooling would first process the perf data file and the JIT map files to symbolize things, and then put already-symbolized information into the profile proto. Does that make sense?
F
One comment-slash-question I have on the symbolization statefulness in the document: I think it would be good to include a bit more of an abstract problem statement there, because currently the text starts to dive into a particular implementation.
B
Yeah, that's more of just the outline of what we discussed; I guess we didn't really talk specifically about the problem statement there. I don't know if anyone has anything off the top of their head, but we can definitely add it.
B
Cool, so that was one thing. We also talked a lot about the timestamp thing; that seemed reasonably straightforward. I don't know if anybody has any more thoughts they want to add to the timestamp conversation. It seems like it would be relatively straightforward to just support both use cases of aggregated or single events; I don't know if anyone sees any particular issues there.
C
No, the only detail there would be that the resolution should probably be configurable. I played a little bit with a timestamp prototype for the Go CPU profiler using different resolutions, and it has a large impact on the size of the produced files. In many cases, I think microsecond or millisecond resolution could be good enough and you don't need nanoseconds; with the variable-length integer encoding in protocol buffers, that makes a huge difference.
B
Okay, let me make a comment to update that.
C
I think there are the timeline charts, where you show what each thread is doing over time, all the...
C
Yeah, that one is also interesting, because that means you probably want all the timestamps to be in the same column. If you keep them on the stack trace, you often find that the gap before you see the same stack trace again is too large, so the delta also becomes large and you don't actually gain much compression from delta encoding. At least that's what I found in my experience.
D
One second. You're saying that we need a timestamp associated with every sample, and then...
C
You add a list of timestamps at which that stack trace was seen, like a column. But for the deltas: I played with delta compression on that and found it to not work too well. While you might have a high sampling rate and produce stuff every 10 milliseconds per thread or something for a CPU profile, the deltas will be between different stack traces, and so you don't get much delta compression, unless you put the timestamps and the deltas sort of in one column, which also requires some buffering, I guess.
D
Because my mental model was again the thing that we do, where we just send out a hash of the trace and then some metadata associated with it, and then you get reasonably evenly spaced samples between two events, also because you're attached to the timer interrupt. Which means you can probably get a really accurate, really small representation by writing down the deviation from the expected sampling rate, if that makes any sense. Like, if you expect to sample at 20 Hertz, you just send a small integer telling the other side how much you deviated from the 20 Hertz.
C
Yeah, I think that sounds a little bit like double-delta encoding or something, but the gist is: if you choose very high precision, then you still run into problems with it. If you do TSC, timestamp counters from the CPU, or something, then you might find that you quickly get numbers above, whatever, 16,000, which gets you into the three-byte range per timestamp in protocol buffers, and it can still blow up on you.
C
So controlling the resolution, if you want to keep bandwidth down, will play a role even with fancy smartness like delta compression and other things, at least so far in my experiments. We don't need to solve that today; we probably should spend less time on this now.
F
I would also call out that using deltas, or deltas of deltas, is good for making things more compact, but it also adds complexity that the code has to deal with. So if we do that, we should know that the trade-off is really worth it, because timestamp deltas are essentially more state and more statefulness, and that always means more complexity.
C
Yeah, the complexity is definitely a concern, because at some point you lose the benefit of protocol buffers, which is a sort of standard way of encoding, and you put your own encoding on top of that. Now you've got your custom encoding inside of protobuf, and at that point, why use protocol buffers at all? Because those also have a cost, right?
B
Cool, yeah.
F
One last comment: maybe there are intermediate options that give enough benefit without adding too much complexity, such as using timestamps relative to the profile's start timestamp. That might remove most of the absolute value from the timestamp values we've seen in profiles, and at the same time it's much easier, since it's relative to a single thing rather than...
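This intermediate option can be sketched numerically: compare the varint sizes of absolute epoch-nanosecond timestamps against timestamps relative to the profile's start. The start value and the 10 ms spacing below are made up for illustration:

```python
def varint_len(n):
    length = 1
    while n >= 1 << 7:
        n >>= 7
        length += 1
    return length

# Invented profile: 100 samples, 10 ms apart, starting at an arbitrary
# wall-clock time expressed in epoch nanoseconds.
start_ns = 1_665_072_000_000_000_000
sample_ts = [start_ns + i * 10_000_000 for i in range(100)]

# Absolute timestamps: every value needs ~9 varint bytes.
absolute_bytes = sum(varint_len(t) for t in sample_ts)

# One absolute start plus offsets relative to it: offsets stay small.
relative_bytes = varint_len(start_ns) + sum(
    varint_len(t - start_ns) for t in sample_ts
)

print(absolute_bytes, relative_bytes)
```

Unlike delta-of-delta schemes, each relative value here depends only on the single profile start, so decoding stays stateless per sample.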
B
Yeah, that might also be a good segue. We've talked about having a benchmarking repo; maybe we can use that to test these ideas, of which one is actually better in different situations, all that kind of stuff. If there are no other thoughts on the, I guess, beginnings of our OTEP thing...
B
We can move on to the next item on here, which is the benchmarking repo progress. All right, cool. So, for context there: we've been talking about it for a while, and eventually we will need to create a repo. There are a lot of different potential ideas of how we can do things, and again, it would be good to have some sort of quantitative benchmarks that can say definitively that option A is better than option B, even for some of the stuff that we've already discussed today. There's a link to a POC below that we could use as the start of a benchmarking repo. Eventually we'll obviously need to move this into the OTel world, and the biggest things we would need are some sort of general consensus that this is on the right track, and then also a couple of brave people who are willing to maintain this benchmarking repo eventually. So, Dima, do you want to explain what's there so far?
E
Yeah, I can give an overview of what we have so far. Right now it's just a very basic skeleton for the project; this is based on the doc that we created earlier.
E
So in this repo there is a directory with profiles, and the idea is that you would put your source profiles here, in pprof or JFR format or any others that we implement. Then you would run this cmd/convert program that lives here, and it will convert your source profiles into the intermediary format that we've talked about.
E
So, for example, here you can see it's the same profile, but the symbols are obfuscated. And then, once you have them in the intermediary format, there is also a cmd/report program which takes a reference implementation of our final format, runs it on every file in the intermediary directory, and prints a report about it. Right now the only thing it does is tell you the resulting size of the output.
E
As
you
can
see,
all
of
these
things
are
like
in
the
very
very
like
early
stages,
just
like
basic
implementations,
just
to
kind
of
create
this
skeleton
and
outline
and
yeah,
and
that's
kind
of
all.
We
have
so
far,
I
figured
the
next
steps
would
be
to
you
know,
actually
Implement
various
components
and
and
yeah
I.
Don't
know
I
outlined
like
some
to-do's,
but
we
should
probably
create
more
and
maybe
create
issues
and
things
like
that.
A
B
Yeah, so I guess also, in order to do that, we need to have a repo where it would live, in the OTel realm. I put a link to what Tigran had linked to before about how you create an issue to ask for approval for a separate repo.
B
Basically, the biggest thing we need is just a couple of people who are willing to help maintain it and help approve, and once we have that, we can start creating actual issues, start implementing some of the ideas that we've talked about, and start measuring them. For now, most of you are probably seeing this for the first time, but maybe take a look, and hopefully by the next time we meet we can at least have created the issue for turning this into an OTel repo. I don't think it should be a ton of work, honestly; it's a lot of one-time work for most of this stuff, and then after that, the biggest thing would be just adding more profiles to it and having somewhere to test the different formats that we talk about.
B
If
anybody
has
any
opinions,
feedback
concerns
about
the
state
it's
in
so
far,
Felix
is
there
yeah.
C
I guess my question is: does anybody in this group already have something that they could hook up to this and turn into profiles? Because, at least from my side, we don't use pprof, so that's not something we could add. I'm curious whether somebody else would have the time or interest to hook up their current implementation; is that the idea? Or is the idea to immediately make a proposal for the new OTel format and plug that in?
B
Yeah, I don't know if anyone has any thoughts there. There are a couple of ideas that have been thrown around in past meetings. One is using the standard OTel events somehow and measuring that as a baseline. There's also the OTel demo application, which I think is the Google microservices thing, but instrumented with a bunch of OTel stuff; we could obviously just use profiles from a lot of those services, just to seed it with something that's somewhat meaningful. At the very least, we'd get some profiles of how the OTel demo operates in its various services, so that might be a good one to start with. But basically, in my head I'm thinking maybe we could just start with either open source stuff or stuff that already exists, and kind of go from there. Sorry.
D
I was distracted for a short moment a few minutes ago, so sorry if I'm asking a question that I shouldn't be asking: are we looking for an example workload that provides us heterogeneous stack traces for different scenarios right now?
D
When you say you're looking at the example thing for OTel, are we looking for a setup that provides us stack traces for the very different profiling use cases? Or... maybe that's not my question; I'll just...
C
I think the answer is yes; I think we would need that as a starting point, and then the question is what we do with them, assuming we even had some. I was kind of poking at that to make sure that if we collect a bunch of profiles, we have something useful to do with them at this point.
D
So
the
the
reason
I'm
asking
for
this
sounds
silly,
but
for
elastics
product
marketing
I'm
supposed
to
create
a
container
image
with
a
bunch
of
heterogeneous
workloads
in
different
languages,
Java
Ruby
PHP
that
they
can
run
profiling
on
to
demonstrate
profiling
across
all
these
different
languages.
Since
I
have
to
do
that
anyhow,
is
it
helpful
if
I
just
provide
that
container
image.
D
Yeah, let me generate the container first, and then I need to find a way to convert it to pprof, because obviously it'll end up in our format first, and then I'll try to figure out how to convert it.
A
Well,
the
collapsed
format
I
think
using
BCC
would
be
the
simplest,
because
that's
what
is
just
throwing
you
the
raw
stack
trace
and
then
you
have
basically
a
converter.
That
does
that.
But
my
question
then,
is:
if
you
want
to
reuse
existing
applications
like
the
hotel
and
the
Google
Bookshop
I,
don't
know
the
demo
app
name.
Is
there
a
place
where
this
is
deployed,
where
profiling
can
be
added
or
VCC
can
be
added,
I
mean
with
managing
that?
Is
there
an
installation
somewhere?
Yes,.
B
There
is
I
was
planning
on
talking
to
that.
There's
like
a
working
group
that
I
think
meets
on
Mondays
to
talk
about
that,
so
I
was
planning
on
also
sort
of
waiting
until
we
have
at
least
the
issue
for
this.
You
know
out
in
like
in
motion
to
kind
of
show
them
something.
B
Actually,
you
know
ideally
kind
of
this
diagram,
for
example
to
say
like
hey,
we
need
something
here
that
you
know
we
could
use
this
application
for
and
yeah
and
I
think
we
could
use
that,
but
I
mean
we
don't
have
to
I.
Guess,
that's
probably
honestly,
the
easier
part
I
think
what
Felix
asked
is.
Probably
you
know,
yeah
I.
Think
that's
probably
the
bigger
thing
is
you
know
once
we
have
these
profiles,
what
do
we
do
with
them?
Like
that's
what
the
actual
I
think
more
substantive
work
will
be.
B
Well, it's, I think, relatively straightforward: we should get at least one benchmark of converting the profiles from something to something, kind of go from there, and then add other formats as well. I don't know, does that make sense?
C
Yeah. One last thing I want to throw out on the profile collection: if we collect a lot of stuff in pprof, we will not have timestamps. So JFR is maybe the only format we have right now that would have timestamps in it, and I think, Dima, you have even implemented part of JFR decoding in a Go package, which might be useful for this project. But yeah, we probably need some sort of data source that also has timestamps.
C
Is
there
is
a
good
like
decoder
for
that
I
haven't
looked
because
it's
not
an
easy
format
if
I
remember
correctly,.
B
Yeah, I think all these are great ideas. I'm looking forward to hopefully getting these into issue form so that we have them all laid out, and I think we can move forward from there.
B
Otherwise
I
guess
yeah,
probably
before
this
next
meeting,
we'll
try
and
create
the
pr
and
yeah
kind
of
go
from
there.
A
Just one last question: in the example, we don't mention how we will measure the criteria that we want to benchmark on. Maybe we should also make that clear, because obviously cmd/report takes data in the intermediary format and converts it into the target, but then something needs to record, you know, the CPU and payload size of what that does. I was just asking whether we should lay down today what those criteria are, and whether we want to prioritize one of them.
E
Yeah, I think that's at least partially documented in the doc, and the plan is to measure the size of the payload, CPU, and maybe memory; something like that.
B
Yeah,
could
you
maybe
add
the
the
dock
to
the
oh.
A
I see: make benchmark, generate reports. And is this report supposed to have, like, the number of execution CPU cycles? I saw it in the area I was reading here.
C
Initially, I would favor mostly looking at the size of the result, because the optimization of a particular implementation can obviously be done later on, to some degree. The design needs to be good enough to be optimizable, but if we sat down and wanted to make a faster pprof encoder or decoder, and that was our only goal, we could probably achieve that. So on the implementation side, like CPU time and memory, I'm less worried; I'd focus more on the file size.
A
That's
great
idea,
I
agree
and,
given
that
we
are
going
to
use
at
different
inputs,
then
we
have
to
link
the
result
of
the
size
to
what
the
original
input
was
now.
So
we
also
have
to
have
a
measure
about
this.
This
is
the
score
of
the
input
and
this
is
the
score
of
the
output
size.
So
let's
do
that
yeah.
C
For
example,
one
one
bar
I
would
set
is
if
we
take
a
JFR
which
currently
does
a
lot
of
the
things
we
might
want
to
do
in
our
new
format,
our
format
should
not
produce
a
bigger
file
than
JFR
right.
That
would
be
bad,
I.
Think.
B
Yeah, that would be very bad. Okay, we can add all that in; thanks for bringing that up. Oh, we're almost at time. Any last thoughts? I believe the next meeting, at first I thought it was during KubeCon, but I believe it's right before, so I think we should be fine to have it, but I guess we'll see. I think a lot of this will be just actually implementing some things.
B
And creating the doc. So for the next meeting I would say: if you have something you want to discuss, maybe bring it up in the channel, or add it to this running doc as the agenda for the next meeting. Otherwise, maybe we could just try to do a lot of this asynchronously. But obviously there's still some time, so think about it and, I guess, let everybody know.
B
Cool, all right, we'll see everybody next time. Thanks for coming, everybody, and have a good couple of weeks.