From YouTube: 2022-09-22 meeting
OpenTelemetry Prometheus WG
B
Yeah, I guess if there's a place to catch it, that might be better.
A
Is that... yeah, I had it in May, and it wasn't that bad; we had like a fever for a day and it just felt kind of crappy for two. This time we had fevers for several days, and even after we started testing negative, we still had these lingering coughs for about two weeks.
B
Do you know if Francesco will be here, by any chance?
C
I think Francesco and Sean will not join today, as they are on PTO; I'm not sure, but it's unlikely. Okay.
D
This guy Dominic actually has an alternative tool to view the runtime trace data. I wasn't aware of it when I did my thing; it seems even better at getting the tracing data, but now we have two tools.
D
I released a little side project called fgtrace, which is a profiler that shows you the data not as a flame graph but as a timeline view. So you get a timeline for each goroutine. The profiler in the Chrome browser actually works the same way, and it's kind of nice if you want to debug something that's more of a latency issue than a resource-utilization issue.
B
I think we can go ahead and get started; I'll paste the agenda doc in the chat. All right, well, welcome back, everybody. I think most of you have been here before. This is, I guess, the first meeting after officially getting the vision OTEP merged, I think as of this morning, so congrats, everybody, on that. Obviously it's been a long time in the works, but definitely excited to have hit that first major milestone, so give yourselves a...
B
But yeah, so now we're starting to think about what the next steps will look like. Obviously, agreeing on a vision is sort of the first step, and then after that comes, I guess, the trickier part of coming up with exactly what a profile looks like.
B
How we define profiles, and those kinds of things. A lot of people were out last week, or at the last meeting, but at that meeting we talked about benchmarking in general and how we can set up a benchmarking suite.
B
Ultimately we're going to come up with a format, whatever we eventually agree upon, which is what we'll start talking about today, or I guess continue talking about today. The idea there is just that it's something we can do in parallel. There's a lot of foundational stuff we can put into that benchmarking suite, and the idea is that eventually, once we do start evaluating different ideas, it would just be nice.
B
It would be nice to have this extra, sort of objective element, to say this format is better than that format based on these actual metrics. But that will also come further down the road; it's just something we can work on in parallel, and we talked about this a little bit in the Slack channel.
B
The other side of that is coming up with what the actual format should look like, and then some of the questions that we had brought up over past weeks, where we had said, well, let's wait until after we agree on the vision before we start diving deeper into those questions. So that's sort of where we are at right now.
D
I have one specific question on that: has there been a change? I was out on parental leave. Has there been a change on the goal with timestamps? I didn't see timestamps mentioned in the OTEP at all, and we discussed it before. Last time we talked benchmarking, when I was here, the timestamp topic came up: are timestamps a goal or not a goal?
B
I guess that's to be determined. I put it on the agenda for today. For this first version we didn't add it in there, and that doc we can obviously edit and change over time.
B
Basically, we just didn't have enough specifics as to whether or how it should be added, but it's definitely still a conversation we want to have, which is why it's on there today. So it's sort of to be determined what role wall-clock timestamps will play and how that will work.
B
Any other questions? If not, we can go ahead, and that's a nice segue into the first item on here. So we had talked about it a couple weeks ago; I don't remember who initially brought it up, but maybe Felix, since it's somewhat on your mind as well.
B
Maybe if you want to give us some of your thoughts as to how you think about timestamps, whether or not we should include them, and in what form. I would love to just discuss that further.
D
So I think timestamps are super useful to have, and we are interested in doing more stuff with timestamps at Datadog; it's one of the limitations in pprof that is currently problematic for us. Maybe it's also important for this whole process, because I think, other than timestamps, pprof is pretty close to what a standardized format could look like. It's not perfect, but it comes pretty close. So I think timestamps are a good thing and should be included in this effort.
D
So, individual events. The reason you want a timestamp is, one, you can do visualizations like a heat map, where you can see sub-second resource utilization on the CPU profile and figure out: oh, you were actually at 100% utilization for a very short period of time, and then you can drill into what happened there. The other thing is you can do timeline views, where the simplest thing would be thread states: you show, oh, the thread was doing this, waiting on something, or it was running some code on the CPU. The most advanced form of that is to actually put stack traces in there, which is what I mentioned earlier with my side project: you get a flame chart of stack traces over time. These are all the use cases for having timestamps in profiling data, I would say.
E
I would just add a little something to that, which is that, to me, the question of timestamps can't be fully decoupled from the question of whether we're dealing with always-on or on-demand as well, because of some of the use cases you talked about there, Felix, specifically the finer-grained timestamping.
E
That's not going to be something you do with always-on, because it's going to be too expensive to capture enough detail to drill into.
D
No, I think people are already doing that, always-on with timestamps; JFR, the Java Flight Recorder, does it, I think. Yeah, I see Florian.
C
Yeah, so we only have aggregated data at the moment, but we need more detailed timestamps, and we should have timestamps not just for discrete events but also for sampling-based events. So I fully agree: timestamps are an essential element to, in the end, visualize something in the correct way, and also to not mix things up.
A
We want to be able to go back in time and look at certain time windows, and having those timestamps so that we can zoom in on the section or the time period that we're interested in is really useful, right?
A
So I think I'm pro timestamps. I understand the overhead; maybe we can have some way of balancing those two. Pete from our team is also here; we haven't talked about this, so I don't know if you have any other thoughts.
F
But if you're going to send every stack trace individually and affix a timestamp to every single stack trace, then that aggregation window... I think if you had both, you know, "this is a single stack trace with its precise timestamp, and its count is one because I didn't aggregate," that's fine. You're going to send more stuff, but that's the trade-off, and I think the format can accommodate that trade-off. I think it could accommodate both uses, is kind of what I've concluded.
C
Yeah, I think I agree with that. It really depends how timestamps are implemented. If we send the complete timestamp for every stack trace, for example, it's a matter of memory space and the network traffic that we generate; but maybe we just have some kind of base timestamp and then, for a certain number of stack traces, only the offsets. That would reduce the amount of data as well. So there are different options.
C
So in the end it really depends how we agree on what it should look like; or if you just say, hey, nanosecond-level precision is not needed and it should be coarser. I'm really hoping for a discussion, I think. Or the protocol could be open in a way that lets you say, hey, I want to specify timestamps only at a minute or hour level, if I want to take traces for only one hour. It really depends, but the general point is, and I think everyone agrees, that timestamps are an essential element.
D
One thing that's also worth noting is that sending timestamps does not mean one always has to send each stack trace along with its timestamp right away. You can still do an aggregation where you remember that you've seen the stack trace, build up a list of timestamps that belong to it, and then send the stack trace with that list of timestamps. That offers a lot of good compression opportunities: you can do delta encoding and other things that make life easier.
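The aggregation just described, buffering timestamps per stack trace and delta-encoding them before sending, can be sketched roughly like this. This is an illustrative Python sketch, not the actual pprof or OTLP wire layout; the data shapes and function names are assumptions:

```python
from collections import defaultdict


def aggregate_samples(samples):
    """Group (stack, timestamp) samples so each unique stack is sent once,
    carrying a delta-encoded list of the timestamps it was observed at."""
    by_stack = defaultdict(list)
    for stack, ts in samples:
        by_stack[stack].append(ts)

    out = []
    for stack, timestamps in by_stack.items():
        timestamps.sort()
        base = timestamps[0]
        # Delta encoding: store the first timestamp, then successive gaps.
        deltas = [b - a for a, b in zip(timestamps, timestamps[1:])]
        out.append({"stack": stack, "base_ts": base, "ts_deltas": deltas})
    return out


def expand(entry):
    """Recover the absolute timestamps from one aggregated entry."""
    ts = [entry["base_ts"]]
    for d in entry["ts_deltas"]:
        ts.append(ts[-1] + d)
    return ts
```

The deltas are typically small numbers, which is exactly what varint-style protobuf encodings compress well; that is the "compression things" the speaker alludes to.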
A
Yeah, I think at a high level we're saying that as long as there's enough flexibility to optimize it and to group things together in an efficient way, we'd all be pro. I would be in favor of at least having the ability to do the fine-grained timestamping, right?
B
Okay, yeah, I think that makes sense. Just curious: you mentioned that that's one of the limitations of pprof. Theoretically, if you did want to do it in pprof, just so I can picture it, what would you do? Because I guess you probably could; it would just be very inefficient, right?
D
So, yeah, we're actually thinking about adding this to pprof and making a proposal upstream. The way you can do it today: there are labels in pprof, right? So the most stupid way you could do it is to have a label called "timestamp" and put the individual timestamp value in it. The problem, then, is you don't get aggregation anymore.
D
Then you really have one sample (as they're called in pprof) per stack trace per timestamp, and you repeat the location list, all the frames, over and over again, so it blows up the size. You can get slightly better results by not having a label called "timestamp" but a label called "timestamps", plural, and putting a list of timestamps into the label value. Then you don't have the problem of getting a lot of samples in pprof, and the size is okay, but the encoding is still awkward.
D
So the next step would be to add a new field to the Sample message type in pprof and make it an official list of timestamps that you can associate with a sample. That's essentially what I'd like to propose at some point, regardless of what we do in OpenTelemetry here; I think it would be a useful addition to pprof.
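The two label-based workarounds contrasted here (a singular "timestamp" label with one sample per event, versus a plural "timestamps" label carrying a list) can be sketched as follows. The dict-based sample shape below is a stand-in for pprof's Sample message, purely for illustration; it is not the real protobuf schema:

```python
def per_timestamp_samples(events):
    """Naive encoding: one sample per (stack, timestamp) pair, with the
    timestamp stored in a label. The location list repeats for every event."""
    return [
        {"locations": list(stack), "value": 1, "labels": {"timestamp": ts}}
        for stack, ts in events
    ]


def per_stack_samples(events):
    """Denser encoding: one sample per unique stack, with all of its
    timestamps collected into a single plural 'timestamps' label."""
    grouped = {}
    for stack, ts in events:
        grouped.setdefault(stack, []).append(ts)
    return [
        {"locations": list(stack), "value": len(ts_list),
         "labels": {"timestamps": sorted(ts_list)}}
        for stack, ts_list in grouped.items()
    ]
```

With the naive encoding, a stack seen N times is serialized N times; with the plural-label encoding it is serialized once, which is why the proposed official list-of-timestamps field on Sample would formalize the second shape.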
B
Interesting. Well, that's kind of the goal here as well: that we can make those sorts of improvements. Regardless of what happens there, that seems like a good use case for where a standardized format might be able to make some improvements, sort of building it from scratch, for lack of a better word. Okay.
B
So it sounds like people are pretty much in agreement there. Are there any concerns with it? It sounds like we'd generally make timestamps an option, but not necessarily required. Are there any points of disagreement, or ways that we could do that differently?
B
That are worth discussing? Or should we just say we want this new format to have timestamps in some form? You mentioned either a list of timestamps, or a single timestamp, or something along those lines. I'm curious if anybody sees any potential forks in the path of ways that we could add timestamps.
D
I mean, the most stupid way to do timestamps is to have one profile for every sample you collect, so arguably every profiling format supports timestamps. Right, there are lots of ways to do it, but as far as the goal goes, I think just saying that the data format supports optionally having a timestamp for each event, when something gets profiled, should be the goal, and then we can figure out what that could look like. Okay.
B
Yeah, that sounds good. You were mentioning something earlier; anything else you wanted to add about the timestamps thing? I guess you said you were slightly concerned about the overhead of it.
E
Well, that has some implications as to how you actually tell the profiler that it needs to change state and start sampling more often.
B
Yeah, so that was another one. I felt like when we talked about the different formats before, it seemed like statefulness was probably where there was the widest range of opinions on what we can do, and how much complexity that adds to what we eventually have to come up with. I'm curious if anybody has any strong opinions.
B
It could, yeah. I think the conversation kind of came about as we were talking about things like symbols as well. It could take a number of different forms. Why do you ask about that one specifically?
E
So if you were sampling each thread every 100 milliseconds, which is typically considered to be an upper bound for... I forget what the actual definition is, but there's some overhead number such that, if you want to hit it, you really shouldn't sample more often than every 100 milliseconds.
C
I think we already have some kind of stateful protocol, as we don't require the symbols on the endpoint; we bring in the symbols from a backend. This is quite nice, but it's not stateful in the sense that the backend does some communication back to the agent. So in those terms, changing the sample rate is not happening; but I could imagine a scenario where this is interesting, for probabilistic sampling, for example.
C
For example, if you have a base sample rate across your fleet and increase the sample rate for a certain amount of time on a certain number of agents; just from a mathematical point of view this is really interesting. But I think this is maybe only an idea.
E
I mean, the other case that I thought about, which is one from JFR: in JFR, allocation profiling is quite expensive, so in general you would want to run always-on without allocation profiling for a large number of use cases. But in a case where you see memory utilization shooting up, you'd want the ability to say to the profiler: okay, now start actually doing allocation profiling, so we can see which threads are burning through memory.
B
Yeah, Pete, you had something you wanted to add?
F
Yeah, so I think I'm getting two threads, two general questions, out of statefulness: is there some control plane, and are we doing some symbol compression that requires some state? And the question on my mind is, I have the idea that there are a lot of features that sound very useful; absolutely, it's useful to crank up the profiling frequency, and if you had the capability to do allocation profiling and turn that on when you need it, that's very powerful.
F
But I guess: is there a level-of-compliance kind of standard, where you say we're baseline compliant, which is we send the data; then there's, say, data-optimal compliant, which is we send compressed symbols; and then control-plane compliant, which is we can respond to a frequency knob, or respond to a more sophisticated "start profiling the allocator" knob? I was just curious.
D
Yeah, my thought on that is: right now we're discussing whether it's useful to have a control plane, and I think there's agreement. But just because it's useful to have a control plane doesn't mean it has to be in the scope of what we do in OpenTelemetry, right? People can build control planes on top of an OpenTelemetry data format, and life might be just fine that way.
D
So we should ask ourselves whether there are any benefits to doing that in OpenTelemetry and standardizing it, for the users, for the vendors, or ideally both, that make it worthwhile putting it in scope, because it's a hairy one to take into scope. There are also security considerations, right? Suddenly the profiling provider can cause havoc on your fleet by cranking up the rate and making everything go slow, and that's not great.
A
So OpenTelemetry as a whole is already looking at remote configuration; there's a working group looking at how to do that, and I would vote for just deferring that part of the discussion until that's further along. I think initially the discussion around statefulness was to do with the wire protocol and how much space and bandwidth we were using, and I think that's the more important discussion, because it'll play into the benchmarking.
E
I tend to agree with Jonathan. I think we've got a decent statement of the two sub-cases that we want to talk about here, the wire protocol and remote configuration, and we should definitely just align with whatever the remote configuration folks come up with.
B
Yeah, Jonathan, do you know what that group is called? We can also make them aware that we're having this conversation, and they might be able to provide something.
B
That would be great, yeah; thanks for bringing that up. And then, either way, to sort of contextualize it: we're trying to think about what we can do now, as some of our next steps, to come to some sort of decision.
B
For example: do we have a space for symbols in this format? I'm curious if anyone has any thoughts as to what we can use to make a decision on how, or whether, to cater to at least that use case, and how we might be able to do that.
C
I would just start. I think for us it would be a benefit if we could just send out stack traces as numbers, or hashes, and have a separate stream that has the same hashes and provides the symbols. So if you have the symbols on the agent side, you just send out both; but if you don't have the symbols on the agent, you can provide them from somewhere else. Sending out symbols can really add up, especially if you have a large fleet and send the symbols from every agent in that fleet. That's one of the reasons why we try to avoid sending out symbols for everything: it makes a difference in the company's network-traffic bill.
C
I hope I can explain it. We're building a hash of the executables; this happens on the agent side, and for stack traces we put in these hashes of the executables. On the backend side we have these hashes of the executables, and when we extract symbols, we connect these hashes with the executables, so we can basically look things up. The stateful part is really building the hash of the executable, which needs to match the symbols. So if you have the very same... let's say you have a Hello World example and change "hello world" to "hello world B"...
F
Florian, how does that work with mapped dynamic libraries? How much information about them do you need? Does that method work with dynamic libraries that are mapped into the executable's address space?
C
Yeah, it works fine. We do stack unwinding for Java, JavaScript, PHP, Ruby, Python... I hope I didn't miss one, but those are the main ones. So, yeah, interpreted stack unwinding is fine.
F
When you hash the executable, those aren't the only symbols; some symbols come out of, say, libc or libssl or anything else. What extra metadata do you need about what was mapped, and what libraries were present and so forth, to make that method work?
C
If you have the executable on the agent, and the same executable somewhere else, it should produce very similar mappings. So if you map in an OpenSSL lib, for example, it should be the same; otherwise the executable hash would change.
C
For dynamic linking it works similarly: we look up the executables that are mapped into a program, and this would work similarly for your case with dynamic linking. We look up the destination and basically build a hash, and this needs to be done on the symbolization side as well.
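The scheme described above, hashing the executable on the agent and keying symbols by the same hash on the backend so stack traces can ship as plain identifiers, could be sketched like this. The function names, the hash choice (SHA-256 over file contents), and the store API are all illustrative assumptions, not the actual agent implementation:

```python
import hashlib


def executable_hash(path):
    """Content hash identifying an executable build; computed on the agent.
    Any change to the binary (e.g. 'hello world' -> 'hello world B')
    yields a different hash, so symbols never match the wrong build."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest()


class SymbolStore:
    """Backend-side lookup: symbols are uploaded once per executable hash,
    so agents can ship stack traces as (hash, address) pairs only."""

    def __init__(self):
        self._symbols = {}  # exe_hash -> {address: function_name}

    def upload(self, exe_hash, symbols):
        self._symbols.setdefault(exe_hash, {}).update(symbols)

    def symbolize(self, exe_hash, address):
        return self._symbols.get(exe_hash, {}).get(address, "<unknown>")
```

The point of the design is that the heavy symbol payload travels once per unique binary, not once per agent per profile, which is the network-traffic saving Florian mentions.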
B
Yeah, I think so. So this is kind of what they came up with for logs, for example; they go through it, they have a bunch of different fields, they describe them, and they give some examples.
B
They have some appendix stuff, and so on. Eventually, this is what we are working towards: this is a more traditional OTEP, and this is what we're going to have to come up with for profiling. And so it sounds like there's...
B
Not a ton of, I would say, disagreement here on some of the things that honestly seemed a little more contentious before. So I'm wondering: do you think there's a good enough base that we could start to create a draft of this document, similar to what we did with the other OTEP, maybe starting with the pprof format, which I linked to.
B
Here in the notes, down here. Maybe we start with just listing these fields and then add and subtract as needed. That was one idea I had for potential ways we could move forward. So maybe we have this, except we add a timestamp field and some information breaking down why.
B
You know, add a timestamp along with some of the discussion that we had, and the same thing with something related to symbols: adding a field or a space for that, or something along those lines. What do you think of that as potential next steps? Do you think we have enough? Would anybody be opposed to that as the route we take moving forward from here? Dimitri?
A
Yeah, I would maybe suggest we split this into two parts. That doc you showed seems to be a much more detailed version of what we're trying to achieve, right? So maybe first we could list these points of contention and spell out what we've decided on, and have maybe another OTEP, a short document that a bunch of people would approve in the PR. I feel like that would be much easier to come up with relatively quickly; we could get more feedback from the community, and it would be easier for people to review as well. I don't know.
B
So you're saying to do something smaller, specifically focused on these trickier points, versus adding all the fields; is that kind of what you're saying?
A
Well, I would write a doc where I'd say: okay, there are a few points of contention, particularly what do we do with timestamps, what do we do with statefulness, what do we do with X, Y, and Z, and here is what we've agreed upon. Then people can comment in case that's not what we've agreed upon, and we would have it as an OTEP.
A
So, a short document, and in the future we could refer people to it, and it would be useful later. Then, later, using that as a base, we could spell out all the fields and all the other things. I don't know; it's just a suggestion.
B
My initial gut reaction is that it might be hard to do that without the context of the other fields that are there. But I do see the value in emphasizing these points, since I think they would be the marginal gain on top of something like pprof.
B
So if we're effectively going to start with something that exists and then add to or remove from it, maybe it is good to emphasize those points about how it's different: why we decided to add timestamps, for example, or why we decided to remove or change a field, or something like that.
F
Or anybody else... I do, as a general philosophical idea, think enumerating those points that were contentious, and that might remain a little more so, is probably just useful. And actually I had a somewhat related question to this; I feel it ties into timestamps and granularity, but I am curious how people would respond.
F
When things went bad, or whatever happened to be the case, versus... I guess this is coming from the chair that I sit in, but at Pixie the model is more that you can look at the profiles on demand; in the basic setup you're not going to be able to see yesterday's profiles, because we didn't ship them and store them anywhere. It's not to say that capability doesn't exist under some more sophisticated setup, but the baseline setup doesn't do that.
D
I think it's not bad to do it this way, but I think there are a lot of good use cases for the historical data as well. A very common case is you do a deployment and you want to see whether there's a regression, so you want to have the data from before and after. With your model you could probably, if you remember to before the deployment, go in, look at the profile, and have it ready.
D
So you can look at it side by side, but you have to remember to do that, and often what happens is you do a deploy, something happens, and you forgot to do this. So it's kind of nice to have historical context for that.
F
Absolutely understood; that use case makes perfect sense. I guess I'm more so asking: is this the more common case, to send everything and store everything? I'm just...
B
Yeah, I think that seems pretty fair. So, let's see: I added on here, as far as our next steps, two options. One is starting with a known format, then adding and removing from there. Another is starting with the, I'll say, somewhat contentious points, since it seems like they're also somewhat agreed upon.
B
The ones that we discussed above, and just limiting the first OTEP to those points; then maybe we can reference that we'll do something similar to an existing format like pprof and enhance it with some of the stuff that we've talked about here. And potentially even ask other people if there are other use cases, or something that we haven't addressed with the group of people here, that someone might be able to add. So I think both of those...
D
One use case that I want to throw out there: we should be explicit on whether what we produce is really going to be sort of a file format, where you can have a self-contained profile with everything, or more like a network protocol, where you have a call to send stack traces and a call to send symbols. The reason to make that distinction is that it's really nice for the tooling ecosystem to have a self-contained file with everything, because you can build viewers and other tools that analyze the data you have, and it's a little more icky if all you have is an API to receive the data. That doesn't mean symbols always have to be sent right away, but it should be possible to have the self-containment, from my point of view. That's something we should be explicit about: whether or not a self-contained profile, as a single-file sort of thing, is one of our goals.
B
Yeah, does anybody have thoughts on that, this idea of self-contained versus a network protocol? Or, I guess, opinions on that?
B
Cool. So, let's see, we have like ten-ish minutes left, and I want to make sure we get to everything. So, Francesco made some points; he is not here today, but I think we kind of covered his points.
B
Basically, I think he was saying something similar to the conversation we had about always-on versus on-demand, which we did discuss.
B
I guess before we get to the last few items, which are pretty quick: is there anything else that anybody wants to bring up while we're here today that feels urgent at all? Cool. So, obviously, I did mention that we talked about the benchmarking thing last week.
B
We, according to Tigran, have to make a PR and add maintainers and that kind of stuff. I don't think it's going to be super involved at this point; it's relatively straightforward. If we do actually create a separate agent or something, that would probably be more involved, but we are looking for...
B
People who are willing to help maintain this benchmarking suite. So if anybody's interested in that, definitely let me know. I will try to create a PR, probably sometime next week, and it would be nice to have a good, wide array of people from different areas who can help maintain that. So there's that; and then there was one other thing that's also kind of nice.
B
About working on that in parallel: I think it would be a cool thing to share with the community by KubeCon. One of the goals of this benchmarking suite is also to get a lot of different profiles from different organizations, and so I think it would be a good way to try to get people to submit profiles that are actually real.
B
You know, anonymized profiles from their organizations: one, to get more people to start using profiling and actually try it out; two, to make our benchmarking suite more robust; and three, as somewhat of a marketing thing, to make more people aware that this effort is going on and get them involved in the process. So that's something we can obviously talk more about.
B
It's not crucial by any means, but I think it might be a reasonable timeline; I think KubeCon is a month or so away, and it might be good for getting more interest in profiling and in what this group has been working on.
B
So there is that. Again, let me know if you're interested; I'll probably bug everybody and message you separately as well. But I think that is everything. We've got a couple extra minutes left; if anybody has anything they want to add, feel free to. Otherwise, we can give you ten minutes of your day back. Cool, all right. Well, thanks, everyone, and we'll talk to you later.