From YouTube: 2022-11-03 meeting
Description
OpenTelemetry Profiling WG
Class
Good evening. Well, they say there's, like, a good morning, good afternoon, good evening, good night, something like that. All right, I think we can go ahead and get started. I think we've got everybody here.
B
If not, people will trickle in. But yeah, welcome back, everybody. I know we skipped a week due to all the various conferences that people were preparing for, slash traveling to, and we are now back. So hopefully, if you did go to any of those, you got something good out of them, made it back safely, all that good stuff. I was at KubeCon, and I was maybe not necessarily surprised, but pleasantly surprised: we went to a bunch of OTel stuff, a bunch of observability stuff, and a lot of the time one of the things featured in the progress updates from various people was the work that we've been doing here.
B
People seemed very receptive to that and responded pretty positively to the fact that we are working on this and that we've made so much progress since this effort started, roughly four months ago-ish. So I just wanted to bring that to everyone's attention: it seems like we're on a path that the community generally supports, which is pretty cool. As far as the agenda for today, I was mostly thinking...
B
...we would sort of recap some of where we were at before. There was a conversation about streaming versus batching; I saw that you added that this morning, or I guess a couple of hours ago, Felix. And before that, you made a proposal for supporting timestamps in pprof. I don't know yet how relevant that is here, but I'm hoping we could also discuss that. So I'm not sure where we want to start.
B
Yeah, I don't know. Do you have any ideas of where we could start, Felix? I know you've been thinking a lot about this.
F
Maybe — I think we're soon going to get into very detailed conversations again, especially streaming versus batching and very technical stuff, but maybe we should remind ourselves a little bit of what the goals are. Why are we doing this?
F
So we can evaluate those ideas based on that. And maybe it makes sense for the different people here, or on behalf of the companies they represent, to at some point write down what they're hoping to get out of a standardized OTel format, because everybody sees it a little differently, and I think having it written down would be good.
F
I can make the first step for Datadog, and I'll also put this in the document later. At Datadog we want to be better OTel citizens, support OTel better, and make it a first-class citizen. So we want to be here to make sure that profiling will integrate with tracing and all that good stuff.
F
We also have some concerns about something being built here that would actually create more overhead, because we would need to take data that comes out of a runtime — like Go's pprofs or Java's JFRs — and convert it on the client side into the new format, and that could of course make profiling more expensive than what people have today. We want to avoid that.
F
We want to work on a standardized profiling format. But if I had to also guess what users actually care about: I think the existing OTel user base has tracing instrumentation already, and they just want an additional data stream that carries profiling data, connectable by whatever vendors they're using, so that they can look at a slow span and figure out — hey, was that CPU bound? If yes, what was going on on the CPU, and how can I make this faster? Which is a small subset.
F
Actually, it's not even overlapping with what I just said, because at Datadog we've already solved that with our own integrations; we just haven't done it for OTel so far. And I'd be curious what other people have on their agenda, to the degree that they're comfortable sharing it, because I think it will be useful to discuss before we jump into technical details.
B
Yeah, that's fair. Does anybody else want to share? Obviously we want something that works for everybody and solves all of these various use cases. Anybody else want to share?
C
So Sean is now also here, and I think one of our most important topics is efficiency, especially in the clients, because we want to minimize the overhead of the agent. All the discussions that we've had about stateful versus stateless protocols, and why pprof is kind of heavy, are around this area: minimizing the CPU overhead on the client side and the egress traffic from the agents to the backend.
G
Yeah, internally at Google we use the profile.proto format quite a bit. We don't use any external formats, at least in continuous profiling, because there are no unified formats. So my interest in this effort is: if the community is going to come up with a new unified format, then I would like it to represent our needs as well, or at least I would like to understand from the beginning how it could be useful to us in migrating internal continuous profiling at Google.
G
Anything that is new and external would probably take a long time to adopt. So even if there is a new format that theoretically suits our needs, it will take a long time to actually start using it. But at the same time, I'm enthusiastic: I see a lot of interest, and a big increase in interest, in continuous profiling over the last year — or specifically, I think, the last 18 months. I think there are many more tools.
G
There are many more discussions, and I believe that in two or three years we will see a lot of infrastructure that will be reusable, and I would love us to be able to use it. In terms of a role model, I would like all languages to follow Go's lead, because I think Go is the best in terms of profiling capabilities that are part of the runtime itself.
G
I would love the Java runtime to include continuous profiling capabilities out of the box, and I would love Python to include a sampling profiler that is sufficiently low overhead.
G
In my ideal world, the world would standardize on profile.proto without us having to do anything, and all languages would support profile.proto as their output out of the box, with low-overhead profiling. I understand the world is not perfect and this is not what is happening, but I would like to at least understand what is happening and be a part of it.
B
Yeah, that's helpful. Yeah, Matt?
E
I'm not speaking on behalf of anyone I'm working for, or any company, but more in my capacity from TAG Observability, and just from observing the general software market over the last 25 or 26 years. I think there's a natural kind of pendulum swing in terms of the type of debugging tools that are used by developers generally — and to your previous point...
E
Actually, I do think there was a shift that was evident — really evident — at KubeCon compared to the prior year, a lot of it being driven by BPF, or eBPF rather, coming into vogue.
E
But there's just a lot of need for developers to generally understand and grok the things they make, and how easy it is to debug those things has moved backwards and forwards with the general shift from centralized to decentralized computing, and with the various kinds of models that we have for applications, whether they're distributed or otherwise.
E
And now we find ourselves in this cloud native universe where not only have we fractured our monoliths into lots of little pieces, they're spread across big clusters with lots of other things running. And increasingly, with the rise of GitOps as an overall operating methodology, it gets harder and harder for developers to actually debug things in production. You can't just exec into a pod on a prod cluster, because not only is it unsafe, but there's a whole compliance workflow.
E
After the fact, we need things like continuous profiling that are low overhead, that are maybe always there. I think that might be what's driving a lot of profiling and logging and tracing into all of these kinds of after-the-fact signals, instead of just attaching a debugger and actually stepping through the code — for all of those reasons, it's not practical.
E
It's not safe, all of these ways. So I'll just say that's a general observation, and I think it's so important that we are successful here in driving an open source alternative, so we don't see a fracturing and a splintering of the developer communities around the tool vendors. So that would be my motivation here, to answer the first question — versus a corporate interest.
B
Yeah.
G
I think debugging is one thing, but I think it's also that the cloud market is growing, and continuous profiling in many cases helps you reduce costs. I think that is also the interest from some of the players. I think there's a prediction that by 2030 cloud is expected to be a two-trillion-dollar market.
E
And yeah, absolutely. Just a minor note on scoping — I want to make sure I'm clear. This is really about sampling profiling, not execution tracing or other "what happened" forms. Statistical profiling — I guess we're talking about pprof, which gives us what it gives us, but it's not execution tracing, which I think is another thing that is going to be increasingly important. Is that broadly understood, Ryan and all?
B
Yeah, I think right now — generally, from our vision doc, we kind of scoped it a little bit, and we'll continue to do so. One thing I would add for our motivation — Dima talked about some of it, but part of the motivation for us was just that Pyroscope supports a bunch of different languages.
B
I guess we took a slightly different approach than some, in that in doing that we had to do a lot of work standardizing, or finding ways to wrestle, whatever formats the various other open source profilers out there were producing. And then, long-term, we also want to connect with other signals and that kind of thing. It would be nice if a lot of it worked like Go, where it's just kind of built in, and to come up with the most efficient format for how people will ultimately be using the profiling data. I think a standard like this will encourage more people to do that.
B
For example, a while back we talked with some of the .NET runtime maintainers about whether this could be a native thing in .NET as well. So it's just something that would be pretty convenient there. Yeah, I don't know — is there anything you want to add?
D
Yeah, I think you've said it pretty well. The biggest thing for us — our big hope — is that all of these runtimes and major profilers would adopt the same format, so that there's kind of more parity. And a lot of the other things that others said resonate a lot. Probably the biggest thing...
D
...is client performance. That matters for us a lot, and we spend a lot of time dealing with optimizing the performance of the profilers in various runtimes. If that was done by the runtime developers themselves, that would be nice, and I think adopting a common format kind of helps with that goal.
F
Actually, I have one quick thing to interject, because Matt mentioned an interesting point: whether we're also talking about runtime execution tracing here or not. I think to some degree we'll have to, because certain events, like allocations, are really only available by letting the runtime tell you when an allocation happens. So it looks more like execution tracing than statistical profiling, and I really want to make sure we're not overfitting the work in this group to CPU profiling.
F
Only because a lot of garbage-collected languages actually create a lot of CPU work because of allocations, and unless we have allocation profiling, it's really difficult to debug that and get those CPU cycles back.
F
It can be sampled, and then it's kind of a hybrid, but it's still in response to something the runtime does, right? It's still in response to an event triggered by the runtime, rather than you stopping the runtime and asking it — hey, what are you doing right now? I don't know where we draw the line here exactly, but I suppose you could make a...
E
...unified model, right. It's a well-put point. You could just say all samples are events, in the biggest scheme of things, but I think it would make sense, given that, to be inclusive of that. I also agree not to overfit to CPU-driven profiling — to allow arbitrary event-driven profiling and other interval types.
B
Cool. Pete, I think you had something to say earlier.
H
Yeah, a lot of really interesting discussion right now. One curiosity that came to me: the description of Go is accurate — Go absolutely makes this profiling work; it's well integrated into the runtime. But it made me wonder: when I hear "continuous" I kind of mentally attach "whole system" to it — I'm saying continuous and whole-system together in my brain — and then I think about the runtime-specific nature of this.
H
How do you merge this? And not just how do you merge it — are you just over-provisioning your sampling somehow? I have that puzzle and I don't actually have an answer at hand. So if people have answers, or have thought about it, I'd be curious to know, if that makes sense.
H
That mode can be supported, but it's harder, I think, to do Python or Ruby or something — sorry, in the eBPF land. If you have a JITted runtime, that is easier to support than if you have an interpreted runtime, if that makes sense.
I
So I would disagree on that front. It is technically feasible — internally, we have unwinders for those languages in eBPF. I think your point, though, Pete, is a good one.
I
In terms of our standard: it should ideally support people who decide to engineer a continuous profiling engine by integrating with each of the language runtimes, as well as what we and what Pixie do, which is having all of our own unwinders in eBPF. Ideally, the standard shouldn't force us to engineer a profiler in one of two particular ways, because there are advantages to the way we do things —
I
we can unwind all languages in eBPF — but then there are also significant advantages to the way that, say, Datadog or other folks do it, where by integrating with the runtimes they get memory allocation and other things much more easily than we do. So whichever way people are architecting their profilers, the standard should support both, I think.
B
Cool. Anybody else? If not, then I believe we could move forward to some of the questions that Pete had mentioned in Slack — I'll paste a link to Slack — or Pete and Felix had a couple of discussions that might be worth bringing up here.
B
I'm not sure — I'm reading it right now — whether to start with these bullet points you mentioned, Pete. Either Pete or Felix, if either of you have something you want to bring up. I know, Felix, you mentioned streaming versus batching as a great way to frame the discussion and to dig into more today. Maybe you want to give some context on where that discussion started and where it would be good to take it.
F
Yeah, I think it kicked off a little bit from the proposal for the pprof project that I shared, and I don't even want to talk about it too much, other than it's a change to the profile.proto format that would make it easy to support timestamps as a first-class citizen. It might not even be a really good proposal. But one important thing that I learned from it, which is really trivial in retrospect, is that compressors are really good. Really good.
F
They do things really well, and I was able to get really significant storage size reductions by coming up with a clever way to put stuff into the pprof to store the timestamps. But it turns out it wasn't really much more efficient compared to a naive thing where you add the timestamp as a label and then duplicate all the stack traces, because the compressor just goes over the data and fixes your sloppiness.
F
So the lesson I want to share here — and it is relevant to streaming versus batching — is that whatever we do, we should probably avoid getting overly clever with our own ways of compressing data on the wire if we're planning to let a compressor go over it anyway, which we probably should, because they're pretty good. That might just be something to keep in mind when we get to the nitty-gritty of how to design file formats and such.
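The compressor effect Felix describes is easy to demonstrate with a quick sketch. This is a hypothetical illustration, not the actual pprof encoding: a thousand timestamped CPU samples that mostly share the same few stack traces, serialized naively with the full stack duplicated per event, then run through a general-purpose compressor.

```python
import zlib

# Hypothetical sample stream (not the real pprof schema): most events
# share one of a handful of stack traces, differing only in timestamp.
stacks = ["main;handler;parse_json", "main;handler;query_db", "main;gc"]
naive_rows = [
    # Naive layout: duplicate the full stack trace string per event,
    # with the timestamp attached as a plain label.
    f"{stacks[i % 3]} ts={1667480000000 + i}"
    for i in range(1000)
]
naive = "\n".join(naive_rows).encode()

compressed = zlib.compress(naive, level=6)
ratio = len(compressed) / len(naive)

# The compressor removes most of the redundancy from the duplicated
# stack traces on its own, without any clever hand-rolled encoding.
print(f"raw={len(naive)}B compressed={len(compressed)}B ratio={ratio:.2f}")
```

Running this, the highly redundant naive layout compresses to a small fraction of its raw size, which is the "the compressor fixes your sloppiness" point: a clever deduplicated encoding gains relatively little once a compressor runs over the wire payload anyway.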
C
So you're saying that, basically, we should not start with normalizing the data; we should try sending all the data at once — every time we have a sample, with all the stack traces and all the mappings — and see how well the compression behaves. I'm fine with that, personally.
F
I think it's easier to start with sort of the logical representation of the data we want to send, align on what we want to send, and then worry about the efficiency. Because I think to some degree a compressor can do some of it, and to the point where it can't, we can go back and optimize the transmission. So let's spend more time on: what fields do we want to send?
H
Where is the Collector supposed to actually run? That's the innocent question — sorry, but it's an interesting thought.
E
Yeah — I think it's a lot like Telegraf, right? You've got receivers, exporters and transforms, so you have these building blocks to mix and match. You could run them in a pod as a sidecar, you could run them as a DaemonSet on a cluster, you could run them under systemd on some random VM. It's just general purpose.
I
In terms of, logically, what is the problem we're trying to solve — without initially cluttering that with details of compression and things like that. I do think there is a CPU overhead to compression, and there are costs to egress in cloud-based environments, obviously, and I think that, once we've figured out logically what we want to send, we should come back and then think about efficiency. Because, while in theory we can compress things and we can add intermediary layers, fundamentally, if we want to do continuous whole-system profiling, always on, there are significant benefits to aiming for as low overhead as possible, and it can be achieved. There's definitely some tension eventually between over-optimization and getting something done, but I do think we can make a pretty solid effort to begin with.
E
To that point — again, I feel like I'm talking too much — but particularly for edge scenarios, compression need not be CPU-burdened: there are offloads, in various forms, to do it with compression hardware, and that's quite common. And my point about the OTel Collector and layering things: the compression could be just a plug-in, or a layer that's decoupled from what we really want to focus on initially. I didn't mean to burden that with OTel specifically — Envoy has the same architecture, right, a bunch of filters that are layered, composable, customizable. So I warm to the overall design, and with that design you could make an OTel transform plug-in, or it could be some other mechanism if it were just a simple library.
B
So yeah, obviously we'll discuss more, but as far as next steps, it sounds like we're generally agreed that the next most immediate step is just a long list of all the fields that we want, and then we can pare it down, remove things, add things, whatever.
B
It's generally agreed that after this, or maybe even at the end of this, we can just start creating that list — maybe with a small definition for each of the fields — and then go from there. As far as next steps go: is anybody opposed to that, just so we can make some progress before the next meeting?
C
All right, one second. For me, I think the proposal of using pprof with the extension that Felix sent — given the tooling that already exists around it — fits what Felix described as a first possible implementation. So if you asked me what fields we need to put in such a format today, the answer would basically just be pprof. That said, there are the concerns Josh just mentioned about resending all the data all the time, and I...
C
...think if we say we are not going to treat these concerns as blockers in the first iteration, we can re-evaluate them later. But I would not, you know, re-list fields that are already present in a format that is widely used, such as pprof. And I didn't try out your PR, to be honest, Felix, but it seems what you did is enough for sampling events for the continuous profiling scenario, and also for the runtime-specific and system-wide scenarios. I don't know if anyone has had the chance to read the PR or test it.
B
Yeah, I would agree. I still think we should write out the fields, in a way where — so, for example, timestamps I guess technically could have been done as a label before. And I haven't delved deep into it either, but same thing with, you know, linking traces and profiles, for example. Maybe that's its own field, rather than being just a label.
B
What I'm trying to say is that it could take slightly different forms than how you would do it if you tried to cram everything into pprof — it might take slightly different forms in a reimagined format. But I think that's worth mentioning.
F
I'd say, if we're going to prototype something on top of pprof, I would prefer to just use labels rather than my proposal, because then we can use all the normal pprof tooling without running some fork of the format, and add labels for everything we want. We can agree on the logical contents of the payloads without getting bogged down in how to represent them, because pprof gives us some guard rails to follow, and we can take it from there.
B
Okay — that's fair. That makes sense. Cool.
B
Okay, so I also know one other piece of this that we anticipated being difficult, or where discussion would be needed: symbols, and how and when we should address them. Again, we've already sort of established that getting this list of fields is the first step, and then after that we'll need the next step. So I don't know...
B
Does anyone have any thoughts, or progress in their thinking, on how we should handle symbols and symbolization?
I
What is the complication around symbols? Because — just to get some context — in our format, for languages where you can pull the symbols off the host, we send the symbols: Python, PHP, Ruby, Java, Go, that sort of stuff. And then for others where you can't — Rust, C and C++ — we simply don't send the symbols.
I
So the format just provides either a way to send them or not to send them, and I was wondering: is there a complication that I'm missing?
H
I was going to say, Sean, I think that's actually what I had reached in my own thinking about it: if you accommodate what you just described, you've captured both ends of it elegantly. I think you support both cases, and I don't think it's even going to be inefficient — I think it's actually fine.
E
So, I don't know if anyone else kind of grew up in the Microsoft, sort of Windows, ecosystem — at least I did, up until about a dozen years ago. Both for embedded use cases with Windows CE, as well as Windows kernel and Windows user-mode stuff, they've had a symbol-server kind of pattern that all of their debuggers use, which is quite flexible: you can have a build server or build pipeline where the symbols — oftentimes, particularly when you're building space-constrained devices — are just not shipped.
E
So at that build pipeline you need somewhere to push them to, right? And then at runtime, debug time, profiling time, analysis time — wherever and whenever you need symbolic information for a variety of purposes, from breakpoints to stack traces to all of it — you can get them. And so, by kind of separating that concern...
E
Could we take a similar approach here — just define a standard way to get at symbols? Maybe they're happening locally, or maybe they're not. It would satisfy a lot of pragmatic use cases where bits are built on build servers or build pipelines managed by one team, and then, when you go to debug something, you have to go find the symbols that you might not have, because you didn't build them locally.
H
Yeah, I think the symbol-server idea is fantastic. And if there's a well-known — you know — what is the best practice on symbol servers, or where are we headed?
H
Is there a project that supports that for many different use cases? That would be neat. But I think the driving factor on the other end, Matt, is that there are going to be use cases where the application developers didn't know about symbol servers, or didn't want to get to that level of sophistication before deploying things and using them. In a mature project that has a symbol server, it's an elegant and fantastic thing, but I don't think it's every use case.
E
I didn't mean to imply that — I'm not trying to preclude that workflow generally. It's also useful in places where you have multiple teams with different build processes, but they end up getting sucked into the same process, or the same set of services, where you need to pull from a variety of sources. Again, I love what you described: if they're there, we can send them, and if not, cool. I wouldn't want to propose this instead of that.
I
I guess, just from the point of view of our task of defining a wire format — would it necessarily matter? To give you an example: in our architecture, if you send the symbols from the client, then great, they're populated, they're put into the backend. But if you don't, we have a separate symbolization service that mirrors all open source packages and periodically writes into the database anyway. So it doesn't actually impact the wire format between the client and the backend.
I
It's simply a totally different service that runs for us. And then we have a script which clients can add to their CI/CD system which, when they add a new build, strips the debug symbols and pushes them to our backend. But again, this is an entirely different wire format and endpoint from what I think we're aiming to define here, which is the standardization between the clients and the backends themselves — assuming I'm understanding the scope.
F
If we want the wire format to work with and without symbols, we at the very least need to specify what the payload for a stack trace, for a CPU sample, looks like if there are no symbols attached to it, and how you then send the symbols as a separate message. I think at that level we need to do it — not necessarily at the level of how it all gets fused together.
B
Okay, so it sounds like we are generally aligned there as well, or somewhat — that there's a path. As of right now, the OTEP doc is just a bunch of bullet points, use cases, that kind of stuff. So I'm trying to think how we can make the most...
B
...of what we agree on at this point, how we can make the most progress before the next time we meet, and what we can do in that realm.
B
So I'm curious if anybody has any thoughts or opinions there. I guess, minimally, again: just listing out the various fields so that we start. Also, for reference, I'll add a link to the logs one — I've added it a bunch of times before, but it's basically just a giant doc that has a bunch of fields and definitions of those fields. That's what the logs OTEP looks like, which Tigran recommended we use as sort of a model for what we ultimately need to come up with. So I think this list is kind of a good start, and then obviously we'll have to flesh it out a bunch more as time goes on. But I don't know if anybody has anything else they feel would be most productive at this point in moving towards an OTEP that we can officially propose to, you know, each other, the community, everybody.
I
I just had a quick question around Felix's PR related to the pprof format. Would it be productive for us to start digging into that, Felix, do you think — thinking about it and trying to, say, socialize the changes among the pprof folks, etc.? Or did you intend it just as a proof of concept — here's a thing that we could potentially do? Is this on our critical path or not, do you think?
F
So when I started working on it, I was hoping the outcome would be a way to add timestamps to pprofs that is more efficient than what you can do with labels today, where you just put in a text label saying...
F
...oh, this is my timestamp, and then you put the value there. I was successful for the uncompressed pprofs, but once you compress them, the gains are so small that I don't know if it would just be a distraction to look at it. What I was mostly trying to do was to represent the data in a columnar format: for each stack trace you have a column of timestamps, a column of labels, a column of values. And that's maybe interesting...
F
Once we go back to how we can efficiently represent stuff on the wire. But for the discussion of what, logically, we should send: really, what I figured out was that we want events; they have a timestamp; then you probably want user-defined labels for each event, which pprof kind of gives you already, but only at the stack trace level, not at the individual event level, which is a little leaky; and you'll probably want a value for each event.
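The logical event model being described, a timestamp, per-event labels, and a per-event value, could be sketched as follows; the names are illustrative only.

```go
package main

// ProfileEvent is an illustrative sketch of the logical model under
// discussion: each event carries its own timestamp, labels, and value.
// pprof only supports labels per stack-trace sample, which is the
// "leaky" part mentioned above.
type ProfileEvent struct {
	Timestamp int64             // when the event happened, nanoseconds
	Labels    map[string]string // user-defined, per event rather than per stack
	Value     int64             // e.g. bytes for an allocation event
	StackID   uint64            // reference into a shared table of stack traces
}
```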
F
So if you have something like an allocation, it can have different sizes even if the allocations come from the same stack trace, if something is dynamically allocated. So that's another sort of property of the events that you want to have. But as for what I did in the PR: if people are interested, I would love to get feedback and thoughts on it, but it may not be super interesting beyond the sort of encoding that I chose.
B
You know, yeah, some of those tiny nuances, like you were saying. Somehow, I was responding to Josh, so I didn't hear it fully, but somehow for symbols we need to indicate whether they're included or not, or something. I don't know, just as an example: if there's some field in the format that says the symbols are here, or the symbols are not, that's another
B
field that we would need in that case, and an explanation of why we need that field, or whatever that might be. I'm just using that as a broad example, but basically: everybody, write that down in a list, and I'll add a space for it in the OTEP doc. If we could do that before the next meeting. And honestly, it also sounds like we're generally agreed that it will be somewhat,
B
you know, it'll be pretty close to pprof, and so maybe we can put up a base list of all the fields that are already included in pprof, and then just the marginal part: what people would want to add, or even just slightly adjust, from that.
I feel like one of those options, I'm not sure which specific version of all the things I just said, but one of those would be best, just so that we can.
E
If that's okay: from the hip, this PR looks actually really cool, and by defining sets of labels, selectors almost, you kind of have a nice synergy, I hate to use the word but I can't think of another one, with metadata in Kubernetes, where you might be collecting from a bunch of pods that have the same,
E
you know, you can have this even more in the case where you've got these duplicative, repetitive sets of labels. And then lastly, I've been messing around with using graph databases for time series and things like that, and having labels already broken down like that, you can almost think of them as nodes, so that you could very quickly index things. All right, I think it's pretty cool, I'll look at it in more detail.
B
And then, yeah, I guess after that.
B
I mean, yeah, I think that's probably where we'll start, and similar to what Felix was saying, I guess we'll see. There is a world in which the things we want could more feasibly be added to pprof, or slightly adjusted, where we just request them in pprof. So I don't necessarily want to rule that out and unnecessarily make a new format,
B
if it does end up being very close to that. But I guess we'll figure that out as time goes on. And I asked Josh about the benchmarking stuff. I'm hoping that we would be able to sort of quantify,
B
you know, prove in some way that if we do come up with a different format, it is more efficient than pprof, before we officially propose it, because I imagine people are going to want to see that, if that does end up being the case.
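The kind of quantitative comparison being asked for could start as small as this sketch: serialize the same events row-wise and column-wise and compare sizes before and after gzip. Both encodings here are stand-ins (JSON for a row-oriented format, packed varints for a columnar one), not the actual candidate formats.

```go
package main

import (
	"bytes"
	"compress/gzip"
	"encoding/binary"
	"encoding/json"
)

type event struct {
	Timestamp int64 `json:"ts"`
	Value     int64 `json:"v"`
}

// gzipSize returns the compressed length of b, for comparing
// encodings after compression as well as before.
func gzipSize(b []byte) int {
	var buf bytes.Buffer
	w := gzip.NewWriter(&buf)
	w.Write(b)
	w.Close()
	return buf.Len()
}

// rowMajor serializes events one record at a time (JSON stands in
// for any row-oriented encoding).
func rowMajor(events []event) []byte {
	b, _ := json.Marshal(events)
	return b
}

// columnar serializes all timestamps, then all values, as packed
// varints (a stand-in for a columnar layout).
func columnar(events []event) []byte {
	var buf bytes.Buffer
	tmp := make([]byte, binary.MaxVarintLen64)
	for _, e := range events {
		buf.Write(tmp[:binary.PutVarint(tmp, e.Timestamp)])
	}
	for _, e := range events {
		buf.Write(tmp[:binary.PutVarint(tmp, e.Value)])
	}
	return buf.Bytes()
}
```

A real benchmark would of course use the actual proposed encoding and real profiles, but this is the shape of the measurement.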
F
One small thing to mention here as well. My thinking with the pprof proposal was also that, if we are serious that part of the scope of this group is to petition the runtime maintainers of Go and Java to adopt what we're producing, doing something that's closer to what they already have will maybe be an easier sell than coming up with something completely different. So I figured adding timestamps to pprof is a much easier way to get timestamps from the Go runtime than to propose a completely new format.
I
Question: does anybody know if there are any efforts in the PHP, Python, Ruby, etc. communities to do, say, Go-esque runtime-supported profiling, or does everybody just write their own profiler that manually unwinds stacks in those languages?
B
I feel like, yeah, we did. Not in the OTel realm, but as Pyroscope we reached out to the .NET runtime people, and they have it at least on, I think it's the roadmap for, I don't know, .NET 8 or something, which I know in Microsoft land is probably like 10 years from now or something. But yeah, I don't think there's anything
B
that's, you know, a majorly concerted effort. But I do think that's kind of an interesting question: why wouldn't they all just switch to, like, pprof, for example? I think there is something to the fact that if we all did collectively agree on a format that's most efficient and that supports a lot of use cases,
B
you know, I wonder how hard it would be to lobby them to switch to that format. I don't know, I guess Node uses pprof as well, somewhat, maybe, or no, I don't know. Alexa, you might have some thoughts on that. I think you're muted.
B
Anybody? I believe the question was: are there any efforts in any other runtimes to do, like, Go-esque sort of support?
G
So, Node.js also has some profiling support, and we have the pprof-nodejs repo on GitHub, which basically bridges the profiling support in Node.js, for CPU profiling and heap profiling, into pprof's profile proto. Basically, you can use pprof-nodejs to produce profile protos for Node.js. But that was us; it was not somebody else. Am I answering the right question? Just want to make sure.
I
Because I guess, even if the languages were to add this, it's presumably not going to be a silver bullet, because, say, as soon as you call into native code, you still need some way, you're still looking at profilers that can do native code and then unwind into the runtime, right?
G
Yes, for all managed runtimes, how you deal with native stacks is a good question. And symbolization, of course, is also a question, because in most cases on production machines there are no debug symbols. At least, well, maybe I'm biased by the Google view on that, but we don't ship DWARF to the machines; we basically use some kind of symbol server approach, because you don't want to duplicate those bytes across many, many, many machines.
F
Well, for the Go runtime there's this SetCgoTraceback functionality, where you basically hook in the DWARF unwinder.
G
Also, unwinding is not equal to symbolization; these are two different problems. But yes, for unwinding, Go has a callback. I don't know if Java does. I know that for Java, we basically have some code that tries to do some heuristics to figure out whether there is a native portion of the stack that we want to gather: like, first try to unwind without it, I think with GetCallTrace, and then try again. It assumes certain combinations. You also need to,
G
sometimes you need to figure out what assumptions to make about the interleaving of the managed stack and the native stack. I think the most practical case is that native frames are at the leaves, because the Java code called something in C++, but in theory it could be back and forth.
I
So the reason I was essentially asking that question is: if we're advocating for runtimes integrating this format, are we saying that we think, in the long term, the future of profiling is going to be runtime-provided profiling support, versus an environment where, say, it's eBPF with runtime-specific unwinders that are not in fact provided by the runtimes? Because the latter will solve the problem you were just describing, Alexa, right, where you can actually handle, say, native to interpreted to JITted code and so on and so forth, whereas the former obviously has more introspection on what the runtime itself is doing, so memory profiling and so on and so forth. I'm just saying, are we assuming one future or another?
G
Yeah, and I think some things you cannot do. I love eBPF, but there are things that you cannot do with eBPF, for example heap allocation profiling. That has to be a part of the runtime, because it's very much bound to the runtime. Or contention: like if you want to profile contention on a spinlock, yep.
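As a concrete illustration of the point that some profiles can only come from inside the runtime, Go's heap profile is emitted by hooks in the Go allocator itself; a minimal sketch of reading it:

```go
package main

import (
	"bytes"
	"runtime"
	"runtime/pprof"
)

// heapProfile returns the runtime's own allocation profile as a
// pprof proto. The samples come from hooks inside the Go allocator,
// which is exactly the kind of event an external eBPF profiler
// cannot observe on its own.
func heapProfile() ([]byte, error) {
	runtime.GC() // flush recently freed objects into the profile
	var buf bytes.Buffer
	if err := pprof.Lookup("heap").WriteTo(&buf, 0); err != nil {
		return nil, err
	}
	return buf.Bytes(), nil
}
```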
G
You could probably make some assumptions, like try to infer something from the calling stack, maybe. I don't know, it's an interesting topic. But I think there are many nuances where integration with the runtime can actually help. For example, we have a hash table profiler at Google.
F
I think overhead is one dimension, and the other is how precariously you want to marry your implementation to the details of certain runtimes.
E
I have another one for next time, because we're running out of time, but maybe this has already been covered in a meeting I missed: the notion of annotations. I was going through KUtrace, that thing I linked in the docs, and they kind of interleave annotations, or really just events with some arbitrary text or data, into the actual profiling data stream. You might have memory allocation events or things,
E
you know, you could also have cases where you might want to pin a point in time, maybe to deployments being rolled out, or somebody notices something bad when you're debugging and you want to put a pin in a particular profile so later on you can find it, or to measure the start and the end of something interesting. That might be interactive or might be automated, right, like a scale-up, scale-down, or any number of other events. Is that already covered by pprof?
G
Yeah, labels, which are kind of annotations. Yeah, right, those aren't.
B
Well, since we're at time, let's maybe follow up in Slack or something. I just want to say one last time: I will add a link to the logs OTEP and then also just the beginning of a list of fields.
B
Please add to it before the next meeting, and we can discuss further then. Other than that, thanks everybody. I have to run, but feel free if you guys want to stay and chat. All right, see you later.