From YouTube: 2023-03-23 meeting
Description: OpenTelemetry Prometheus WG
D
Not anytime soon. I am literally in my parents' attic, if you can't tell.
D
Kind of like the other direction: my lease just ended in Oakland, so I'm, you know, floating around homeless for a little bit until I figure out where I will move.
E
When I was working before OpenTelemetry, when we had OpenCensus, which fed into OpenTelemetry in 2019, we had some members on the team at Google who'd worked at Uber and had joined, I don't know, at some point before the IPO. They left just before the IPO, but they still had shares in the IPO, and I've made that same comment of, like, what size of yacht are you inspecting?
D
Yeah, but no, I mean, we're excited. I think at the end of the day the biggest thing that we were, or are, excited about is just what it means for profiling: I think we'll be able to do a lot more with profiling, with more time and resources.
D
Okay, no, no, I've got all sorts of flame graph swag. We'll be at KubeCon; you'll definitely see it. I'll be there. Nice, cool. Well, let's see what time we've got: 11:04. I know some people will be coming in later and some people will be leaving early today, so I think we can go ahead and get started. So yeah, welcome back, everybody.
D
I put some stuff on the agenda for today, and I'm interested to get all of your perspectives on it. We've talked about a lot of different potential paths that we can go down, and I forget who it was last week, but somebody had mentioned that it's getting to that time where it's like...
D
Okay, wait: where we actually put pen to paper, or, I guess, code to GitHub, and try to actually build some sort of prototype in the direction that we think is the most feasible.
D
It's obviously clear that we're not going to be able to make everybody happy, but the idea is just to make some progress and move forward, and then kind of build from there. We don't have to solve everything in one go, but rather just get the train moving in the right direction and then improve things from there. So the idea, or at least my thought for today, was that we could just talk it through with everyone here.
D
For everyone who's been a part of these meetings a lot: just what that might look like.
C
So I guess I have some thoughts, if you're interested, about a path forward. That comparison (again, thank you, Christos, for doing it) shows that there is a 2x difference between stateful and stateless, which is obviously significant. At the same time, 2x is not a deal breaker. We can probably go with the stateless approach initially, knowing that we need to improve on it, leaving the door open to go stateful as a form of network traffic optimization. I think that's totally doable: you can design the protocol in a way that the messages are initially self-contained, but later you just optimize the way you write. You let the receiver maintain the state, the dictionary; if you want to, you maintain a persistent connection; maybe you use gRPC streams.
C
You omit the stack traces and just include the references, and there you have that stateful optimization in one leg of the connection. And I think this is important: when that traffic hits the collector, you terminate the state. The collector reconstructs the full message, and inside the collector you have self-contained messages traveling. You can process them as you wish (filter, modify, whatever you want to do), and then the collector, sending to the back end, can also do that same optimization.
C
If necessary, it can compress and omit the stack traces when necessary and just include the references. That could be, I guess, version two of the protocol, and you can even make it fall back gracefully to the first version if necessary. So I think that's a very good approach.
C
With the stateless protocol you will have your foot in the door in OpenTelemetry, in the collector, and later you just expand to a persistent connection to do the optimization that you want to do, and you'll still maintain the ability to process inside the collector. It's maybe, I guess, less efficient than you could be with a design fully relying on the stateful mode; maybe you can optimize for that, but I feel like the difference is probably going to be negligible.
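The protocol evolution described here can be sketched roughly in Go. Everything below (type names, fields, the dictionary mechanics) is a hypothetical illustration of the idea, not the actual OTel protocol: version 1 messages carry stack traces inline and are stateless; the version 2 optimization sends only references; the receiver "terminates" the state so everything downstream of it is self-contained again.

```go
// Sketch of a stateless-first profile message with an optional
// reference-based optimization, and a receiver that terminates the
// connection state. All names are illustrative, not the OTel protocol.
package main

import "fmt"

type StackTrace struct {
	ID     uint64
	Frames []string
}

// ProfileMsg is self-contained when Traces is set (version 1);
// the optimized form (version 2) sends only TraceIDs and assumes the
// receiver has already seen those traces on this connection.
type ProfileMsg struct {
	Traces   []StackTrace
	TraceIDs []uint64
}

// Receiver holds the per-connection dictionary that makes references work.
type Receiver struct {
	dict map[uint64]StackTrace
}

func NewReceiver() *Receiver { return &Receiver{dict: map[uint64]StackTrace{}} }

// Reconstruct terminates the state: it remembers inline traces and expands
// any reference-only message back into a fully self-contained one, so
// in-collector processing never needs external state.
func (r *Receiver) Reconstruct(m ProfileMsg) (ProfileMsg, error) {
	for _, t := range m.Traces {
		r.dict[t.ID] = t
	}
	out := ProfileMsg{Traces: append([]StackTrace(nil), m.Traces...)}
	for _, id := range m.TraceIDs {
		t, ok := r.dict[id]
		if !ok {
			// Graceful fallback: the sender can resend self-contained form.
			return ProfileMsg{}, fmt.Errorf("unknown stack trace %d", id)
		}
		out.Traces = append(out.Traces, t)
	}
	return out, nil
}

func main() {
	r := NewReceiver()
	// First message is self-contained and primes the dictionary.
	full, _ := r.Reconstruct(ProfileMsg{Traces: []StackTrace{{ID: 7, Frames: []string{"main", "handler"}}}})
	// Later messages can send just the reference.
	opt, err := r.Reconstruct(ProfileMsg{TraceIDs: []uint64{7}})
	fmt.Println(len(full.Traces), len(opt.Traces), err)
}
```

The same expansion could run again in reverse on the collector's exporter, re-applying the reference optimization toward the backend if both sides support it.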
D
Yeah, I think that makes a lot of sense. I'm curious about your thoughts. I know you said you have to leave earlier today, Felix, and I know you've also done a lot of the legwork, or the most legwork, with prototyping things thus far as to how it might fit into a specialized OTel format. Do you have anything you want to share, any thoughts on your ideas for how it'd be best to move forward?
B
So generally, I think I'm also more eager to explore stateless approaches first and put the stateful stuff on hold. On the 2x number that was quoted for the efficiency gains: I think that was over, like, a short aggregation window.
B
But if I also remember correctly from the last meeting, once you go to a 60-second aggregation window, then the difference was even lower, like 20 or 30 percent or something. And I believe that number came up in the previous meeting, where the group here, I think, generally sort of aligned on the idea that a stateless approach is probably the right set of trade-offs for OTel.
B
My recollection from the last meeting was that we're still not 100% aligned on how much support, if any, we want to give to existing formats like pprof and JFR, or if we want to just take pprof as the starting point. It's already in protocol buffer format; we could just copy and paste it in and then start tweaking it for what we might want to change for OTel. That, I think, is currently where I'm not sure yet what direction we should take when we start writing code on GitHub.
D
Yeah, to me it seems like that was sort of where we initially started: the idea of kind of starting with pprof and tweaking it to be better suited for OTel. As far as just prototyping something goes, that seems to me like the best place to start and build from there.
D
I also know that we've talked a lot about the JFR piece, and the fact that JFR may be hard to support in some way. But I don't know; to me it seems like it's in a similar position to pprof, and that we should just try it. I would say it's worth trying to support in the beginning.
D
If nothing else, then for what Tigran said of just getting our foot in the door, and not alienating the whole Java community from these efforts by making them sort of second class to pprof, in that sense. I don't know; that's my perspective. I know the Elastic folks were, I guess, hoping for the stateful route. I'm curious what you all think about the best path forward, if any.
G
Christos, I don't know if you or Florian have any kind of more nuance on that. For me, the reasoning that you all have laid out makes sense from an OTel point of view.
G
The few big questions for me are, as you mentioned, Brian: whether it's fine that JFR is handled via, I don't know, reverse engineering the protocol and then implementing whatever format or decoders for it that we can come up with. As long as that's fine, then that sounds reasonable. The other big questions were:
G
I think Tigran and other people had asked what kind of filtering we actually want to support on stack traces, like filtering, transformations, that sort of thing. And then, for me, the other one is around symbols and the deferred sending of symbols.
G
How we want to handle that in the protocol: is it the kind of thing where we don't really talk about it at all, and we just tell people, look, you can exclude the symbols from your OTel format and then just have some sort of side channel where you send them? Or do we actually want to try and standardize how those delayed symbols are sent as well? I don't know if anybody had a chance to think about that.
B
I have a quick thought on that. The way that pprof works right now, as a protocol buffer format, is that all the top-level messages are optional; you can include them or not. So you could basically send the same profile twice. You'd need some unique identifier for the profile, but the first time you send it without symbols, and the second time you just send the string table, essentially, later on. I don't know how well that would work in practice, but that's roughly how I would have imagined playing it out in my head.
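The "send the same profile twice" idea could be sketched as below. The `Profile` type is a toy stand-in for pprof's schema (which keeps all names in a string table referenced by index), and `Strip`/`Merge` are hypothetical helpers, not anything pprof actually ships:

```go
// Sketch: split a pprof-like profile into a symbol-free skeleton sent
// first and a deferred symbol message sent later under the same ID.
// This Profile type is a toy stand-in, not the real pprof schema.
package main

import "fmt"

type Profile struct {
	ID          string   // unique identifier shared by both halves
	StringTable []string // pprof-style: all names live here
	LocationIdx []int    // samples reference names by index
}

// Strip returns a symbol-free copy plus the deferred symbol message.
func Strip(p Profile) (skeleton Profile, symbols Profile) {
	skeleton = Profile{ID: p.ID, LocationIdx: p.LocationIdx}
	symbols = Profile{ID: p.ID, StringTable: p.StringTable}
	return
}

// Merge re-attaches the deferred symbols; the IDs must match.
func Merge(skeleton, symbols Profile) (Profile, error) {
	if skeleton.ID != symbols.ID {
		return Profile{}, fmt.Errorf("id mismatch: %s vs %s", skeleton.ID, symbols.ID)
	}
	skeleton.StringTable = symbols.StringTable
	return skeleton, nil
}

func main() {
	p := Profile{ID: "abc", StringTable: []string{"", "main", "handler"}, LocationIdx: []int{1, 2}}
	skel, syms := Strip(p)
	merged, _ := Merge(skel, syms)
	fmt.Println(len(skel.StringTable), merged.StringTable[merged.LocationIdx[0]])
}
```

Whether the real pprof message layout survives this kind of round trip is exactly the open question raised in the discussion; the sketch only shows the shape of the idea.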
B
Yeah, but I'd have to double-check the details on whether that would really work out. In theory, you could basically strip the symbols from the pprof, and then later on send the symbol part, but yeah, maybe more design work is needed there.
D
Jps, is that okay? What's up?
H
Hey, I do think pprof supports that. I wonder, actually: the point about uniquely identifying the message is interesting. I think that's a great point; I don't have a response about that. However, the pprof format has all the mapping information, all the stuff you would need to do, like, the symbolization on the back end. I just don't quite understand the question.
H
What's on my mind right now is: if the information is transmitted with no symbols, then you're not going to resend something off the originating data source with symbols after that. I think you'll have to reconstruct it completely on the back end, which should be fine if you've got that set up. I think at that point you won't need a message for symbols; you'll just need a back end that's capable of tapping into the symbol data, however you've got it, and it'll render and merge it at that point. That would be my baseline on it. But if there's another use case, I could defer; that's how I would mentally model that setup.
B
Yeah, you bring up a good point. I think one thing I just got totally wrong is that you wouldn't have the same ID, the profile ID, to send from the place where you sent the symbols from, because the symbols you would probably want to send to the profiling back end from a CI or something similar, where you take the symbols and upload them. So the identifier would have to be sort of a unique hash of the binary that the profile belongs to, or something like that.
B
But I think the question is: could OTel standardize that? Like, should OTel standardize how the symbols get sent, so that the backend can take a profile without symbols and symbolize it, that sending mechanism? And I think so, yeah, because if we want to support sending unsymbolized profiles, OTel would otherwise really only be providing half a protocol, where there was no defined way to get the symbols.
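The "unique hash of the binary" join key suggested here might look like this minimal sketch. In practice an ELF build ID note would likely be preferred where available; the content hash below is only an illustrative fallback, and the function name is invented:

```go
// Sketch: derive a stable identifier from the binary itself, so symbols
// uploaded separately (e.g. from CI) can be joined with profiles at the
// backend without the two senders ever sharing a profile ID.
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// BuildID hashes the binary's bytes. A real implementation might read the
// ELF build-id note instead; hashing is the illustrative fallback here.
func BuildID(binary []byte) string {
	sum := sha256.Sum256(binary)
	return hex.EncodeToString(sum[:])
}

func main() {
	bin := []byte("example-binary-contents")
	// CI uploads symbols keyed by BuildID(bin); the profiler stamps the
	// same ID on every profile from that binary, so the backend can join.
	fmt.Println(BuildID(bin) == BuildID([]byte("example-binary-contents")))
}
```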
C
I think that can be super useful for the other parts of OpenTelemetry as well. So you may want to reference a particular place in the source code from your span, or from your log: reference a symbol that is emitting that span.
D
Okay, and you also mentioned the filtering question. We've kind of danced around it a decent amount at this point. I think it was the actual Go maintainers who were talking about a similar topic, and it seems like the general consensus that we always land on is that filtering...
D
I guess, filtering data that's actually in the profiles themselves seems like more trouble than it's worth. I don't know, maybe that's a hot take; I'm curious if anybody does disagree with that. But basically, filtering a function name or something like that out of the actual profile itself is something that people have definitely said they want, but in reality it's just not.
B
Yeah, I think that filtering and obfuscating the names of functions is a very, very rare use case, and I agree. The thing that I think would require filtering is pprof labels: if we support a mechanism similar to pprof labels, where the developers of the program can put arbitrary labels on things that will be picked up by the profiler, those could contain personally identifiable information, and it wouldn't be a good idea to put that information in the labels.
B
Then there is a case here for scrubbing that data on the collector side, so that data also cannot be sent out, because if you can symbolize separately, you could also tell people to send the symbols separately, at least for native code. But yeah, the pprof labels might be the thing that actually needs this.
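A minimal sketch of that collector-side scrubbing: drop pprof-style labels whose keys may carry personally identifiable information before the data is exported. The denylist keys and function names are invented for illustration, not part of any existing collector processor:

```go
// Sketch of a collector-side label scrub for pprof-style profile labels.
// The denied keys here are illustrative examples, not a standard list.
package main

import "fmt"

var denied = map[string]bool{"user_id": true, "email": true}

// ScrubLabels returns a copy of the labels with denied keys removed,
// leaving the original map untouched.
func ScrubLabels(labels map[string]string) map[string]string {
	out := make(map[string]string, len(labels))
	for k, v := range labels {
		if denied[k] {
			continue // drop potentially identifying labels
		}
		out[k] = v
	}
	return out
}

func main() {
	fmt.Println(ScrubLabels(map[string]string{"email": "x@y", "endpoint": "/api"}))
}
```

Note that this only addresses labels; as discussed above, a team paranoid enough about symbol leaks may reject any filter-based guarantee and insist on on-prem deployment regardless.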
C
So I just want to clarify why I'm pushing for this. Whatever you do inside the collector, the filtering or modifying of the data, whatever you decide to do: you write a processor to do that work, and that processor operates on pdata, the structure, the representation, the data model. That data model makes it easier or harder to do a specific type of processing. So, in my opinion, the design of the data model should be driven by the use cases.
C
What you want to do with the data should drive your decisions about what the data model looks like, and what that looks like should also be an input to what you want it to look like on the wire, because if they are wildly different, then you have to do the conversion, and that's costly. So: it goes on the wire, then you put it in memory when you receive it, then you operate on that thing, and what you do with it depends on what you think the use cases are.
C
So you should work backwards from the use cases, to the pdata in memory, to the wire format, which is what you want your protobufs to be. I think it's very difficult otherwise: if you disregard your use cases, you will design something that works nicely on the wire, but then you may have problems with writing processors that work efficiently on that data. It will probably be possible, it's not impossible, but it will probably be inefficient to do.
D
Yeah, and I think what Felix mentioned seems like the most reasonable use case for filtering, and the one that is also the most practical to implement: you don't have to dig through the actual profiles themselves; you kind of have access to the labels a little bit easier, I believe. It seems like something that would be reasonably easy to add, being able to filter labels or tags.
D
I don't know; I feel like we could just start with that, and I imagine that will cover most of the use cases and make most people who do mention filtering in some capacity happy. We kind of just say we're not going to support the rest for now; we'll treat it as a separate, later iteration. If people really do come in and start saying they want to filter something inside the profile, then I guess we figure it out then, but I feel like we could just start with that.
D
And then, I guess, the other part was: at the same time as you could filter things out, you could also, I suppose (we had discussed this before), be able to, like, add tags or something via the collector.
C
Yeah. I mean: do we do sampling? Is sampling necessary? I think that's one of the most important use cases for profiles. Is it necessary? Do we think it's not useful to do? And then, if you do sampling, what sort of sampling? What's the criteria for sampling? What data do you need to have available to make a sampling decision?
C
I guess it can be a specialized one; maybe the logic is very different for profiles, and that's totally fine. But the sampling processor, because it needs to make a decision, needs to see the data. If you compress the data in a way that it's not available in the message, then what does the processor need to do? It needs to maintain state somehow. So these are the sort of decisions that have consequences.
C
That tells you that the state in the collector has to be terminated: the collector needs to see the stack trace. Essentially, if you want to filter or sample by function name, and the function name is nowhere in the message to be found, then how do you make that decision?
C
So again, my initial suggestion: even if you do that stateful protocol, just terminate the state in the collector's receiver. As soon as the data hits the collector, you transform it into an in-memory representation which is self-contained; essentially, you turn the references into pointers to the real data. So the hash ID of a stack trace is replaced by a pointer to the actual stack trace that is maintained in memory in the collector.
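That receiver-side transformation could be sketched like this: wire samples reference stack traces by hash, and the receiver resolves each hash into a pointer into its own trace store, so downstream processors (filtering or sampling by function name, say) see self-contained data. All type and function names here are hypothetical:

```go
// Sketch: terminate protocol state in the receiver by resolving hash-ID
// references into pointers to in-memory stack traces. Names are
// illustrative, not the collector's real pdata API.
package main

import "fmt"

type Trace struct{ Frames []string }

// WireSample is the compact on-the-wire form: a hash reference plus value.
type WireSample struct {
	TraceHash uint64
	Value     int64
}

// MemSample is the self-contained in-memory form processors operate on.
type MemSample struct {
	Trace *Trace
	Value int64
}

// Store owns the traces the collector has seen on this connection.
type Store struct{ byHash map[uint64]*Trace }

func NewStore() *Store { return &Store{byHash: map[uint64]*Trace{}} }

func (s *Store) Put(h uint64, t *Trace) { s.byHash[h] = t }

// Resolve converts wire samples into the self-contained in-memory form.
func (s *Store) Resolve(ws []WireSample) ([]MemSample, error) {
	out := make([]MemSample, 0, len(ws))
	for _, w := range ws {
		t, ok := s.byHash[w.TraceHash]
		if !ok {
			return nil, fmt.Errorf("trace %d not in store", w.TraceHash)
		}
		out = append(out, MemSample{Trace: t, Value: w.Value})
	}
	return out, nil
}

func main() {
	s := NewStore()
	s.Put(42, &Trace{Frames: []string{"main", "work"}})
	ms, _ := s.Resolve([]WireSample{{TraceHash: 42, Value: 10}})
	// A sampling processor can now see function names directly.
	fmt.Println(ms[0].Trace.Frames[1])
}
```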
C
I guess there may be questions about whether, if you do that columnar compression, or you use the dictionaries, it may complicate writing the processors somewhat, depending on what you do with the data, versus a more, I guess, row-oriented data representation. And that's fine: if it's a trade-off you need to make, you say that this sort of processing is unnecessary.
D
Yeah. I guess my biggest worry was that we could not come up with something, or agree on something, that we could prototype that is perceived as adding enough value to be something other than just, like, proxying pprof across a collector. And so it sounds like...
D
If we were to prototype something based off of what we've discussed so far: something where the collector has, you know, basic filtering capabilities for labels, and potentially adds labels, where that makes sense.
D
Being able to do some sort of sampling, or have some sort of sampling rules. Even if we were doing that with something that just started as pprof: it goes to the collector, we do those things, and then it sends it on. To me, that already seems like an improvement over just simply sending pprof, or sending whatever format of profiling data, across to the various back ends that accept it in different formats.
D
And then, yeah, the symbols piece as well: I don't know if we would figure that out in the first go, or later. But I'm curious what other people think: does that seem like a reasonable step forward from here, one that still leaves the door open for future optimizations and future growth? Or do you think that any of that is premature at this point? Yeah, Pete.
H
Yeah, I just want to say (this may be somewhat obvious) that as we try to puzzle through this, I think there will be cases where you just simply don't have symbols, for some reason.
H
You could probably cook up a case: there's a binary that doesn't have symbols, and it's not something that the eventual user had anticipated, so they're not even going to provide, like, the symbolization information. So whatever's done should kind of be aware that symbols may ultimately not be there.
H
I was reading the Slack messages, and there's a topic, regarding filtering, where I'm going to paraphrase something I read on Slack in the channel. My intuitive response to filtering is that I don't think it's useful; I don't think anybody is interested. But there did come one response on Slack that a quant fund or related technology company would not want their symbols exposed, and I...
G
I think that was me that said that, yeah. But I kind of was using it as an example of: it may look like this person wants filtering, but they actually don't. When we spoke to them, they wanted on-prem; even if they had filtering, they wouldn't have accepted it. They're just super paranoid: the concept of a filter going wrong and then leaking a symbol would be unacceptable. Yeah.
H
That makes perfect sense. So after traveling those thought paths, that's kind of where I am at on the filtering. And I guess my comment, about symbols possibly not being there in the end, is a little bit tied into the filtering stuff. I'm just pushing the thought that if whatever you're thinking of is assuming symbols are there somehow...
H
Finally,
I,
don't
think
that's
necessarily
going
to
be
the
case
for
all
stack
traces
I
mean
you
could
just
drop
those
stack
traces.
Actually,
that
doesn't
make
sense.
I,
don't
think
you'd
want
to
drop
something
because
it
doesn't
like
paint
the
full
picture,
if
you
don't
see
like
the
usage
and
and
the
random
hex
addresses
that
you've
got
okay
thanks,
that's
what
I
was
thinking.
A
Nice. Florian, thanks. I think I agree with Pete and Sean that we have to differentiate between the two cases: filtering on stack trace information, where the information is maybe not available because of missing symbols, and filtering on additional information or metadata, like hostname, or where something is coming from.
A
Filtering on the labels and those names is straightforward, I would say, and should not be a blocker. But if there is a hard requirement that we should be able to filter on files or function names, then I have the feeling that we end up with the collector needing to have basically a direct storage attached to it. So this could be an architectural issue. And if you just say, hey, filtering on file or function is only limited, then I think it makes reasonable sense.
D
Yeah, and I think at this point the best path for it is: we start with the basic filtering that we all think makes sense, as the ones who will be involved in designing this, planning for it, and then, I guess, eventually implementing it. And then I want to hear from these...
D
I would love for these, you know, mythical people who are out there to come to this. We will eventually sort of propose something, and I would love for them to come forward and make the case for this other, way more complicated version of filtering, and explain how it's going to be worth all of the extra cost, all the extra complication, whatever. In a practical world, I just don't suspect that will happen, though I'll be interested to see it.
D
Maybe they will come forward, but if they don't, then we can sort of not be blocked by trying to account for, you know, the top one percent of people, probably even less than that, who will actually care enough to bring this up to us directly in some kind of way, or flag it in some kind of way.
D
I guess my vote here is still that we start to design what an actual prototype looks like, because I think when we first started doing this and talking about it, we were doing some benchmarking stuff and whatever, and I think we were wholly not considering how big of a piece the collector, and, I guess, appeasing the collector group, would be as a part of this. And now they are much more aware; they've come to a couple of these meetings.
D
We've gone to their meeting. I do think that there's probably some path forward where we prototype this with the collector, and kind of go from there. Yeah, I'm curious... I guess Josh just jumped in. What's up, Josh? We were just talking about, I guess, yeah, I'm curious to get your thoughts on this too. So yeah, in the first half we basically just talked about filtering.
D
Filtering the stack trace data itself, you know, function names, stuff like that, is something that, at least in the first version, I think we're generally in agreement just is not a big enough concern for enough people that it's worth blocking anything on. Filtering something more, I guess, theoretically simple, like labels or tags, is something that we do think makes sense in the first version.
D
Being able to support that makes sense; being able to add labels/tags makes sense; and being able to do some sort of sampling on the collector.
D
I guess those are the three main things, and then also keeping in mind the ability to send symbols potentially separately, or link symbols separately. Those are the main things that, at least in my opinion, if we were to prototype it today, we would start with and then kind of build from there.
I
What labels? So, labels and tags: what do you mean, and where do they live? Is this something that's on, like, the resource? Is this something that's on a particular profile, a particular piece of the profile? Just curious.
I
Yeah, I guess, if you look at any other OTLP signal, there's a resource, which is supposed to be the entity that is sending the data, with a set of labels. If the labels you're talking about are at that level, you don't have to do anything; you're fine. Instrumentation scope is the next thing, which is basically the entity that's reporting the data, the library. So, you know, if I were implementing OpenTelemetry instrumentation, I would say: here is, you know, Java Spring HTTP instrumentation for OTel. I would kind of define it like that, and I could provide attributes and filters there. So I just want to qualify: when you design this, you're not adding labels you don't need, because you should always be able to do filtering and sampling from the resource in OTel. That's just a fundamental principle.
F
Okay, so then, okay, in that case, our model would fit into that, I think. In the same way, we could have a profile that's kind of like a whole thing, with some sort of set of labels; within that there would be, yeah, some sort of scope, and it could have multiple, I don't know, label groups, whatever, with stack traces. Yeah, I hope that makes sense.
D
Yeah, I don't anticipate a problem there. Basically, it seems like the data model is close enough that I don't think there's anything substantially different about profiles versus traces that we wouldn't be able to do the same. Wherever you put hostname, or, you know, whatever region, for a trace, I feel like you'd put it at the same place for a profile, or we could try to make it so you put it in the same place for a profile.
I
Well, I actually would argue that's table stakes for OpenTelemetry. The whole reason for OpenTelemetry to have all of these signals together is that there's something that allows you to join them. That's one of the big deals: I get this observability, and I get it in a consistent way. So resource is the fundamental principle of OTel: everything is attached to it, and it's our lingua franca for saying this all came from the same source.
I
So if you didn't have resource in your protocol, you'd probably get rejected immediately; you have to have it, effectively. Instrumentation scope, I think, is more dubious; it's also newer, but I would recommend having that as well. I don't know if you've seen our protocol as it is today. Is it worth walking through it and showing you? Yeah.
D
I mean, yeah, you could; that would be great. I think that's kind of what we were talking about as we start to think towards actually prototyping something. I think the biggest lack of knowledge in this group is just, yeah, like you said, the terminology, that kind of stuff. Once we know that, it will make it so much easier for us to map things from what we're proposing to the correct terminology and, I guess, concepts.
I
I'm going to go low level, and apologies, I'm bad with Zoom, so you might lose my mic for a bit.
Okay, I'm back. So everything's implemented as a protocol buffer, but this also works as HTTP JSON as well, that sort of thing. If we look at this underneath the collector: there are three main channels right now through which telemetry can be written.
They all look almost exactly the same: I have an export request that has what is called resource-and-then-signal as a message type, and I get a repeated set of these as a bundle, and I write them via RPC if I'm using gRPC. If I'm using HTTP, basically the JSON structure of this gets dumped to an HTTP endpoint, and that endpoint is defined here: it's whatever your HTTP path is, slash v1 slash logs, for the current protocol.
I
If we want to look at a specific signal, we'll go with trace, because we already mentioned trace. What this looks like is: we create (let's ignore this a sec) a message called resource-and-then-signal, which has a resource that is the source of the signal, like the process, the container, whatever. There's a set of key-value attributes; we don't call them labels or tags, we call them attributes in OpenTelemetry, just to be different.
I
So people know that we're different, I guess. To be fair, they used to be called labels, but that's a long story. Anyway, it's a bunch of key-value pairs that identify where something comes from, and then underneath that we have scope-and-then-signal. There's this notion of a schema URL.
I
This is for signals where we can have semantic conventions; it tells you the semantic conventions under which these signals were generated. So we have semantic conventions that say, if you're running on Kubernetes, you should have this set of key-value pairs to denote where you are: you should have k8s.namespace to denote you're in this namespace, k8s.pod.name to denote your pod, that sort of thing. That's what the schema URL is. Okay, underneath scope:
I
Instrumentation scope: I'll show you that in a second. Well, actually, I can show you it now, just so you're clear. It's basically a name and a version of a library, generally, and then a set of key-value-pair attributes that you can attach in addition. The idea here (this is something OTel kind of encoded, and I don't see it taken advantage of a lot) is that effectively you can understand if you have version mismatches in your telemetry collection and say: oh, I'm getting this attribute conflict; let me go check my instrumentation scope and see if I have a version mismatch or I have to bump something, like, what's going on in production. That's what that's about. Okay.
I
So then, if we come back to trace: underneath that, under scope spans, we just have a repeated Span. Again, there's another opportunity for you to say whether or not these spans are under a different schema URL. This schema URL applies to the resource and, if no other one exists, it applies to things below it. This one applies to the spans and the instrumentation scope and, if no other exists, it applies to things below it. And then Span: this is where we actually annotate our telemetry.
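The hierarchy being walked through, with a schema_url at both the resource and the scope level, looks roughly like this in opentelemetry-proto's trace.proto (abridged; the full Span message is omitted):

```protobuf
message ResourceSpans {
  opentelemetry.proto.resource.v1.Resource resource = 1;
  repeated ScopeSpans scope_spans = 2;
  // Applies to the resource and, if nothing below overrides it,
  // to everything underneath.
  string schema_url = 3;
}

message ScopeSpans {
  opentelemetry.proto.common.v1.InstrumentationScope scope = 1;
  repeated Span spans = 2;
  // Applies to the scope and these spans.
  string schema_url = 3;
}
```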
I
If we look at logs, you're going to see it be very, very similar: we have ResourceLogs with a resource, we have ScopeLogs with log records, and then log records are the underlying thing that gets written. So underneath LogRecord you have a whole bunch of information. You can see that you can do enums if you want; we actually allow that in our protocol. We can actually do bit flags and things like that.
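The enums and bit flags being referred to are visible on the LogRecord message (abridged from opentelemetry-proto's logs.proto; a deprecated field is skipped, which is why the numbering has a gap):

```protobuf
message LogRecord {
  fixed64 time_unix_nano = 1;
  SeverityNumber severity_number = 2;  // an enum, e.g. SEVERITY_NUMBER_WARN
  string severity_text = 3;
  opentelemetry.proto.common.v1.AnyValue body = 5;
  repeated opentelemetry.proto.common.v1.KeyValue attributes = 6;
  uint32 dropped_attributes_count = 7;
  fixed32 flags = 8;                   // bit flags, e.g. trace flags
  bytes trace_id = 9;
  bytes span_id = 10;
}
```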
I
If you want to encode that in your protocol as well, there's precedent for that. And then, finally, I'll just show you metrics, just so you get the full gist. Metrics again start with ResourceMetrics, which then has ScopeMetrics, which then has repeated Metric fields.
This one is interesting, it may be of interest to you: there's actually a weird hierarchy now in metrics, where every single point can have a different piece of metadata around it.
I
So maybe I'll just show that. The data field is a oneof, so a particular metric can be a gauge, a sum, a histogram, an exponential histogram, or a summary. So you have an opportunity here, with your profilers, to have a oneof of those.
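That oneof lives on the Metric message in metrics.proto (abridged; the gaps in field numbers are deprecated fields in the real file):

```protobuf
message Metric {
  string name = 1;
  string description = 2;
  string unit = 3;
  // Exactly one data shape per metric.
  oneof data {
    Gauge gauge = 5;
    Sum sum = 7;
    Histogram histogram = 9;
    ExponentialHistogram exponential_histogram = 10;
    Summary summary = 11;
  }
}
```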
If that makes sense, like, my oneof could be: this is a stack trace, this is... I remember JFR was weird, I don't know if that's still in scope, so I don't know if I'm speaking out of my ass here, but look, you could have a oneof of, you know, a stack trace, a JFR thing,
I
a pprof, whatever, if that makes sense to do. There's precedent for that as well. And then underneath, each of these metrics is different from the other things; this is the number of time series.
Okay. Under common, important things to reuse as you define your protocol: you have the notion of AnyValue.
I
AnyValue is basically a kind-of-anything structure, and our key-value pairs are basically strings to AnyValues. So instead of a label or tag mapping string to string, we actually allow any particular value.
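AnyValue and the key-value pair built on it are defined in common.proto roughly as follows, which is what allows attribute values to be more than plain strings:

```protobuf
message AnyValue {
  oneof value {
    string string_value = 1;
    bool bool_value = 2;
    int64 int_value = 3;
    double double_value = 4;
    ArrayValue array_value = 5;      // nested list of AnyValue
    KeyValueList kvlist_value = 6;   // nested map-like structure
    bytes bytes_value = 7;
  }
}

message KeyValue {
  string key = 1;
  AnyValue value = 2;  // string -> AnyValue, not string -> string
}
```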
D
I do have a question about the... I guess, yeah, first of all, thanks, that's definitely helpful. Felix is on mobile; he mentioned that he did somewhat of a proto, or I guess a high-level structure, of what this could look like with a format that we would propose.
D
Maybe you can talk about that next time. But I guess I'm curious: is the first step basically mapping what we've been talking about to a proto like this? Let's say we are all in agreement and ready to move forward: is the first step, you know, mapping it to this? And if so, then what is the next step that you would want to see?
D
Is it that we get this approved and then we start sending data in this format, or should we also prototype sending data in this format? I guess: what would the next one, two, three steps look like, if we were to move forward?
I
So, number one is defining the proto. Number two is (and I can show this in a little bit) that the Collector has an abstraction for the OpenTelemetry protocol called pdata, right? So you're probably going to want to expose your protocol inside of pdata so you can actually manipulate it in the Collector, and you want to expose it in a way that lets you do those use cases. And then, basically, we probably want a baseline set of benchmarks for this data.
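Mirroring the other signals, step one could start from a skeleton like the one below. This is purely a hypothetical sketch: every message name, field, and payload variant here is invented for illustration and is not anything the group has agreed on.

```protobuf
// Hypothetical profiles skeleton, patterned after ResourceSpans/ScopeSpans.
message ResourceProfiles {
  opentelemetry.proto.resource.v1.Resource resource = 1;
  repeated ScopeProfiles scope_profiles = 2;
  string schema_url = 3;
}

message ScopeProfiles {
  opentelemetry.proto.common.v1.InstrumentationScope scope = 1;
  repeated Profile profiles = 2;
  string schema_url = 3;
}

message Profile {
  fixed64 start_time_unix_nano = 1;
  fixed64 end_time_unix_nano = 2;
  repeated opentelemetry.proto.common.v1.KeyValue attributes = 3;
  // A oneof over payload formats, analogous to the Metric data oneof.
  oneof data {
    bytes pprof_payload = 4;  // e.g. a serialized pprof profile
    bytes jfr_payload = 5;    // e.g. a JFR recording chunk
  }
}
```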
I
So, you know, the fundamental question that we asked (if I stop sharing and we go back to what Tigran had here, right) is: what sort of filtering and processing do you need in the Collector? So I think: defining the protocol, and making sure the protocol can go into a back end. This could be Pyroscope, or whatever new Grafana thing you're calling the combined back end between Pyroscope and that; that's a fine one to choose. But let's make sure it can go into a back end somewhere, preferably open source.
I
So we can all see it and touch it and use it. And then let's do a prototype of what a Collector manipulation would look like, whatever use case we call out as having been important for over-the-wire processing. Maybe it's sampling, saying, you know: I only want profiles where I have a method that took longer than x.
I
That kind of thing. I don't know what you'd pick, right, but let's get it into the Collector so that we can do one of those manipulations. Then I think you're in pretty good shape. So, basically: an end-to-end working example, an idea of performance and whether or not that performance is good enough, and then a prototype of the in-process manipulation that you'll need.
I
If you want a good rule of thumb here: look at what they're trying to do with, what is it, the Apache Arrow OTLP work. That's very aggressive, because they're going for a stateful protocol, but if you look at what they built, and kind of the case they built, the shape of that is kind of what you want to do here.
D
Yeah, that's very helpful. I think that gives us a good enough picture, and I guess we could probably go back and look at the equivalent version of all those steps for past signals. I don't know, we can kind of dig into it, or maybe we can ask around and find, I don't know, good PRs, to be like: yo, if your first PR looks like this, you'll be in good shape. And I guess we can kind of go from there.
I
Yeah. I think the thing that we have to figure out, that we haven't done yet, is this: we have stabilized the tracing part, we've stabilized the logging part, we've stabilized the metrics part. In the past, what we allowed was those directories (you saw how there was like a slash-traces directory, right?): we allow individual directories to be unstable, and we allow that gRPC service that is defined to also be unstable, kind of individually.
I
So the one thing where we might get pushback (you can propose it directly against the proto directory) is: should your proto evolve in place in our proto directory, or should we create a fork of it, evolve the proto in a fork temporarily, and then submit, like, the 1.0? We've never actually had to deal with that yet for a new community, so you're the first guinea pigs here. Sorry, and we'll figure it out.
D
Well, no, that's great! There's more allowance for messing stuff up if you're a guinea pig. Yeah, I mean, I think that's the hardest part: making sure we take a first step that will get us momentum in the right direction. And that's kind of what we're trying to move towards as soon as we can. So I guess, yeah, I don't know, yeah.
D
Does anybody else have any questions as we start to think about that? Cool. Well, yeah, I appreciate you taking the time to explain all this to us. If nobody else has any other questions, then I guess we can kind of make that the agenda item, or the to-dos, between now and next week: I can look into that,
D
trying to find good PRs to, you know, look at as examples, and, yeah, I don't know, come up with an actual action plan that involves writing some code and prototyping some stuff. Yeah.