From YouTube: 2021-11-18 meeting
B
Okay, we're four minutes in, five minutes in. I guess, Frederick, do you want to take it?
D
Yeah, so I guess in a way you tell me what amount of detail we should go into, and whether we should give people a primer on continuous profiling. I know there have been other sessions here about continuous profiling, so I don't know how much of an intro we need to do, or whether we should just dive right into it and talk about how we're using eBPF. It's the first time I'm joining this meeting.
D
So what would be most useful to this group?
D
Yeah, hey, so I'm Frederick, I'm the founder of Polar Signals. My background is that I've been in the cloud native ecosystem for quite some time. I joined CoreOS five or six years ago as one of the first people here in Berlin, and I've pretty much always worked on all things Prometheus and Kubernetes.
D
Then at the end of last year I still saw this market opportunity for continuous profiling, and I felt like no one was really solving it to the degree that I was wishing for, so I made it my full-time job. The quick primer on continuous profiling is that profiling has been around since forever.
D
Ever since software engineering has existed, basically. I'm going to assume here that everyone knows what profiling is, but for a very long time the problem was that capturing profiling data was really expensive. That changed through sampling profiling techniques, where we're not tracing everything anymore, we're just looking at samples.
D
Let's say, for the sake of simplicity, we're only looking at stack traces 100 times per second, as opposed to literally recording everything the program does. That was a huge reduction in the overhead of capturing profiling data. And then, the reason why we're here, another jump that we were able to make was using eBPF and doing all of this with it.
D
We built an eBPF-based profiler, and the thing that we always like to start with is that when we started Polar Signals, we weren't really looking to build something in the eBPF space. We wanted to create a really useful continuous profiling tool, and so in the beginning we actually didn't care about the collection techniques at all.
D
The Go runtime already had really amazing tools available for us, so at the beginning we were primarily concerning ourselves with storage and querying. Only at a later point, sometime earlier this year, did we go into the eBPF space, because of feedback that we got on our product. I guess it's the typical feedback that everybody always has with every observability product.
D
They want to do exactly nothing and get all the benefits, and as it so happens, that's usually very hard, right? But profiling data is actually really close to what you can observe from the operating system, as many of you probably already know, so it was actually a really great fit.
D
The way our profiler works (I can show some slides as well, but I think you'll get the idea) is that it discovers all of your containers in your Kubernetes cluster and then attaches profilers to each cgroup. That way we already partition where this profiling data is coming from, and it allows us to slice and dice the data in a really nice way, because we attach metadata literally the same way as Prometheus does.
D
Not really a surprise, I guess, given our background. This allows for really cool workflows where we can go from a Prometheus metric, let's say a latency spike in a Prometheus metric, to a CPU profile at the same point in time; because we're attaching the same metadata, it's quite easy to jump between the two.
D
Yeah, I think that's the super quick, high-level intro of what we're doing. Maybe one thing that's not so specific to eBPF, but that we're excited about, is the open standard that we're using for everything: we use the pprof format as the exchange format for everything, so our eBPF profiler produces pprof-compatible profiles.
D
It sends those profiles to the storage, and the storage can also accept any pprof-formatted profile, so it doesn't have to come from our profiler. And then any query that you do in the Parca project, in the server, can be downloaded as a pprof profile again. So we're trying to make it pprof-compatible from all sides and angles.
E
Good question, going back to the eBPF and the cgroup stuff. With eBPF you typically get something system-wide: you set a timer and you're just going to hit, like you said, every 100 milliseconds or whatever it is. So are you doing the same thing, profiling system-wide but then filtering out the stuff that's not in cgroups? Is that the approach?
D
We're actually doing it the other way around: we discover the cgroups first and attach our eBPF programs to those cgroups.
B
I have a question regarding cgroups: do you have to track those as the system is running as well? I'm assuming that as a new cgroup appears, you'd want to, you know, filter it and maybe instrument it.
D
Yeah, so the way that we do it, effectively, is that we watch the Kubernetes pods that are created, we discover through the container runtime interface which containers those are, and then we discover the cgroup behind each of them.
D
Well, yes, exactly. Basically, the Kubernetes API already declares the pod even if the process isn't running yet, even if the container is still being prepared, so you can actually already attach to that cgroup: the cgroup already exists even if the process maybe isn't there yet.
B
Part of the reason I'm asking, maybe I'll ask the root of the question: is there other eBPF data, data that is not profiling data, that would be useful for the profiler collector? For example, information about cgroups from eBPF? You just said that you're getting that information from your orchestrator and not from eBPF, but is there other eBPF data that could help you run a profiler? We're thinking about modularity of eBPF infrastructure.
D
So there's one thing, and actually it's cool that Kemal is on the call, because I guess he can talk about it himself as well.
D
But one thing that's really challenging with just grabbing the stack traces from eBPF is that we run into the problem of stacks that need to be unwound, and I imagine this is a pretty common thing. Something that Kemal is working on right now is to recognize that there's a binary running that needs stack unwinding, and then to pass that along.
E
And we should come back to Jonathan's question as well, because I'm also interested in that. Just to reiterate what Jonathan was saying: in this working group we're trying to figure out what's common ground between all the different eBPF-on-Kubernetes projects that are going on, right?
E
So if we can find something that we're all pretty much doing all the time, it would make sense to have a standard for doing it, like getting metadata about Kubernetes pods, services, all that sort of stuff. It would just make sense for us to share all that infrastructure.
C
Okay, great. So what we do from user space is: we discover the running processes, find the mapped binary from the mappings, and check whether there is a section called .eh_frame, an exception handling section, in that file.
C
So we parse that frame table, and then, at least we haven't done this yet, but we're trying to, we pass that table to the kernel space so that we can unwind the stack there if it's needed. Right now the kernel can actually unwind the stack if you have frame pointers, or if ORC data is already in the binary, but I guess there was some work.
C
They tried to unwind the stack using the DWARF information, because what we have in the exception handling section is also DWARF-formatted, but they failed, so that isn't supported in the kernel right now; it only supports frame pointers. What we're trying to do is actually unwind the stack ourselves using that DWARF information.
E
And with that .eh_frame data from the ELF binary, do you have enough BPF instructions and memory and everything to actually do all that processing on the BPF side?
C
Yeah, the .eh_frame data is actually a table that tells you where to jump: here is the frame pointer, go there. From that we try to find out the return address of each function in the stack. So for everything in the .eh_frame, what we need to access is a couple of registers in the kernel space.
D
Yeah, I mean, we don't really capture any of that through eBPF; all of that happens outside. Since we attach the eBPF program to the cgroup, you can think of it as us essentially running one profiler per container, and that way we infer all of this metadata from that.
B
For example, and I'm sorry that I don't understand the specifics of the .eh_frame: do you need that per process or per container? And if per process, how do you discover whether there's a new process that you might need to instrument? The reason I'm asking is: would it be helpful if you got a notification on every process, that sort of thing? What type of other infrastructure would be helpful to you?
C
We check whether those processes already have that section, and then try to build up a map and pass that on. Right now, even in the eBPF functions, we get a stack trace back, but it's not unwound, and it doesn't tell us whether it's complete; maybe there are some libraries already linked to that binary, and maybe we only have some partial stack traces.
C
We still get a successful stack trace back, but for starters it would be super nice to know that we actually don't have any stack that was fully unwound. Then we would definitely know that for that process we should check this information. Right now, what we're planning to do is: since we know that if there's an .eh_frame attached to that binary, that probably means stack unwinding needs to be done, we check that per PID.
C
If we know we have .eh_frame data for a process, we just unwind the stack using that data rather than trusting what we get from the kernel.
E
You know, containers, cgroups, all of that stuff coming into existence is probably standard metadata that we would all want. The .eh_frame stuff, I think, is very specific to profiling, though it would still be based on that same notification.
D
Right, that's what I was going to say: whenever you're working with stack traces, it's going to be interesting, because a stack trace is essentially useless if you can't unwind it. It just looks like trash, basically, and you can't symbolize it. So yeah, whenever you're working with stack traces, exactly that has to be taken into account. Cool.
B
How about other mechanisms that you use? For example, how do you transfer information from kernel space to user space? I think many of the collectors that I've seen, I'm actually not sure about Pixie, use shared memory rings between kernel space and user space to transfer data, these perf rings. Does yours use those, or maps, or is there shared infrastructure there?
D
I mean, I'll be honest: we did the first thing that worked, and there's no particular reason why we're doing it this way. Right now we're using maps, but I think the ring buffers could work just as well for us. The way that we're using it, essentially, is that we let the collection happen for 10 seconds, and we have two maps, one with the stack IDs and their occurrence counts.
D
The reason for that is essentially that this is literally what the pprof format is: we only do a bit of rearranging of the data and we get exactly the pprof format, so that's super convenient for us. If you have experience that a different type of data transfer is better for this type of use case, you know, we have no emotional connection to any of this.
E
Yeah, we did look into it a little bit. I think one of the two data structures you're talking about has to be essentially a map, but the other one could be converted into a perf buffer; our architecture is very similar, so I know exactly what you're talking about. There can be a slight performance win by switching it to a perf buffer, but it's very, very modest. We actually have a blog post that talks a little bit about this stuff.
E
About the data structures, and how much performance you can get by switching from a map to the perf buffer, yeah.
D
Nice, I'll check that out. One other thing that I was also considering: we use libbpfgo for this stuff, and one of the nice things that libbpf offers is actual batch operations, and one of the really nice ones is "read everything and delete everything that you've read", which is literally the operation that we're doing.
D
So I wonder, if we end up using those batch operations, whether there's still going to be that same perf difference. I guess it's just something that we need to profile ourselves, yeah.
E
Essentially, once you read it, the event is gone, so it's kind of self-deleting, which is nice. But we also ran into the same sort of thing: we used BCC, and BCC actually wasn't very efficient at reading and clearing maps, so someone from our team was doing some optimizations on the BCC side, very similar to what you're talking about, just to make sure that clearing is done in a very efficient way, because it can get expensive otherwise, yeah.
D
We clear both of those maps, and that's not strictly necessary. Most of the time we observe the same stack IDs, just because long-running software tends to do the same job; it's just the nature of it, as I'm sure you know. So we could also observe whether we're actually missing stacks at some point and only then clear the map, or something like that. But it's just not a performance issue that we've really encountered yet.
E
Yeah, we're in the same boat, I think. We've thought about the same sort of questions and decided it's not high enough priority right now compared to other optimizations. But bringing it back to the framework idea: if we did provide a common framework for notifications of Kubernetes-type events directly from BPF, like process creation, container creation, those sorts of things, could you see a world where that would be useful?
D
I could also imagine that maybe one day we'll decide to slightly switch up our strategy as well, to do something more similar to what you're doing with whole-system profiling. Our storage kind of wants series of things, because that allows us to make querying super efficient, which is why a series of profiles of the same container, or something like that, is super useful for us. And in that kind of scenario, I imagine it would actually be even more useful. But yeah.
D
No, I can see it, especially process creation; those are things we look at all the time. Maybe even extracting this .eh_frame stuff: right now we're extracting this data in user space and then passing it down to the eBPF program. If we could parse that stuff even within eBPF, that would be even more amazing for us, I think.
B
Cool, all right, I think we're happy. I think we're at time, so I mean we'll...
B
Let's just ask if anybody else has a last-minute agenda item, and then maybe I'll ask, you know, Frederick, if you can stay another couple of minutes. So if anybody, I mean, I don't know, it depends on your timing, and we can continue, right? I don't want to cut this off if you want to stay. But are there any agenda items somebody wants to cover?
D
I mean, sure, I'm more than happy to keep discussing if people are interested. I can also give a very quick demo, yeah. I don't know, whatever's most useful to the group.
E
Sure, I mean, I have time, I don't know about the rest.
D
Was that "sure" for the demo? Okay, cool. Let me see. Just before this meeting I spun up a new environment, so hopefully that'll all work.
D
Can you still hear me now? Okay, so what I was saying is, I'm guessing most of these tools look largely the same. You can select the type of profile that you want to view. Our eBPF-based profiler currently only supports CPU profiling, though it could do more in theory, and I think, if I remember correctly, the Pixie one does support network and memory allocation profiling as well, or something like that.
D
Okay, maybe I'm thinking about another one, but yeah, as I'm sure you're also thinking about that, we're definitely going to be looking into that pretty soon as well. But what I was going to say is that this is the storage side, so anything that produces pprof-compatible profiles can be written into it.
D
So that's aside from eBPF. And then here you can already see the series of profiles that we're building, because here we're essentially adding up all the profile samples of that particular profile over those 10 seconds that we captured and viewing it as metrics. That already gives people a good understanding of "this could be an interesting profile to look at", and then you can pull that up and explore it like you normally would, and yeah.
D
One thing that I think is pretty cool, which Kemal also implemented, is that we don't only symbolize based on DWARF data. If you strip DWARF data from a Go binary, for example, we still attempt to symbolize as much as possible with the Go pclntab information, which is an alternative thing that Go binaries put into the binary. Even if you strip debug symbols, it doesn't get stripped, because that's how Go renders stack traces when you have a nil pointer or whatever. So that's kind of cool.
D
So in this Minikube demo, even though all of these binaries are stripped, we can actually still symbolize them. And just yesterday we released support for perf maps, like the kernel-style perf maps, so JIT runtimes like Node.js or Erlang can write out a file that maps the JITted memory addresses to symbols. Then we can also symbolize Node.js stack traces and stuff like that.
D
You can always click "download pprof" and you'll get your typical pprof file, and you can open that with any other pprof-compatible tools. And then the one thing that I guess is a little bit different, that you can't do with your typical pprof toolchain, is that you can do comparisons here as well.
D
It grew quite a lot, as we can see; almost everything is GC in this spike. Sometimes we can even see "plus infinity", which means that the stack trace wasn't there at all in the previous profiling sample. So yeah, this is pretty cool, because it allows us to...
D
Let me stop sharing now. This is kind of cool, because we can finally actually answer the thing people always ask: what was different about my process between this point in time and this point in time? We can actually see the stack traces that were being executed during that window, and yeah, that's pretty exciting.
D
We actually have a pretty cool demo for this. It's one thing we're still figuring out how to do, but basically you can think of the storage as a column store for stack traces, for where we've seen stack traces. So we want to be able to filter our stack traces not just by the labels, as you saw, like container equals parca, but also by the actual stack traces themselves. The reason I'm saying that is because we have all of the function name metadata in our storage, so we could search for a function name. If we had that function name in our span data, for example, then we could pull up all the stack traces that we've ever seen for that particular span, merge all of them together, and get a report of: this is all the CPU time I've ever spent in this request path, for example.
D
But function names are something that you can fairly easily attach automatically, and so even if the same function name ends up in multiple spans, that doesn't really matter; you can still get everything within that function, or under that function, in terms of stack traces, yeah.
E
Thank you. But Parca is not open source, though, right? Just...
D
So you can check it out here.
B
Maybe it's too early to ask, but since it is open source: are you considering joining the CNCF, or how are you thinking about OpenTelemetry? Maybe it's not relevant, and I'm not asking in order to solicit that type of information, but do you have any thoughts on how you interact with the community?
D
I mean, the project is super young, so we haven't made a definitive decision, but we intentionally put the project under a neutral org already. The company is called Polar Signals, and the project name is intentionally something different from the company name, so that we have that option. That said, I think we still need to figure out how this project continues to develop; we published it barely a month ago, so we'll see, but yeah.
D
We're definitely thinking about that, and it's not a no; we've definitely thought about this a lot. More generally, in terms of OpenTelemetry, I think there is actually space for a format or standard for sending profiling data. You know, maybe like...
D
Actually, I don't know how many of you know me, but at KubeCon 2019 I gave a keynote about the future of observability, and I was saying back then already that continuous profiling was going to be the fourth pillar of observability. I'm still holding on to that idea, and I want to make it happen within OpenTelemetry, I think.
A
Yeah, no, I think it's fabulous to try to standardize the language that systems use to explain what they're doing, and this approach of decoupling that from what you then do with the data is really valuable. And combined with that, rather than having separate pillars, I think the other OpenTelemetry approach is: how do we...
A
I'm super excited to see eBPF and profiling, all of this stuff. The tricky bit for me is: we've got this L7 stuff, the application, userland-level stuff, and then we've got all of the under-the-hood kernel profiling and network profiling stuff, and connecting those two things together, other than just temporally connecting them, is tricky.
A
But seeing this connected, at least getting the stack traces, is a good, solid step. I do think there's some trickiness around moving past just having a heuristic approach to connecting distributed traces and the lower-level stuff, but part of it is just...
A
You know, frankly, figuring out how you shove those trace IDs and things like that down somehow, so that they're accessible to things like eBPF. I think that's potentially tricky, and it possibly depends on the programming language you're using.
E
I mean, that is the dream, yeah. We always think of observability as: we just want to read stuff, we don't want to touch or modify anything. But really, we do want to see the trace IDs in there, in all the data that we collect, because if we get the trace IDs, we essentially have context to correlate everything together, right?
A
Yes, that is the keystone index: if you can get it, then, just like seeing the metrics getting correlated with traces in OpenTelemetry, you now have trace exemplars associated with your metrics.
A
That's huge. We have tools that try to do analysis across those different kinds of data streams, and they're limited by not having that. Going from "well, it might be one of these transactions, because they were open when this metric was created" to "no, literally these transactions were creating these" is just a big jump.
D
Just having the stack traces, you can also attach labels to the stack traces themselves, and so that actually allows us to, or could allow us to, attach a trace ID, for example, yeah.
A
And I know there are some languages, like Go, for example, where pprof is fairly exposed. I actually don't know if you can directly attach labels, but I think you can, right? And then I imagine in C++ there's a way to do this as well. We haven't done this work yet, but I would be very interested in seeing C++ bridges built to different languages, because the OpenTelemetry API is decoupled from the SDK.
A
We have our native language SDKs, but it should be fairly simple to write a very thin foreign function interface that just calls out to the C++ API, the C++ implementation. And I wonder about that; it wouldn't solve everyone's problem.
D
I know. I'm by far no expert on C++, but part of the reason why we can do all of this in Go is because the runtime allows us to do a lot of it, whereas in C++ part of the struggle that we have with stack unwinding is because of these nasty compiler optimizations that the C++ compilers do. So I would be surprised if C++ can do these types of things natively.
D
I mean, maybe; I forget what the Google profiling tools for C++ are, maybe they can do it, I'm not 100% sure, but yeah.
E
Yeah, and that's where the question becomes: if we could do it for Go, that would be great, but is there a way to do it universally, through eBPF or some other way, across languages, because there's always proliferation? That's the one thing I think is nice about eBPF and kernel tracing: it's uniform. It doesn't matter what compiler was used
E
or what language it is; the data you're capturing is from the kernel, it's a single source of truth. So that's one thing that's really nice about eBPF. Of course it comes with its challenges, because you have less context in some cases, but is there a way to bring these two pieces together and infer the context from eBPF? That would be, yeah.
D
Yeah, there's a bit of a flip side, right? Again, stack unwinding is this thing that basically only exists in C++ land. Or, for us, symbolizing stack traces from Ruby: we literally need to read Ruby runtime memory to be able to do that. There's no, I mean, maybe there's one soon now, I think Shopify is implementing one where this should be possible, but yeah, it comes...
E
It's invariably language-specific, yeah. But in terms of context, a trace ID or something that allows correlation, is there, I mean, that would be the gold; it would be so powerful if there was a way to do that.
A
Agreed. I mean, I do think it's potentially a two-way street; you're right, I wasn't thinking of that. Even if we could do this in C++, the stack trace you'd get out for Python or Ruby would be so low-level it probably wouldn't be that useful, because the view at that level would be so different. But maybe if we can get some of this going, it becomes easier in the long run to go to different language development communities and say:
A
"I want you all to add context propagation." Coming from the distributed tracing side, that's the part where, implementing it in userland, I'm like: this is stupid, this is obviously a runtime or language feature, not something that I should be cobbling together on top of the runtime, but whatever, yeah.
B
Everything else? Yep, all right, see ya. Sorry, we'd love to have you long-term if you want to join; the group is open to you, and if that works for you, that's good, yeah.