Description
Blog: https://everyonecancontribute.com/post/2021-06-02-cafe-32-polar-signals-continuous-profiling/
Website: https://www.polarsignals.com/
Twitter thread: https://twitter.com/dnsmichi/status/1400121372341321734
Demo starts at 14:20
A
And we are live on YouTube. Hello everyone, we are back with number 32 of the everyone can contribute cafe, and today we are looking into continuous profiling with Polar Signals. I'm super happy that Frederic and Matthias said they would join us today for a bit of an introduction, some live demos, some trying new things, and learning how monitoring, observability, instrumentation, and everything around them works. Without further ado, I would like to hand over to Frederic and say: hey, maybe you can tell us a little bit about your story.
B
Yeah, absolutely. First of all, thank you for having us. To start out, let's talk a little bit about what continuous profiling is, and maybe also about what led us to today. We founded a company around it, so clearly we believe there's something bigger here. Matthias and I actually met through the Prometheus community.
B
I read this paper by Google where they essentially described how they used profiling techniques, more specifically how they continuously profiled all of their infrastructure, to consistently cut down on resources every quarter. That was intriguing for multiple reasons. One, I already knew how to deal with data over time; that's exactly what we had been doing with Prometheus for a number of years. And second, I loved the idea, because I would have loved to have had that tool while I was working on Prometheus. So I started working on a prototype in the evenings, in my free time, and I got super lucky and got to present it in a keynote at KubeCon. I actually did this together with Tom Wilkie, where we essentially talked about the future of observability.
B
This was at KubeCon Barcelona, and my part was that I thought continuous profiling was going to be, if we go by those terms, the fourth pillar of observability. More broadly, I think we're going to see the community getting rid of that terminology of the three pillars of observability.
B
I think it was probably useful as a framework to get the community started, just to give people an idea that these are some things that are useful, but in a way it has also trapped us into thinking these are the only things we need to understand our production systems. The thesis was that continuous profiling is one of those things that also allows us to really deeply understand the operational aspects of our applications, and we thought there's a future to this. So that's the backstory. As I said, Matthias and I met through the Prometheus community; I actually initially hired Matthias into my team at Red Hat, and then I left Red Hat to found Polar Signals.
B
Matthias then approached me and said: hey, this is kind of cool. So that's how we're here. Polar Signals was founded about six months ago, and in early February we launched our private beta, so right now it's an invite-only system.
B
That's just so we can work really closely with a couple of people and figure out what the most useful thing is that people want from a product like this. Eventually the idea is that we'll GA it so that anybody can make use of it. So that's the very high level. Are there maybe any questions already in the room at this point that we can answer?
B
Okay, I guess then the next thing that is always useful and cool to talk about is what continuous profiling actually is and how it works. Continuous profiling, almost like the name says, is the act of always taking profiles.
B
Maybe you know the situation: you have a memory leak in your system, and you only realize there's a memory leak after the fact, maybe when your application has already been killed, and you never have a profile from the right time. This is one of the use cases where continuous profiling can be super useful. It's very similar to any other observability data.
B
You never know when you're going to need the data, so you just always capture it. That's memory, for example. For CPU, the way it works is that we essentially look at the stack traces, let's say 100 times a second, and from that we can statistically infer how much time is being spent in certain functions of your code. The cool thing about continuous profiling, or really any type of profiling, is that because we have these stack traces, we can tell you down to the line number what in your program is actually using the most CPU, the most memory, and so on. And because we're using these sampling profiling techniques, looking at the stack traces 100 times per second, it's actually really low overhead, which is exactly why continuous profiling is made possible by sampling profilers. That way we can truly have it always running for everything in our infrastructure.
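To make that concrete, here is a minimal sketch in Go, the runtime referenced throughout this conversation, of capturing one 10-second CPU profile with the built-in sampling profiler (it samples stacks at roughly 100 Hz by default); a continuous profiler effectively repeats a capture like this on an interval:

```go
package main

import (
	"os"
	"runtime/pprof"
	"time"
)

func main() {
	f, err := os.Create("cpu.pprof")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	// Start the sampling CPU profiler; stacks are collected ~100 times/s.
	if err := pprof.StartCPUProfile(f); err != nil {
		panic(err)
	}
	defer pprof.StopCPUProfile()

	time.Sleep(10 * time.Second) // stand-in for the workload being observed
}
```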
A
I read your blog post, which was published, I think, yesterday, on Monday, and profiling, or application performance profiling, has been around for quite a long time. So this is now the next level of profiling, probably, right?
B
Right. The way that we think about it is that we're now doing it always. Profiling has always been a tool in the developer's toolbox, but before, it was more of an on-demand type of thing.
B
We profiled when we needed to profile, as opposed to always having this data. You can think of it as a shift similar to going from Nagios checks, where we do individual checks to see if something is doing the right thing at this moment in time, to time series data, which we always collect over time and where we can observe the state changes over time. It's a really similar shift with continuous profiling.
B
No, we actually just literally take a profile every 10 seconds, so it's not predictive or anything like that. Because we always take these profiles at a certain interval, we will just have the right profile at the right moment in time, at least to within 10 seconds. Most of the time, memory leaks don't happen in less than that, and if they do, you probably notice while rolling out already. But very good question.
B
I suppose the next thing we could talk about is compatibility. The very first thing that we thought of when we were developing the open source project was what data format we were going to use for this, and, especially coming from the Prometheus world, we felt pretty strongly that the standard needs to be an open standard. The most developed standard out there, we felt, is pprof.
B
This is a standard that Google developed, and it's essentially a very compact representation of the stack traces, the samples belonging to those stack traces, and metadata about the functions and the binaries that the stack traces originated from, and so on.
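For a sense of what that format holds, here is a minimal sketch that opens a profile with Google's pprof Go library and prints its contents; the file name is just an example:

```go
package main

import (
	"fmt"
	"os"

	"github.com/google/pprof/profile"
)

func main() {
	f, err := os.Open("heap.pprof") // example path
	if err != nil {
		panic(err)
	}
	defer f.Close()

	p, err := profile.Parse(f)
	if err != nil {
		panic(err)
	}
	// The value types each sample carries, e.g. inuse_space in bytes.
	for _, st := range p.SampleType {
		fmt.Printf("sample type: %s (%s)\n", st.Type, st.Unit)
	}
	// Samples are stack traces plus values; functions and mappings are
	// the metadata about code and binaries mentioned above.
	fmt.Printf("%d samples, %d functions, %d mappings\n",
		len(p.Sample), len(p.Function), len(p.Mapping))
}
```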
B
Why this is really cool: because it's an open standard, we actually didn't even have to write all of the client libraries for this. There are already client libraries for Rust, there's pprof support integrated directly into the Go runtime, there's one for Node.js; basically every runtime out there is covered.
B
We support multiple models of obtaining the data. In the Go world, for example, it's very typical that you have pprof endpoints on your HTTP servers, so we allow scraping this data in a model very similar to Prometheus. As a matter of fact, we actually reused the service discovery from Prometheus, so we've had multiple users just drop our Polar Signals agent into their cluster and automatically scrape their entire infrastructure, because their infrastructure is written in Go and they already had pprof endpoints. That's the best-case scenario, but we also have the ability to push this data.
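For reference, this is the typical Go setup that makes an application scrapable in that model; the blank import registers the standard handlers under /debug/pprof/, and the port is arbitrary:

```go
package main

import (
	"log"
	"net/http"
	_ "net/http/pprof" // registers /debug/pprof/* on the default mux
)

func main() {
	// Endpoints such as /debug/pprof/heap and /debug/pprof/profile now
	// serve pprof-formatted data that an agent can scrape periodically.
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```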
B
So if you prefer that model, with Go or with Node.js, for example, you can push this data to Polar Signals every 10 seconds. We don't really take an opinion here; whatever fits your use case best, you can do either.
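A minimal sketch of that push model in Go; the ingest URL below is purely illustrative (not the actual Polar Signals API): every 10 seconds, serialize a heap profile and POST it somewhere.

```go
package main

import (
	"bytes"
	"log"
	"net/http"
	"runtime/pprof"
	"time"
)

func main() {
	for range time.Tick(10 * time.Second) {
		var buf bytes.Buffer
		// Serialize the current heap profile in pprof format.
		if err := pprof.WriteHeapProfile(&buf); err != nil {
			log.Println("profile:", err)
			continue
		}
		// Hypothetical ingest endpoint, standing in for whatever
		// backend you push to.
		resp, err := http.Post("https://example.com/ingest",
			"application/octet-stream", &buf)
		if err != nil {
			log.Println("push:", err)
			continue
		}
		resp.Body.Close()
	}
}
```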
E
Coming back a little bit to the point about overhead: profiling is mostly overhead, so for a lot of applications, or when you have intensive workloads like number-crunching applications, the profiling itself also needs CPU and memory, of course, so execution times could get a bit longer. So the complexity of storing and analyzing the profiles shifts back to your product, to your solution, so that the analysis happens in another place instead of directly on the application side, right?
B
Yeah, absolutely; that's the idea of sampling profilers. We take few samples, so that we have low overhead, and we merge all of this data into aggregated forms on the storage side, exactly like you said. That way we can have super low overhead when we grab the profiles, but ultimately still have the same kind of understanding as we would with a really high-resolution profile taken within a short amount of time. Very good question. I guess we can go over to a very quick demo of the product, if people are interested in that. Okay.
B
So we have user management and login and so on, but that's not why we're here or what we're interested in. We organize everything into organizations and projects, much like you see with other cloud providers. The really interesting part is the way we query this data. The way that you query it is very similar to Prometheus: surprise, surprise, given our background.
B
You treat the type of profile sort of like the metric name in Prometheus, and we created the query browser so that you can discover these things nicely. Here, for example, this is our actual production infrastructure.
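As a hedged illustration of that query model (the profile type and label names here are invented for the example, not taken from the demo), a selector for heap profiles from particular pods reads much like PromQL, with the profile type in the metric-name position:

```
heap{namespace="monitoring", pod=~"jaeger-.*"}
```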
B
We can query all of our heap profiles, for example, and what we see here is the cumulative data of the samples that are within these profiles. So whenever we send a heap profile; for example, here we can see this is our Jaeger tracing pod that we have deployed in our Kubernetes cluster.
B
Every time we send one of the profiles taken from Jaeger, on the Polar Signals side we add all of these values up and actually write that to a metric storage. So what we've just queried is actually a metric storage; so far we haven't actually queried any profiling data. We did this so that you can have this kind of better view of the data, rather than just seeing samples over time.
B
That's what we had initially, but what we found was that everyone who used our product first went through resource metrics, with Prometheus for example, to find the right timestamp to then query Polar Signals, and so it was natural for us to have this integration, where we allow you to query this in the form of metrics. You can then select any of these points over time and view the profiles. We have various visualizations, the things that you're probably already familiar with from the Go toolchain.
B
So here's the call graph, where we can essentially see the critical path, but, fitting our branding, we also have icicle graphs. It was funny: I actually learned that these are called icicle graphs only after we came up with the name, so it was kind of a happy little accident that Brendan Gregg actually calls these upside-down flame graphs icicle graphs; just a quick anecdote on the side. But yeah, we have all of these various visualizations, and one other cool thing that I want to show real quick is that you can create these shares, and this will open a separate tab.
F
Yeah, this was mostly done because what we found up until that point was that a lot of people usually just screenshotted pprof profiles and pasted them into GitHub issues, and you couldn't really interact with those profiles. Hence we set out to create this little service where you can just share it and then not only view it, but also modify it, zoom in and out, and use it as you would locally as well.
B
So that you can collaborate with your team. If you've worked with profiling in the open source world, for example, what people always do is take screenshots and paste the screenshots onto pull requests, and the annoying part about that, as a maintainer of open source projects, is that you actually want to explore this data, but you only get these static screenshots. So we decided it would be cool to host this free service for people: anybody can go to share.polarsignals.com, upload any pprof-compatible profile, and share it with their team on a GitHub pull request, et cetera.
B
So this is where we left off, and one cool thing that I wanted to show was: let's compare multiple profiles in time, and why this is particularly useful. What we've just done is take a lower memory profile and compare it with a higher memory profile, at two different points in time. Why this is really cool is that all of a sudden we're able to cut out all the noise that was there before, and we can truly see the actual difference between these two profiles.
B
When we have a memory leak, for example, we can truly tell what grew in memory, and that way we're really quickly able to say: this is exactly the piece of code that is leaking memory. And we can do this with all sorts of queries. Another thing that you can do, for example, is merge this data into one report. As you can see, I only queried over three minutes, but this was ultimately samples worth 220 seconds.
B
The idea is that you could take these merged queries now and compare those as well. The reason why this is useful is that we can merge all of these reports and compare different versions of software, so we can now answer the kinds of questions that were really difficult to answer before. Like: I rolled out a new change and there was a performance hit, tell me why. We can get this down to the line number: what changed in performance. And we can ask it the other way around as well.
B
I made a performance improvement; did it actually have the effect that I wanted it to have? The merging of data, as well as comparing different queries with each other, is really powerful for this. I think that's mostly it in terms of the feature set, or Matthias, can you think of anything? I mean, we can query data of up to 14 days.
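As an aside, a diff like the comparison described above can also be computed locally with the pprof library; this is a minimal sketch of one way to do it, not necessarily how Polar Signals implements it: negate the baseline profile and merge it with the newer one, so only the delta remains.

```go
package main

import (
	"fmt"
	"os"

	"github.com/google/pprof/profile"
)

func load(path string) *profile.Profile {
	f, err := os.Open(path)
	if err != nil {
		panic(err)
	}
	defer f.Close()
	p, err := profile.Parse(f)
	if err != nil {
		panic(err)
	}
	return p
}

func main() {
	before := load("before.pprof") // example file names
	after := load("after.pprof")

	// Negate the baseline so merging subtracts it from the newer profile.
	if err := before.Scale(-1); err != nil {
		panic(err)
	}
	diff, err := profile.Merge([]*profile.Profile{before, after})
	if err != nil {
		panic(err)
	}
	fmt.Printf("diff contains %d samples\n", len(diff.Sample))
}
```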
B
Let's see... here we go. But yeah, that's just more in terms of how much retention we have right now.
F
Right, you search for the up metric, for example, and then you drill down to a more specific one. All of that is totally possible as well, and then, once you've found the more specific profiling time series you're looking for, you can merge these together as well. So all of the filtering that you know from Prometheus works; given that it is PromQL for the most part, it really comes with the same feature set in that sense.
B
Yeah, and we also wrote a little autocomplete feature, similar to the one that Prometheus introduced, and, as you saw, you can do various interactions: you can click on labels and it will autocomplete, et cetera. That's just to round out the experience a little bit more.
G
I have a question, because I'm not 100% aware of the pprof protocol. When you show the call stack, do you have a start and an end point? Is it one call and it's the whole tree? Because there are hundreds of calls, of course. So do I have to add something like a start point and an end point to my application, to map it to some object or something like that?
B
Because of how sampling profilers work, the only thing that we have is stack traces, and we know how often we've seen a particular stack trace. This is in particular when we look at the CPU profiles. Each of the nodes in here is part of the stack traces, and because we're observing the code at runtime about 100 times per second, we can infer that if we've seen a particular stack trace twice, then that's two hundredths of a second. It's essentially statistics at this point.
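A tiny worked example of that inference (the stack strings are made up): at a 100 Hz sampling rate, each observation of a stack trace accounts for one hundredth of a second of CPU time.

```go
package main

import "fmt"

func main() {
	const hz = 100.0 // samples per second
	seen := map[string]int{
		"main;handleRequest;parseJSON":  2, // sampled twice -> 0.02s
		"main;handleRequest;writeReply": 1, // sampled once  -> 0.01s
	}
	for stack, n := range seen {
		fmt.Printf("%-32s %.2fs\n", stack, float64(n)/hz)
	}
}
```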
A
If I have my current monitoring system with metrics and Prometheus, is there a way to correlate the data from the profiling, from Polar Signals, with metrics, logs, and traces? So that I have a global view on things and can say: hey, this change, or this potential problem detected by the profiler, is influenced by a system outage or something else.
B
Yeah, so actually let me show you what's going on behind the scenes here, because it'll look familiar. This graph that you're seeing here is truly a Prometheus query that's happening, so we're kind of doing what you just said already. The graphs that you're seeing are actually produced by a Prometheus back end, which we push metric samples to, and we calculate these metric samples by summing up all the values that we see in a profile.
B
Let's see: when we click on this profile, for example, we see the value is somewhere around 15 gigabytes, and we can look at this. Sorry, this was CPU, so maybe let's do heap, that'll be a bit easier. We click somewhere here and it says 1.4 gigabytes. So essentially we add up everything that we see in this profile and write a metric sample to our metrics back end.
B
Because we're labeling this data consistently, we're actually already doing this jump from metrics data to profiling data, and this is something that we want to take further. Actually, someone in the community has already built a Grafana plugin for our open source project, so it's really easy to integrate with Polar Signals now as well, and you could use Grafana annotations to do this jump too, if you wanted to.
A
Thanks. So I could see Polar Signals then as an add-on, similar to linking traces to metrics somehow, and we have the four pillars of observability now again, with metrics, logs, traces, and profiles.
B
Yeah, that's definitely super interesting. As I said, we really strongly believe in the pprof standard; we think it's the perfect mix of expressiveness while being really optimized for this type of data. Where we're maybe still missing a standard is how we actually represent this over time, and we currently represent it identically to Prometheus time series. So I think it would be super cool if we could standardize it up to that point, but even if we can only agree as a community on the pprof standard, that would already be really amazing.
H
So I have a few questions, and one of them is quite closely related to that: do you think there's an OpenTelemetry angle on this, where, especially with pprof, this becomes part of OTel and something that's getting sent to wherever? Is that something you're thinking about?
B
We've actually had this very conversation with other continuous profiling providers, and there's definitely interest in something like this. The last time we spoke, all of these companies and projects out there were still pretty young, so we concluded: let's let the standards that we're all using sink in, see how they hold up over the next year or so, and then come back and look at this.
H
Okay, yeah. We've got a little library called LabKit that we drop into all of our Ruby and our Go binaries, and at the moment, for the Go ones, because we're running on GCP for GitLab.com, it sends our profiles to Cloud Profiler. Unfortunately that's only the Go binaries, but we do find it to be super useful in certain circumstances, particularly when you're looking at performance diagnostics and memory leaks. So I think this is great.
H
The question I had: the first thing that we really miss with Cloud Profiler is anything other than Go, in particular Ruby. But the other part is that a lot of the profiling that we do is actually at the system level. If you've got something like a Gitaly server, you have all these git processes that are spinning up and shutting down, and really what you're trying to do is figure out what's going on at the system level.
B
Yeah, that's absolutely something that we're thinking about, and I guess I can tease a little bit without saying too much: there was actually something that we had hoped to get ready for today. Unfortunately it's not quite ready, but it goes in that direction, where we essentially automate profiling and do whole-system profiling and things like that. Unfortunately we don't have Ruby support today, but there is precedent for Ruby profilers. And, as I said, because all of this is pprof, it's an open standard; you're not even buying into some proprietary system. There should be a pprof library for Ruby anyway, whether you're using Polar Signals or not. That's how we think about open standards anyway.
F
What we already published, about a month ago, is a little tool called professor, where we essentially take perf profiles and convert them to pprof, which is actually done by another little tool that was already out there, but we bundled it up and made it work with our continuous profiling. So you can essentially hook it up to a Conprof or even send it to our cloud. So there are some things, but we can definitely go further, that's for sure.
B
Yeah, professor is actually... go ahead.
B
It's great that you mentioned professor; that's actually something that, if you have native binaries today, would already work down to the system level. It even jumps into shared libraries and things like that. The thing that's a little bit annoying about perf profiles is that you need the debug symbols around.
B
However, there have been some cool projects in the community, particularly by Sentry, because they essentially have the same kind of problem: they want to symbolize native stack traces, but they want it for core dumps and things like that. So not really profiling data, but it happens to be a similar problem space. They actually built what are called symbol servers, where you can have a federated model: the Sentry or the Polar Signals symbol server, say, can reach out to symbol servers from Debian, from Fedora, from Microsoft, and request the debug symbols at runtime, and that way we can actually symbolize all these stack traces after the fact. As a matter of fact, that part landed just yesterday in our open source project, and it's also something that we will be integrating into Polar Signals pretty soon, so that we can take these native stack traces and not always send all of the debug symbols with every single pprof profile that we send to Polar Signals, but only upload the debug symbols once and then only send the native stack traces, if that's possible. That way we're also reducing what's stored in the storage and what goes over the wire.
B
Today we only offer the service. We may offer it on-prem in the future; we've definitely had very large enterprises request it, and it's tempting, but at the same time, innovating in a service is much easier.
B
Especially for a very young, very small company like us, it would be extremely risky to go through an on-prem sales cycle right now; that takes a year or two years, and if that ends up failing, we would have neglected the service, which is something that we can build much more iteratively. So I'm not going to say never, but for the time being we'll be offering the service only. That said, if you're really interested, there is the open source project that you can try, and see how far you get with that as well.
F
I actually have the new comparison view up and running, so we can take a look; I was just waiting for some data to come in over the last couple of minutes. So yeah, I can share that if people are interested.
C
All right, I'll share my entire screen. Why is it resizing things every time? Where did Zoom go?
F
Can you see that? Yes? Right, yeah. So, from what we've seen earlier, there was the capability of comparing, and this is just a local cluster right now; what we're looking at is actually a Prometheus running on my machine as well. It's just sitting there, and I just ran some queries, as you can see. So we're looking at the heap profiles of that Prometheus running on localhost:9090 on my machine, and every few seconds I scrape some profiles from that.
F
In the past, or still right now (hopefully I can get this out soon), you had to select one and then manually select the other one, but given the fact that we have metrics for both sides, the idea was to split the view, so that you can select the query for the profile on the left-hand side and then compare it against something on the right-hand side. And yeah, it's slightly broken, so I need to do a few bug fixes before this goes into production.
F
But as you can see, we can take a low data point and then a high one, and then we get the same diff, the same comparison view of a profile, but it's probably a lot better for understanding what we're actually looking at, given that we have these points here. And we can clearly see that, just by me running a few things, you actually had quite a difference here in memory. Taking this a bit further, you can now just click around; you can then compare the right one with this left one, where the memory is slightly higher. You can just slice and dice it as you wish.
F
That's the idea behind all of this: basically make the most of these observability pillars that we have, and given the fact that, as I said, we have metrics as well, we try to make the most out of this. So that should be coming soon, and hopefully it will make comparing profiles even more accessible.
B
Part of the reason why we're doing this is obviously the feedback that we've gotten from our users, and one thing that's been super cool is that a bunch of people who we've given access to the private beta have opened issues, randomly across the board, on usability, on functionality, et cetera, on a public repo that we have, polarsignals/issues. This was one of the examples, where one of our early users was saying this is not quite intuitive enough, and since we use the product on a daily basis as well, we agreed; and this is why we're working on this.
B
I guess I wanted to give a shout out: if you're interested in trying this out, let us know. You can either write to us directly or just go to our website and request beta access. We're super happy to listen to all of your feedback; obviously we're most interested in creating the most useful product we can.
A
polarsignals/issues, I will link it in the blog post and on Twitter, thanks. I was wondering about the mid-term roadmap: off the top of your head, where do you see yourself in one year, in three years?
B
Yeah, that's a great question. Three years is probably too far out to tell, but the most immediate things are figuring out the usage patterns even more than we already have. We've understood by now, by working with our early customers, what people want to use the merge functionality for, what people want to use the diff functionality for, even individual profiles, and now we're optimizing the product for those access patterns. I imagine that will happen over the next three to six months, and that goes through the entire stack. As you just saw, it's about user experience,
B
but it's also about the storage, as well as the query language, query patterns, and query engine, so that we optimize everything from top to bottom: that the storage is very efficient, that the querying is very efficient, and that it actually gives you really useful data and visualizations. That's probably what we'll be spending most of this year on, and again, this is always going to be very dependent on the user feedback that we get.
B
We think that some of the things we'll be developing are recommendation engines, so that we can automatically see: hey, this is Go code; runtime.growslice is a very CPU-intensive thing; if you just pre-allocate a slice here with this size, you'll save this much CPU time, or something like that.
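A minimal sketch of the kind of fix such a recommendation would point at (the function names are invented for the example): growing a slice element by element triggers repeated runtime.growslice calls, allocation plus copying, while pre-allocating a known capacity allocates once.

```go
package main

// squaresSlow grows the slice as it goes: append repeatedly calls
// runtime.growslice to reallocate and copy the backing array.
func squaresSlow(n int) []int {
	var out []int
	for i := 0; i < n; i++ {
		out = append(out, i*i)
	}
	return out
}

// squaresFast pre-allocates the capacity up front: one allocation,
// no growslice calls inside the loop.
func squaresFast(n int) []int {
	out := make([]int, 0, n)
	for i := 0; i < n; i++ {
		out = append(out, i*i)
	}
	return out
}

func main() {
	_ = squaresSlow(1000)
	_ = squaresFast(1000)
}
```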
B
These kinds of things would allow people to make these changes much, much faster, and ultimately create a better product, because that's what people want to use these kinds of products for: they want to make optimizations, and if we can make recommendations about good optimizations, that's only helpful.
A
I have many ideas in my head right now. One of them is machine learning; the other one is having public profiles uploaded somewhere, so you have a training model for machine learning later on, where you can use the recommendation engine you're talking about and say: hey, okay, this is Go, but for, say, C++, for C/C++ development, you need to use a different pattern. And the more data you can collect from the community or from your customers, the better the results will be. I'm just not sure: is there any privacy implication from continuous profiling?
B
Yeah, I think there are definitely some interesting things here. I want to say, I think the biggest impact we can probably have at first is more of a, let's call it static analysis, rather than machine learning. I think we can already make a lot of really good recommendations just by putting our expert knowledge into the recommendations. But I also think there are cool things where we can understand the interactions of processes that are on the same node, for example whether they're thrashing CPU caches or something like that, and we can recommend Kubernetes pod anti-affinities, to say these two processes should never be on the same node, they keep thrashing each other's CPU caches, or things like that.
B
So recommendations can go various ways; that's what I'm trying to say.
A
Yeah, I think... the next idea I had was: when you share the profile as a URL, you might also want to share a preview image, a PNG or JPEG, which you can then embed in a pull request or merge request, so that when you do continuous profiling for the staging deployment, for instance, you have sort of review apps but also review profiling, and you can immediately see that this commit poses a regression in the code: performance drops, and you never merge it to the main branch. And just to visualize it better, finding a way to give back the data, either as a PNG or as a JSON blob or whatever, so that GitLab, GitHub, whatever tool is used, can create a table or something useful. Maybe in a first iteration this can be done by just querying the REST API of those tools and adding a comment to the pull or merge request.
A
And if you start with recommendations for code, maybe there is already an existing standard, like, I think, Sourcegraph or something else, which you can use, so that all the tools out there can consume the same standard and just show it. Just thinking out loud: in GitLab you have a suggestion for the code, and this one is coming from the continuous profiler, and you can just review it in a merge request and say: hey, I like that. Do not auto-merge everything, but I like that, and it's coming from continuous profiling, because the thing is learning from the data it currently has.
B
Yeah, and there are other cool network effects, just like what you said. Because we have the data down to the line number, we could open your code directly in Gitpod or something like that, or name another browser-based IDE, like Codespaces or things like that. I think those are cool integration possibilities.
B
We're definitely really excited about the possibilities that this type of data gives you, especially because it's so high resolution, down to the line number, and I think that's something that has excited us from day one: that we have this possibility to tell you, go to your editor at this line number and you'll see why this is happening. I think that's really magical, almost. I don't think there's really another observability signal that gives you that kind of tactical action. Of all the other things that we have, maybe log lines can be similar if you include the line where it was logged from, but metrics or traces are much higher-level concepts, and they should be; I'm not saying that's a bad thing, I'm just saying it's kind of magical that with profiling you get that.
A
I'm thinking of trying to understand a problem with the logging and the metrics; we probably didn't have any traces back then, but still, so much time was being invested to fully understand the problem, because "works on my machine" is not how the customer's system works, and the distributed monitoring didn't really scale or was running out of memory. If we had been able to just use continuous profiling... In a specific sense, I still have no idea how to use the parameters for perf and for all the different tools out there. It's known knowledge, probably, but still it's so much to learn and so much to understand; you need an expert by your side. So we should make it easier, and also find ways to not bind it to the developer who created the code, so that everyone can just act on a suggestion saying: hey, if you pre-allocate the slice, this is going faster, or if you use not shared pointers but unique pointers in C++, there is a certain way of improving that.
A
So I think you're filling a gap of knowledge, probably, which everyone currently tries to approach: you have sort of application performance monitoring ideas, but you need more than that, because you want to prevent the problem from actually happening in production while you don't know about it yet; a little bit of prediction. Similar to Keptn with its quality gates, I could also imagine that when you're doing continuous delivery, you have sort of a continuous profiling quality gate which prevents deploying to production just because there's a potential problem with the performance. And this builds the bridge to SLOs: the SLO is not matched because the login time into your SaaS application is five minutes. Like I told about GTA Online, which had an algorithm problem: this is something that could potentially be detected by that, making the picture even greater.
A
Yeah, I wish I could clone myself and also become an SRE. But still, I totally should be checking out the beta. I was wondering, and I don't know if you have mentioned it before: how long are you planning to run the beta?
B
That's when we'll release it as a public product. Obviously, as a business, there are other things that we need to consider: we need to figure out that the margins are right for the business, we need to make sure that it scales, we need to have things like billing; things that you don't necessarily want to think about as an engineer, but they're crucial to building a business.
A
Definitely. If you're thinking about the languages you currently have in the private beta, is there a specific one where you need more data, or where we, or the ones listening to the stream right now, can help you? Like saying: hey, we have nothing around Rust. Is there something where everyone can dive in and help contribute?
B
It's funny that you mention Rust, because there actually is a Rust profiler for CPU, but there isn't one for memory. I think the folks at PingCAP started working on one, but they didn't finish it, and I think the pull request is still up for grabs if anybody wants to finish it. So if there's a Rust expert out there listening, that would be a really amazing one to finish up. I think most of the other languages are actually pretty far advanced. I actually wasn't sure about the state of Ruby, so that's also a really cool one. We know of people using the, not the Python one, the...
B
...one, and Go, obviously; those are the two most used ones with the people that we're working with right now. But obviously a really big one for us is going to be Java as well.
B
So if there are Java experts out there who want to work on pprof-compatible profilers, that would be awesome. I think Rust, Java, and Ruby are probably the ones that we have the least grasp on. So if there's someone who wants to create some profilers, we're more than happy to help out with the format; the format is something that we're very deeply involved and invested in, so more than happy to help with that.
A
Hopefully, yeah. Then I would just say thanks for joining today. I would totally love to catch up in half a year or so, when you have something new to present, or maybe some more insights into how your story went, and until then we can try everything async and provide feedback.