From YouTube: A deep dive into Polar Signals: The open source edition - Frederic Branczyk, Polar Signals
Description
All things profiling and Polar Signals.
Frederic: Okay, yeah, thanks to Kinfolk for having me here; it's good to be back. So I founded Polar Signals last year, and actually just yesterday we announced our first set of products, and that's ultimately what I'm going to talk about, but the open source parts of it. If you're interested, we can talk about what we do at Polar Signals that is not open source as well, but that's not for this meetup.
This is Polar Signals, the open source edition. As Lexi already mentioned, I've been in this ecosystem for quite some time: I am still the tech lead for the Kubernetes special interest group for instrumentation, and I'm a Prometheus maintainer.
Maybe some of you know these projects, maybe some of you don't, but essentially these are all projects in the observability and monitoring space in the cloud native landscape. What we started focusing on at Polar Signals, or at least the very first thing that we wanted to focus on, is performance profiling, and a really specific type of performance profiling that we'll dive into a little bit later.
But before we dive into those specifics, I want to make sure that we all have a common understanding of profiling: what profiling means, how we can use it, and so on; just building a common ground so that we all understand the later parts of the talk.
So profiling, at its very core, is essentially our ability as engineers to understand how our programs behave, down to the line of code that causes some behavior. Typically, that means the ability to understand memory usage, or the way that the CPU is being utilized, or potentially even the way that reading a file blocks your program from continuing to do something. All of these things can be understood through profiling, and profiling is nothing new.
It has been a tool in our toolbox for a really long time, but it's a really powerful one, and as such there's so much that we can do with this data. And because the cloud native computing landscape is really heavy on Go, I want to focus on Go in this talk as well, or at least in the parts that are language-specific at all.
I want to point out Go because I think that, within our bubble, it probably has the biggest reach. Go has profiling built into the runtime. That's not the case with some languages, where profilers are an add-on in some way, but in Go it comes as part of the standard library, and specifically, that also means it uses a standard that was created by Google.
Maybe that's an obvious choice for Google, having created Go, but I think it's actually a really awesome choice, and we'll see later why that is the case. pprof is essentially just the file format that profiles are saved in, and these are just served by HTTP endpoints in Go; again, this is all done through the Go standard library. And in Go, you have a couple of profiles that you get out of the box.
This is extensible, but for the overwhelming majority of use cases the built-in ones are probably going to be enough; for most of my use cases they have been, and I think I've only used a custom profile once or twice. The obvious one is CPU time.
In Go, the way this happens, essentially, is that you hit this HTTP endpoint and Go starts collecting samples about where the CPU is spending time in terms of your program. By default it does this for a period of 30 seconds, taking measurements 100 times per second, and it essentially looks at where your CPU is spending its time. So that's how we can understand CPU usage.
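To make that concrete, here is a minimal sketch of driving the same CPU profiler directly from code with runtime/pprof; the output file name is arbitrary, and the 30-second window and ~100 Hz rate simply mirror the HTTP endpoint's defaults he mentions:

```go
package main

import (
	"log"
	"os"
	"runtime/pprof"
	"time"
)

func main() {
	// The /debug/pprof/profile endpoint does the equivalent of this
	// on request; here we write the CPU profile to a local file.
	f, err := os.Create("cpu.pb.gz")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	if err := pprof.StartCPUProfile(f); err != nil {
		log.Fatal(err)
	}
	defer pprof.StopCPUProfile()

	// Stand-in for real work: the profiler samples the running
	// program (at roughly 100 Hz by default) for 30 seconds.
	time.Sleep(30 * time.Second)
}
```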
So these are samples. And then memory usage, or heap and allocations, is also taken care of by the Go runtime, and what we typically see when we take a heap profile in Go is the representation of our heap just after the last garbage collection run.
Essentially, this is done so that we don't see a bunch of trash, metaphorically speaking, in our profiles, but the actual memory usage; allocations, by contrast, are vaguely "live". And then all the other profiles are effectively stack traces: understanding which stack traces created a thread, for example, or which stack traces each of our goroutines is in right now, or which stack traces have led to blocking behavior (that's the block profile), or which stack traces have held a contended mutex. That way we can understand blocking behaviors. So that's what you get out of the box in Go, but, as I said, this is entirely extensible, and it's a common format.
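For reference, the non-CPU profiles he lists are exposed as named profiles in runtime/pprof; a minimal sketch of dumping them all follows (note that the block and mutex profiles are disabled by default, and the rates set here are purely illustrative):

```go
package main

import (
	"log"
	"os"
	"runtime"
	"runtime/pprof"
)

func main() {
	// Block and mutex profiling must be explicitly enabled;
	// these rates are just for illustration.
	runtime.SetBlockProfileRate(1)
	runtime.SetMutexProfileFraction(1)

	// Every non-CPU profile is a named profile in runtime/pprof.
	for _, name := range []string{"heap", "allocs", "goroutine", "threadcreate", "block", "mutex"} {
		f, err := os.Create(name + ".pb.gz")
		if err != nil {
			log.Fatal(err)
		}
		// debug=0 writes the gzipped protobuf (pprof) format.
		if err := pprof.Lookup(name).WriteTo(f, 0); err != nil {
			log.Fatal(err)
		}
		f.Close()
	}
}
```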
So this is not something that is necessarily specific to Go in any way. And because I've felt, and I've heard this from a lot of people, that profiling seems really intimidating, I want to show you a really quick demo of profiling some code. So let me change to my terminal.
Let me know if you can't see it. So we have this example application under our Polar Signals GitHub org, but you could write something up really quickly yourself as well. Let me just show you really quick what code is necessary in Go. You could really do this with one or two lines of code if you use some globals, but my recommendation would be to not do that; don't do globals.
Really, all you need to do is register these HTTP handlers onto some HTTP mux and then start the HTTP server. And probably you already have an HTTP server; maybe you're already using Prometheus metrics, for example, and so you could just register these on there as well. As a matter of fact, a lot of Go programs already do this, so maybe there's nothing new here for you.
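A minimal sketch of the handler registration he's describing, using net/http/pprof on an explicit mux rather than the package's default-mux side effects; the port is arbitrary:

```go
package main

import (
	"log"
	"net/http"
	"net/http/pprof"
)

func main() {
	mux := http.NewServeMux()

	// Register the standard pprof handlers on our own mux, e.g. the
	// same one that already serves Prometheus metrics.
	mux.HandleFunc("/debug/pprof/", pprof.Index)
	mux.HandleFunc("/debug/pprof/cmdline", pprof.Cmdline)
	mux.HandleFunc("/debug/pprof/profile", pprof.Profile)
	mux.HandleFunc("/debug/pprof/symbol", pprof.Symbol)
	mux.HandleFunc("/debug/pprof/trace", pprof.Trace)

	log.Fatal(http.ListenAndServe(":8080", mux))
}
```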
Maybe you already have all of this, and you just need to start making use of this data in new and exciting ways. So let me just show you real quick what profiling this application would look like.
Got it; here, let me go back. Just to create a little bit of noise in our profiles, we're going to calculate the Fibonacci sequence starting with the one millionth number, just to get our CPU heating up a little bit (and to heat us up in the cold that we have here in Berlin), and do a bunch of memory allocations, so we can see those as well. So if we start our program now, we can see it's starting on 80…
As I said, literally all you need to do is curl these endpoints and just save them into a file. You could also immediately call this with the go tool pprof toolchain, but I do like to save my profiles into a specific file. Then all you need to do is use the pprof tooling to look at this data; and when I do this, I need to change my screen sharing again.
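For reference, the flow he's describing looks roughly like this (assuming the example server listens on :8080; the output file name is arbitrary):

    curl -o heap.pb.gz http://localhost:8080/debug/pprof/heap
    go tool pprof heap.pb.gz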
There we go. And when you do that, you can see our allocations here: a very small amount of allocations is happening in this path, but really the overwhelming majority of allocations is happening here. So hopefully this example showed you that there's no magic happening here. Once you've understood these tools a little bit, there's really nothing scary about this. So let's go back to my presentation.
There's nothing fundamentally hard about this, and really, if there's anything you take away from this talk, I hope that I've lowered the bar for profiling your applications and shown you that this can be really easy. But to show you why we at Polar Signals, for example, chose pprof as a format, I want to take a little bit of a deep dive into pprof and see why it's such a cool format.
So, as I said earlier, pprof was initially developed at Google, and so it's not a huge surprise that it's used by Go. But what I think is particularly cool, and has been somewhat novel in this space, is that it's language agnostic. There's nothing Go-specific in the pprof format or in pprof profiles; it's really just a protobuf definition. We can generate Rust profiles from this, we can generate Python profiles, we can generate Ruby profiles; insert your favorite language or runtime.
It's totally possible; it's just a file format. And I think this is really exciting, because it allows us to have a common exchange format that everybody can work with, and that just means we can build common tooling for all of these awesome languages and runtimes. So let's dive a little bit deeper into this format.
Roughly speaking, we have two things here, as you can see on the slide. We've got samples, and these are effectively all the measurements that we were talking about earlier: which part of our program is holding how much memory, or how many CPU cycles are being spent here. And then the rest is, vaguely speaking, metadata: locations, functions, and so on.
All of these are metadata describing the surroundings, the source code that this is actually happening in. And what's really cool here is that, because these are kept separate from the samples, they can be consistently generated by processes of the same binary. What that effectively means is that if I start my API server multiple times and take pprof profiles from each of them, the locations and all this metadata are actually equivalent.
So really, all that's different between these two profiles are the samples, and this is really cool; we'll see why later. As I said, all this other stuff is, vaguely speaking, metadata. But let's zoom in one last bit on what this actual protobuf looks like. We've got our type and unit, describing what each profile represents.
These can all be used to describe what this profile represents, and the samples are then the actual measurements. And as we can see, each sample references a location, which is part of the metadata we saw earlier. So that's how we can take these multiple measurements and potentially even compare them across profiles.
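A minimal sketch of walking that structure with the github.com/google/pprof/profile package, which is one way to see the sample/metadata split he's describing; the input file name is arbitrary:

```go
package main

import (
	"fmt"
	"log"
	"os"

	"github.com/google/pprof/profile"
)

func main() {
	f, err := os.Open("heap.pb.gz")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	p, err := profile.Parse(f)
	if err != nil {
		log.Fatal(err)
	}

	// The type/unit pairs describe what each sample value measures.
	for _, st := range p.SampleType {
		fmt.Printf("sample type %q in unit %q\n", st.Type, st.Unit)
	}

	// Samples reference locations, which reference functions.
	for i, s := range p.Sample {
		if i == 3 { // print only the first few samples
			break
		}
		if len(s.Location) > 0 && len(s.Location[0].Line) > 0 {
			// The first location is the leaf frame of the stack trace.
			fmt.Println(s.Value, s.Location[0].Line[0].Function.Name)
		}
	}
}
```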
So, to set that up, let me walk you through a story; maybe you're familiar with this, and if not, you're about to be. What we're seeing here is a graph of our memory usage over time, and what we're seeing is that our memory usage grows, then suddenly drops, a new process is created, and the same thing happens over and over again. If you've seen a graph like this before, then you probably know what this is.
That could be because we've limited the amount of memory a process can use, for example with containers, or quite literally because our hardware has no more memory available, and so the operating system decides to kill this process rather than have the entire operating system crash. So this is what is commonly referred to as an OOM kill.
It's the operating system essentially protecting itself. But what's particularly annoying about OOM kills is that we tend to only figure out that they have happened after they have happened, and that means that after this particular moment in time, we really have to wait again, taking multiple memory profiles, to understand how this memory behavior has changed over time. Continuous profiling is really just the act of continuously taking these profiles over time, so that we can truly understand these things over time.
And this is not a fundamentally new thing; Google has written about this before. But what is somewhat new is that there's a movement of new open source projects being created in this space, and that's exactly what we did as well. In the very beginning, when I decided that I wanted to start a continuous profiling project, I was looking at the format. I strongly believe that for any observability signal (and I believe anything that allows us to better understand the operational aspects of our applications is an observability signal), what's really key is open standards. So I went and looked at the pprof standard, and it turns out that it's a really awesome fit for continuous profiling. The Go diagnostics documentation actually gives some of this away, and again, this is not particularly surprising, given that Google probably invented this format with exactly this in mind. So here are a couple of excerpts that I thought were really interesting.
When I was researching this, I found that the Go documentation says you may want to periodically profile your applications, save that data over time, and then potentially automatically analyze this information. Without actually saying it, Google has essentially described continuous profiling. And so, having decided on this format, we created the continuous profiling project conprof in the open source. Roughly speaking, it does exactly what we just talked about: it periodically scrapes these HTTP endpoints, and all the HTTP endpoints need to do is serve profiles in the pprof-compatible format; conprof then saves those into its special purpose-built time series database, which is based on the Prometheus time series database, and allows you to query that over time. Now, that is already sufficient for the OOM-kill scenario that we talked about earlier, but over the past couple of months, as we developed our product, we felt like there's so much more
that is possible with this really awesome format, and that we really should not miss out on. And there are three really key things that we thought were really interesting. The first one is the ability to merge all these profiles into one. The reason this is really cool and interesting is that it allows us to take all of these profiles, potentially even from different processes, and look at them holistically: essentially, get a single report of what is common across all my services, across all my processes. Sorry, it does have to be the same kind of process, so I can only compare all my API servers to each other; otherwise it would be a quite literal apples-and-oranges comparison. But that is really cool.
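A minimal sketch of such a merge, again using the github.com/google/pprof/profile package (the file names are arbitrary; as he says, the inputs should be profiles of the same kind, from the same binary):

```go
package main

import (
	"log"
	"os"

	"github.com/google/pprof/profile"
)

func main() {
	// Parse several profiles of the same kind, e.g. heap profiles
	// taken from replicas of the same API server.
	var profs []*profile.Profile
	for _, path := range []string{"heap-1.pb.gz", "heap-2.pb.gz"} {
		f, err := os.Open(path)
		if err != nil {
			log.Fatal(err)
		}
		p, err := profile.Parse(f)
		f.Close()
		if err != nil {
			log.Fatal(err)
		}
		profs = append(profs, p)
	}

	// Combine them into a single aggregate profile.
	merged, err := profile.Merge(profs)
	if err != nil {
		log.Fatal(err)
	}

	out, err := os.Create("merged.pb.gz")
	if err != nil {
		log.Fatal(err)
	}
	defer out.Close()
	if err := merged.Write(out); err != nil {
		log.Fatal(err)
	}
}
```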
And then the next one is that, now that we have this data over time, we don't need to pick specific timestamps anymore, and we don't need to go and manually obtain this data anymore. We already have it when we want to know what has happened from this timestamp to that timestamp.
We can go right ahead and query it. And then, ultimately, because we have all this data, there must be some more interesting analytics that we can do on it, and we'll talk about that. So essentially, what we used to have is metrics, where we could see the total heap usage of our processes; with continuous profiling, we can now actually understand what has happened between our timestamp 3 and our timestamp 2.
Previously, we could only say, yeah, my heap has changed by 1 megabyte; now we can actually go and say, tell me, in terms of my code, what has caused this behavior. These are questions that we were just fundamentally incapable of asking, because we just didn't have this data. And then, because I think it's visually so much more understandable how merges work…
…we can just do that streaming, and that's really powerful, because we can take all these potentially gigabytes or terabytes of data and combine them into a single report, representative of, for example, a single version of our application. These are just things that we weren't able to see before continuous profiling.
Now, the very last thing. As I mentioned in the beginning, I'm a Prometheus maintainer, and one thing that struck me really early was that all these types and units really sound a lot like metrics to me. And so one thing that I think is really cool that we can do with these profiles, because we have all these samples within profiles, is this:
we can extract metrics from them, and we can look at how our applications behave not only at a total level, and not only by diving into individual profiles because of a memory peak or something; we can see the change in behavior of our programs over time. Time series communicate so much more information at once than any individual sample could ever do.
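As a hedged illustration of that idea, one way to turn each scraped profile into a single time-series point is to sum one of its sample-type columns; "inuse_space" is one of the standard Go heap sample types:

```go
package main

import (
	"fmt"
	"log"
	"os"

	"github.com/google/pprof/profile"
)

// totalFor sums one sample-type column across all samples, reducing a
// whole profile to a single metric-like data point.
func totalFor(p *profile.Profile, typ string) int64 {
	idx := -1
	for i, st := range p.SampleType {
		if st.Type == typ {
			idx = i
		}
	}
	if idx < 0 {
		return 0
	}
	var total int64
	for _, s := range p.Sample {
		total += s.Value[idx]
	}
	return total
}

func main() {
	f, err := os.Open("heap.pb.gz")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	p, err := profile.Parse(f)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("in-use heap bytes:", totalFor(p, "inuse_space"))
}
```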
We can see why this is happening the way it is. And so, ultimately, all of this is about our ability to answer questions that we were never able to answer before, because we simply didn't have the data; and because we have all of this, we can actually do all these kinds of optimizations way faster than we could before.
So what I communicated here was the idea that I think is really cool and that we're certainly working on. And this is, I promise, the only slide with commercial content: as I mentioned, we launched yesterday with our invite-only private beta continuous profiling product; if you're interested in that, check out our website. And we also launched a kind of neat free profile sharing service, where you can take profiles that you may have taken manually and upload them.
Host: So actually, yes, we do have quite a lot of questions, and just so we get it on the recording, maybe I will ask the questions and then you can answer them one by one, because they're…
So the first question is from Peter, and I will not attempt to say the last name, I'm so sorry. How expensive is it for the Go runtime to answer the profile query (heap, goroutines, etc.)? Are some profiles cheap and some expensive?
Frederic: Yeah, I think that's a really awesome question, and yeah, there are certainly some differences depending on the type of profile.
The CPU profiles are certainly the most expensive ones, because they effectively halt your process to take measurements. Pretty much all the other ones in Go are not free, but they're very close to being free. So I would recommend continuously profiling everything, and specifically looking at whether CPU profiling has any impact on your infrastructure. And if it does, there are pretty simple sampling techniques that you can use, so that you, let's say, only profile five percent of your infrastructure; if profiling has, let's say, a one percent overhead overall, then that's a minuscule overhead that you're paying in total to have this kind of data.
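A minimal sketch of that sampling idea (the target list and the five-percent rate are purely illustrative):

```go
package main

import (
	"fmt"
	"math/rand"
)

func main() {
	// Hypothetical fleet; in practice this would come from
	// service discovery.
	targets := []string{"api-1", "api-2", "api-3", "api-4"}

	for _, t := range targets {
		// Scrape a CPU profile from ~5% of targets each round, so a
		// ~1% per-process overhead becomes ~0.05% fleet-wide.
		if rand.Float64() < 0.05 {
			fmt.Println("would collect a CPU profile from", t)
		}
	}
}
```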
So sampling is effectively the answer if it's too expensive, but in our experience we haven't actually had the case of even CPU profiling being overly expensive; it's been really in the thousandths of a core in terms of the price that we're paying for this. And at Polar Signals (I mean, I would not expect anything else from us) we literally profile every single process we have, so yeah.
Host: Okay, actually, because Peter had a few other questions, I'm going to put them together. How often would you recommend scraping profiles? How much data would it take to scrape heap and CPU profiles from 100 servers every 10 seconds? And where does conprof store profiles?
Frederic: So let me start with the last one. Conprof has its own kind of storage, but we've also built a distributed version of that storage, so conprof has two modes that you can run it in: one where it actually stores the data locally in its time series database, and one where it sends the data off to a remote storage. That remote part is also what we're offering as a product, but you can run that part yourself as well.
If you want to. And as for the data volume, you can think of it per profile: it really depends on how large your application is, but most profiles are under 100 kilobytes in size, and you can multiply that by your number of processes. So 100 kilobytes times 100 servers would be 10 megabytes every 10 seconds.
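Extending that back-of-the-envelope arithmetic: 10 megabytes every 10 seconds is 1 megabyte per second, or roughly 86 gigabytes per day for those 100 servers, before any compression or deduplication.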
But yeah, ultimately it's pretty cheap in terms of collecting it, and even storing it is not all that huge.
Host: Okay, there are two more questions. Arthur asked: is it possible to merge profiles with conprof, or only with the Polar Signals continuous profiler?
Frederic: All the APIs are available in conprof; it's just that in Polar Signals we've created a comprehensive UI experience around it. All of the functionality is open source, though, and you can totally hit the API endpoints to produce the profile and then look at it using the typical go tool pprof tooling.
I think that's a great question. I think it really depends more on the performance characteristics of your application. We have a couple of applications that are really tiny, just a few hundred lines of code, and we profile even those, because ultimately profiling comes down to the resource usage and the performance of that code; and as it turns out, most code can be optimized with just a few lines of code being changed. So it really doesn't matter in terms of size: any size of code base can benefit from profiling, for sure.
Host: Awesome, thank you so much. I think... oh no, I have another question, from Christian. What's your opinion on profiling client applications? Does it only make sense in CI tests, and does it make sense as telemetry?
Frederic: So with clients it's more complicated, but it certainly works. You could do the profiling locally and then, as I said, send it off to a remote storage; that could totally work. It would probably require a little bit more work than, say, the built-in Go standard library pieces, but it would definitely work. And then, I do think it makes a lot of sense to do this in CI, but it's not a replacement for also doing it in production. It's much like metrics, tracing, or logging: you really want to do all this stuff in CI, but you definitely also want to do it in production, because you're never going to be able to reproduce exactly the same kinds of situations as you have in production, and those are really the interesting ones.
If your CI is fast but your production is slow, well, that doesn't really help us, right? We want to know, in production: is our stuff using the amount of resources that we want it to, and does it have the right latency, for example? So I think you should do it everywhere, and we do do it everywhere.