What is an observability stack? Observability in general is the ability to infer the internal state of a system using the system's external outputs. These outputs can be metrics, logs, and traces: the three main data types of observability.
So, about me. Let me introduce myself: my name is Vineeth Pothulapati, and I'm a product manager at Timescale working on the observability application stream, primarily on Promscale and tobs. I'm also a maintainer of the OpenTelemetry Operator, so if you're already using the OpenTelemetry Operator, I'm always happy to hear your feedback. And if you're using the OpenTelemetry Collector in a Kubernetes cluster, you should definitely try out the OpenTelemetry Operator; it just eases your deployment and management of the OpenTelemetry Collector.

I also enjoy cycling and tasting whiskies, but not at the same time. So if you are based out of Hyderabad, India, I would definitely love to join you for cycling and whiskey tasting. You can reach out to me on Twitter or Slack.
A
So,
let's
see
what
are
the
data
types
in
observability?
So
first
comes
the
metric,
so
metric
is
all
about
trying
to
understand
the
state
of
something
using
the
metric
name
and
the
value.
So
here
you
can
see
that
metric
denotes
go
gc,
duration
seconds.
This
is
the
metric
name
and
the
value
is
the
float
value
0.0034.
A
So
it
can
be
anything
like
you
want
to
get
the
runtime
state
of
number
of
core
routines
running
in
your
application
or
go
gc
duration.
Second,
as
for
this
metric,
how
much
time
is
your
garbage
collector
taking
to
process
the
garbage
collection
or
you
can
see
number
of
threads
and
memory
being
utilized
so
metric
is
all
about
capturing
a
runtime
state
of
something
in
your
application.
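Concretely, a scrape target exposes these name/value pairs in the Prometheus exposition format. A snippet like the one below is the kind of output the slide's example comes from (the quantile label and the goroutine count are illustrative, not from the talk):

```
# HELP go_gc_duration_seconds A summary of the pause duration of garbage collection cycles.
# TYPE go_gc_duration_seconds summary
go_gc_duration_seconds{quantile="0.5"} 0.0034
go_goroutines 42
```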
So what are traces? I love traces, so I'll be a bit biased towards them. The image you see here is the Jaeger UI; it's the visualization of a trace. If you need to define a trace: a trace is nothing but a request lifecycle, basically how a request flows through your set of microservices and through the function calls in a particular service. If you think of an e-commerce site like Amazon, when you order something the request goes to the cart service and the payment gateway, and then you get an acknowledgement saying that the order was successfully placed. This involves the request traveling through multiple microservices, and in each microservice it travels through multiple functions. With a trace, you can understand what the request lifecycle looks like.
You can see where most of the time is being spent, the added latency and throughput in each and every part, just by looking at a trace. Here you can see that the duration of this trace was 18.54 milliseconds, the trace has spanned three services, and the depth is eight, which means there are eight spans in total. These bars are basically a parent span and its child spans; you can see there is a span which is consuming 12.94 milliseconds. If you hover over or click on a span, you'll find it also captures some metadata about this particular request lifecycle, which lets you analyze further into the trace.
Those are traces at a high level. So what are logs? I think logs don't need any special introduction, because that's the first place where any kind of instrumentation or observability starts today. These are logs from the OpenTelemetry Collector. Basically, logs help you understand the current action being performed in your application; you can emit error, debug, and info logs to understand what exactly is happening. This is the first step of your observability, and today we will not be discussing logs much.
A
Our
primary
focus
will
be
on
metrics
and
traces.
So,
let's
get
back
to
the
title,
the
full
observability
stack.
Yes,
I
mean
so
I
mean
in
the
title.
If
I
deploy
a
full
cncf-based,
observability
stack.
So
yes,
it's
the
full
observability
stack
in
under
five
minutes.
Yes,
am
I
serious?
Yes,
I'm
serious
about
it.
So,
let's
see
what
we
have
in
this
presentation,
so
it
supports
complete
metrics.
It
support
complete
traces
and
logs
just
needs
an
external
storage
system.
So, the definition of tobs: tobs is a CLI tool and a Helm chart that aims to make it as easy as possible to install a full observability stack in your Kubernetes cluster. You can use either the CLI tool or the Helm chart to install this observability stack; it's totally your preference. If you want to check out more about tobs, see github.com/timescale/tobs. Now let's discuss what tobs includes; we'll just go layer by layer.
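Both install routes are a handful of commands. Here is a sketch, assuming the chart repository and names documented in the tobs README at the time of the talk; verify them against the current docs before running:

```shell
# Option 1: the tobs CLI
tobs install

# Option 2: plain Helm
helm repo add timescale https://charts.timescale.com
helm repo update
helm install tobs timescale/tobs
```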
First, let's discuss the exposition layer. In observability we definitely need components which extract metrics from the targeted resources. Here we have node-exporter, to scrape the node metrics from all the nodes running in your Kubernetes cluster (basically from the kubelets), and you have kube-state-metrics, to scrape the Kubernetes object metrics from the kube API server.
This gives you an overall understanding of the state of your Kubernetes cluster, and by default tobs includes both node-exporter and kube-state-metrics for you out of the box. Next, the visualization layer: tobs includes Grafana, and you can use Grafana for visualizing anything, like metrics, logs, and traces. You can use multiple data sources to query with your preferred language, like SQL, PromQL, or the filtering mechanism that Jaeger offers. We also deploy Jaeger.
Jaeger is the CNCF tracing solution; from Jaeger we just use Jaeger Query to visualize the traces. If you are already using Jaeger, this is already covered for you, and you can keep using it for trace visualization as well. So we have seen the exposition layer, that is, how the specific targeted metrics get extracted from the nodes and the kube API server, and we have seen the visualization layer with Grafana and Jaeger. And here we have the collection layer.
How does the data get collected? Firstly, we have Prometheus. Prometheus is a graduated CNCF project, a monitoring and alerting system for your services. I will not go deep into the project, but I hope you are already aware of it. If you are new to Prometheus and OpenTelemetry, you should definitely check them out, because observability today without these tools is close to impossible.
I should say. So Prometheus basically helps you scrape metrics from targets like node-exporter, kube-state-metrics, or your custom business applications which you have instrumented using the Prometheus client libraries. You can also push metrics to Prometheus, and Prometheus supports a remote-write backend: from Prometheus we do a remote write to Promscale, which we'll be seeing in the later slides. So basically, Prometheus supports scraping the metrics from the targets.
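In prometheus.yml, that remote-write (and remote-read) hookup is just a few lines. This sketch assumes Promscale's documented default service name and port; adjust both for your deployment:

```yaml
remote_write:
  - url: "http://promscale-connector:9201/write"
remote_read:
  - url: "http://promscale-connector:9201/read"
    read_recent: true
```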
Prometheus also has an in-house storage engine, so it supports storing the metrics within Prometheus itself. But if you would like to store them for longer durations, or aggregate the metrics coming in from different Prometheus instances, you can use remote-write systems like Promscale. And now, coming to OpenTelemetry.
OpenTelemetry is the second most active project in the CNCF, after Kubernetes. OpenTelemetry includes so many pieces: the instrumentation layer, the Collector, and even the OpenTelemetry Operator. Here, though, when I say OpenTelemetry I mean the OpenTelemetry Collector, which means that if you have instrumented your application with tracing client libraries, you can just forward the traces to the OpenTelemetry Collector.
You can configure receivers like Jaeger, Zipkin, or OTLP, so all kinds of tracing instrumentations can be connected to the OpenTelemetry Collector, and all the traces can be forwarded to it. In the Collector you also configure exporters: we will configure an OTLP exporter to forward the traces from the OpenTelemetry Collector to Promscale, since the Collector doesn't support storing traces for future visualization and analysis.
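A minimal Collector configuration along those lines might look like the following. The receiver set matches the ones named above; the Promscale endpoint address and port are assumptions, so check them against your deployment:

```yaml
receivers:
  jaeger:
    protocols:
      grpc:
  zipkin:
  otlp:
    protocols:
      grpc:

exporters:
  otlp:
    endpoint: "promscale-connector:9202"  # Promscale's OTLP gRPC ingest (assumed port)
    tls:
      insecure: true

service:
  pipelines:
    traces:
      receivers: [jaeger, zipkin, otlp]
      exporters: [otlp]
```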
So this definitely needs a backend to store the traces, and here comes Promscale, which is, I should say, the powerful component of the observability stack, because it helps you process the data and also gives you long-term storage. In observability, the data just keeps coming in every five to ten seconds and gets ingested, and you need to process that data and visualize it.
When I say process, I mean, for example, that you want to downsample it or correlate it. All of this data sits in Promscale; this is the storage layer for all the observability data we are discussing in this presentation. So, a detailed overview of this stack.
Basically, we have discussed the exposition layer, the visualization layer, the collection layer, and the storage layer; here we're just listing everything on one slide for easier understanding. The complete tobs Helm chart is a combination of multiple Helm charts, and here you can see we are using kube-prometheus, the Kubernetes monitoring stack offered by the Prometheus community.
It includes Prometheus to collect the metrics and Alertmanager to fire the alerts. In kube-prometheus, Alertmanager comes with default alerting rules for your Kubernetes cluster and node-exporter, which means that any incidents, anomalies, or anything causing issues in your cluster will automatically be alerted on through Alertmanager, using the out-of-the-box alerting rules offered by kube-prometheus. There is Grafana to visualize what's going on, and you can also alert through Grafana. We have node-exporter to export the metrics from your nodes, kube-state-metrics to get metrics from your Kubernetes API server, and the Prometheus Operator to manage the lifecycle of Prometheus and Alertmanager: it uses custom resource definitions to deploy and manage them. So if you are using Prometheus in a Kubernetes cluster, you should definitely check out kube-prometheus, because it comes with the Prometheus Operator.
It eases the management of Prometheus for you in the Kubernetes world. Then there is Promscale, to store metrics and traces in long-term storage; it allows you to analyze the stored data using both PromQL and SQL. And PromLens is a tool to build and analyze PromQL queries with ease: many users may not be aware of PromQL, or will have a hard time building complex queries with it, so PromLens is a tool which helps you build those queries.
It lets you build these queries with much more ease, and by default tobs includes PromLens to make your life easier while working with PromQL queries. We also have the OpenTelemetry Operator to manage the lifecycle of the OpenTelemetry Collector. The same as the Prometheus Operator, the OpenTelemetry Operator manages the OpenTelemetry Collector using custom resources, so it just makes installation, managing, and upgrading everything easier with the operator. And there's a recent addition in the OpenTelemetry Operator.
We have also added support for auto-instrumentation, which means you just create the Instrumentation CR and add an annotation to your deployment, saying inject-java: true, and then the OpenTelemetry Operator automatically injects the instrumentation sidecar for your Java, Node.js, and Python applications. So without any code changes you can achieve auto-instrumentation for your applications, using the OpenTelemetry Operator's auto-instrumentation feature. You should definitely check it out.
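Roughly, the two pieces look like this. The annotation key follows the operator's documented convention; the CR fields are abbreviated, the Deployment is trimmed to the relevant part, and the endpoint is an assumption:

```yaml
apiVersion: opentelemetry.io/v1alpha1
kind: Instrumentation
metadata:
  name: my-instrumentation
spec:
  exporter:
    endpoint: http://otel-collector:4317
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-java-app
spec:
  template:
    metadata:
      annotations:
        # Ask the operator to auto-instrument this pod's Java process
        instrumentation.opentelemetry.io/inject-java: "true"
```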
That's really, really interesting: you can get observability, the traces for your business applications, with zero code changes. The traces are just emitted, and the sidecar forwards them to the OpenTelemetry Collector. And we have Jaeger Query to visualize the traces, so you can use either Grafana or Jaeger Query; it's just a preference and a choice for visualizing traces. And what is Promscale?
So, let's see the complete overview of Promscale. Promscale is an observability backend powered by SQL. It supports unparalleled insights; when I say unparalleled insights, I mean it uses one database to store all the observability data, here the metrics and traces. You can also store your business data in the same system, which means you have all the data sitting in one database, and you can correlate all these different data types at a specific window in time.
So it gives you the ability to do all kinds of analytics and processing in a specific time window; SQL offers pretty much anything, so the sky is the limit for you if you are using SQL as the query language. And it has a proven foundation: it's built on the petabyte-scale foundation of TimescaleDB and PostgreSQL, which means it also supports advanced database features like high availability, replication, compression, and many more, so you're fully covered on the reliability of the database layer. And it's easy to get started with and use.
Trust me, this is the major differentiator for Promscale, because you need not worry about how to run and manage this observability stack or the long-term storage system, whereas with other solutions you would be running tens of microservices, and installing, upgrading, and scaling them just causes pain.
So, on the left you can see Prometheus, which can do remote write to and remote read from the Promscale Connector, and we have the OpenTelemetry Collector, which uses the OTLP gRPC endpoint to ingest the traces into the Promscale Connector. If you're not using the OpenTelemetry Collector, you can also instrument your application directly using the OpenTelemetry client libraries and forward the traces straight from your application to the Promscale Connector; that's totally possible. And coming to Promscale itself.
Promscale is a combination of two components: one is the Promscale Connector, which is stateless, and the other is the TimescaleDB database. All you need is these two components running and you're fully covered for the storage of all your observability data. You don't need multiple systems or two different stacks to manage traces, metrics, and everything else; in Promscale, all the data sits in one system. And coming to the visualization layer.
We have the Jaeger UI to visualize the traces from the Promscale Connector, and we have Grafana to query the metrics from the Promscale Connector using PromQL. On a side note, the Promscale Connector has 100 percent compatibility support for PromQL queries. You can also use Grafana to query with SQL directly from TimescaleDB: you can use SQL on all the data stored in TimescaleDB, and any tool that speaks SQL should just work out of the box with it.
So this is the visualization layer for Promscale, and if you want to check out more on Promscale, feel free to open this link: tsdb.co/promscale. Now let's discuss the features offered by Promscale at a high level. These are just the top-level features; we have many more getting cooked and developed in Promscale today, and this list just grows in the coming days.
We have full SQL and analytics support on your observability data, which means all the data you are sending to Promscale can be queried with full SQL support, and you can also run analytics on it using the analytical functions offered by TimescaleDB. And we have storage support for both metrics and traces: as I said, with the other solutions in the market, or the other open-source solutions, you need two different systems to store and process metrics and traces.
Whereas with Promscale, all you need is one system, Promscale itself, for storing metrics and traces, so it's easy to run and manage for you. We also offer high availability for Promscale: with Prometheus, you can just use Prometheus's external labels to leverage high availability from Promscale. Even multi-tenancy is offered in Promscale, using tenant IDs.
If you have multiple Prometheus instances sending metrics to Promscale, you can just attach a tenant ID to each, and the data is separated out between tenants. We also support exemplars in Promscale: if you have instrumented exemplars with the Prometheus client libraries in your application, Prometheus scrapes the exemplars and does a remote write to Promscale, and we store them for future analysis. And there are continuous aggregates for the metrics in Promscale.
If you are already using TimescaleDB, you should already know continuous aggregates. Continuous aggregates are TimescaleDB's take on downsampling; much more than downsampling, really, and much more accurate. We do support continuous aggregates for the metrics stored in Promscale. And here is another interesting feature, per-metric retention; many users love it.
You can apply retention on a per-metric basis: the metrics you are interested in storing long term get stored for a long period of time, and the other metrics get dropped based on your retention policies and your preference. That's totally possible with Promscale. And you can also ingest your own time-series data alongside the Prometheus data.
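Per-metric retention is applied on the Promscale side with a SQL call, or through the tobs CLI. The function name below follows Promscale's documentation at the time of the talk, so treat it as something to verify against your version:

```sql
-- Keep this one metric for 90 days; other metrics follow the default retention policy.
SELECT prom_api.set_metric_retention_period('go_gc_duration_seconds', INTERVAL '90 days');
```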
Promscale accepts a JSON streaming request format: all you have to do is make a POST request to Promscale, and all of this time-series data of yours will be stored alongside the Prometheus data, which means it gives you the power of querying this time-series data using both PromQL and SQL. That's totally possible. So if you have any legacy systems emitting metrics, you should just try out this streaming ingest endpoint of Promscale.
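Sketched as a curl call, that ingest might look like the following. The payload shape and endpoint follow Promscale's documented JSON ingest format only approximately, so confirm both against the docs before relying on them:

```shell
curl -X POST http://promscale-connector:9201/write \
  -H "Content-Type: application/json" \
  -d '{"labels": {"__name__": "legacy_sensor_temperature", "site": "hyd-1"},
       "samples": [[1640000000000, 42.7]]}'
```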
And this is the internals of tobs. We have seen what Promscale is, so now let's get back to the tobs side of the house. We have the tobs CLI, which basically installs the tobs Helm chart into the Kubernetes cluster, and the tobs Helm chart is a combination of all these Helm charts: kube-prometheus, Promscale, TimescaleDB, and the OpenTelemetry Operator Helm chart. So tobs is basically a super Helm chart which combines all these Helm charts under the hood. And this is the tobs architecture.
It looks complex, but trust me, you are just one command away from deploying the stack and configuring all these components; tobs does all the heavy lifting for you. It's pre-configured and pre-baked, and all you have to do is deploy it and start using the stack. Here comes the kube-prometheus stack: the box you see here includes kube-state-metrics, node-exporter, Alertmanager, Prometheus, the Prometheus Operator to manage the kube-prometheus stack, and Grafana. And here comes Promscale itself.
That is the Promscale Connector and TimescaleDB, and we have PromLens to help you build PromQL queries. And here comes the tracing stack: we have the OTel Operator, and the OTel Operator has a dependency on cert-manager, so we do deploy cert-manager for the OpenTelemetry Operator; and there is the OpenTelemetry Collector, plus Jaeger Query to visualize the traces in Jaeger.
So if your business applications are instrumented with traces, all you have to do is configure the OpenTelemetry Collector as the endpoint to forward the traces to; your services will forward the traces to the OTel Collector, and the OTel Collector will forward them to Promscale. And here comes Prometheus: Prometheus scrapes the /metrics endpoints of all your services, which means it just scrapes all the metrics from your applications, and the metrics from Prometheus get forwarded to Promscale.
Yeah, I just have the kube-system pods, so the cluster is basically empty. Let's see what the tobs CLI has to offer for us.
tobs basically has these subcommands: grafana, to do Grafana operations like get-password, change-password, and port-forward; and helm, basically to do Helm operations like showing the values of your tobs Helm chart, because the core component of the tobs architecture is Helm, so we have some Helm operations to give you ease in dealing with the tobs CLI. We support installation of the observability stack using the install command, and we have jaeger to perform Jaeger operations like port-forward. We have metrics to do metric operations, like applying per-metric retention directly from the CLI and configuring the chunk interval of TimescaleDB for metrics. And there is port-forwarding for TimescaleDB, Promscale, PromLens, Grafana, Prometheus, and Jaeger to localhost.
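A few of those subcommands in sketch form. The names follow the talk; the exact flags and arguments may differ by tobs version, so check tobs --help:

```shell
tobs grafana get-password                  # print the Grafana admin password
tobs grafana change-password <new-password>
tobs port-forward                          # forward the deployed components to localhost
tobs jaeger port-forward                   # just Jaeger
```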
So all the components deployed by tobs can be port-forwarded to your localhost using the port-forward subcommand. We also have prometheus for Prometheus operations, promlens, promscale, and timescaledb for TimescaleDB operations. With the timescaledb subcommand you can do get-password and change-password for the database, and you can also do connect, which means you can get into a psql prompt right from your shell; you don't need to exec into the pod of the database.
All you have to do is run tobs timescaledb connect and it just connects to the database for you. How cool is that? You don't need to find the TimescaleDB pod and try to understand which secret is mounted to it, then capture the secret, whose password string is base64-encoded, decode it, and then exec into the pod and run psql with the password.
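For reference, the decode step in that manual dance looks like this; the encoded string here is a made-up example, not a real password:

```shell
# Kubernetes Secrets store values base64-encoded; decoding one by hand:
encoded='c3VwZXItc2VjcmV0'
printf '%s' "$encoded" | base64 -d   # prints: super-secret
```

With tobs, all of that collapses into the single connect subcommand.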
So it's a bit cumbersome. Instead, all you have to do is run tobs timescaledb connect and you just get connected, and it's the same with the other commands as well: they just make your life easier while managing the observability stack. Now, let's install the stack.
Okay, let me do kubectl get crds; I just wanted to check that my cluster is in the state I expect it to be. Yes, that's the way I want it to be. Now let's do the tobs install. I'm running tobs install --tracing, because today the tracing support in Promscale is in beta; in a few weeks we'll be announcing tracing GA, which means tobs should then also install all the tracing components by default.
At the moment, you need to enable it explicitly by passing the --tracing flag. So I just hit enter, and the installation is running, my fingers crossed for the demo to work. It asks you for confirmation: cert-manager is required to deploy OpenTelemetry; do you want to install cert-manager? As I said, the OpenTelemetry Operator has a dependency on cert-manager, and as cert-manager doesn't exist in the cluster, it's asking for confirmation.
If cert-manager already exists in the cluster, tobs just skips the cert-manager installation. So I answer yes, and the installation proceeds. In the meantime, we can look at my previous installation to see what the stack actually contains. Here you can see that I have another stack which was running, so I'll just show you the components the stack deploys by the time this one finishes. Right now the time is 6:14 pm my time.
So let's see whether the stack gets deployed in less than five minutes, as the title says. Here we have TimescaleDB, the pod for the database, and we have Promscale, PromLens, and the Prometheus node-exporter. As this is a three-node cluster, node-exporter is deployed as a DaemonSet.
Okay, so I have another environment with all the dashboards I wanted to show you, built using SQL. These dashboards are not pre-configured in tobs at the moment, but in future releases we will pre-configure these dashboards in tobs as well, so out of the box you'll have them configured for you. Before we jump into these dashboards, I want to check the state of the cluster. It's still getting installed, so we'll give the stack a few more minutes to be up and running.
Basically, these are dashboards built on top of traces. We have traces coming from the Hipster Shop demo application; these traces are stored in Promscale, and we are using SQL to query all this data on top of the traces. You can see that the p99 latency across all the traces is on average 173 milliseconds, the throughput is 6.82 requests per second, and, interestingly, there is no error rate, which is great. And we have the p99 response time here.
That is the p99 response time for each service: you see here we have the recommendation service, the currency service, the email service, and if you just hover over it you can see that the cart service's response time is ranging up to two seconds, which is not good. So that's the p99 response time, and here you can see the heat map of the trace duration for all the traces, aggregated.
So that's one dashboard I wanted to show you. In the meantime, we'll just check the status of the stack. So yep, it's 6:18 now, so I reckon it has been four minutes from the time we deployed the stack. Let's do kubectl get pods. The observability stack is deployed in four minutes, though Grafana is in CrashLoopBackOff.
Let's give it ten more seconds to get up and running; it's dependent on TimescaleDB. But in the meantime, we can see that we have TimescaleDB deployed, Promscale, PromLens, the Prometheus node-exporter, the OpenTelemetry Collector, kube-state-metrics, and the Prometheus Operator.
Okay, so we log in with admin and the password which we copied, and we are logged in. This is Grafana, and we have dashboards pre-configured in tobs through kube-prometheus, which internally uses the Kubernetes mixins. Let's navigate through them. These are the dashboards which are pre-configured by kube-prometheus; you can just get into Node Exporter / Nodes to understand the node metrics.
As the stack is just five minutes old, the data is just getting filled in, but here you can see the CPU usage and load average for the node. Let's give it some time, and in the meantime we can check the data sources. In tobs, by default, the data sources are configured for you out of the box. Here you see the Prometheus data source, which is configured to use Promscale.
So here we have Promscale as the Prometheus data source, to run PromQL queries; we have Promscale-SQL, which is a PostgreSQL data source, to query TimescaleDB using SQL; and you have Promscale-Tracing, a Jaeger data source, to query traces from Promscale. These are the three data sources which are pre-configured for you by tobs.
And now we can just go to the Prometheus dashboard to see the data. Here you can see the Prometheus stats, uptime, and so on, and here you can see the scrape targets: it says it has more than 750 targets at the moment, 810 to be precise, and the average scrape interval is one minute.
For scrape failures there is no data yet, and it's appending samples; the head series count is 59,000 at the moment. So these are some of the metrics, and all these dashboards are available out of the box for you if you are using tobs. This is the Grafana visualization from the tobs stack we have deployed, and we have also seen how to capture the password using the tobs CLI. Now let's get back to the SQL dashboards which I've built for this demo in another environment.
Here you can see the service performance dashboards; even these are built on top of traces, using SQL as the query language. Coming to the first panel here, you see the operations with the highest error rate in the last 24 hours. This is the service name and this is the operation: in the frontend service, the checkout operation has 2,121 spans with errors out of 3.83k total spans, and the error rate is 55.4 percent.
So each and every operation row shows the error rate per API, for the operations with the highest error rates in the last 24 hours. That's what this data is.
So the frontend cart checkout has 55.5 percent errors; as we have seen, it's the same error rate for the same operation here, and you can see the whole list of APIs which have error rates. And here you can see the slowest operations in the last 24 hours: the frontend service has a p99 latency of 28.9 seconds, which is not good, and here we also have the p99.9 latency, which is about the same again, and it's similar for the product ID operation.
Likewise there, the p99 is almost 30 seconds, which is not good. And here are the slowest operations in the last 24 hours: the health check for the cart gRPC service is taking approximately 1.64 seconds. The health-check endpoint taking this much time is not good. So it just surfaces all these kinds of anomalies. And, for example, we can just jump into the SQL query we used to build this dashboard; it's as easy as that.
This is a nested SQL query: you're just doing a SELECT and then applying a little casting of the data to visualize it in Grafana, and there's another nested query inside. So it's just about ten lines of SQL for you to get these kinds of insights. Now let's jump into another interesting dashboard I have here to demo, the service dependencies, for when you want to understand what the dependencies of a service are.
So
you
have
a
client
service
as
front
end
check
out
service
and
they
are
dependent
on
ad
service
here
and
basically
the
front
end
client
is
calling
the
ad
service
2715
times
and
the
specific
api
is
get
ads,
so
the
total
execution
time
is
1.03
seconds.
So
how
cool
is
that?
You
just
know
the
service
dependencies
for
each
and
every
application
by
by
processing
the
traces,
and
here
you
can
see
another
interesting
thing.
The
front
end
is
calling
the
product
catalog
twenty
000
requests.
So
this
is
definitely
not
great,
so
you
should.
You should definitely dig into it, starting by looking at the number of requests the product catalog service is getting. So it just gives you these deeper insights into your applications: how many invocations are happening per API, and which client, which source, these requests are getting invoked from. And this is the heat map of the trace duration, which we have seen on the other dashboard. And here are the slowest traces: you can see the start time, the trace ID, the service name, and the operation.
So basically, the slowest trace is from the cart service, with a duration of 1.98 seconds, and these are its resource tags. We'll just see the SQL query used to get this; the SQL query is not more than eight lines, I should say. It's just a SELECT: we select the start time, and we replace the special characters in the trace ID with an empty string (that's the trace ID here), and we select the service name.
A
You also select the span name and the duration in seconds, and you convert the resource tags into JSONB for easier visualization here in the table. Then comes the FROM clause: there's a span table, and we query all this data from the span table where the parent span ID is null. Basically, a trace is identified by its parent (root) span when we look at the trace data model, so we are just saying: capture all the parent spans, because they denote the traces, and limit them to 100.
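Put together, the query described above looks roughly like this (column names follow the narration; the exact names in the demo's Promscale schema may differ slightly):

```sql
-- Slowest traces: every root span (parent_span_id IS NULL) denotes a trace
SELECT start_time,
       replace(trace_id::text, '-', '') AS trace_id,  -- strip the special characters
       service_name,
       span_name,
       duration_ms / 1000.0             AS duration_seconds,
       resource_tags::jsonb             AS resource_tags  -- JSONB renders nicely in the table panel
FROM   span
WHERE  parent_span_id IS NULL
ORDER  BY duration_seconds DESC
LIMIT  100;
```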
A
So it's as easy as it looks. The SQL way of querying your observability data is a new approach, and it's really easy. The sky is the limit: you can correlate data and build these kinds of dashboards as per your requirements. It's just easy, and it offers tremendous value for understanding your services.
A
So again, here you can see that it's ingesting roughly 4,500 samples per second. These are the info logs of Promscale, stating the rate at which it's ingesting samples at the moment. And as the final demo, we'll deploy the sample microservices so that we can see how the traces come in.
A
Okay, so I'm just deploying a bunch of microservices which emit their traces to the OpenTelemetry Collector, and the OpenTelemetry Collector forwards those traces to Promscale. Then you can see in the Promscale logs that it's ingesting X number of samples per second. This might take a couple of minutes, as the pods need to get up and running. I think they're almost up and running now.
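For reference, the deployment step amounts to something like the following; the manifest path and deployment name are illustrative, not the exact ones used in the demo:

```shell
# Deploy the demo microservices; they emit traces to the OpenTelemetry
# Collector, which forwards them to Promscale
kubectl apply -f microservices-demo.yaml

# Watch the pods come up, then confirm ingestion in the Promscale logs
kubectl get pods -w
kubectl logs deploy/tobs-promscale | grep -i ingest
```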
A
So here you can see that it's already ingesting spans, like five or eight spans per second, so even the traces are getting ingested into Promscale now that we've deployed the demo microservices. Here you can see 2,000 samples, and all the samples and spans that are being ingested.
A
And to learn more, you can find all the resources in the slides. This link is not correct, that's my bad, sorry: it's a PromCon talk link I added here, and I should replace it. But you can find these slides in the description; I'll share them within the CNCF webinar. And the observability stack for Kubernetes can be found on GitHub.
A
You can find the tobs GitHub repo via the tsdb.co short link, and the Promscale GitHub repo at github.com/timescale/promscale. You can also find the Promscale blog posts at this link, and if you are interested in discussing tobs and Promscale further, join us in the TimescaleDB Slack, in the #promscale channel. Currently, we are also rethinking the tobs architecture to support GitOps and infrastructure as code, since each enterprise has its own way of deploying components into its infrastructure. So if you have any thoughts, suggestions, or feedback on tobs, you should definitely reach out to us
A
on Slack. We would love to have a quick call with you to understand your use cases and requirements, to better shape the future of tobs. tobs will get some architectural revamping and changes in the near future, which should make it even more powerful and easier to use for deploying the observability stack.
A
So right now you see it as one command away, but we will extend this ease of deployment to different architectures and infrastructures, like GitOps and infrastructure as code. We are still exploring that side of tobs, so your feedback is definitely valuable for us; you can reach out to us in the TimescaleDB Slack.
A
So here we have the tobs project and a quick-start guide on how you can install the CLI and get the stack up and running; it's the same command we ran earlier to get tobs up and running. And we have Promscale: here is the Promscale repo for you, so you can learn more about Promscale there. We also have the Timescale docs.
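For reference, the quick start boils down to two commands; the install-script short link below is the one from the tobs README at the time of this talk, so double-check the repo for the current version:

```shell
# Install the tobs CLI
curl --proto '=https' --tlsv1.2 -sSLf https://tsdb.co/install-tobs-sh | sh

# Deploy the whole stack (Prometheus, Promscale, TimescaleDB, Grafana,
# OpenTelemetry Collector) into the current kubectl context
tobs install
```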
A
So if you are interested in getting started with Promscale, you should definitely check out the Promscale docs on the TimescaleDB website. They give you more details about the Promscale architecture, along with some high-level information on how the schema for observability data is designed in the relational database, and on installing tobs, with all kinds of examples for you. And just a heads-up: we recently launched the Promscale logo, and I'm very excited about it.
A
So I just wanted to share it with you; you'll be seeing more of Promscale and this logo in all the future talks at CNCF webinars and on other platforms. You can also check out the blog posts from the observability team at Timescale on the Timescale website. Basically, the Timescale website holds all the interesting blog posts on TimescaleDB and observability. You can also check out some crypto-related blog posts,
A
on how crypto data is stored in TimescaleDB. And if you are interested in observability, you should check out the observability filter on the blog. We recently published a post on how to turn Timescale Cloud into an observability backend with Promscale, which you should definitely check out. It explains how you can install tobs and Promscale while putting all the data into Timescale Cloud.
A
So the whole storage layer will be offloaded from your Kubernetes cluster to Timescale Cloud, and it just works out of the box for you. This is the architecture: you will have all the tobs components, including the Promscale connector, in your cluster, but the database will be offloaded to, and run in, Timescale Cloud. That gives you all the major features, like ease of operations when scaling compute, disk, and everything else.
A
So Timescale Cloud has some amazing features, and we do have a 30-day free trial if you are interested in checking out Timescale Cloud for storing all your observability data. Do check out this blog post if you are interested in getting started with tobs and Timescale Cloud. And there are other blog posts here as well, if you are interested in, for example, how to downsample metrics in Promscale, or what traces are and how SQL helps you get deeper insights from them.
A
And we are hiring! If you are interested in joining Timescale, whether on the TimescaleDB side (the database side of the house), Timescale Cloud, or in the observability group, feel free to check out our careers page or reach out to us on Slack. We would love to have you as part of our Timescale team. Thank you, and see you in future talks from Promscale and Timescale.