From YouTube: TGI Kubernetes 052: Instrumenting with Prometheus
Description
Come hang out with Kris Nova as she does a bit of hands on hacking of Kubernetes and related topics. Some of this will be Kris talking about the things she knows. Some of this will be Kris exploring something new with the audience. Come join the fun, ask questions, comment, and participate in the live chat!
Okay, hi everybody, and welcome to TGIK. I'm your host, Kris Nova. It is really hard for me to talk today. I was losing my voice, but I've been drinking a lot of Heptio tea to help get my voice ready, and I've been trying not to talk all day long, just for TGIK today. So anyway, welcome, and happy Friday, everyone. I'm back in Seattle. It feels really good to be home, and it feels even better to be back here at the office doing TGIK for everyone.
So today we're going to be talking about instrumenting a code base with Prometheus, but before we do that, let's do everybody's favorite part of the week, and I'm gonna go back and see what everyone's been saying in chat really quick. Okay, Sola Madi!
First in line, as per usual. Good to see you. Lou Matty: I love Matty. You'll be happy to know that I've been using some of your tea, or some of your honey in the tea that I've been drinking, in my Heptio mug, and it's been helping a lot with my throat condition right now. Anyway, Suresh, hi from Hamburg. Mateus, hello. We have George on the Heptio handle, and I might talk there some if my voice starts to get any worse.
A
We
have
software
groups,
hello,
hello,
from
Lulu,
Finland,
hello,
Elise,
eonni,
o
nadir,
moody,
Roy
from
Toronto
Brian
else.
I
just
saw
Brian
earlier
this
week,
hi
Chris
Brian
I
got
super
sick
on
a
plane
and
it's
been
really
hard
to
talk
for
the
past
two
days
since
I
last
saw
you
Craig
Nicholson,
says
love
it
I'm.
Assuming
he's
talking
about
to
my
joke
introduction
people
commenting
on
silent
first
time,
you're
here
welcome
Nicholas,
it's
good
to
see
you
thanks.
Moody
I
actually
feel
pretty
good.
It's
just
my
throat
is
really
sore.
A
So
it's
just
really
hard
to
talk.
"/" says hello from Montreal. Glen's test: the slides look like real life. Oh, I lost my spot. Harshal, hello. Mateus, or, he said hi to you. Let's see: hello from London, hello from Velocity, hi from New York City. Oh hi, Dan, good to see you again. Hello from Paris, hello from... happy Oktoberfest. Anyway, that's a lot of chats. It's good to see everyone.
We got a lot in store today. I was busy all day yesterday trying to get some good example applications and example instrumentations put together for us today, so that should be a lot of fun. So let's jump right into the HackMD here. So George, did we already share the HackMD link up in chat? I think we did. Yeah, it's up there. Anyway, if folks want to contribute or add any of their own "What's new in Kubernetes this week" to the bottom of the TGIK doc here, feel free to do that, but we're going to start off here with instrumenting Go with Prometheus.
So I wanted to call this one out: if you actually go and you search the internet for instrumenting a Go application for Prometheus, this is pretty much the best blog that I found. So hopefully this blog, plus what this episode's about, will be able to give folks kind of an idea of what it's like to first get your application bootstrapped with Prometheus instrumentation, then actually go in and start crafting your own custom metrics, and ultimately getting those onto a graph.
So that's kind of the prompt for today. Okay, George added his HackMD URL again. Thanks, George. Okay, so we'll be looking at this a little bit later, and we're actually gonna go through... this one here is, like, the example of adding your own custom metrics. So we'll try to get this up and running. But you may want to go ahead and bookmark this if you want to follow along at home.
Okay, so next I wanted to point out Joe's Prometheus TGIK that he did earlier this year. The assumption of today's episode is kind of picking up where Joe's left off, which was getting Prometheus up and running. The only big fundamental difference between what we're doing is that Joe's using the CoreOS Prometheus operator, and I have all the raw YAML, and we're gonna kind of go through things on my end a little bit more granularly, because we all know that I like to kind of break things down to the raw components, so we can see where the rubber meets the road.
But if you haven't watched this one, it might be a good idea to go and at least watch it at 2 times speed before you come back and watch this episode, so you get caught up with how Prometheus works and all the different components and how they all fit together.
The reason that I like this is because a lot of these autoscaling primitives are ultimately going to talk to the Cluster API, or at least that's been the word on the street. But this is good because it'll go through and actually tell the difference between the horizontal pod autoscaler, the vertical pod autoscaler, and then the cluster autoscaler, which will actually audit and scale infrastructure underneath your Kubernetes cluster for you. So this is good if you want to check up on all the different types of autoscaling.
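As a rough illustration of the first of those, a horizontal pod autoscaler is declared like the manifest below. This is a minimal sketch; the deployment name and thresholds are invented for the example, not taken from anything in the episode.

```yaml
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: example-app            # hypothetical target deployment
  namespace: default
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: example-app
  minReplicas: 1
  maxReplicas: 10
  targetCPUUtilizationPercentage: 75
```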
One of the big things of Kubernetes, this new hotness, is it's supposed to be super scalable, and with these three types of tools you can actually go and scale your cluster. Mike says hi from New Jersey. Hi Mike, good to see you, and thanks for joining, everyone. So let's see what's up next: we have step-by-step CRDs. So this was an interesting blog that I kind of skimmed over a little bit before the episode. I love the idea of advertising that CRDs are flexible,
A
That
folks
can
be
using
them
and
anytime
I
see
like
a
hands-on.
You
can
go
through
them
before
these
steps
and
you
run
this
code
and
it
will
work
for
you.
I
think
that's
a
really
great
way
for
folks
to
learn,
but
this
is
basically
like
a
step
by
step
go
through
and
it's
got
this
github
repo
here
called
harbor
kubernetes
project
initializer
tutorial
and
you
can
actually
go
and
download
code
and
actually
see
how
a
CID
exists
in
code
land.
In
fact,
let's
just
go
ahead
and
pull
this
up
really
quick.
So, yeah, how it works: a CRD... oh, there's a typo there, if anybody wants to contribute to the project. A Project is created, and it uses initializers to trigger creation of sub-namespaces and grants a user access with RoleBindings and an RBAC Role, yada yada, and then here's the tutorial down here as well. That would help you not only with Kubernetes, but also with whatever business-logic-y stuff might be important to you and your friends on your team. So CRDs are really cool, and if you're kind of overwhelmed by how they work and what they are, this blog here is a good starting point. So let's go back here.
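For reference, the skeleton of a CRD looks roughly like this. The Project kind and the example.com group are stand-ins for whatever the tutorial actually defines, and the apiextensions version shown is the one current when this episode aired.

```yaml
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: projects.example.com   # must be <plural>.<group>
spec:
  group: example.com
  version: v1alpha1
  scope: Namespaced
  names:
    kind: Project
    plural: projects
    singular: project
```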
Uh-oh, this one's exciting. Okay, so earlier this week I was in London, before I was in New York for Velocity Conf, and there was this really cool conference, and this is our second year doing the conference, and I was invited to come speak. And I really feel like this is a great mix of the cloud-native-y space. One of the cool things that they did is, immediately after, like, right after the hour format, immediately after a speaker got offstage, they had a live recording that you could actually go and watch.
So if you're interested in watching any of these talks, they're all free. You just have to sign up, and you can go through, and you can actually learn a lot about cloud native and Kubernetes from the thing that happened earlier in London this week. So it was a really good conference. Thanks for having me out there. And of course, everybody's favourite talk here: declarative infrastructure with Kubernetes. You can go check that out if you're interested. So that's a good resource as well.
This last one, before we get into the Prometheus-y bits, is here on the Prometheus documentation. There's this subcategory here where you can actually go and learn about the Prometheus query language. I'm not the best Prometheus query-er, but if you want to learn more about it... and we're gonna, you know, come back and reference some of this a little bit downstream.
Oh nice, a ksonnet talk. Yeah, I almost thought about doing that this week, but I picked Prometheus because it just seemed a little bit easier, because I knew I wasn't feeling super well. But I'm really curious to find out more about ksonnet, so that's really exciting.
Anyway, with the query language you can come in and you can actually build custom queries, and you can, like, group on all these different labels, which we'll be looking at a little bit later. And these are some really good examples that I think you can understand pretty quick, right off the bat.
You know, where a method is not equal to GET, we want to see how many http_requests_total for a given environment here, which is handy. So yeah, this is a good reference as well, as we get into graphing some of this in Grafana later. Okay, so those are all the references we have for this week in Kubernetes.
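The queries being described are label-matcher filters like these two, which come from the official query examples page (the second one is the "given environment, method not equal to GET" case mentioned above):

```
http_requests_total{method!="GET"}
http_requests_total{environment=~"staging|testing|development", method!="GET"}
```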
If anybody has anything else, feel free to add it to the doc and we can pull it up, or just ping me or George, and we can jump into it a little bit later. So to get started here, let's show folks what I have set up. So if you go to the TGIK repo, and actually, let's do this in GitHub, it's gonna be a bit easier.
And then you can go into the episodes/052 directory. I've pretty much made, like, a tiny repository here, and this is where we're gonna be working out of today. To just give folks an overview of what we have in the repository: we have this thing called bundle.yaml, which is going to define all of the Prometheus and Grafana and Alertmanager stuff that we need to install in Kubernetes. And let's go through this a little more granularly to actually see what we're setting up.
So I kind of left this one a little bit dark, so we can explore some of it a bit live, which is what this configuration here is for. So we define this ConfigMap for the Alertmanager. We also define this other ConfigMap, which has more YAML config-y stuff. Okay, I think this is actually where we define the different alerts that we could be using, so we might come back and poke at this section later. We have a Deployment for the Alertmanager.
All of this is going into the monitoring namespace, and all this does is pull the Prometheus Alertmanager container that is pre-built for us and, you know, give it a couple of simple parameters. Here we tell it that we want to listen on port 9093, which will be relevant in a second, and we map some volumes to it as well. Let's see, Suresh has a comment: Sourcegraph was open-sourced, perhaps might be helpful for people. What is Sourcegraph? No, I'm just curious.
A
Source
graph
is
a
free,
open,
source
self,
hosted
code
search
and
intelligent
server
that
helps
developers
find
with
you
understand
and
debug
code.
Interesting,
is
this
Suresh?
Is
this
what
you're
talking
about
you
say,
source,
oh
yeah,
source
graph,
yeah
I
typed
that
right,
okay,
yeah,
this
looks
interesting.
I
would
be
curious
to
check
out
a
little
bit
more
in
and
see
this
offline.
Thanks
for
the
share,
Matea
says
the
first
config
map
was
for
the
templates
and
how
the
format
alerts,
and
the
second
was
for
the
alert
manager
itself.
Okay,
thanks
Mateus.
So next we have a Service that's going to point to the above Deployment. The only change I've made to this one is that I went ahead and changed the type to LoadBalancer. The reason I changed all of our services to load balancers is so that we can easily pull them up here on TGIK, and we're only going to keep this cluster online for about an hour.
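The change in question is a one-line edit on each Service, roughly this shape (the name, selector, and port here are illustrative, matching the Alertmanager service being discussed rather than copied from the bundle):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: alertmanager
  namespace: monitoring
spec:
  type: LoadBalancer   # was NodePort; LoadBalancer provisions an external IP
  ports:
    - port: 9093
      targetPort: 9093
  selector:
    app: alertmanager
```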
So next we have our Grafana core. So this is the actual Grafana component itself. This is a web UI that's going to read the data that we're storing in Prometheus, and again, this is just pulling the Grafana container, and we're defining some resource limits. So really quick, can I just rant about resource limits? I'm seeing more and more deployments go in that don't have resource limits defined.
These are really important to define. They give you a starting point, so that as you scale your application moving forward, you have sort of a finite block that you can keep in mind that your deployment will never exceed. And as we look at actually breaking our cluster a little bit later, having these resource limits defined will be super handy, so that we don't put our nodes into deadlock, which I was able to do last night. Anyway, going off on a tangent. Let's keep going.
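Requests and limits live on each container spec, something like the sketch below. The numbers here are placeholders for illustration, not the values in the bundle.

```yaml
containers:
  - name: grafana
    image: grafana/grafana     # tag omitted; see the bundle for the real image
    resources:
      requests:                # what the scheduler reserves for the pod
        cpu: 100m
        memory: 100Mi
      limits:                  # the ceiling the container can never exceed
        cpu: 500m
        memory: 200Mi
```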
Also, we have a readiness probe, and I can't advocate for that enough as well. That basically just says that on port 3000 we're going to do an HTTP GET to /login, and as long as that returns a 200 OK, we're going to consider this pod in this deployment happy and healthy. So that's just a really quick way for the rest of Kubernetes to know that this deployment is online and is functioning as expected.
This is handy because you really don't need to do anything more than just be able to return a 200. So here we're just hitting the login page, which we know will return a 200, so we don't even have to build in any extra logic for it. It's just a handy thing to define.
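That probe is just a few lines on the container spec. A sketch of the shape described above (the delay and timeout values are placeholders):

```yaml
readinessProbe:
  httpGet:
    path: /login     # any path that reliably returns 200 will do
    port: 3000
  initialDelaySeconds: 30
  timeoutSeconds: 1
```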
Let's see, we have more stuff in chat: Diego, hello from Norway, and, I'm gonna butcher your name, I'm really sorry, kunis love, hello from Croatia. Okay, hi, both of you.
Thanks for joining us. Update for folks who weren't here at the beginning: I'm pretty sick and my voice is going out, so at any point, if it quits working, we're gonna switch over to chat. But it really seems to be doing pretty well so far. Mike Merrill says: sometimes the Alertmanager config needs a secret, like a HipChat token, forcing me to make the entire thing a sealed Secret, which is pretty unsightly.
Oh yeah, putting secrets into ConfigMaps is, like... I see a lot of folks do it. I would never do it, but it's a really hard problem to solve, and super annoying as well. So thanks for pointing that out, Mike. Okay, so it looks like we have another ConfigMap here. Also, if you are defining YAML for other folks to use, I really like to always do the apiVersion first, and then the very next thing I really like to always have is the kind, just so, right away,
you can see it. Here we have a ton of data, and we have to scroll all the way down, all the way down, before we even see that it is of type ConfigMap, which, I guess at this point, is pretty self-explanatory, but that's just a thing that I like to do. Anyway, so this whole block of text that we're looking at here, let's scroll back up: so this is the Grafana core... oh, sorry, we already looked at that. This is not the Grafana core.
Okay. So let's scroll down past our Grafana configuration. I know I'm going kind of quick here, because I think a lot of this was talked about in Joe's episode, and the Prometheus operator can get rid of a lot of this noise for us as well. So if you're interested in coming through and figuring this out and modifying Grafana, this is the place to do it. And there's a lot here; that's why I don't really want to go into it.
Okay. So, finally, down here on line 2245, we see it's a ConfigMap, and we can move on to our next resource. Here, hello from Hamburg, Germany. Hello, good to see you. So the next one: we define a Job, and this is the import-dashboards Job, and if we come down, we can actually see the job definition itself. So here's our spec, and you can see that we're mapping it to a service account, and we're mapping
it to a handful of containers, and we have some, what looks like bash here, defined, that we're running for our job as well. Now, I don't actually know what this piece is doing. Anybody who's familiar with how the Grafana system works and how the import-dashboards component here works, it would be helpful to have a little bit more information here.
Let's move on. So we have a Service for that, still in the monitoring namespace; that's going to be a theme for today. We have a Deployment here, and this is the Prometheus core, and if you watched Joe's episode, you'll realize that Prometheus is basically a datastore that stores sets of values, and these sets of values we call time-series data. And that time-series data can then be reflected using a graphing service like Grafana, but it doesn't necessarily have to be Grafana.
Next, we have kube-state-metrics, which basically runs the kube-state-metrics container, and this is probably the simplest deployment we've seen yet. It basically just listens on 8080, and this is going to expose some metrics about the Kubernetes cluster itself, things about the cluster that we can then pick up later using Prometheus. So Mateus says Grafana v5+ can import the files directly now, so no more job or sidecar is needed. That's handy, good to know, thanks Mateus. What is this? We have some stuff commented out here.
Oh, if you wanted to set up RBAC... let's see what we have here. Oh, this is RBAC for kube-state-metrics, okay. So if this was a concern for you, or if you had a very dispersed, namespaced cluster, you would probably want to come in and use this as a starting point to define the kube-state-metrics for a given smaller portion of your cluster. Alessio says: just an FYI, that repo has very old versions of Prometheus and Grafana; both of them have new major versions nowadays.
Oh, thanks for letting me know. I just picked that one because it did not involve installing an operator and gave us just raw deployments that we can inspect along the way. But I'm curious now, like, what are the differences between Grafana 4.0 and 5.0? And let's see, Mateus says: yep, and the kube-state-metrics version is 1.4 as well.
My bad, folks. I just picked this again because it was the simpler alternative, but we're gonna go with it for today. Just bear in mind that some of the components we're defining here could use a revamp to get the latest versions as well, which would probably be as simple as going through and updating this YAML file. But that's a different discussion for a different day. So next we have this DaemonSet, and this is node-directory-size-metrics, which, again, is just another way of exposing metrics about our Kubernetes cluster.
We know we're going to get a pod on every node, because it's a DaemonSet, and then we're going to expose some metrics that we can pick up later. Syed says: I use ksonnet to stand up Prometheus and Grafana rather than raw YAML. Interesting. I know a lot of folks have been switching over to ksonnet more and more. Bryan, this is your plug: I would really like to see, maybe I could come back at a later date, a ksonnet-ify,
if that's a term, of this same YAML here that we're using, and abstract it even more, to simplify it for folks at home, so I'm not having to go through this. I think this is a really great example of, like, the dreaded wall of YAML that we're continually talking about solving with ksonnet. So this is, like, a picture-perfect example here. So yeah, anyway, here's our node-directory-size-metrics, and we define a container.
We have this tiny tools container, and basically it just goes through, and it looks like it loops through some directories and exposes some metrics about them by dropping them off in the /tmp/metrics directory. Mateus says: no worries; Prometheus v2 has an improved TSDB, and Grafana version 5 has a new grid, so nothing urgent for this episode.
Okay, as long as it's not, like, any sort of crazy breaking changes or any super-major vulnerabilities, I feel pretty good about using these. And really, the point of the episode is to talk about what we're doing in code, and I do make sure to use the most recent versions of the client libraries in the Go code. I'm just showing folks what I have up and running, so they have an idea.
As we start looking at the code itself, Mateus says: Syed, yes, and we (Red Hat and Grafana, including a lot of others) are working on jsonnet for monitoring; some principles are the same principles as Google's monitoring mixins. Thanks for that, Mateus. I said, where were we? Okay, so we looked at the DaemonSet. We saw how it was dropping off some metrics in the /tmp/metrics directory, and we're running this other container as well, which is Caddy, and this basically runs on port 9102.
So here we have a Service for the Prometheus node exporter. Yep, I'm guessing that Prometheus is scraping this exporter, and that's why we have this Service here on port 9100. And we're getting close to the end, I promise, folks. This last one is another big... well, not that big, but it's another ConfigMap here. So here's where we're actually defining the Prometheus rules, and when we actually look at our dashboard in a second, we're gonna learn about rules and targets and why they're important.
So next we have some RBAC for Prometheus. So we're defining a ClusterRoleBinding; that one's really simple. We have a ClusterRole, so we're exposing nodes, services, endpoints, and pods, with get, list, and watch. So that's pretty much read-only, which totally makes sense. For the default namespace, we have ConfigMaps,
and we can get those, which is good, because that's where we're getting our configuration from. And we're gonna be able to hit /metrics on the GET HTTP verb, which is handy. A solid RBAC rule. And then, of course, we tie that all together with a ServiceAccount. Okay. Finally, here at the end, we have one more final Service, which is a LoadBalancer on port 9090, and we're gonna actually look at these services in just a second. Okay.
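The read-only ClusterRole just described comes out to roughly this. It is reconstructed from the description above, not copied from the bundle:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: prometheus
rules:
  - apiGroups: [""]
    resources: ["nodes", "services", "endpoints", "pods"]
    verbs: ["get", "list", "watch"]     # read-only cluster access
  - nonResourceURLs: ["/metrics"]
    verbs: ["get"]                      # GET on /metrics
```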
So this is the YAML that we have up and running already, and if you go back to the documentation for today, this is all coming back together, I promise. This all kind of makes sense in a second. The very first thing we do after we create a cluster in Amazon is kubectl apply this entire bundle of YAML. So let's go and actually look at the cluster and apply the YAML, just to show folks that it is up and running, and take a peek at what we have there.
So if we go to the Go path, source, github.com/heptio/tgik, episodes/052, we can go ahead and kubectl apply -f bundle.yaml, and you will see that everything is pretty much configured for us already. So the interesting thing to point out, as we actually want to look at what we have up and running, is to take a look at our services here.
A
Let's
see
if
I
can
simplify
this
a
little
bit.
Okay,
almost
there
I'll
zoom
back
in
in
a
second
I
just
want
to
get
like
yeah
I,
just
want
used
to
be
a
little
bit
easier
to
read.
Okay,
so
this
first
one
here
is
for
the
alert
manager.
Here's
our
external
IP
address
and
you
can
see
this
is
running
on
port
90-93.
A
A
So, for the nodes, and I am super lazy, I opened these to the world, but you can actually go in and open these up specifically to the load balancer that you need. In fact, we can change that right now. Let's be good citizens and actually do this the right way. So the first one is, we're just gonna want to do 9090 only, and let's see, we're gonna want to open this up specifically to this load balancer right here. So in Amazon, you can come through,
you can actually type, and it'll find the one for that specific load balancer. So now we know that we have 9090 open to that one. Let's do another rule, which is going to be 9093, which is for the Alertmanager here. Chris Bargman, hello from Hamburg, good to see you. So for the Alertmanager, we'll grab the first few characters here.
There we go. Okay, so this is much more secure now. We basically opened up our node security group, which is how Kubicorn sets up a cluster, to poke holes in the various firewalls that we're going to be using today: 9093 for the Alertmanager, 9090 for the Prometheus dashboard, and port 3000 for Grafana. So now we can go back to our terminal and actually hit these. So let's get this one on port 9093, and let's close some of this stuff, too.
And please, like, I don't want a revenge of, like, the rainbow vote the other day, where I ended up taking the Heptio internet offline. So if you're gonna hit these at home, please be nice to me while I'm here on the air. This one should be 9090... you know, which, what is this? Yeah, it should be 9090. Let's see what's going on here. Michael, hello from Johannesburg. Did I copy the URL right?
Nope. Okay, so this is really good, because if I was debugging a load balancer in Kubernetes, these are the steps I would take. The first place I want to look is in Amazon, at the actual load balancer itself, to see if any of the instances are in or out of service. So how we can do that is: we can come down, and we can find the description here. It should tell us which port it's on, and that's gonna
let us know which service it's for. So 3000 is for Grafana, 9093 is for the Alertmanager, which we know is up and running, and 9090 here is for Prometheus. So you can come to Instances and actually see that, yes, this is out of service. So there's our problem. Joakim says hello from Sweden. Good to see you.
So,
let's
see,
what's
going
on
here,
I
wonder
if
I
type
owed
one
of
my
security
group
rules
when
I
changed
it
because
it
wasn't
working
earlier
so
port
1992
and
this
is
going
to
be,
for
let's
get
the
name
again.
A
Let's open this in a new tab. Okay, so let's check this again and see if our instance is there. Still not in service. Oh, is this a dreaded case of a load balancer in the wrong availability zone? I bet that's what happened here. So this is in us-west-2b; our node is running in, where is it, us-west-2b? This is fine. Everything's fine. But yeah, this is our basic cluster, by the way. We only have one node online.
So if we need to debug anything, we know it's just gonna be running on that node. So let's go back to our load balancers and see what we see: "Out of service: instance has failed at least the unhealthy threshold number of health checks." So I wonder... let's see if anything's going on in our cluster. And before we do anything else, let's do the shortcut where we create a new kubectl alias for the monitoring namespace. Gustavo, hello from Chicago. It's good to see you, Gustavo.
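The shortcut is just a namespaced wrapper around kubectl. The km name matches what gets typed later in the episode; the exact on-screen definition is a guess, and it's shown here as a shell function (an alias works the same interactively):

```shell
# "km" = kubectl scoped to the monitoring namespace
km() { kubectl --namespace=monitoring "$@"; }

# usage:
#   km get po
#   km get svc -o wide
```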
So now I can km get po. Oh, here we go: there's our problem. Let's see what's going on here. So this is cool, 'cause we're actually getting a little bit of, like, live Prometheus debugging, which, obviously, after looking at that YAML file that we just looked at, there's a lot going on here. So getting all of this set up right and getting this kind of stable is a task within itself, and we're gonna
look at the logs with --previous again. If you haven't seen this --previous command, this is one of my favorite tricks: it's actually the logs from the previous pod, the one that failed, and we can see what's going on. Mateus says "yes, never mind" with the innocent emoji. In general, I would say that I'm really happy with the amount of emojis in the TGIK chat; I just wish it was easier to add emojis. Carlos says hello from Raleigh. Hello, Carlos, good to see you. And Felipe says Kubernetes monitoring is great, smiley face, or a winky face.
Yeah, that's definitely a winky face. Okay, so let's run this. The flag storage.local.memory-chunks is deprecated; its value 5000 is used to override. So that's our warning. Here's our error. Okay, let's see what this error says: couldn't load configuration, no such file or directory, for the YAML file it's pointing at. Let's go and see if my bundle.yaml has any typos in it.
Or, not the bundle, but the YAML. Oh my gosh, okay. So here's our config file. Let's see where else we can find this. Okay, so this only appears once, but this has got to be coming from a ConfigMap somewhere. You know, I'm wondering if I can cheat and just totally revert back to a previous version. Let's go look at GitHub and see if I introduced a change earlier today. I bet I did. github.com/heptio/tgik.
So how we can do this in GitHub is: we come into episodes, we go to 052, and we want to see our bundle.yaml. And this is a really handy feature of GitHub as well; if you're not using this, it can totally save you a lot of time. There's this History button here, and if you click on History, you can actually see: yes, two hours ago I made a commit. Let's see what we have here... let's see what I did. So, I removed... hmm, let's go.
Let's see. So Bogdan says: hey Kris, when dealing with one-pod services, and even with deployments and co., I feel like kubectl port-forward is much more convenient than externally exposing them. I totally agree. I just wanted to demonstrate the different ports they were working on, and I was working on this from home earlier today, so just having them online made it easier for me to move back and forth between home and the office. But yeah, port-forward works just as fine as well.
Earlier, I mentioned that it might be even better if we added an Ingress to this, and just wrapped all the different things in with Ingress, so that would be cool also. But yeah, definitely more ways to skin a cat, especially because load balancers definitely are not cheap. But thanks for the tip, Bogdan. So I'm wondering if I deleted this, and this is causing some problems. So what we can do is actually go back, grab the raw version here, clobber our bundle.yaml, and let's grep for NodePort.
We change type: NodePort to type: LoadBalancer, and that should make the change. So let's see here: NodePort, and let's change that to LoadBalancer, and let's find the next one and change that to LoadBalancer, and let's find the next one, and we'll change this to LoadBalancer, just because we already have the rest of the infrastructure in place. So let's see if we can get a change here. Now I kubectl apply -f bundle.yaml, and let's km get po.
Still in CrashLoopBackOff. Thomas says: what I've been doing so far is just using an Ingress for Grafana, but doing port-forward for Prometheus and Alertmanager if I actually need to get to them, which should be rare. That brings up a really good point: Alertmanager is basically just going to handle, you know, our alerts for us, and the Prometheus dashboard is basically just a quick and easy way
to check on things. So let's see what Mateus says. I love when people help me out; this, like, makes my life so much easier: episodes/52 bundle.yaml, line 2374, it should be prometheus, not bundle. Oh, awesome. Good find, Mateus, you're just, like, saving the day today. So line 2374, let's go find that.
Good, found it. Okay: kubectl apply -f bundle.yaml, and let's see, km get po, and I think we can add a -w and watch and see what happens. So our import job is completed, our old Prometheus core is terminating, and our new Prometheus core is up and running. Hats off to Mateus for saving the day. Okay, so now let's do our service command again: km get svc -o wide. Okay.
So those all look the same, so our cloud provider hasn't mutated the load balancers out from underneath us. So, drumroll, see if we're finally gonna be able to hit this... yeah! The Prometheus dashboard. Okay, so the Prometheus dashboard is up online, and here are the rules and targets that I mentioned earlier. So, as I've been learning more and more about Prometheus, these are the two big things that I have found myself coming and checking in on. So let's see what's going on in Slack. Oh, it looks like people are retracting
their messages, and Bogdan says Command-G, so I think that's about my butchering of his name a moment ago. So anyway, let's take a look at the rules. So this is where we're defining some alerts, and, well, you can almost read this like regular English, which is handy and good to know.
The
first
one
is
called
node
CPU
usage,
and
it
says
if
100
there's
this
formula
here
by
instance
times
a
thousand
is
greater
than
75
for
two
minutes,
where
the
levels
severity
equal
to
page.
A
This
is
a
label
here,
not
levels,
and
this
annotation
is
go
ahead
and
send
an
alert
which
we're
going
to
look
a
little
bit
more
at
alerts.
A
little
bit
later
and
Mateus
says
the
alerts
are
now
Gammell
in
Prometheus
V.
Oh
that's
candy.
So
that
means
we
don't
have
to
actually
go
and
learn
this
new.
This
new
language
here
for
defining
alerts.
Oh
so
that's
good
to
point
out
that
we
would
be
configuring.
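What she's reading off the rules page corresponds roughly to this shape. In the newer YAML rule format that Mateus mentions for Prometheus v2, a node CPU alert might look like the sketch below (the expression, threshold, and names are illustrative reconstructions, not the exact rule from the bundle):

```yaml
groups:
  - name: node.rules
    rules:
      - alert: NodeCPUUsage
        # fires when average CPU usage per instance stays above 75% for 2 minutes
        expr: (100 - (avg by (instance) (rate(node_cpu_seconds_total{mode="idle"}[5m])) * 100)) > 75
        for: 2m
        labels:
          severity: page
        annotations:
          summary: "High CPU usage on {{ $labels.instance }}"
```

The `for: 2m` field is the "for two minutes" part, and `severity: page` is the label she corrects herself on above.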
A
Yeah,
Mille
or
yellow
yeah
first
have
alerts
in
Prometheus
v2,
so
that's
a
big
one
that
I
would
be
interested
in
using.
So
let's
take
a
look
at
targets
and
I'm,
almost
out
of
tea
I
might
have
to
like
you
know.
Can
we
do
like
an
intermission
and
like
put
on
a
commercial
way
go
get
another
cup
of
tea
here
in
a
second
okay.
So
this
are
our
targets,
so
the
targets
are
basically
the
different
data
sources
that
are
going
to
give
us
information
about.
A
...what's going on, and Prometheus is going to store that for us. So what's interesting is, if we actually look at the slash metrics endpoint on the Prometheus server, we can see that these are the metrics that Prometheus would go and scrape. So basically it's just an HTTP server that spits out these metrics that Prometheus can then come and make sense of.
In Joe's episode, he talked a little bit about how you would want to sort of structure these metrics if you are creating your own, and we're gonna look at those a little bit later when we actually do the tutorial. Dan Bentley says: the tea is in TGIK. Yes, that would be nice. I actually have another cup of tea at my desk, it would take me like four seconds to go get it. I really might do that in a second, just 'cause...
A
My
voice
has
already
starting
to
hurt
so
anyway,
the
slash
metrics
API
can
be
found
here
on
we're,
not
monitoring
it.
So
we
could
add
a
target
which
I
look
like
we
have
we're.
Prometheus
can
monitor
itself
assembly
by
scraping
basically
local
host
on
the
slash
metrics
in
point.
A
We have the README, the bundle.yaml that we're all very familiar with now, the prometheus.yaml, which we're going to take a look at (that's the one we're going to go through when we look at adding our target), and our main.go, which, this is the meat and potatoes of our program. So in my IDE, bam, we can pull up our main.go, and there's a lot of stuff commented out here, because this is sort of me playing with metrics earlier, and then I've got some example...
A
Http
server,
II
stuff
down
here
at
the
bottom,
with
an
example
handler
if
we
decided
to
actually
write
a
separate
HTTP
service
on
the
same
container,
so
Thomas
says
anyone
has
any
thoughts
on
deploying
Prometheus
manually
versus
Prometheus
operator.
I've
been
using
the
latter
so
far
and
it
worked
I'll
write
it
but
curious.
What
I
might
be
missing
here,
so
yeah
Thomas
in
Joe's
episode,
he
used
the
Prometheus
operator.
A
I
took
a
different
approach
because
I'm
a
big
fan
of
keeping
things
simple
for
TGI
K,
so
that
folks
can
really
see
the
different
components
under
the
hood
here
and
also
we've
already
done.
An
episode
on
the
Prometheus
operator
when
I
was
setting
this
up.
I
did
play
with
the
previous
operator.
I
found
that
the
documentation
was
pretty
good.
There's
still
some
things
that
I
found
missing
along
the
way,
but
once
I
got
it
up
and
running,
it
worked
really
well
and
just
the
TLDR
on
the
prometheus
operator
is.
A
It
takes
a
lot
of
the
configuration
that
we
just
saw
in
our
bundle
dll
file
and
simplifies
that
by
creating
three
new
CR,
d's
and
kubernetes
that
you
can
then
interact
with
configuring.
Your
prometheus
set
up
with
through
those
CR
DS,
so
that
you're
not
going
through
and
pushing
changes
like
we're
about
to
do
today.
So
it
sort
of
like
the
high
level
differences
and
pros
and
cons
there.
A
Also,
anytime,
you
adopt
any
sort
of
operator,
you're
adopting
another
dependency,
and
you
know
you
got
to
look
at
things
like
how
often
they're
releasing
who's
maintaining
the
project
so
on
and
so
forth.
So
Harshal
says:
oh
yeah
I
would
see
Herschel's
response
here
as
well.
Thomas.
The
operator
is
a
little
painful
right
now,
as
they
are
updating
their
helmet
arms
and
not
accepting
new
PRS.
That
was
another
thing.
A
I
tried
it
again
in
the
name
of
keeping
things
simple:
I
try
to
stay
away
from
configuration
management
tools
like
Helberg
he's
on
it
purposefully
kind
of
to
demonstrate
the
pain
a
little
bit,
and
just
so
that
we
are
working
with
as
raw
of
ingredients
as
possible,
so
Herschel
says.
Danis
implementation,
for
example,
is
not
possible.
With
the
present
cute
Prometheus
chart
a
matteo
says
if
you're
running
as
a
single
tendency
and
you're
kubernetes
cluster
stable
size
should
be
fine
too.
The
operator
adds
a
bit
of
sugar
on
top,
but
especially
for
multi-tenancy.
A
It's
interesting
great
feedback.
Folks.
Thank
you
very
much.
Okay.
So
let's
look
at
our
main
function
here.
So
to
start
off,
we
have
actually,
let's
just
art
off
at
the
top-
we're
not
packaged
main
we're
importing
flag
and
login
HTTP,
and
then
our
two
dependencies
here
are
prom
HTTP,
which
this
is
the
Prometheus
sponsored
client
go
Lane
library.
So
let's
go
look
at
this
thing.
Really
quick.
This
library
is
pretty
solid,
but
there
are
two
main
parts
here
that
you're
gonna
want
to
know
about.
Mateo
says:
disclosure
Ivan
maintainer.
Thank
you
for
pointing
that
out.
A
Matias
I
did
not
know
that
so
yeah.
If
you
can
tell
us
about
the
the
release
cycle
and
stuff,
that
might
be
a
good
tip
it
for
folks
to
to
know
as
well
anyway.
The
Prometheus
client
go
library
here.
The
two
parts
that
I
found
were
if
we
look
in
Prometheus,
so
this
is
all
I'm
trying
to
remember
I
remember
there
was
somewhere
in
the
documentation.
That
said
this
instrumenting
applications.
A
Okay,
so
the
model
package
has
been
moved
here.
We
go
so
the
Prometheus
directory
contains
the
instrumentation
library
I
wanted
to
make
sure
I
said
this
right,
so
I
actually
want
to
read
this
see
the
best
practices
section
of
the
Prometheus
documentation
to
learn
more
about
instrumenting
applications.
So
the
Prometheus
directory
here
is
actually
the
code
that
we're
going
to
be
venn
during
and
using
that
actually
is
going
to
help
us
calculate
metrics
and
then
serve
those
metrics
on
the
slash
metrics
endpoint
for
our
program.
A
So
basically,
all
we're
doing
is
we're
importing
this
library
and
then
they
have
like
the
defaults,
HTTP
handler
and
we're
gonna
be
using
that
today,
Matea
says
with
the
JSON
it
based
approach.
They
know
this
is
easy
to
add
with
the
sidecar
okay.
So
let's
go
back
to
our
application
here.
So
the
first
thing
we
do
is
we
call
flag
pars.
A
I cannot overstate the value of simply echoing what address you're listening on. There's anywhere from three to six layers of network traversal between your actual program running and, potentially, where you're accessing that program from, and actually having the raw address that the program itself is running on can be very handy.
A
...if it ever comes time to debug. So this is one of the first things I'll put in a Go program that I'm running in Kubernetes, just to help an engineer if they are debugging downstream later. So then we do http.Handle on slash metrics with the promhttp handler, which, this is in that library we just looked at. If we go and we look this up, we can read about it. It says: the Handler returns an HTTP handler for the prometheus.DefaultGatherer. So the default gatherer, we're going to look at that in a second.
second.
A
A
So
this
basically
just
hard
codes.
They
came
to
our
options,
but
if
you
wanted
this
thing
to
be
a
little
bit
more
granular,
you
could
come
through
and
let's
see,
I
want
to
edit
this
file
anyway,
and
you
could
come
through
and
do
things
like
configure
compression
or
error
handling
or
error
logging,
which
would
be
cool
as
well,
but
we're
just
gonna
keep
the
defaults
for
the
example
today
and
the
default
gatherer.
So
this
is
a
really
interesting
piece
of
code
here.
A
Let's
look
at
the
gatherer
type,
so
the
gala,
the
gatherer,
is
the
interface
for
the
part
of
a
registry
that
is
in
charge
of
gathering
the
collected
metrics
into
a
number
of
metric
families.
The
gatherer
interface
comes
with
the
same
general
implication
as
described
in
the
register
interface.
So
all
we
have
here
is
this
function
called
gathered
and
it
returns
a
a
set
or
a
slice
of
metric
family
pointers
and
then
I'm
assuming
an
optional
error
here
as
well.
A
So it would be possible to actually go and write your own metrics gatherer and just simply implement this interface, and use it with the rest of the program's scaffolding around it as well. And if we actually go and look at the MetricFamily, I know we're getting off in the weeds here... we can't look this up.
A
Okay,
that's
probably
assigned
we're
going
to
on
that
for
now,
but
a
metrics
family
is
just
a
grouping
of
metrics
that
you
could
optionally
define
and
that's
all
that
gather
does
so
going
back
to
our
our
main
function.
We
are
keeping
themes
very
simple
for
this
first
example,
and
we
can,
you
know,
play
more
with
this
in
a
little
bit.
A
If
folks
want
to
drop
in
to
chat
anything
specifically,
they
want
to
see
I'm
happy
to
demo
that
live,
but
we
get
the
default
one,
and
we
are
just
going
to
listen
on
slash,
metrics
and
output.
The
output
of
this
prom
HTTP
handler
here
or
whatever
that
it
resolves
based
on
the
HTTP
request,
rather
okay.
So
the
next
thing
we
do
is
we
say
logger
always
this
is
my
Kiba
corn
logger,
which
is
just
a
rainbow
logger,
but
you
can
use
good
old,
regular
log
or
your
own
manga
or
whatever,
if
you
want.
A
So
this
has
no
relevance
to
running
Prometheus
anyway.
I
just
simply
say
echo
out
that
we
do
have
the
slash
metrics
in
point
registered
again.
This
is
for
debugging,
and
then
this
is
another
big
log
line
that,
like
I,
can't
get
on
my
high
horse
enough
about
this
one
anytime.
You
start
a
server
which
is
this
next
functional
line
here
and
go.
Please
please.
Please
have
a
log
that
decides
you're
starting
the
server
so
that
you
know
that
the
actual
server
is
starting
and
that's
why
your
program
is
hanging.
A
I
can't
tell
you
how
many
times
that
I've,
just
like
exact
into
a
log
or
a
two
container,
trying
to
pull
logs
or
something
and
I
get
the
logs,
and
it's
just
nothing
just
sitting
there,
and
you
have
no
idea
what's
going
on
in
the
program
or
if
the
server
even
started.
So
this
is
a
good
one
to
do
as
well.
So
then
we
we
wrap
up,
HTTP,
listen
and
serve
on
the
metrics
address,
and
we
log
fatal
that
so
now
our
slash
metrics
endpoint
is
returning
this
handler.
A
So now let's try this again: go run main.go. And that also explains why we weren't able to look up the MetricFamily type as well. Okay, cool. So here's our color-coded logger, and it says we're registering metrics on localhost:1313. In Go, if you don't define the first part of an addr that's delimited by this colon here, it's just gonna listen on localhost, so that's the syntax there, and we know that the server is up and running. So, Go program...
A
Running
here
on
my
local
I
should
be
able
to
hit
localhost
1313
and
I
should
be
able
to
hit
slash
metrics
and
poof
Prometheus
metrics
running
on
my
local
machine
here.
So
now
all
we
need
to
do
is
get
this
up
in
kubernetes
and
then
expose
it,
so
that
Prometheus
would
be
able
to
actually
go
and
scrape
these
metrics
for
us.
So
how
we
want
to
do
that
is
we
want
to
containerize
our
application,
so
I
have
the
same.
Make
file
format
that
I
use
for
pretty
much
all
of
my
projects.
A
I
encourage
you
to
come
through
and
look
at
it.
It's
pretty
handy,
and
it's
got
this
really
cool
little
help
section
here.
That
basically
says
all
you
have
to
do
is
do
double
comment
and
then,
whatever
text
you
want
next
to
a
target
and
then
you
get
that
help
command
for
free.
So
this
is
pretty
cool.
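The help trick she's describing is the common self-documenting Makefile pattern: a `help` target that greps the `##` comments out of the Makefile itself. A sketch, with illustrative target and image names rather than the episode's exact ones:

```make
.PHONY: help clean container push

help:  ## Show this help
	@grep -E '^[a-zA-Z_-]+:.*?## ' $(MAKEFILE_LIST) | \
		awk 'BEGIN {FS = ":.*?## "}; {printf "\033[36m%-12s\033[0m %s\n", $$1, $$2}'

clean:  ## Remove built docker images
	docker rmi example/tgik-app || true

container:  ## Build the docker image
	docker build -t example/tgik-app .

push: container  ## Build and push the image to a registry
	docker push example/tgik-app
```

Any target annotated with `## text` shows up in `make help` for free, which is the color-coded listing she demos next.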
A
So
if
I
delimit
this
and
zoom
in
I
can
actually
go
to
my
go
paths
or
skip
hub
comm
hep
do
/t
GI
k,
52
episodes,
52
and
I
can
do
make
help,
and
you
can
see
that
my
make
file
spits
out
color
coded
description
of
what
everything
does.
So
our
make
file
has
clean,
which
is
just
clean.
The
docker
images
make
which
will
build
a
container
and
then
push
up
to
a
docker
registry,
I've
already
gone
to
docker
login.
A
So
in
order
for
me
to
build
my
application
that
I
know
is
running
here
on
my
local
I
should
I
do
make
container
push
its
gonna,
send
our
contacts
to
the
docker
daemon
and
actually
build
this
container
and
push
it
up
to
docker
hub
for
us.
So
the
docker
file
is
super.
Super
simple.
It
just
Maps
episode,
52
to
the
container
and
then
says
go
run,
may
not
go
keeping
things
super
simple,
so
that
folks
can
see
where
the
rubber
meets
the
road
here,
because
really
what
we
care
about
is
the
the
software
engineering.
A
That's
going
into
this,
not
necessarily
docker
container
best
practices
right
now.
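A "super simple" Dockerfile in the spirit she describes would copy the source in and run it with `go run`. The base image and paths here are guesses, not the episode's file, and running from source like this trades image size and startup time for simplicity:

```dockerfile
FROM golang:1.10
WORKDIR /go/src/tgik52
COPY . .
EXPOSE 1313
CMD ["go", "run", "main.go"]
```

A production image would instead `go build` in a builder stage and copy only the binary into a minimal base, but that's exactly the best-practice detour she's deliberately skipping.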
So let's go back to my terminal. Okay, so that's pushed up to Docker Hub. So now we want to actually run this application in our cluster. Right now we're going to deploy our application to the monitoring namespace, but we don't necessarily have to deploy the application to the same namespace that Prometheus is in. There would just be another concern of setting up...
A
Our
back
that
I
felt
like
was
out
of
scope
for
the
episode
as
well
and
if
folks
have
questions
on
that,
I'm
happy
to
give
a
quick
little
demo
on
how
to
get
your
application
running
in
a
different
name
space.
If
you
need
some
help
along
the
way,
but
to
keep
things
simple
today,
we're
going
to
put
everything
into
monitoring
as
well,
so
how
we
do
that
is
I
have
actually
wrote
the
command
for
us,
because
I'm
good,
like
that,
how
we're
gonna
run
this
is
we're
gonna.
Do
a
cube.
A
Octal
run
I
thought
about
actually
going
and
writing
a
deployment
proper
for
this,
but
I
also
feel
like
cube.
Octal.
Writing
could
be
octal
expose
our
handy
commands.
They
don't
get
used
nearly
enough,
so
I
want
to
give
them
a
little
bit
of
love
today.
So
let's
do
Kubek
dual
run.
So
let's
look
at
our
command
here.
A
So
actually,
let's
go
back.
Actually
I
wanted
to
change
something
about
this
deployment.
I
never
got
to
update
the
docs,
so
this
actually
might
be
a
good
use
case
for
actually
having
the
women
yeah
Mille
in
the
repo
here,
but
we're
gonna
edit.
It
I
want
to
show
you
what
I
need
to
change
so
we're
gonna
do
our
km
command.
Actually
we
don't
have
km
in
this
session.
So
let's
just
do
Q
Bechdel
namespace
monitoring
edit
deploy
GG
I,
can't
happen
by
default.
A
The
image
pole
policy
is
set
to,
if
not
present,
I
think,
let's
see
where's
our
image
pool
policy.
No,
it
is
it's
set
to
always.
Okay,
that's
that's
good!
For
some
reason,
I
thought
that
image
pool
policy
was
not
set
to
always,
which
meant,
as
we
look
at
updating
our
application,
that
was
going
to
be
a
bit
of
a
pain
for
us,
so
we're
good
there
and
we
can
do
a
keg
it
Pio
in
our
monitoring
namespace
and
actually
see
that
the
TGA
app
is
now
I've
been
running.
A
So
we're
still
not
quite
there
yet
and
I'm
gonna
switch
back
over
to
to
this
terminal.
I
stopped
the
program
because
I
want
to
use
that
cam,
alias
we
defined
earlier
in
all
zoom
in
a
little
bit
more
here
as
well.
There
we
go
okay,
cool,
so
let's
go
and
let's
actually
look
at
the
Prometheus
targets
here.
A
So
if
we
refresh
this,
you
can
see
the
last
grape
was
about
six
seconds
ago
and
we're
still
down,
because
we
don't
have
a
service
up
and
running
for
this
deployment
we
just
created
and
Prometheus
is
going
to
need
to
access
it
on
the
network.
Somehow
sorry
friend
was
visiting,
so
let's
actually
use
the
Quebec
tool
expose
command
to
create
that
service
for
us.
So,
let's
go
to
our
Docs,
which
is
here
and
here's
our
expose
command
and
if
you
remember,
we
were
running
on
port
1313.
A
So
that's
why
we
have
this
dash
dash
target
port
and
dash
dash
port
earlier
I
had
mentioned,
there's
several
layers
of
network,
just
networking
noise
to
be
frank
between
your
program
and
actually
running
in
kubernetes.
So
here's
a
great
example
of
we're
going
to
listen
on
port
1993,
but
we're
actually
going
to
reform
that
onto
port
13
13
for
the
program
or
it
for
the
container.
That's
running
the
program
itself.
Okay,
so
let's
go
and
run
this
command.
A
Keep
back
to
expose
resources
were
provided,
Oh
Quebec
to
expose
okay,
so
I
have
my
syntax
wrong
here.
You
have
to
tell
it
which
resource
you
want
to
expose,
so
you
can
expose
different
resources
like
stateful
sets
other
than
just
deployments,
so
that
was
my
error
there.
Okay,
the
rest
of
this
looks
good
okay,
so
service
tgia
app
exposed.
A
So
now,
if
we
go
back
to
the
Prometheus
dashboard,
let's
see,
let's
refresh
it
takes
it
a
second
here,
but
this
this
should
get
up
and
running.
Actually,
let's
double
check,
really
quick.
Let's
just
look
at
what's
going
on
here
so
earlier
I
had
mentioned,
we
had
to
define
a
target
in
this
Prometheus
TM
font
file
and
I.
Think
it's
finally
time
for
us
to
bring
this
thing
up.
A
I
was
kind
of
holding
off
on
it
because
there's
a
lot
going
on
in
here,
but
we
can
totally
take
a
look
so
in
this
Prometheus
TM
will
file.
We
have
this
really
really
important
directive
here
called
scrape
configs,
and
if
you
go
look
at
the
Prometheus
documentation,
you
will
see
that
this
is
actually
like
the
magic
place
where
you
go
and
you
need
to
find
your
different
targets,
which
is
ultimately
how
Prometheus
is
going
to
know
which
paws
you
want
to
monitor
as
well.
A
So
if
we
scroll
down
to
the
bottom,
we
can
see
that
this
chunk
here
was
pre
populated
by
the
the
animal
that
we
stole
out
of
the
repo
house.
Was
this
snippet,
as
was
this
third
one
in
this
fourth
one
and
then
all
the
way
down
here
at
the
bottom,
I
feel
like
I
should
do
like
the
same
thing
and
that's
like
a
comment
here:
Christa
Nova,
TGI
K
target.
A
That's
good!
This
is
the
world's
simplest
prometheus
target,
because
I
wanted
you
guys
to
see
what's
going
on
here,
so
we
give
it
a
job
name
which
is
TGI
K
app.
We
tell
it.
We
want
to
scrape
every
ten
seconds
which
we
could
actually
bump
that
up
and
do
every
one
second
and
really
like
hammer
this
thing
and
then
for
the
static
configs.
You
come
in
unifying
multiple
targets
and
in
this
first
target
we
are
going
to
use
cube
penis
to
point
to
the
T
dik
app
deployment
on
port,
1992
and
I.
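The "world's simplest target" she walks through would look roughly like this under scrape_configs. The job name and interval are from the episode; the DNS name follows the usual `<service>.<namespace>.svc` pattern resolved by kube-dns, and is my reconstruction, not a copy of the file:

```yaml
scrape_configs:
  # Kris Nova TGIK target
  - job_name: tgik-app
    scrape_interval: 10s
    static_configs:
      - targets:
          - tgik-app.monitoring.svc.cluster.local:1992
```

Every ten seconds Prometheus will GET `/metrics` on that host:port and store whatever the app exposes, labeled with `job="tgik-app"`.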
A
Why would this thing not...? Let's just make sure that this is actually deleting the pod, so that it's getting reconfigured. Okay, so that's not what's going on here. Oh, because... okay, duh, I'm scaling the TGIK app, not the Prometheus... Prometheus, thank you. Yeah, Mateus noticed it too, I just did my scale thing wrong. Okay, so instead of doing the TGIK app, which, we want to set that back to one, first we want to do prometheus-core. Typos on my end, I'm gonna blame the keyboard.
A
Yeah, there's that... oh, we don't want the 99d, but that's okay, we can just delete that. Okay, so we got a 200. Let's go back to Prometheus and refresh... we're still running 9092 here. Did my target not get up? Is the ConfigMap not getting updated? I mean, I guess a simpler solution would be to just change my pod, but I really want to see why this target isn't getting updated, and anyway, for me, this was the most interesting part.
A
That
may
or
may
not
exist
in
the
ecosystem,
yet
to
go
in
and
automatically
do
a
lot
of
this,
for
you
would
probably
be
an
easier
approach,
but
yeah
again,
this
is
this
is
ultimately
what
needs
to
happen
to
get
Prometheus
to
check
your
app.
So,
let's
check
our
our
config
map,
so
the
config
map
is
named
me
Thea
score
and
it's
in
the
monitoring
namespace.
A
Let's
go
gamma
listing
okay,
so
here's
Chris,
Nova,
T,
G,
I,
can't
I
know
this
is
hard
to
read.
Oh.
A
And Mateus says: yes, you can use annotations on deployments and other resources to tell the Prometheus service discovery to scrape your service; you can create a ServiceMonitor. I thought the ServiceMonitor was a primitive of the Prometheus operator, that the CoreOS folks did. I didn't realize that was actually a native part of Prometheus. Can anybody stick a link to the ServiceMonitor stuff in the markdown file for us? So, yeah...
A
Let's
just
edit
this
here
in
line
and
see
what's
going
on,
which
would
be
cool
because
then
all
we
have
to
do
is
you
have
to
add
an
annotation
or
label
to
our
service
and
then
Prometheus.
You
dishonestly
pick
it
up,
which
is
way
easier
than
what
I'm
doing
right
now.
Matteo
says:
service
monitors
are
for
the
Prometheus
operator,
only
yeah,
okay,
so
maintainer
Mateus.
What
I
had
to
say
so
yeah
to
see
our
ID?
It's
not
built
into
Prometheus
itself,
okay.
A
So
this
this
goes
back
to
what
I
was
saying
a
second
ago
about
you
know
having
some
tooling
to
help.
You
automatically
do
this
in
a
clever
way.
I
think
the
annotation
approach
is
one
good
way
to
do
it.
There
definitely
could
be
others.
You
can
even
look
at
doing
some
sort
of
type
of
service
discovery
or
you
know
some
sort
of
scraping
mechanism
that
would
go
through
and
look
at
different
services,
or
maybe
even
you
know,
certain
services
and
certain
namespaces
there's
a
lot
of
different
things.
A
You
could
do
there,
but
I
just
think
the
idea
of
having
a
load
dynamically
is
nice
and
convenient
so
exploring
different
methodologies
there.
It
would
be
cool
now.
I
want
to
go
home,
green
code,
okay,
so
let's
scroll
down
here's
our
job
name,
doo,
doo,
doo
doo
we're
almost
there.
Okay,
so
Chris
novo,
TGI
K.
So
this
is
running
1983,
so
I'm
guessing
that
the
reload.
Oh,
it
did
work.
Okay,
so
maybe
it
was
just
hanging
behind
I
thought.
A
So,
let's
talk
about
what's
going
on
here
and
how
we
were
able
to
glue
all
of
this
together,
because
this
is
basically
you
know
getting
instrumentation
working
in
your
program
and
getting
Prometheus
to
to
read
the
instrumentation
bits
that
are.
Your
program
is
now
spinning
out
on
the
slash
metrics
in
point.
Ok,
so
we
created
a
go
program
I.
A
That
is
this
main
go
file
here
and
we
would
use
the
Prometheus
default
handler
and
we've
told
it
to
list
listen
on
slash
metrics.
We
then
stuck
it
in
the
container,
push
it
up
to
docker
hub
and
created
a
deployment
in
kubernetes
and
took
a
service
to
expose
that
deployment
internally
to
the
rest
of
the
cluster,
and
we
did
the
port
remapping
to
the
default
port
here,
which
is
1313
to
port
1993.
That
is
then
defined
here
and
this
bit
of
yeah
Mille
as
well.
So
this
target
definition
is
a
Prometheus
level
construct.
A
We
were
just
talking
about
how
the
kubernetes
operator
that
Joe
used
in
his
episode
that
our
friends
at
Correa's,
Corollas,
&
Mateus
here,
are
working
on
make
this
whole
thing
a
lot
simpler,
so
that
you
don't
have
to
go
in
and
actually
made
adam
annual
target
like
what
I
did
here
anyway.
After
that's
all
I've
been
running,
Prometheus
can
then
begin
to
scrape
your
newly
created
go
program
and
start
to
track
those
metrics.
So
let's
see
what
hasn't
he
says,
he
says
yes
and
it's
not
recommended
to
use
the
operator
and
do
it.
A
So if we go to Prometheus, we know we're good, and we can kind of, like, hover our... how did I do it... yeah, you, like, hover your mouse over it here, and you can actually see some main information about what Prometheus is gathering in your application, which, again, this is the default stuff.
A
Software group says: does anyone have the latest document on how to configure the Prometheus operator to monitor an InfluxDB database in Kubernetes 1.11? Yeah, feel free to add it to the doc if you would like to. So...
A
Let's
look
at
gorfod
because
this
is
where
we're
really
going
to
start
to
be
able
to
visualize
this
data.
Okay,
so
I
wanted
to
create
a
new
dashboard
for
the
TGI
Kay
app,
so
I
think
I
got
this
down
where
I
don't
need
to
look
at
the
documentation,
but
there
is
some
really
good
Doc's
out
there
that
I
used
when
I
first
did
this
for
the
first
time
yesterday.
But
let's
see
if
I
can
just
do
this
from
memory.
So
if
we
do
create
new,
we're
gonna
tell
it
to
make
a
graph.
A
We
all
know
how
much
we
love
graphs
here
and
we
can
actually
see
that
this
really
interesting
graph
was
created
for
us
and
the
reason
this
graph
looks
really
interesting
is
because,
if
you
right
click
here
and
go
to
edit,
you
can
see
that
we
have
this
data
source.
That's
called
fake
data
source,
which
basically
I
think
is
a
data
source
that
just
looks
really
pretty
so.
A
The
first
thing
we
want
to
do
is
we
want
to
get
rid
of
our
really
pretty
data
source
and
now
we're
gonna
make
it
so
that
we're
gonna
have
really
ugly
data
sources
that
we,
hopefully
you
can
turn
into
really
pretty
data
sources
a
little
bit
later.
So
if
we
look
at
let's,
actually
they
just
named
this
dashboard,
really
quick.
A
So
let's
go
back
to
edit,
let's
go
to
general
and
we're
gonna
call
this
the
TGI
K
app
dashboard,
so
I
should
be
able
to
save
this
thing,
which
I
go
up
here
and
I
hit,
save
we're
gonna
call
it
TGI
K
app,
and
so
we
save
with
this
new
dashboard.
So
if
you
look
at
how
this
graph
ona
is
set
up,
you
have
to
define
a
data
source
which
is
just
one
of
the
many
places
growth
odda
could
be
pulling
data
from
to
revisionist
prometheus.
A
...is our data store that's going to be regurgitating all of our time-series data, and we're gonna define Prometheus as one of our data sources. So that's done here, and basically we use kube-dns to hit the Prometheus deployment on port 1990, and it's pretty straightforward. Okay. So now that we have a data source already pre-configured for us, we can go back to our dashboard here.
A
If anybody has experience with getting a Prometheus stack of sorts up and running on EKS, maybe let Pablo know, and you folks can sync offline. I have never been able to set that up before, I've only used EKS once before, Pablo, but feel free to, you know, put a note in the markdown file, or ask the question in the issue tracker, and we can see if we can track it that way. Mateus says: Pablo, I think it's better to ask this in the Kubernetes Slack in sig-instrumentation. Which, actually, that's a really good point, Mateus.
A
No,
it's
a
really
great
place
to
ask
questions,
but
if
you
are
using
agar
fauna
or
Prometheus
or
any
of
the
monitoring
style,
tooling,
sig
instrumentation
is
a
really
great
resource
with
a
lot
of
folks
who
are
working
on
these
projects
to
get
together
and
help
you
if
you
ever
get
stuck
along
the
way,
also
Pablo.
If
you
would
like
I'm
going
to
like
take
off
my
Chris
Novak
FDOT
GI
Kenny
hat
and
put
on
my
Chris
Nova
sing,
AWS,
chair
hat.
A
If
you
want
to
ping
me,
offline
I
can
also
ask
CA,
WS
and
folks
internally
at
Amazon
routinely
doing
that
call,
so
they
might
have
some
experience
there
as
well,
so
feel
free
to
do
that.
If
that
would
help
you
ok,
so
let's
switch
hats
back
to
T,
G.
Ok,
now,
ok!
So
here
in
our
dashboard,
we
don't
have
any
data
points
because
we
have
to
add
some,
and
this
is
where
I
can
like
last
night,
I
was
like
playing
with
this
and
I
spent,
probably
two
or
three
hours.
A
Just
looking
at
all
the
different
data
bits
you
could
get
from
an
application,
that's
just
default
and
it
was
there's
a
lot
of
stuff
going
on
here
and
we're
gonna
see
some
of
those.
So
we're
gonna
add
a
new
metric
here.
So
we
come
in
and
we
pick
our
Prometheus
data
source
I
want
to
make
sure
I
get
my
verbage
right
here
and
then
from
data
source.
We
want
to
add
a
query,
and
so,
in
order
for
us
to
build
a
query,
we
can
actually
do
metric
lookups
over
here
and
this
thing.
A
So
if
you
just
stick
your
mouse
cursor
in
it,
you
actually
can
go
and
you
can
see
all
the
different
metrics
you
can
go.
So
in
this
case
we
wanted
to
go.
There's
like
go
routines
or
something
just
to
get
us
started.
Yeah
go
go
routines,
ok!
So
now
we
have
some
data,
but
let's
see
what's
going
on
here.
So
if
you
look
this
bottom
piece
of
the
graph
here
has
all
of
these
different
color-coded
definitions,
I'm
completely
colorblind.
So
I
can't
even
tell
these
things
apart.
So
most
of
today
is
going
to
be
me.
A
...isolating these based on job labels. So if you look in here, you can see there's this little... it looks like JSON, but it's not quite perfectly JSON, inside of these curly braces, that says the instance is equal to the name of the instance, and it's the Kubernetes nodes. And if you keep looking further down, you can see app is equal to prometheus, component is equal to core, and then there's our instance type as well. So we want to find the TGIK app, so I think what we want to do...
A
Let's
just
try
to
define
this
and
see
if
we
can
get
anything
so
how
we
would
do
that
is,
we
would
do
curly
brace
app
is
equal
to
t
GI
k,
app
double
quote
in
curly,
brace
and
then
I,
don't
know
if
there's
like
an
inter
button
or
anything,
but
what
I've
been
doing
is
just
clicking
in
this
gray
space
down
here
and
that
will
reload
our
dashboard
for
us.
Okay,
so
I
don't
see
the
TGI
K
app
goroutines,
just
yet
Mateus
I'm
guessing
this
is
another
brilliant
comment
of
yours.
A
You're
fond
of
version
5
has
a
really
beautiful
type-ahead
for
ready
metrics.
Give
that
a
try
at
some
point.
Okay,
so
in
the
new
version
of
grifone
it
looks
like
this
user
experience.
Here
is
a
little
bit
better,
so
we
can't
get
go
routines.
Let's
see
if
there's
another
one
where
we
can
find
the
TGI
K
app.
A
So
let's
go
and
let's
type
go
again
and
let's
see
if
we
can
do
if
we
won't
do
go,
let's
do
HTTP
requests
total
and
we
want
to
move
this
back
to
the
beginning
and
let's
see
how
this
gives
us
any
results
so
I'm
guessing
for
some
reason:
the
TGI
K
app
isn't
being
populated.
Let's
see
what's
going
on,
let's
see
if
we
can
find
it
in
here,
Mateus
says:
I
think
clicking
in
the
gray
space
is
correct.
The
query
is
go-go
routines
job
equals
te
ika
app!
Thank
you
Mateus
again.
A
So yeah, so this is kind of like... you're going to see some of the user experience here. You've got to go back and delete this, and then we want to add our curly braces, and I think Mateus says it's job is equal to tgik-app, or app, singular. So let's run that and click in the gray space. Okay, perfect, okay. Sorry, I'm gonna actually do this offline.
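For reference, here's the working query from chat, plus a couple of variations showing how label matchers sit in curly braces after the metric name (the aggregation and rate examples are extra illustrations, not something run in the episode):

```promql
go_goroutines{job="tgik-app"}

# same metric, averaged across all instances of the job
avg(go_goroutines{job="tgik-app"})

# counters are usually graphed as per-second rates, not raw totals
rate(http_requests_total{job="tgik-app"}[5m])
```

The labels inside the braces are the same job/instance/app labels visible in the legend she was squinting at, which is why filtering on `job` isolates just the one series.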
A
You can see that we actually don't create a new goroutine anywhere in here, and we're just doing listen-and-serve in sequence, not concurrently, so it would make sense that we're only really seeing six goroutines here. Which is already kind of cool, because you're getting some visibility into your Go application. For instance, the world's simplest Go program, with a handful of Prometheus and logger lines in it, is generating six goroutines, which is interesting, because you'd really think there'd only be one.
A
So
there's
there's
definitely
some
other
things
going
on
in
your
program,
thanks
to
their
our
friends
over
on
the
go
team
who
have
built
this
wonderful
programming
language
for
us,
but
yeah
we're
definitely
getting
some
visibility
into
your
app.
So
yesterday,
at
this
point,
I
was
like
I
wonder
if
I
can,
just
totally
like
spike
up
the
number
of
go
routines
and
see
a
spike
in
our
data
here.
So
this
is
where
I
ended
up.
Writing
that
kubernetes
fork
bomb
that
I
tweeted
about
last
night,
where
I
just
created
an
unlimited
supply
of
go
routines.
A
So
I'm
not
going
to
do
that
today,
but
instead
I'm
going
to
write
a
really
simple
for
loop.
That's
going
to
actually
simulate
a
spike
and
we're
going
to
actually
be
able
to
see
the
amount
of
go
routines,
go
up
and
see
that
regurgitated
here
on
the
TGI,
K
app
dashboard
and
actually
see
what
the
the
data
looks
like.
A
So
this
is
just
a
good
all
for
syntax
here,
if
you've
ever
written,
C
or
Java
before
this
is
going
to
look
Miller.
But
all
we're
doing
here
is
we're,
saying,
create
a
new
variable
and
set
it
to
zero,
continue
to
loop
and
check
this
condition.
That
n
is
greater
than
or
equal
to
I
and
at
the
end
of
every
loop,
go
ahead
and
run
this
in
plus,
plus
which
just
increments
in
by
one.
So
we're
just
going
to
loop
through
1000
times
is
all
that's
happening
here.
A
So
why
we're
looping
through
a
thousand
times
we're
going
to
just
start
a
new
go
routine
by
doing
go
funk,
and
in
here
we
will
do
a
time
dot,
sleep
where's
our
random
thing
here
yeah.
So
this
is
what
I
had
earlier.
So
this
is
just
gonna
sleep
for,
like
some
random
amount
of
time,
which
is
going
to
be
some
random
int
between
0
and
100,
and
it's
gonna
do
that
for
that
me
microseconds.
A
So we're going to get a little bit of authentic-looking data here. So for each one of those, go ahead and spike, and we're only going to do that spike once, right at the very beginning, and then we're going to go ahead and start our server. So we're going to go ahead and re-push this now, and we're going to do that by doing a `make container push`.
A
Our TGIK app scaled to 0, and then we'll set it back to 1, and now we know that we've gone and deleted a pod and created a new TGIK pod. The pull policy is set to Always, so that's going to pull down the newly created Docker image and get this thing up and running. So if we go back here to Grafana, I'm curious to see if we're going to be able to get a spike in our goroutines here.
A
So let's see, `kubectl get po`: the TGIK app has been running for 21 seconds. It should have looped through those goroutines pretty quickly. I didn't have anything in the logs, but let's just go see what's going on in our logs here, so `kubectl logs` on the name of the pod with `-f`. Okay, so we got to the serving-metrics log line, so we know we've already gone through and actually created a bunch of goroutines, and we might actually miss the number of goroutines depending on how the default handler here works.
A
I don't know if it behaves in such a way that you're going to get that metric reported no matter what, or if it just samples the data in your program randomly. In which case, if it does sample it, either randomly or in some sequence, there's a chance that that sample will be taken at some point in time other than when this is happening. So Mateus, this is going to be your question for the day. I feel like I might actually be able to stump you with this one, but we'll see.
A
Mateus says the metrics are collected at the time the HTTP endpoint is hit, which right now we have configured to one second, so there's a good chance that we should get some of these. I can definitely bump that sleep up. Actually, let's just do that for good measure; I want to see if we can get a spike in our data here, which is interesting. So let's do time.Sleep.
A
Yeah, and I guess that would make sense, that the metrics are collected whenever it's hit, because they just pass this in as a handler. I was just wondering if it did any caching behind the scenes, or like how intimately it was tied into the Go source tree itself. Okay, so now that that's built, let's scale down to zero and then scale back up to one, and let's go back to our dashboard here and reload and see what we can get. Oh, we lost our query.
A
Job is equal to tgik. I love how this was supposed to be a short episode today, but we're already an hour and a half in, and this is just what happens on TGIK whenever I start hacking. So let's hit the graph. Okay, this is cool, so we have a spike in our data. Finally! Anthony says click on "last six hours" and you can set it to auto-refresh, like every five seconds. So: from last six hours to now, refreshing every five seconds, and let's do now to six hours. Let's do now.
A
Let's do 15 minutes; I wonder if that will work. Perfect. Look at this: I feel like a real observability engineer here. We have a spike in our graph that we just created by instrumenting our code and doing some cool things. And if you look here, you can actually see that the Go program is super, super linear, which is exciting.
A
If you think about how much work was put into the Go programming language to make this as linear as it is: it basically says that we're creating all of these goroutines in a relatively uniform amount of time at runtime, sleeping for exactly one second, and then slowly dropping off, and we're so good at it that that second is spread pretty evenly.
A
Over the course of what looks like about two minutes here. So that's really cool, to see the Go program actually go and do this. So anyway, here's our spike in our data, and we can have different queries here, and we can actually go and start to put different metrics into our application.
A
I'm going to go through this last little custom bit of writing your own metric here at the end, and then we're going to sort of wrap everything up for the day, so maybe only 10 more minutes and then I'll get out of here. So if we go back to our reference links, we're going to look at this Instrumenting a Go application with Prometheus guide, which is the first thing that I brought up and probably the last thing I'm going to do, and which has this little snippet here.
A
So we're going to borrow quite a bit of this. Mateus says most of these metrics are stateless and collected when requested. So that's a really good point. I think what Mateus is saying is they're only taken when requested, meaning that Prometheus is only ever going to go and take them off of the metrics endpoint whenever somebody asks for them, and store them because we're using them. So that's pretty cool to know: everything is sort of stateless, and we're not going to start actually tracking it until we need it at runtime. Okay.
A
We can actually then turn recordMetrics on or off concurrently just by adding or removing the word `go` there. So anyway, we'll start off by just creating a function similar to this one in our program, and we'll do that down here at the bottom. We'll call it tgikMetrics, and we're going to do just that. We're going to get rid of this, and we're going to get rid of this, and let's save that. Looks like opsProcessed is just something that we need to import.
A
Where is opsProcessed? Okay, so now let's look and see what's going on here. So in this example, they define a new variable called opsProcessed. They use this promauto, which I'm assuming is a Prometheus auto NewCounter primitive, which is coming from, I'm assuming, this package here, which is in that client_golang Prometheus client library we looked at earlier. So I bet I can just copy this whole snippet and paste it into our application here.
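The snippet from that guide defines the counter with promauto.NewCounter and a prometheus.CounterOpts carrying a Name and a Help string. Since client_golang isn't vendored here, this is a dependency-free sketch of what that counter amounts to (hand-rolled stand-ins, not the real library API), including the text it ends up producing on /metrics; the metric name matches the one used later in the episode:

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// counter is a minimal stand-in for the value promauto.NewCounter returns:
// a monotonically increasing count with a name and a help string.
type counter struct {
	name, help string
	value      uint64
}

// Inc increments the counter by one, like the client library's Inc.
func (c *counter) Inc() { atomic.AddUint64(&c.value, 1) }

// exposition renders the counter in the Prometheus text format that the
// real client library serves on the /metrics endpoint.
func (c *counter) exposition() string {
	v := atomic.LoadUint64(&c.value)
	return fmt.Sprintf("# HELP %s %s\n# TYPE %s counter\n%s %d\n",
		c.name, c.help, c.name, c.name, v)
}

func main() {
	opsProcessed := &counter{
		name: "myapp_processed_ops_total",
		help: "The total number of processed events",
	}
	for i := 0; i < 3; i++ {
		opsProcessed.Inc()
	}
	fmt.Print(opsProcessed.exposition())
}
```

When Prometheus scrapes /metrics, it reads exactly this text form, which is why the new series shows up in Grafana with no extra configuration.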
A
So for promauto, let's see if we can auto-detect it... not seeing it. So let's just add the line manually, and we'll do a `dep ensure` as well. So let's grab promauto and promhttp; we'll go and import these into our code base here, and here we already have promhttp. And let's do a `dep ensure` and see what happens. Yep, `dep ensure -v`; I know I brought this up on a previous TGIK, okay, but every time you do a `dep ensure -v` you get this little source output.
A
I really want to go and see which functions are available in this promauto package, so as soon as we get this thing imported, I want to just look off the cuff and see what looks exciting. So for prometheus.CounterOpts, I'm assuming CounterOpts is short for counter options. We give it a Name and a Help, and then basically opsProcessed is just some sort of incremental counter that we could use arbitrarily. And you would want to use an incremental counter in your app for a number of things.
A
This could be anything from how many events you have processed, to maybe how many requests you've taken, or maybe how many requests have a certain type. As you're looking at instrumenting your application, as your application is taking action and doing things, you would then be able to have some sort of variable, or probably some sort of package that behaves like a singleton (you know, package-level state and singletons in Go), where you would be able to actually go and increment some counter.
A
So we can start to look at really exciting patterns in the Go programming language: maybe we have some sort of higher-level struct with all of our Prometheus auto counters and types in it that we can access from anywhere in the package; maybe we pass a global one around; maybe we have one per file. There's a couple of different ways.
A
So if you know you have some sort of data pipeline, and you get, say, seven or eight different types of events that might come through (maybe we have orders or refunds or whatever), you're actually able to do a little bit of segregation, and you would be able to see on a graph, at one time, how many of the different types of events are coming through your system. And you could process that however you want with Prometheus and a tool like Grafana, just by adding a simple counter to your application.
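A sketch of that idea; the event types and metric name here are hypothetical, and the real thing would be a prometheus.CounterVec with a type label, which is hand-rolled here as a mutex-guarded map so it runs standalone:

```go
package main

import (
	"fmt"
	"sort"
	"sync"
)

// eventCounts is a tiny stand-in for a Prometheus CounterVec: one counter
// per event type, safe to increment from anywhere in the package.
type eventCounts struct {
	mu     sync.Mutex
	counts map[string]int
}

// Inc bumps the counter for one event type.
func (e *eventCounts) Inc(eventType string) {
	e.mu.Lock()
	defer e.mu.Unlock()
	e.counts[eventType]++
}

// exposition renders one labeled series per event type, the shape a real
// CounterVec produces on /metrics, sorted for stable output.
func (e *eventCounts) exposition() []string {
	e.mu.Lock()
	defer e.mu.Unlock()
	var lines []string
	for t, n := range e.counts {
		lines = append(lines, fmt.Sprintf(`myapp_events_total{type=%q} %d`, t, n))
	}
	sort.Strings(lines)
	return lines
}

func main() {
	events := &eventCounts{counts: map[string]int{}}
	for _, t := range []string{"order", "refund", "order", "order"} {
		events.Inc(t)
	}
	for _, l := range events.exposition() {
		fmt.Println(l)
	}
}
```

Graphing each labeled series separately is what lets you see the per-type breakdown on one panel at one time.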
A
So let's go back and see. Mateus says: yes, personally I always try to create a prometheus.NewRegistry in main.go, and then all the metrics get passed around, no globals. So yeah, this is like a stylistic thing. I know a lot of people who say that you should never have package-level state in Go, and I know a lot of folks who enjoy passing something around globally.
A
It looks like in Mateus's example, as soon as this is done (which it looks like it is now), there are some built-in prometheus.NewRegistry primitives that allow us to sort of interact with this new registry thing, and it looks like they already built that for us, which is handy, because I was sort of just talking through what it might be like to build that on your own. Okay, so anyway, we should have our dependencies checked out.
A
So let's go back to GoLand and see if our promauto here wants to look itself up. Aha, okay, so that worked. So if we go through and look at (where were we?) this promauto.NewCounter, we can actually go through and see what else we have here, just by doing `promauto.` (did I spell that right?). I guess it might still be indexing.
A
We can come back to this in a second, so let's just build the rest of our program and then we can look up and see what the library has. So in our recordMetrics we just have this for loop that does a sleep for two seconds and then increments, and then we just call recordMetrics. So this is pretty straightforward; this is really easy. So let's mutate this a little bit and give it a little more interesting spike here.
A
Okay, so we can Add one to it, or we can just do Inc, which I think is basically doing a plus-plus. What does Desc do? This is interesting. So this is the counter here that's built into the Prometheus library for us. Mateus says: however, these metrics are not populated by the API server and thus are noise.
A
Could you expand on that a little bit more, Mateus? What do you mean by they're not populated by the API server? So this just returns a descriptor for the metric, which I think is just part of what you would define in the metric, which is pretty straightforward. Okay, so let's go ahead and run this: after we do our spike in goroutines, let's go ahead and do `go tgikMetrics()`, and we'll run that concurrently as well, and then we'll start our server. So let's refresh this: `make container push`.
Oh
I
guess:
I,
miss
one
Mateo
said
he's
reading
a
message
above
the
last
one
due
to
the
package
global
metrics,
the
kubernetes
api
server
apparently
exposes
at
CD
metrics
because
it
vendors
some
packages
of
at
CD
got
it.
However,
these
metrics
are
not
populated
by
the
api
server
and
thus
are
noise.
Okay,
so
he
was
referring
to
how
kubernetes
it
attempts
to
monitor
a
CD.
I
was
like
what
are
we
talking
about
here?
Is
there
some
other
api
server
and
either
register
my
metrics
with?
A
What's going on? So yeah, we're good. So `make container push`; let's do our scale command again. I feel like today's episode is just me and Mateus talking back and forth, which is totally rad. So let's do our scale with replicas equal to one. And you're not spamming at all; folks are here to learn, and you're an expert on this and one of the maintainers of a successful operator, so I think folks are excited to hear what you have to say.
A
So, if anything, we should be thanking you, Mateus, for helping us out today. And also, I hope I've been saying your name correctly, because I've only said it like a hundred times in the past hour and 40 minutes. So: replicas equal to 0, and then replicas equal to 1. Okay, so that thing is scaled, so let's go back to Grafana and see what we got here. And if everything worked (which we're going to see if this works)...
A
But if everything worked, we should have this myapp_processed_ops_total, because we called it myapp_processed_ops_total, which I should have given a better name, but anyway, this should now be appearing on the /metrics output. So Prometheus should just pick this up automatically, register a label, and then we're off to the races, and we can query for it directly in Grafana. So let's go see if we can find it here. Okay, so our new app that we just pushed is running.
A
myapp_processed_ops_total, okay! So let's go here and look: it's already found it. That's really cool. And let's put myapp_processed_ops_total into this space and click here, and we should see a spike, shouldn't we? Is it there? Just because I'm viewing this thing too late? And actually, you know what I want to do is save.
A
How do I save this one? Because I can add both on here at the same time, I think. Do I just go up here and hit save? Yeah, I think so. So now let's add a query, and we'll do our go_goroutines, and we're going to add our job equal to tgik as well, and we should have two metrics here. Yeah, so here's our goroutines, and I'm not seeing this new metrics counter go up. Is there a bug in our code here?
A
What's going on? So here we call...
A
Let's add a log line and see what happens: "calling tgikMetrics". So we call that, and then we know our server starts; our server obviously is working, or we wouldn't have gotten that metric in Grafana. So we know we're calling this function tgikMetrics, and then we just do a for loop to run indefinitely that just says sleep. Okay, I bet it's because we're sleeping for seconds here, meaning that... actually, no, that shouldn't matter; it should be working.
A
Automatically refresh every five seconds. Okay, so this is so fast. I love watching how fast the Kubernetes scheduler actually is. So much just happened when I changed that zero to one: it hit Docker Hub, realized there wasn't a new container, and launched a new container with the existing image it already had locally. That it does all of that so fast is just cool; we kind of get a lot of that for free with Kubernetes, and it's fun to take advantage of it.
A
Okay, so anyway, we're seeing our spike in our goroutines, but we're not seeing a spike in our ops counter, so I want to try to debug this a little bit more. I think it's going to be really interesting to actually go through and learn about all the different types of counters that we could do with Prometheus, and all the different ways we could start to gather metrics. In the future, I would really like to talk about implementing this on a real-life use case.
A
It would probably be a great metric to have in an operator, like one for the Cluster API. Mateus says: please click on only the my app metric in the graph panel again, as this doesn't go up to 1k yet. Okay, so let's see what Mateus is saying. How do I do that? I think that's this button here. Oh, I see, okay: so that one goes up to 1k, and then that one goes up to 1. So wait.
A
This is what I want to do. Oh, interesting! Okay, so why does this not... okay, I see, I see what's going on now. So let's look at our code and explain this. We're just doing this for loop, and we're sleeping for some amount of time and incrementing slowly. So this thing is counting up, up, up, slowly, on our graph.
A
I foolishly thought it was a reasonable assumption that the graph dimensions we were using for the goroutines, which instantly shot up to a thousand and came back down, would be relevant. But as Mateus pointed out, we actually need to inspect them independently of each other; this one has just gone up to two or three here. So here you can see what's happening inside of our application: we're sleeping for some amount of time, that sleep expires...
A
We increment the counter by one; we sleep for a random amount of time; that sleep expires; we increment the counter by one. So if we let this thing run for about a thousand more iterations, which could take quite some time, we would actually be able to do both of these on the same graph: the bottom graph would look like this slowly incrementing line, and the top graph would be just a big spike as well. And Anthony says: yes, put the metric on the secondary y-axis.
A
What do I need to click on? Axes, okay. So we have Left Y and we have Right Y in our axes. So here's resolution... where do you tell it which axis to go on? If you want to let us know, that'd be great. Anyway, it's an hour and 15 minutes in, so we've been doing this for quite some time, and I really need to wrap up here.
A
So, at the end of the episode, we'll try to get this graph up and running here in a second. But at the end of the episode I like to start saying goodbyes a little bit early, and then I'll kind of recap everything that we've done today for folks at home. So yeah, we were able to create a new Prometheus deployment using some older YAML, but some interesting YAML, that talked about the various complexities in getting Prometheus, the Alertmanager, and Grafana up and running.
A
We did this really, really fast spike in goroutines, and the second one we did was this example of creating a new counter that we called opsProcessed, and we incremented it randomly over the course of a very, very long time. And then, of course, we gave it a name, which is the interesting part about using the Prometheus libraries in your Go code.
A
So let's see; I'm hoping folks can tell me how to do the y-axes really quick. So Mateus said to change the axis you need to click on the short green line to the left of the metric in the graph, so colorblind Nova here is going to try to figure this out. So click on this... look at that! Look at that beautiful graph! Okay, let me switch over. Okay, so this is like the graph of the day.
A
Actually, I'm going to take a screenshot of this really quick so I can tweet it. We finally got like a really cool graph here. So yeah, thanks for joining; we finally got our beautiful graph using custom Prometheus instrumentation in Go, and we were able to see some exciting things about running a new program (in Kubernetes here, though you can run it in production or wherever) and creating new metrics using the client libraries that the folks at Prometheus have offered for us. So that's all I have today; thanks for joining, everyone.
A
Chris, yeah, unfortunately I think I need to cancel one of my mountain clients this weekend. I just cancelled some of my travel for next week, so I think I'm going straight home and laying in bed, and I'm going to be making soup and watching movies all weekend, which is probably going to be good for me anyway, to chill out for a little bit. So Bogdan says at least you can find a 5x, okay. So anyway, that's all I have today; thanks for joining, everyone.
A
Bogdan says great episode, get well soon, and thanks for bearing with us through the almost two-hour-long TGIK episode. I'm wondering if we've set any new TGIK records for the longest episode; that was supposed to be my shortest episode, which is hilarious. Today on TGIK, Carlos Santana (great name) says: can you show how you added the other y-axis? Yeah, so Carlos...
A
So here we have the Left Y axis and the Right Y axis, and we can define some of the configuration. And then, actually, you can have any of these tabs open here: you come here and you can see all of your different data sources, and you can just single-click on them, and you can change their color, and you can tell it which axis you want it to go on.
A
So now, on the left side here, we have up to a thousand, for our thousand goroutines, and on the right side we have our counter, which is slowly going up in the background. And if we let this thing run long enough, these two axes would start to approach each other and ultimately become very similar in nature, and our graph would look very different; actually, it might look close to the same, just zoomed down a bunch. So Roy says thank you, get well; Anthony says thanks.
A
I'm sorry we won't see you again in London next week. Yeah, I haven't announced it to folks yet, but I'm definitely too sick to be traveling; I really shouldn't even be here at the office, and I don't want to get anybody else sick. So, unfortunately, I'm not going to be in London at the beginning of next week.
A
Unfortunately. So Michael says thanks, get well soon (I'm not even going to try to say his handle), and he said cheers and good night, all. And Carlos Santana says thanks, great show. No problem, Carlos; I really enjoy your name. So thanks for joining, everyone; it's been real. I've got to go back to my face here. I hope everybody has a good weekend; feel free to hit me up. We love doing TGIK, as always, but yeah, hit me up.