My name is Leon Baron. I'm a Solutions Architect at Tigera, and in this video we're going to run through the steps needed to take a bare-bones Kubernetes cluster to a cluster that has Calico, Prometheus, and Grafana deployed and running, with the Calico components exposing metrics for Prometheus consumption.
How we'll do that is: we're going to take a Kubernetes cluster that has no CNI installed in it, and using Helm we'll deploy Calico open source and the Prometheus stack. We'll then enable the Calico components to expose metrics and ensure that Prometheus discovers these metrics. Finally, we'll create a sample dashboard in Grafana to display these metrics.
Now, before we begin, let's first take a look at the lab environment we have set up here. We have a three-node Kubernetes cluster installed using kubeadm, and a bastion host where we'll be running all of our commands from. We have one master node and two worker nodes, and the node network is 10.0.1.0/24.
As a brief overview of what each of these components does: calico-node runs as a DaemonSet on every cluster node, and Felix, which is the component we're interested in, is a core process running inside calico-node. One of the main responsibilities of Felix is realizing and enforcing Calico security policies on the data plane. Calico Typha is a caching datastore proxy that sits between the calico-node pods and the Kubernetes API server, and its main function is to allow for scale. And then finally, we have the Calico kube-controllers.
Once we've got that done, we're going to create a namespace called tigera-operator, and this is the namespace that we're going to target our Helm install into. So we run the command helm install calico projectcalico/tigera-operator. I've chosen the latest version available at the moment, and we're going to install it into the tigera-operator namespace.
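A sketch of those commands, assuming the Project Calico Helm repository; the version shown is illustrative, so pick whatever is latest when you run this:

```
kubectl create namespace tigera-operator
helm repo add projectcalico https://docs.tigera.io/calico/charts
helm install calico projectcalico/tigera-operator \
  --version v3.25.0 \
  --namespace tigera-operator
```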
Now, because the calico-node pods have been deployed, if we rerun the kubectl get tigerastatus command, we'll now see that the calico resource is available. We can also see that the apiserver resource is available too. So if we rerun the get pods, we now see that everything is running. To confirm completely, if we do a kubectl get nodes, we can see that all nodes are now in a Ready state.
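Roughly, the checks narrated here (the pod listing could equally be run with -A across all namespaces):

```
kubectl get tigerastatus
kubectl get pods -n calico-system
kubectl get nodes
```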
So, with Calico open source deployed and all the pods up and running, we can move on to deploying the Prometheus stack.
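The Prometheus stack install itself isn't captured in this transcript; a minimal sketch, assuming the standard prometheus-community kube-prometheus-stack chart, with the release name prometheus-stack and the monitoring namespace that the rest of the video refers to:

```
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
helm install prometheus-stack prometheus-community/kube-prometheus-stack \
  --namespace monitoring --create-namespace
```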
We can check further by checking on the pods in the monitoring namespace, and here we can see that we've got Alertmanager running, the kube-prometheus-stack Prometheus is also running, we have Grafana running, we've got the Prometheus operator running, kube-state-metrics, and then the Prometheus node-exporter pods all running. So now that they're all running, we should get a good output from here, and indeed we now see that it's ready. We can also check Grafana.
Once the Ingress is fully up and running, we should be able to browse to this host URL to access the Prometheus UI, and down here it describes what service the Ingress is going to match to. So if we run a kubectl get service on the monitoring namespace, we can see that this service name matches this ClusterIP service, and we can see that that service is listening on TCP port 9090.
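The manifest itself is only shown on screen; a minimal sketch of an Ingress of this shape, where the host and ingress class are illustrative assumptions, and the service name follows the usual kube-prometheus-stack naming for a release called prometheus-stack (confirm it with kubectl get service -n monitoring):

```
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: prometheus-ingress
  namespace: monitoring
spec:
  ingressClassName: nginx              # assumption: an NGINX ingress controller
  rules:
  - host: prometheus.example.com       # hypothetical host URL
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: prometheus-stack-kube-prom-prometheus
            port:
              number: 9090             # the service port noted above
```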
Now, the Ingress is going to take a couple of seconds to apply. We can monitor that by running a get ingress, and we can see that this is the Ingress that's currently being deployed, but we've got no addresses associated with it. So if we were to open a browser to this location, we wouldn't see anything. We take a look again, and eventually we're going to see addresses attached to it, and once they're attached, we know that we've got access to it.
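For example:

```
kubectl get ingress -n monitoring --watch   # wait for the ADDRESS column to populate
```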
We should now see the Prometheus UI, and indeed we do. We can even check the targets that Prometheus already has, and it's quite interesting: because we've deployed it through Helm and it was the full stack, it's already monitoring itself. So it's already monitoring Alertmanager, it's monitoring the Prometheus operator, the Prometheus server, kube-state-metrics, and the node-exporters.
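The Grafana Ingress applied in the demo isn't shown in this transcript; a minimal sketch mirroring the Prometheus one, with the same caveats (hypothetical host, assumed NGINX class, and the usual chart naming for the Grafana service):

```
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: grafana-ingress
  namespace: monitoring
spec:
  ingressClassName: nginx
  rules:
  - host: grafana.example.com          # hypothetical host URL
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: prometheus-stack-grafana
            port:
              number: 80               # the chart's default Grafana service port
```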
This will take a few seconds, so what we can do to monitor it is run a kubectl get ingress -n monitoring, and we can see that the Prometheus Ingress, of course, already has IP addresses attached to it, but the Grafana Ingress that we just applied has nothing. So let's wait until that's populated, and then we'll look into the Grafana UI.
Let's take stock of what we've done so far. We first deployed the Calico components using Helm, and this gave us components such as Calico Typha, the Calico kube-controllers, and a calico-node pod on every cluster node. We then deployed the Prometheus stack using Helm; it deployed components such as Prometheus, Grafana, Alertmanager, and the node-exporters.
We then configured an Ingress to allow UI access to both Prometheus and Grafana, and from looking at the Prometheus UI we could see that many targets have already been discovered, both from the Prometheus stack itself and from the Kubernetes cluster. But now we want to add the Calico components to these Prometheus targets.
Next, we want to get the Calico component metrics into our Prometheus deployment, and to do this we need to follow three steps. First, we need to enable Prometheus metrics on both Felix and Typha; to do this, we're going to edit the FelixConfiguration resource to enable Felix metrics, and we'll edit the Installation resource to enable Typha metrics. After this, we need to create Services for both Felix and Typha, and it's going to be these Services that Prometheus will use to discover the endpoints, and which ports on those endpoints it needs to scrape.
Do note that these first two steps are completed by default for the Calico kube-controllers, so we don't need to do anything further there. And then finally, we create ServiceMonitor resources that will tell Prometheus about the Felix, Typha, and Calico kube-controllers Services; so the ServiceMonitors are really what's tying all these pieces together.
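For the first step, the two edits can be done with patches like these (both fields are the documented Calico settings for Felix and Typha metrics):

```
kubectl patch felixconfiguration default --type merge \
  --patch '{"spec":{"prometheusMetricsEnabled": true}}'
kubectl patch installation default --type merge \
  --patch '{"spec":{"typhaMetricsPort": 9093}}'
```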
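For the second step, a sketch of the two metrics Services; the names, labels, and port name are illustrative choices that the ServiceMonitors below refer back to:

```
apiVersion: v1
kind: Service
metadata:
  name: felix-metrics-svc
  namespace: calico-system
  labels:
    k8s-app: felix-metrics         # label the ServiceMonitor will select on
spec:
  clusterIP: None                  # headless: no ClusterIP is allocated
  selector:
    k8s-app: calico-node           # Felix runs inside the calico-node pods
  ports:
  - name: metrics-port
    port: 9091
    targetPort: 9091
---
apiVersion: v1
kind: Service
metadata:
  name: typha-metrics-svc
  namespace: calico-system
  labels:
    k8s-app: typha-metrics
spec:
  clusterIP: None
  selector:
    k8s-app: calico-typha
  ports:
  - name: metrics-port
    port: 9093
    targetPort: 9093
```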
Now, with those Services created, we're ready to grab metrics from those endpoints. Let's take a quick look at the services in the calico-system namespace, and we can see that we've got both the Felix metrics and Typha metrics services. Notice that they're ClusterIP type but they've got no IP addresses: we're not really using them from a networking perspective, we're really just using them so that we can discover the endpoints, and we can see that the ports are 9091 and 9093. Now notice that the Calico kube-controllers metrics service is already exposing its metrics TCP port.
We can see that it's called the Felix service monitor; it's in the monitoring namespace, and what it's doing is selecting the calico-system namespace and then selecting a service based on this label, and we can see here that that is going to be the same as this one. Now, something to be aware of here is this label up here, release: prometheus-stack. When we create these ServiceMonitors, we're creating them in the monitoring namespace, and this label has to match the Prometheus stack's Helm release name so that the Prometheus operator picks the ServiceMonitors up.
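The on-screen manifest isn't in the transcript; a sketch consistent with the description above and with the Services sketched earlier (the resource name is illustrative):

```
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: felix-monitor              # illustrative name
  namespace: monitoring
  labels:
    release: prometheus-stack      # must match the kube-prometheus-stack release
spec:
  selector:
    matchLabels:
      k8s-app: felix-metrics       # the label on the Felix metrics Service
  namespaceSelector:
    matchNames:
    - calico-system
  endpoints:
  - port: metrics-port             # the named port on that Service
```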
And finally, let's take a look at the kube-controllers one. Again, it's a ServiceMonitor, and again we've got the release: prometheus-stack label. We're calling it the Calico kube-controllers monitor, it's in the monitoring namespace, we're matching the calico-system namespace, and here we're matching k8s-app: calico-kube-controllers, which is the same as this label on that service.
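Again as a sketch, with the selector matching the label on the pre-created kube-controllers metrics service; the endpoint port name is an assumption, so check it with kubectl get service calico-kube-controllers-metrics -n calico-system -o yaml:

```
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: calico-kube-controllers-monitor
  namespace: monitoring
  labels:
    release: prometheus-stack
spec:
  selector:
    matchLabels:
      k8s-app: calico-kube-controllers   # label on the pre-created metrics service
  namespaceSelector:
    matchNames:
    - calico-system
  endpoints:
  - port: metrics-port                   # assumed port name; verify as above
```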
And after a few seconds, we now see the Calico kube-controllers monitor, so that's everything now. We can expand these and see that they're up, and if we were to take a look at these IP addresses, these IP addresses are going to be the pods that Prometheus is scraping. So the Service has shown Prometheus what the endpoints are, what the pods are, and then Prometheus is scraping those pods on the TCP port. To be able to visualize these metrics in Grafana, we need to create a dashboard. Now, you can of course create any dashboard you like.
If we navigate to this URL, we can see a felix-dashboard.json with all the configuration needed to create a default Felix dashboard in Grafana. So what we really need to do is copy all of this information and then change the data sources, because in this example the data source is just specified as grafana; we need to point it at our own Prometheus and Grafana instance.
So all we need to do is change each "datasource": "grafana" entry to a datasource object with type "datasource" and uid "grafana", so we'll replace all. And then we can find another data source that references calico-demo-prometheus, which of course is not referencing our Prometheus installation, so let's also change that; to change that, we just need to change this reference.
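A hedged sketch of those two replacements as sed one-liners; the uid for the Prometheus data source is an assumption, so check the actual uid under your Grafana data source settings:

```
sed -i 's/"datasource": "grafana"/"datasource": {"type": "datasource", "uid": "grafana"}/g' felix-dashboard.json
sed -i 's/"datasource": "calico-demo-prometheus"/"datasource": {"type": "prometheus", "uid": "prometheus"}/g' felix-dashboard.json
```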
We created Services for both calico-node and Calico Typha; remember, the Calico kube-controllers service was already created. And then these Services were referenced by the ServiceMonitors that we also created, to tell Prometheus how to discover the Calico endpoints it needed to scrape. We verified the targets were discovered correctly in Prometheus, and then we created a dashboard for the Felix-exposed metrics, based on a default dashboard provided on the Project Calico docs site. I hope this video has been helpful, and we look forward to seeing you in future videos.