From YouTube: Distribution Team Demo 2020-12-17
Description
No description was provided for this meeting.
A
Welcome, everybody, to the Thursday Distribution demo. This demo will actually be fairly straightforward and fairly simple, and should go pretty quickly. I'm not expecting problems, but with any luck we'll have a couple of minor problems so we can investigate them. What I'm doing is starting with an old version of GitLab deployed with Helm chart 4.2.0.
A
The reason I'm starting with that is that it was a version of the charts from before we made the changes to how Prometheus pulls its metrics.
A
So the older versions, such as the one deployed with 4.2.0, would basically monitor the entire cluster, so anything deployed to the cluster would be seen in the GitLab instance of Prometheus. With that in mind, I have set things up so we have both the GitLab Prometheus and a separate Prometheus instance running in the cluster, and then I also have an additional workload that's just running to give us a little bit of information, so we can see things disappear.
A
Ideally, we should see things disappear. We can see here in k9s that the GitLab instance is fully deployed and running. In the Prometheus namespace I have just a plain Prometheus Operator installed, which is gathering metrics, and we're now seeing those in the web browser there. I'm actually filtering for the GitLab namespace to confirm that metrics for the GitLab instance are going into that Prometheus, and then I also have just a generic workload, just something simple.
A
That's running just to get a few stats from outside, and if we look at that...
A
We can see that it's consuming memory and gathering statistics; that's just from the external Prometheus. If we flip over to our instance, let's jump in here.
A
We have our GitLab now, and our original query. It disappears into the mud down here with the GitLab metrics, because they overwhelm it, but I can easily toggle it on and off. This demonstrates that, right now, the GitLab Prometheus is monitoring everything in the cluster.
A
So what we did, starting in version 4.3.0 I think, is we changed the annotations a little bit for what the GitLab Prometheus monitors.
A
So if we come back here to the GitLab namespace and look at any of these services...
A
Yeah, here we go: we have the standard annotations for Prometheus to monitor our services and things like that. Those prometheus.io annotations are what both the in-cluster Prometheus and the GitLab instance of Prometheus are looking at right now, and that's how they're gathering those statistics. What I'm going to do now is redeploy GitLab with the current chart, which is 4.6.3, and we should see all these annotations change. What should happen is that our in-cluster Prometheus should continue monitoring all the GitLab instances.
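For reference, the standard annotations being discussed follow the common prometheus.io convention; a sketch of what they look like on a Service (the name and values are illustrative, not taken from the demo cluster):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: example-webservice   # illustrative name, not from the demo
  annotations:
    # Any Prometheus configured for this convention will discover and
    # scrape this Service, which is why both instances see the metrics.
    prometheus.io/scrape: "true"
    prometheus.io/path: "/metrics"
    prometheus.io/port: "8080"
```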
A
I don't need the install, but oh well; I am going to [inaudible].
A
Someone give me a little bit of audio, too. I killed my notifications, and my speaker shows it's down pretty low; I just want to make sure I'm getting audio from you guys. Testing... all right, excellent. Thank you; I just wanted to make sure I didn't actually kill my audio, with you guys talking to me and me not hearing you.
A
The method should be sort of interesting, right?
A
Now, yeah, see: we're seeing the killing off and restarting of everything, or at least some stuff still is.
A
While we're waiting, we should have things updated on the services.
A
I hate the [inaudible] of k9s. Now, there we go; we're looking at what we were given before.
A
In addition, what we did is we added a new set of annotations, which are basically more or less direct copies of the original annotations with our modifications, and then we configured our Prometheus instance in the GitLab deployment to only look at those annotations.
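The new annotation set being described matches the GitLab chart's gitlab.com-prefixed annotations; a sketch, assuming those annotation names (the Service name and values are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: example-webservice   # illustrative name, not from the demo
  annotations:
    # Copies of the standard prometheus.io/* annotations under a
    # GitLab-specific prefix, so only the bundled Prometheus uses them
    # while other Prometheus instances keep using the standard ones.
    gitlab.com/prometheus_scrape: "true"
    gitlab.com/prometheus_path: "/metrics"
    gitlab.com/prometheus_port: "8080"
```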
A
Yeah, here we go: these source labels here (the view keeps moving around on me; it's constantly getting rewritten) tell the GitLab Prometheus that those are the annotations to look at, not the standard annotations.
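The source labels being pointed at belong to a Prometheus relabeling rule; a minimal sketch of the kind of rule described, assuming the gitlab.com/prometheus_scrape annotation name from the chart:

```yaml
relabel_configs:
  # Keep only targets whose pod carries the GitLab-specific annotation,
  # so this Prometheus ignores pods annotated only with prometheus.io/*.
  # Prometheus maps the annotation "gitlab.com/prometheus_scrape" to the
  # meta label below by replacing non-alphanumeric characters with "_".
  - source_labels: [__meta_kubernetes_pod_annotation_gitlab_com_prometheus_scrape]
    action: keep
    regex: "true"
```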
A
But if we come back over here (and I think this is going to die on me a little bit), let's refresh. Will it continue, or will I have to re-log-in? Yeah, let me just look... it looks like it's updated, although that's not what I expected.
A
12:40, yeah, it is updated, so we still have the internal Prometheus pulling metrics for the GitLab pod, which is what I was not expecting.
A
Here it is: Shift-F, you're right. All right, the question is, once again... okay, it allows me to set the local ports. Good.
A
Do we have anything showing up here, any container_last_seen? That's pretty... come on.
A
Okay, that's not what I expected, actually. I'm even more curious where we're getting data here, then. Let's just save this dashboard for a second, so we have it. What's the new... can we do... where did they put it? There it is. Copy... call it that, I don't care.
A
But obviously it's somewhere, because Grafana is seeing it.
A
So things like the container stats, any of the container stuff, are nominally coming off the kube API.
B
If anything, maybe we should add further details to the selectors, so that it actually tries to scrape from the node details, which is where that information would come from. There may be something that we did not directly pull out. Maybe we changed the annotations for the pods, which means we're only scraping our pods now, but perhaps we should still be scraping whatever the Prometheus default config pulls.
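Scraping "from the node details" for container stats would mean a cAdvisor job against the kubelet, via the API server proxy; a sketch of the standard pattern from Prometheus's Kubernetes service discovery, not the chart's actual configuration:

```yaml
scrape_configs:
  - job_name: kubernetes-cadvisor   # illustrative job name
    scheme: https
    tls_config:
      ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
    bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
    kubernetes_sd_configs:
      - role: node
    relabel_configs:
      # Route all node targets through the API server's node proxy,
      # rewriting the path to each node's cAdvisor metrics endpoint
      # (the source of container_* metrics such as container_last_seen).
      - target_label: __address__
        replacement: kubernetes.default.svc:443
      - source_labels: [__meta_kubernetes_node_name]
        regex: (.+)
        target_label: __metrics_path__
        replacement: /api/v1/nodes/$1/proxy/metrics/cadvisor
```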
A
Yeah, that's sort of what I'm thinking too. At least for a quick analysis of what's going on within a container, or its usage, that would be useful.
A
All right, interesting, okay! Well, that's at least one oversight. That's about all I really have: just to take a look at the Prometheus install and make sure it's looking good, and I'm glad we found that oversight. Does anybody have any other questions, or anything else they'd want to see?
A
Not hearing anything. I'll look at getting an issue created to start gathering the container metrics with our Prometheus, and put it out there for scheduling, so we can try to get that done at some point.