From YouTube: [EMEA] Container Registry Interactive Demonstration
Description
Skarbek works with other SREs to demonstrate how to deploy, view metrics, and find logs related to our recent service migration of the Container Registry from VMs into Kubernetes.
A: Also keep the Zoom chat window open, because I'll be sending you commands there so you can quickly copy and paste. That way we're not hunting and pecking, trying to tell people what words to type, and getting things incorrect. So, whenever you're ready, go ahead and share your screen. Alright.
A: While you're getting ready, as a quick overview we're going to cover a few things: we're going to perform a deployment into our staging environment, we're going to watch the pipeline, we're going to show the command line interface that jarv and I have kind of cooked up a little bit, and then we're also going to look at some metrics and some logging related to the work that we complete during this little show-and-tell.
A: I don't actually know. Alright, good. cd into that directory, and then we want to perform an upgrade in our staging environment. We've already got a cooked-up scenario: in this case there should be a file called gstg.yaml. We want to modify that file; we're going to set the version of the registry. Currently we have the image tag set to 2.7.1, and we're actually going to perform a mock upgrade, so I'm going to set it to 2.7.0.
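The change being described amounts to editing the staging values file and pushing it up for a merge request. A minimal sketch, assuming a values file named gstg.yaml and an ordinary git workflow (the file name, key, and branch name are assumptions, not the repo's actual conventions):

    # Illustrative only: bump the registry version in the staging values file and push it for review.
    git checkout -b demo/registry-2-7-0
    $EDITOR gstg.yaml        # set the registry image tag to 2.7.0
    git commit -am "Set registry image to 2.7.0 in gstg"
    git push -u origin demo/registry-2-7-0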
A: Right, so the merge request is in, right? Yep, I see it. Okay, so while we're here, go ahead and switch to that how-to doc that you have open in your first tab. We want to make sure your computer is ready to go. For the purpose of this demonstration we really only care about the staging environment. So in that top box, where we do a bunch of gcloud commands, we only want that little one; hopefully you already have the gcloud command line installed and we can run that command just nicely. Yeah.
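The gcloud preparation being referred to is along these lines; the project and cluster names are placeholders rather than the real staging values:

    # Authenticate, point gcloud at the right project, and fetch kubectl credentials for the cluster.
    gcloud auth login
    gcloud config set project example-staging-project
    gcloud container clusters get-credentials example-gstg-cluster --region us-east1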
A: Okay, so go ahead and run those plugin install commands. The two plugins that we use are helm-tiller and helm-diff. We run tillerless throughout the entire cluster, just because we don't want to deal with the security ramifications, and we also don't want to deal with having to figure out other things related to running Tiller in the cluster.
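Those installs look roughly like this; the URLs are the usual upstream homes of the two plugins, not something quoted in the demo:

    # Install the tillerless-helm and diff plugins (Helm 2 era tooling).
    helm plugin install https://github.com/rimusz/helm-tiller
    helm plugin install https://github.com/databus23/helm-diff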
A: No, actually, we were setting the color, and you and I had the same problem where the text matches the background. It works fine on jarv's machine or in CI, but yes, I normally don't care to look at it, so I don't care. So what we're doing here: this is reaching out inside of Helm, because we're running tillerless. This metadata is stored somewhere else inside the cluster, because there's no Tiller container running. But what you see here is exactly what this specific repo is going to deploy.
A: So, well, you see a little bit more, because it's listing all the Helm installations, but gitlab is the primary one that we're concerned with in this particular case. And then at the very bottom you'll see that it's listing a few secrets. There are a couple of secrets, registry-something; these are all installed into our clusters, or into the namespace, during our configuration of these services. So our list command is just doing a helm list followed by a kubectl listing of the secrets that we need to maintain.
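In other words, the wrapper's list step boils down to something like this; the Tiller namespace and the gitlab namespace are assumptions based on the surrounding discussion:

    # List releases without a cluster-side Tiller, then show the managed secrets.
    helm tiller run gitlab -- helm list
    kubectl get secrets --namespace gitlab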
A: Specify the gitlab namespace. I don't know why there's a helm namespace, but yeah. So we have two pods running, which is to be expected. So now what we'd like to do in my demo is perform the pipeline thing. So if I can find my tab with your merge request; I clicked on the merge request, so I'm going to go ahead, approve and merge your MR, because...
A: It affects staging, only the staging environment. So that file that you modified: the wrapper script will pluck each one depending on the environment that you specified inside of the -e flag, and inside of our CI we specified that to be an environment variable. So now that the merge request has been merged, let me show you the pipeline, which is going to be here.
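So the wrapper invocation has roughly this shape; only the -e environment flag comes from the description above, while the script name and subcommand are assumptions for illustration:

    # Hypothetical wrapper call: pick the gstg values file and run the upgrade for that environment.
    ./bin/k-ctl -e gstg upgrade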
A: So in this case there is a set of dry runs. Go ahead and click on the gstg dry run, because I want to show you what helm diff provides us. If you scroll up quite a bit, there are two things occurring here: one is a helm diff (keep going, there we go) and we should see, highlighted in pretty colors, the actual changes that helm diff called out. So we should see something about the registry image being downgraded.
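The diff stage in that job log corresponds to a command of roughly this shape; the release name, chart path, and values file are placeholders:

    # Show what would change in the release without applying anything.
    helm diff upgrade gitlab ./gitlab -f gstg.yaml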
A: That's the helm diff, so we're making the appropriate change that we desire for this demo. The next thing that runs is a dry run of the helm upgrade command. That's just rolling through all of the manifests that get generated, testing whether or not they can be applied, and it doesn't actually perform any work. So that's the dry run. If you go back to the pipeline, you'll see that under non-prod the gstg upgrade command is still running, so go ahead and click on that.
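That dry run is essentially the same upgrade invocation with a dry-run flag; again the names are placeholders:

    # Render and validate the manifests without touching the cluster.
    helm upgrade gitlab ./gitlab -f gstg.yaml --dry-run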
A: There are a few things I want to point out here. On the top right is a panel called active replica set. If you're familiar with Kubernetes this isn't new to you, but every time you create a difference in a deployment, a new replica set will get created for that new deployment. So, as you can see, the one that's in green is the new one, gitlab-registry-3555...d6y, blah blah blah, and we can see it's trying to spin up one pod right now. The old replica set is highlighted in yellow.
A: Because it goes way back in time, and it's currently running two pods. So theoretically, if you wanted to, on your local workstation you could do a kubectl get pods, and you should see that we have three total pods available to us, at least one being the one that's failing to start up for some reason. Throw the gitlab namespace on there.
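That check, with the namespace included, is simply:

    kubectl get pods --namespace gitlab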
A: So you see, okay, since we've looked at the metrics we've got, the old replica set has scaled up to three pods within the last 44 seconds, and I see you're jumping the gun here. You can tell that we have a problem with one of our new pods, which is fine. So for the sake of my exercise, what I would like for you to do is pull up our logs. So go to nonprod.log.gitlab.net, because I want you to find these logs inside of Kibana.
A: Look, well, you know the obvious problem with that: you targeted one particular pod, but production is running thirty pods. If you want to look at logs, that doesn't make any sense. So change your index to pubsub-gke-inf-gstg, and then I'm going to send you, via Zoom chat, a filter that you can use. That will make it easier for you to find the logs, just because I don't feel like searching for it. Okay, there's something wrong with this cluster; there's always at least one shard that fails to get me information here.
A: So had this been, say, production, or a new deployment inside of staging, this is where the process for you, as an engineer, would continue, for the purposes of troubleshooting and figuring out why this problem is actually occurring. For this demonstration I just wanted to show you the ability to find the logs and see the data inside of our metrics. I know, just due to the nature of what the problem is, that this is actually a real problem with the docker distribution version that was created.
A: Something happened when they released this version, so the storage driver just does not exist inside of this version, so we know that something bad has occurred. We aren't going to perform any further troubleshooting on this, but you know how to access the logs (in this case, all the logs coming from the docker registry are stored inside of this particular JSON object) and you know where to find metrics. So now what I would like for you to do is go back to the pipeline.
A: Helm realized there is a problem, and we set a timeout inside of our script that says: wait for five minutes, and if nothing is successful, perform a rollback. So what you see here is that the upgrade failed, the problem being that it timed out waiting for the condition, and if you look right above that it says the deployment is not ready. So we know exactly what went wrong from the perspective of the CI pipeline, and helm will perform a rollback for us. So there are two problems that we have here.
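The behaviour described here maps onto roughly this shape of command in the wrapper script; the flags and the five-minute value are assumptions taken from the narration, not the script itself:

    # Helm 2 style: --timeout is in seconds; a rollback to revision 0 means "previous release".
    helm upgrade gitlab ./gitlab -f gstg.yaml --wait --timeout 300 \
      || helm rollback gitlab 0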
A: One is that we performed a rollback, which is fantastic, so we're running an old version and we need to troubleshoot that further. The second problem is that our master branch, or in this case, well, yeah, our master branch, is now out of sync with what is deployed inside of Kubernetes. We have an open issue to address this, but the rollback is what's important, because if you go back to our dashboard, you'll see that we had pods stay alive this entire time, throughout this entire process.
A: So, for some reason it scaled up. I don't know why; there was probably a metric issue that occurred, like maybe the new pods were sitting at an absurdly high CPU while they were failing to come up, or something, which caused something to scale, which is fine. So eventually, because there's no traffic in staging, those go back down to our minimum, which is these.
A: And such. Okay, so this actually worked out quite nicely. This was much longer on Wednesday, but this kind of concludes what I wanted to show you. You saw the view of the CI pipeline, you got a chance to look at our metrics and see what things look like during a deploy and a failure, you saw everything roll back, and you saw where to find things in the logs. So at this point, I would ask for any questions.
D: You know, I actually want to float the idea of putting this up somewhere, like the three-shift thing that I put up, where I can put up the idea, have people discuss it, and then let the team decide. So I'm not going to tell you guys what to do; I just wanted to know what the temperature was on that from everyone.
D: Okay, the other question I was going to ask was, you know, and I know it's crude and really early iterations, Skarbek, but when we start to add Sidekiq and some other things in, what's the picture of how we're going to scale out that pipeline for deploying all of these different services? What were you guys hoping to get from the delivery team as we move forward? Are you thinking of trying to deploy one giant chart and, like, roll across all the things?
C: I prefer, like, I prefer that, you know, we have the set of tools on the workstation, but we should try and standardize at least some of how that's set up. Of course it's difficult, because some people want to work on Linux and some people want to work on Mac, you know, but we should have at least some amount of, you know, installing at least the binaries in a consistent way, and then we have...
C: We have wrapper scripts, like we have in some of these other places, that are standardized across, you know, workflows, and if we have good working setups we should share those. It's just faster and easier to interact the same way with some of these things. Yeah, but I'd also say that we should be trying to do as much stuff in pipelines as possible. There's...
A: That repo contains a lot of the extra fluff, I guess. So inside of the bin directory there's a common script that has a bunch of functions that are shared across this, because we're using the same k-ctl script scattered through a couple of repos. So we created one giant script that we source. It has a couple of functions to address the fact that we need certain tooling installed on people's workstations; there's a feature coming that made it into the script.
A: Inside of that repo is a Dockerfile, and you'll see that we are pinning the version of the Cloud SDK that we are using, and we're pinning the version of helm and all the plugins that we're using, at least inside of CI. We're not checking for these specific versions on your local workstation, but so far that hasn't been a huge problem. I think the only problem we've had was the helm diff plugin: when we first started using it, master was broken, so we had to pin it to a lower version, which was, you know, fine, but...
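Pinning a plugin like that can be done at install time; the version number here is purely illustrative, not the one that was actually used:

    # Install a specific release of the helm-diff plugin instead of whatever master currently is.
    helm plugin install https://github.com/databus23/helm-diff --version v2.11.0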
A: But, you know, agreeing with him, most of the work, especially the helm diff, would be done inside of CI for the most part. You could run it locally, like you could have done it before you submitted your MR, and maybe that's, you know, a good thing to do, just to make sure that things look okay before you submit an MR. The one thing that's different, though, is that because we rely on CI pipelines, we add some metadata to the annotations of various Kubernetes objects, and that comes from CI.
A: It's a tough balance. I'm strongly biased towards doing as many changes in CI as possible, but, you know, we all know what I don't need to restate here. I'm also strongly biased towards being able to do it locally, so that you don't have to feel like Homer Simpson with his gloves in the glove box, you know, handling fuel. I'd rather just run a command, get a result, know that it's going to work, not actually make changes to the cluster, but run that diff. Come on. There's...
A: Side note: we do have the ability, or this little k-ctl script that we've written does have the ability, to talk to a local instance, say if you're using minikube or k3d or kd3, I think it's k3d. So if you've spun one of those up locally, there does exist the ability for k-ctl to talk to it; you can make changes inside of a local Kubernetes cluster instead of talking to staging, pre, or, you know, unfortunately, production. It's not as well-defined as our normal Kubernetes clusters, but the capability exists.
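Standing up such a local target is the usual minikube flow; nothing here is specific to the wrapper script:

    # Start a local cluster and point kubectl at it.
    minikube start
    kubectl config use-context minikube
    kubectl get nodes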
A: Right now it's a namespace per application, so the PlantUML stuff, you know, that's not created by GitLab in any way, shape, or form, and it has its own namespace. I think it's still to be determined, just due to the nature of how we decide to run this in the future, and I think it warrants a larger discussion, but I think if we continue using our existing helm chart for all the things, we'll probably keep everything in one namespace, because that's how our helm chart kind of works.
A: Because, you know, we're GitLab.com and we need to scale a bit larger, there may be a situation where we spin up multiple namespaces, and they may have different pipelines for different things. Some of that will be governed by how we're able to figure out how to make the deployment work for our use case, and some of that might just be how we define our clusters right now.
B: I imagine with Elasticsearch, over the coming months, we're probably going to get to the point where we want to move it off of managed Elastic Cloud and we'll have to find somewhere to put it, and that place may be Kubernetes, unless we really go fast on this and find a new logging solution first. Anyway.