►
From YouTube: Delivery: Run through registry GKE setup 2019-06-27
Description
Part of https://gitlab.com/groups/gitlab-com/gl-infra/-/epics/70
A: I think it's a great milestone that you enabled registry deployment, or rather registry running on staging, which is pretty amazing in my opinion. But what I want to see from this discussion is you running me through the whole thing: you know, all the repositories that we have, how things are being deployed, where I need to look for things, right? Like, what kind of things you did to get the Grafana dashboard created, not the graph in the dashboard itself, but everything under it that serves that Grafana dashboard. Basically, just run me through the whole new infra for this purpose. And what I also want to know is version changes: what do we do, or rather, how do we do it right now? I know it's manual right now, but I want to discuss with you guys how this looks, to see how we can take the next step.
C: So hopefully you can see that. First thing we'll show you is: we obviously have mirrors of various projects on ops.gitlab.net; here we're going to look at the gitlab.com projects. So we have the k8s-workloads group, which is where we're going to have repositories, projects, for every chart that we install on the GKE cluster. So we have one GKE cluster per environment, and then we have one project per chart. That's sort of the way we have it structured now; it may change later. We're only installing two charts.
C: Both of these projects use a single image from a common project in the registry. So we have a pipeline here: whenever you make a change to either the common CI config or the common scripts we have, it builds a new image. Then the other projects use the latest image.
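As a rough sketch of what the build job for such a shared image might run (a plain docker build and push; $CI_REGISTRY_IMAGE is the standard GitLab CI variable for the project's registry path, but the actual pipeline config isn't shown in the recording):

```bash
#!/usr/bin/env bash
# Hypothetical CI job script: rebuild and push the shared CI image
# whenever the common config or scripts change.
set -euo pipefail

docker build -t "$CI_REGISTRY_IMAGE:latest" .
docker push "$CI_REGISTRY_IMAGE:latest"
```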
C: I guess we go back and forth on this. Right now we're using latest just out of convenience. We can create a tag, but I think I'd like to wait until things are a little bit more stable; then maybe we'll create, like, a version-one tag, and then we can bump the versions as we go.
C: And some other things: we actually have this image being built on both gitlab.com and ops, because we have pipelines that run on gitlab.com and we have pipelines that run on ops. The ones that run on gitlab.com are more for just checking to make sure everything is OK; the pipelines that run on ops are for actually deploying an update.
C: So I'm gonna give you a quick tour. On the left here, in this terminal, I'm on gitlab.com. We have some helper scripts. These may change, but basically these are just small, lightweight wrappers for install, list, remove, template, and upgrade. These are the commands; I think we might just collapse these into a single script. You can see that they're fairly short; they just source some common scripts.
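A minimal sketch of the wrapper pattern being described, assuming a shared common.bash and a setup_environment helper; none of these names come from the recording:

```bash
#!/usr/bin/env bash
# upgrade: a thin wrapper around `helm upgrade` (layout is illustrative).
# Sibling wrappers (install, list, remove, template) would look the same.
set -euo pipefail

# Shared helpers: environment handling, pre-flight checks, and so on.
source "$(dirname "$0")/common.bash"

environment="${1:?usage: upgrade <environment>}"
setup_environment "$environment"   # hypothetical helper from common.bash

# RELEASE and CHART would be exported by the common script in this sketch.
helm upgrade "$RELEASE" "$CHART" --values "values-${environment}.yaml"
```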
C: If I do, like, list: you can either run these in CI or locally, and you basically give it an environment. When it's run in CI, the environment is derived from the pipeline environment, but since we're running it locally, you have to specify it. So if I do this, what it does is it fetches the common functions, sources them, then runs some pre-checks to make sure everything you have...
C: ...everything you need is installed locally. It automatically switches your kubectl context using kubectx; that only happens locally, since we don't need to do that in pipelines, but it's quite handy. And you can see from the helm list we have both gitlab and gitlab-monitoring, and then we can also do something like upgrade with environment gstg.
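A hedged sketch of that environment and context logic; the CI detection via $CI and $CI_ENVIRONMENT_NAME matches standard GitLab CI behavior, while the function body and cluster naming are assumptions:

```bash
# Hypothetical excerpt from the shared script: derive the environment and,
# when running locally, switch the kubectl context with kubectx.
setup_environment() {
  if [[ "${CI:-}" == "true" ]]; then
    # GitLab CI sets CI_ENVIRONMENT_NAME from the job's `environment:` keyword.
    environment="$CI_ENVIRONMENT_NAME"
  else
    # Locally the caller has to name the environment explicitly.
    environment="${1:?specify an environment, e.g. gstg}"
    # Only an interactive shell needs the switch; pipelines start clean.
    kubectx "gke-${environment}"   # cluster naming is illustrative
  fi
}
```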
C: So let's look at the merge request here. On gitlab.com there really isn't much going on; we just have shellcheck for, like, these shell wrappers. I think what we can also do is some more checking with the dry run. Skarbek discovered that helm upgrade has a dry-run mode, but, and correct me if I'm wrong, you said it's pretty much useless, right?
C: That kind of sucks, but we could in theory at least add a dry-run stage on gitlab.com. It would require us having credentials on gitlab.com, which are currently only on ops, but we could do that. But anyway: so now the pipeline does the check here, and then we can go over to ops, and I'm gonna have to go to my branch.
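For reference, the dry-run mode that was just discussed renders the upgrade without applying it; the release and chart names here are illustrative:

```bash
# Render what the upgrade would do without applying it; --debug also
# prints the generated manifests so a reviewer or CI job can inspect them.
helm upgrade --dry-run --debug gitlab-registry ./charts/registry
```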
C: So you can see it wrote a bunch of files. Now we have this manifests directory, and in here we have all of the generated YAML files. These won't be committed to git, they're just ignored, but they're useful when you're working locally. I find my workflow has been like: okay, I need a new configuration thing in the chart, so I make a branch...
C: ...in the charts repo, I make a change, and then, there's an environment variable you can set to say: instead of using master of charts, use this branch name. So then I can generate the config like this and check it to make sure it looks okay. Okay, it looks okay; then I can do the upgrade with my branch if I want to, or wait till it gets merged to master.
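That local workflow might look roughly like this; CHARTS_BRANCH is a stand-in, since the actual variable isn't named in the recording:

```bash
# Point the tooling at a charts branch instead of master (variable name
# is hypothetical), regenerate the manifests, inspect, then upgrade.
export CHARTS_BRANCH=my-config-change
./template gstg     # writes generated YAML into the git-ignored manifests/
less manifests/registry.yaml
./upgrade gstg      # apply from the branch, or wait for the merge to master
```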
D: Jarv, I have another question. This wrapper script that you showed, it switches your kubectl context, right? I'm quite sure, at least it was the case before, that kubectl has an option for setting the context only for the specific run, so that you don't have to change the environment.
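That flag does exist: both kubectl and helm accept a per-invocation context, so nothing in the shared kubeconfig has to change. Context and namespace names below are illustrative:

```bash
# Neither command touches the current-context stored in ~/.kube/config.
kubectl --context gke-gstg --namespace registry get pods
helm list --kube-context gke-gstg
```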
A: So I think, for me honestly, monitoring-wise that's a minimum before we even consider sending any traffic to this, because we are running something very, very new, and we are using, like, several layers of stuff. So luckily for us, registry is not changing that fast. Or rather, I don't know whether it's lucky, but let's say we are not bumping the versions that often, so we have time to play with this. But I would like to see it.
A: Maybe talk about how we would upgrade our charts automatically? Right now it is whatever is currently in this project, right, and then we pull. If I understand correctly, we have the gitlab.com k8s-workloads project, right, and that pulls from charts. Whenever we make a change, what version does it pull of charts, the tarball?
C: I mean, our own GitLab things are very, yeah, like, we're constantly making updates to charts right now, so we need the latest bleeding-edge stuff, yeah.
A: We do, because compared to omnibus it's just a new project, right? Omnibus has years of stabilization behind it.
A: Pretty much, like, as simple as that. How would we not do anything ourselves to deploy? What we are doing is we are basically not doing much ourselves; we are depending on other projects. What I'm thinking about here is: if we think that helm charts are going to be the source of truth for us, how do we ensure that any change that happens inside of that repository propagates to our environments? And, let's say for the sake of argument, it wouldn't go to staging.
A: It could go to any test environment that we create. So say we propagate a change through two or three environments before we even get to staging, but do that automatically, without us touching anything, and depend on the developers to make changes and run tests and get the dashboards and use them, before we even come into play on how we are going to deploy to staging, pre, and prod.
A: Everything is on us. I want to have a framework where, you know, for example, we have several layers. Say the helm chart is stable enough: I want to have all of our different components, say gitaly, say gitlab-shell, workhorse, and whatever, be built by developers in their own pipelines, in their own little world, right. And I want us to have enough stable environments before staging that any time they push, things get rolled into these environments, so they can test changes all they want.
A: And then we have a checkpoint on staging where we're going to say: everything that you did in the past, I don't know, day or so is now going to be deployed in bulk on staging, and that's gonna roll through the rest of the environments, and all of that is somehow automatically done. Hopefully we get GitLab features out of this as well, so that we can dogfood properly.
D: Sounds good. Okay, thank you, Jarv. So I'm still a bit confused, because if we think about the charts: this means that I am working on some change on the charts, I merge it, and then I trigger a pipeline, say, and we go through these several environments, say, and maybe then we have a gate before reaching staging. But someone has to push a button or approve it.
A: They would also have to ensure, like, if there is any backing that needs to happen from the helm charts, right? Like, if the feature depends on a configuration inside of the chart, it's similar to what we have in omnibus right now: they need to submit a change in omnibus, but they will also need to think about how they ensure that this is done in a backwards-compatible way. So whatever change they made in omnibus, or charts, let's say in charts, that is gonna land in all of the environments.
A: At the same time, helm is going to be just the, like, the charts are going to be doing the serving; they're going to be the plate. I want to see whether we can make sure that the developers, by building their own images, can just use the charts that way. So, sure, you're not going to be able to deploy a change if you don't have the backing of the chart, but you can do that in the chart first and get that rolled out to the environments.
D: Because the thing I was thinking about is: if we think about the review apps, okay, and we build on top of this concept, then we can have a branch on gitlab-ce, or whatever it is, that introduces a feature, and then you have a branch on the charts that builds the configuration for this, or even just changes the build image to the one from the custom branch. And then you can deploy this somewhere. But this has to happen before someone hits the merge button. Yeah.
A: That means, wait, it's too late after, right. And there's also one thing that I would like to avoid compared to our current review apps: our current review apps are very much in lockstep, meaning every single image that we build for gitlab is very much tied to the helm chart version as well. We need to be looser with versioning. We need to figure out how to properly version, to allow developers not to have to roll a whole new environment...
A: ...every time something is deployed. That's what's happening to review apps: because we are so lockstepped, you literally cannot do a change without changing everything else inside of the application. That's not good; we want to do microservices-like things. So let's think about how we can actually do that. Does that mean that we need to build a whole new versioning system? I suspect so.
B: Sounds like it. Like, I could think of three complete whole concepts that we need to talk about in order to make something like that achievable, because right now we're relying on helm charts to deploy everything. If we make a change to, say, gitaly, we would have to do exactly what Jarv did and modify our values file to push that change out, to get that version of gitaly out there.
B: We need to separate that concern from the values file somehow if we want to enable developers to deploy a later version of gitaly out into the future. That removes us from dogfooding our helm charts the way our customers would, to a certain extent, and we would have to figure out how to make that work properly, and still make it such that if we need to rebuild a cluster, for example, we've got the ability to pull in whatever version we desired.
C: There's one thing I would love to do, especially for registry: I think we can probably write tests that can exercise registry adequately, so that we could enable CI for chart changes. When it hits master, we automatically deploy to staging, we do some testing in staging, and then we just deploy to prod, and you would have a helm upgrade pipeline. It works for registry because registry is decoupled from rails and, you know, it is pretty easy to validate. Then, as we think of other components...
C: ...we can use it as a model. And I think, as we go, we could do things where, like, if certain subdirectories of the charts changed, that triggers a deployment for registry. Like, if the registry directory changes in the chart and that change hits master, then it triggers a pipeline in k8s-workloads and then we update registry, or something like that. I mean, that's possible, but...
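One hedged sketch of that trigger idea: a master-branch CI job that checks whether the registry subtree changed and, if so, kicks off the downstream k8s-workloads pipeline through GitLab's pipeline trigger API. The path, token variable, and project ID are placeholders; the CI_COMMIT_* variables and the API endpoint are standard GitLab:

```bash
#!/usr/bin/env bash
# Fire a downstream deploy only when the registry part of the chart changed.
set -euo pipefail

if git diff --name-only "$CI_COMMIT_BEFORE_SHA" "$CI_COMMIT_SHA" \
    | grep -q '^charts/registry/'; then
  curl --fail --request POST \
    --form "token=$TRIGGER_TOKEN" \
    --form ref=master \
    "https://ops.gitlab.net/api/v4/projects/<workloads-project-id>/trigger/pipeline"
fi
```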
A: How about we take a step back? Like, I would like to introduce something that we don't have right now. We only have staging, pre, canary, and production, right. How about we use this opportunity to introduce one or two steps before staging to do this type of thing, and find a way how we can automatically verify certain things that will give us enough confidence to deploy to staging automatically, and then the rest we can do manually. I'm not sure if I'm making sense; let me try to rephrase: we've been using...
A: We don't want that, because we know the problems there. We want to unlock developers to be able to deploy to other environments where they can crash things all they want; it's not a problem, right. If we do something like this, where you have testing and every commit upgrades, upgrades, upgrades, that is fine. But then, if we can make a system that will, before it goes to staging, let me check the state of affairs, let me check if everything is right, let me check if QA executed correctly.
C: I think for registry it's super simple, because we can spin up an ephemeral cluster, connect it to staging, and test it, right; and if that succeeds, then promote it to the actual cluster in staging. But registry is a special service; I mean, other services are not going to be so easy.
C: Well, we could do it; we could do it with the frontend too, probably. I mean, right, we could have an ephemeral cluster that we spin up that connects to the staging database, that connects to staging gitaly, everything, right, and we validate it, and then, if it looks good, we tear it down, and then we promote to staging for real. And if that looks good, then we go to canary and then production. So I think an ephemeral cluster would be the way to do it. Yeah.
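A rough outline of that ephemeral-cluster flow, assuming GKE and a smoke-test script that doesn't exist yet; every name below is illustrative:

```bash
#!/usr/bin/env bash
# Spin up a throwaway GKE cluster, deploy the candidate registry chart
# pointed at staging's backing services, validate, then tear it all down.
set -euo pipefail

cluster="registry-test-${CI_PIPELINE_ID}"   # CI_PIPELINE_ID: standard GitLab CI variable

gcloud container clusters create "$cluster" --num-nodes 2
helm upgrade --install registry ./charts/registry \
  --values values-gstg.yaml       # staging database / object storage / gitaly
./smoke-test.sh "$cluster"        # hypothetical validation script
gcloud container clusters delete "$cluster" --quiet

# Only when everything above passes would we promote to the real staging cluster.
```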
D: This really sounds like a blue-green deployment as a gate, instead of just some global deployment service, because, yeah, it is actually running tests in a real environment, so you have the QA, and if it works, you go. The thing about the ephemeral cluster for registry: I think that we need to start from a well-known set of images already existing in that environment, because one of the things that you want to make sure of is that the new version would not blow up the images that you already have.
D: Sure. What I was thinking is that if we have a thing that is more robust, then we can also use it for production later on. But if what we are testing is not, let's say, robust enough, then we'll never get there; we'd have to invent something else when we're going to do this kind of automated promotion to production. This was my thinking.
B: I would like to clean up some of the tech debt that we've kind of acquired over the past few weeks, some of it I created; I made an emergency MR last night to help resolve the fact that pre is slightly different. But we've got the situation where the alertmanager configuration has two separate configuration areas, whether it be in chef or whether it be in our runbooks repo. I think it would be wise to continue to build up a set of core Grafana dashboards in a succinct manner, so that we have a place to view all of our charts.
C: Yeah, I would say that we need to do that, and, I mean, we could start going through, like, a production readiness review now to kind of see where the gaps are and what boxes we need to tick before going to canary. Obviously alerts are a big one, but, yeah, I'm not sure. This idea of having a deployment pipeline is probably something we could think about a bit later; like, you don't have to worry about it now. But, yeah, I think Skarbek's spot-on about just cleaning up some technical debt.
C: Personally I think, like, maybe by Tuesday or Wednesday I would like to have the alerting figured out, and after that, I would say by the end of next week we should have a draft of the production readiness review and a pretty good idea of what needs to get done before we can enable this in canary.
A: I would ask you to leverage as much as possible the knowledge of the other SREs that are here. So Ben is now an SRE: get him involved, he loves to help out, so why not. As long as all of this is in issues and epics and somehow organized through that, everyone should be able to jump in and help out if they want. Obviously, like, I don't want to decide who does what; you can do that between the two of you.