From YouTube: 2021-09-08 GitLab.com k8s migration EMEA
A
B
C
So this plant used to be sitting on my desk, and it's gotten to the point where it was almost touching the ceiling when the desk was in the standing position, so I repotted it. So, you know, when I sit down it's as tall as me, but, you know, it's probably like four feet high. Yeah, something likes to play with it at this point, so slowly the lower leaves, and some of the leaves that are right next to where my desk is, are being eaten over the course of time.
C
B
What about the issue, Scotland, that you asked me about, prioritizing the registry one?
B
Yeah, whatever, yeah, yeah. So I think it's a good call, going back, that we should improve things before we go ahead and move into production. But do we have a...
B
Do we have a... is Graham's comment there the approach that makes sense for solving this? Yeah.
C
A
Unfortunately, yes. Someone opened an issue for that in the chart to fix this, and the short fix was to change the password to not contain the special character. But what happened was that the DB migration jobs were always spinning up, trying to connect to the database with the wrong password, then failing to authenticate, then terminating and going into a crash loop, always retrying, and so it never got as far as deploying the registry to a new version.
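The failure mode described here, a special character breaking the database connection, often comes down to URI encoding: if the password is interpolated raw into a connection string, characters like @, :, or / corrupt it. A minimal sketch in Python; the password, user, and host below are made up, not the actual values:

```python
from urllib.parse import quote

# Hypothetical credentials; any password containing @, :, or / breaks a
# raw string interpolation into a postgresql:// URI.
password = "p@ss:word/1"

# Percent-encoding the password keeps the URI's authority section intact.
dsn = f"postgresql://registry:{quote(password, safe='')}@db.example.internal:5432/registry"
print(dsn)
```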
C
So then, brainstorm with me for a split second, because one of the items that we're thinking about doing is removing the atomic flag from our Helm configuration, which means we're not going to wait any longer. We're instead going to generate a diff, we're going to run the upgrade, and we're going to have another tool come in and say: hey, we're doing this.
C
It's just going to move forward, and if things fail, they're going to be stuck in that failed state until something else intervenes, whether that be a new deployment or a rollback or what have you. So in that particular scenario, what's going to happen? Like, the new job will probably fail, but will Helm continue with attempting to deploy the new pods?
A
C
But these are two distinct objects, right? Like, the migrations are a job, whereas the deployment is still its own separate item. So I wonder, how do we connect the jobs and deployment together, to tell Helm: make sure you run this job before anything else? Are we using some sort of pre-hook or something?
C
Honestly, so, how about this. So, Amy, the proposal that Graham created is a relatively easy-looking proposal; it's just a matter of thinking through all the failure scenarios. That's a merge request that we could quickly create, and we could start the conversation as to what options we have if that merge request is not the appropriate method of going about this idea.
B
A
B
I think Craig's original point is very true, right, which is: this stuff is noisy and makes it easier to miss other stuff. So it would be good to solve this. Cool, yeah, that would be great if you could have a shot at that.
B
Are you working on the Apdex stuff?
C
A
B
A
Not really on the Kubernetes side right now; just some more work popped up around backups, which still needs to be accomplished, because the DR replica nodes, like the archive and delayed replica nodes, still need to be synced, and this is depending on WAL archiving being fixed, which is an issue I assigned to Alejandro. And then we also need to set up a pipeline for regular backup testing once database backups are working. We have a project which we use for our other backups, but this needs to be adjusted for the registry DB.
A
B
Cool, okay, sounds good. And then the other thing I can give an update on is Redis. So, here we go. There are various Redis projects going on at the moment, so let me just move this in.
B
So we should just quickly... oh, it's not on here, so it'll be the... The final sentence is: we have a shared OKR with Scalability, which is around working out the long-term strategy for Redis.
B
So there is going to be a project starting up quite soon, hopefully in the next couple of weeks, to start putting together some POCs of what this might look like, with Redis on a cluster or other options. At the moment that is going to mostly be handled within Scalability; Igor is going to be working on that as well, and Graham is going to be advising on that from our delivery perspective, so we have some visibility.
B
We won't be super hands-on at this stage, and that will give us some space to go through Pages and complete the registry phase, and also continue on some of this tech work that Graham's working on. So, just for visibility: as we get this epic built out, I can share more. The rough idea is that this quarter we work out how we want to scale Redis, and then in Q4 we actually start doing the work, so we'll be much more hands-on through Q4, helping out on this.
E
C
Speaking of capacity limits and multiple Redis things, something I just want to bring attention to since we're here, and this is going to be disastrous if we don't figure out how to fix it at some point in time. I just put in the link, so, item number four: we didn't do a good job planning the IP address space for our zonal clusters, so we're limited to 250-ish nodes in each of our zonal clusters, and at the time that Jarv created this issue we were running between 130 and 135 nodes. So we're not in an emergency situation yet, but we're using more than half the capacity of available nodes that we're allowed to run inside of these networks, which is not good.
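The 250-ish ceiling follows from how GKE allocates pod IPs: each node gets a fixed-size slice (by default a /24) of the cluster's secondary pod range, so the size of that range caps the node count. A quick sanity check, assuming a /16 pod range and the default /24 per node; the prefix lengths are assumptions about our setup, not confirmed values:

```python
# Each node consumes one /24 out of the cluster's pod range, so the node
# count is bounded by how many /24s fit inside that range.
cluster_pod_range_prefix = 16  # assumed secondary range for pods, a /16
per_node_pod_prefix = 24       # GKE's default per-node pod range

max_nodes = 2 ** (per_node_pod_prefix - cluster_pod_range_prefix)
print(max_nodes)  # 256, before GKE reserves a handful of ranges
```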
B
C
The employee nodes fall into that same category, so stuff like that we could probably adjust to help us here. But, you know, we need to find the network space, and then we need to figure out a migration strategy: recreate a new cluster, move the traffic to it, and get ourselves off the old cluster, which is using an ill-defined network space. So...
C
Yes, this is going to be very difficult. Excuse me. Because of all the networking components, we have a lot of IP addresses...
C
Goodness. We have a lot of IP addresses that are specific to this network space, and I feel like, when we do this migration, we're going to be challenging ourselves to create interesting, not circumventions, but new configurations that enable us to run two Kubernetes clusters side by side while we perform the transition.
A
C
B
Okay, yeah, okay. So, kind of initial thoughts here: we should certainly start doing the steps to find the network space, and we can get the issues created and put these into a rough plan. We should start figuring out where we would get the network space from, and how we would go ahead and do a migration strategy. At this time of year we have a couple of interesting holidays.
B
I guess, coming up, that can be useful. So, for example, around Thanksgiving: that's not a holiday in Europe, but it's a super low-traffic time for us, so that's a good time to do some of these bigger bits. We've also got the period around Christmas and end of year, which is less cool because it's a holiday for everyone, but it's also a super low-traffic time.
B
C
That's going to mostly depend on the scale of GitLab over the course of the future. I think at this point, because we've migrated the vast majority of our largest workloads, Pages isn't going to add a lot to this.
C
A
C
A
C
B
C
We don't have alerting currently, as far as I know, but that's something where we should be able to get that information and alert on it. I think we'll have some hard-coded data inside of that alert, but it should be possible: just count how...
A
C
...many nodes we're running in each zone and say: hey, is this more or less than 250-ish? We'll have to figure out what that actual number is, but we could certainly add an alert. Yes.
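The alert being proposed could be a straightforward Prometheus rule over kube-state-metrics data. A sketch only; the metric labels, threshold, and duration are assumptions, not our actual config, and the real cutoff would use whatever the actual per-zone limit turns out to be:

```yaml
# Hypothetical Prometheus alerting rule: fire when a zonal cluster
# approaches its node ceiling. Label names are illustrative.
groups:
  - name: node-capacity
    rules:
      - alert: ZonalClusterNearNodeLimit
        expr: count by (cluster) (kube_node_info) > 200
        for: 30m
        labels:
          severity: warning
        annotations:
          summary: "{{ $labels.cluster }} is approaching the ~250-node IP-space limit"
```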
B
Cool, okay. Because, yeah, I think that might be... that would be a good backup, right? So that, if we hit that alert, we could kind of have it as: we would pause everything else and just jump on this if we got to that number. Okay, so, some actions here.
B
Should we... what do you think about us spinning this into an epic, and then having issues underneath that for finding the network space, adding the alerts, and planning out how we would go about doing a migration? And then we could actually link onto that the actual change requests and things when we come to make the change.
A
B
D
B
C
I guess the only thing I could quickly think of is checking our quotas to make sure we're not hitting any sort of limits.
B
A
C
As far as I know, we've struggled greatly to get that sucked into Prometheus, so we could alert on our quotas that are running out. But, as far as I know, we haven't seen any issues lately, so hopefully that's not a big problem. And we also turned off the web fleet, so, you know, we reclaimed some of our quota limits there.
C
D
I have one, but I don't know if it's what you had in mind. So, I was going through an issue that I believe you opened, and I'm wondering: what does it mean when you say running QA on a mixed deployment?
E
B
I have a... I have a question: is it possible that we could use that? So, one of the future dreams I have is blue-green deployments.
B
It is. But one thing that's tricky is the way we have our clusters built at the moment: it's not actually easy to destroy them and spin them back up. And a lot of that, I believe, or at least my understanding at times, is down to naming: we've named them as pets. Jarv is very attached to the names he chose and is not super excited by them all having cruddy cattle names.
B
E
Question: I was thinking that, in terms of blue-green deployment of the application code, you don't need to destroy the cluster. You just create a new deployment within the cluster and just do the proxy thing. But if you also want to test the Kubernetes configuration as well, basically, then it's good.
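The in-cluster blue-green pattern being described boils down to two Deployments running side by side and a Service whose selector is flipped between them. A sketch with made-up names and labels, not our actual manifests:

```yaml
# Hypothetical blue-green cutover: both Deployments run at once;
# switching the selector from track: blue to track: green moves traffic
# without destroying or recreating the cluster.
apiVersion: v1
kind: Service
metadata:
  name: webservice
spec:
  selector:
    app: webservice
    track: blue     # flip to "green" to cut over
  ports:
    - port: 80
      targetPort: 8080
```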
E
C
E
But this is really a monolithic approach, so we are just trading clusters for VMs, which is not how you should operate Kubernetes. Kubernetes is a commodity thing where you just fit everything in, while we try to say: no, this is the cluster. This cluster is like a VM: this one only runs the front end, this one only runs the API, and so on. So, I mean, it is not really Kubernetes.
E
C
E
Why... so, are you talking about the master?
C
E
It's just a matter of implementing meaningful readiness probes. Good.
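The readiness-probe point, as a Kubernetes manifest fragment; a pod only receives Service traffic once its probe reports healthy, which is what makes an in-cluster cutover safe. The path, port, and timings here are illustrative, not the chart's actual values:

```yaml
# Hypothetical readiness probe on a container spec: the endpoint must
# return success before the pod is added to Service endpoints.
readinessProbe:
  httpGet:
    path: /-/readiness
    port: 8080
  initialDelaySeconds: 5
  periodSeconds: 10
  failureThreshold: 3
```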
C
E
B
Come back fresh, you're ready. Awesome, all right. Well, thank you for the discussions. I hope you'll have a good 30 minutes. I'll see you shortly, take care.