From YouTube: 2021-05-13 GitLab.com k8s migration EMEA
A
Good, turns out this meeting also records, so you might want to cut off a minute of me just doing nothing.
B
Don't worry, they often start with long minutes of usually me sitting on my arms for a few minutes. These are the highlights, I was like, and sometimes go right to the end of videos. You get great easter eggs.
E
I'm convinced that yesterday the freaky thing was that they did some part of it that just didn't have a user interface, because my computer was doing stuff yesterday that I've never seen a computer do in my life, which is that hashbang scripts did not work. So kubectl calls gcloud, and it was just hanging, like, oh yeah. It's bizarre at the moment.
B
Awesome, so let's get started with the demo. Kubernetes demo, 13th of May. Skarbek, have you got something to demo?
D
I was thinking we could demo the failure. We have a deploy failure in one of the clusters in production, and we could just kind of walk through the troubleshooting of that, but maybe save that to the end.
C
Or maybe jarv has a point in the discussion, start with that and then go, yeah.
D
This is more of an announcement, just to let you know that I unmounted NFS from three of the catch-all nodes, and we'll just monitor those three to make sure they all look okay before we unmount the rest of the fleet.
D
Tomorrow: I've prepped, or I've started, the change issue for the rest of the fleet, and I'm gonna try to execute it early in the day so that we have a full day before the weekend to make sure things are fine, but I feel like this is a fairly low-risk thing. We do have the temp file creation issue, but after discussing this a bit further, we don't think that's going to be a problem.
D
The most important thing to remember, or the thing that has bitten us in the past, is this high-availability list of mounts that you pass to Omnibus. That basically won't allow services to start if these mounts aren't mounted, and we have to remember to remove that from the list before we do the unmount. Otherwise gitlab-ctl, or actually the service itself, won't be able to start. And Skarbek has a question.
F
Yeah, we were using the catch NFS fleet as a way to move queues over to it; we would unmount NFS from that. That way we don't interrupt catch-all. But it sounds like, because we're doing that for catch-all, maybe the catch NFS nodes, which we were using previously for evaluation, are no longer being used at all at this point, so I wonder if we should just turn them down and get rid of them.
A
That would help us as well, because it's one less thing we need to think about when we're doing our Sidekiq migration that we're gonna start, hopefully tomorrow. But yeah, that would help us. So please.
F
Please do if you can. Okay, we don't have an issue to do that, so I could create an issue and...
C
We can't count on NFS, so the application actually has to be robust enough to handle writing between different servers.
D
Yeah, I think after this is done we can definitely just finish the Sidekiq migration. It's going to be as simple as just moving what's remaining. The web pages fleet still has this pages mount, but it shouldn't be using it. So we need to just unmount NFS from the web pages fleet as well, and then we can shut down our last NFS server, and no more NFS.
B
Just copied in there, jarv, the blocker issue you opened up the other day about these unexpected errors, so that's being worked on now. So let's just be aware of that.
D
Yeah, I mean, this is... I'm not sure if this is going to be fixed or not. It's like deep into...
D
I'm not sure if there is a desire to fix this. In other words, we may just live with temp file creation, which means, when we move this service to Kubernetes, we'll just do what we do now for temp files: we'll create an emptyDir for the srv shared directory, which will allow us to cap the amount of space that can be used, and we'll let the temp files be created.
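The emptyDir approach described here might look roughly like this in a pod spec. This is only a sketch; the volume name, sizeLimit, and mount path are assumptions for illustration, not the actual chart values.

```yaml
# Sketch of capping temp-file space with an emptyDir volume.
# Name, sizeLimit, and mountPath are assumed, not from the real chart.
volumes:
  - name: shared-tmp
    emptyDir:
      sizeLimit: 10Gi          # kubelet evicts the pod if usage exceeds this
containers:
  - name: webservice
    volumeMounts:
      - name: shared-tmp
        mountPath: /srv/gitlab/shared
```

With a sizeLimit in place, runaway temp-file creation fills the volume rather than the node's disk.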
D
I mean, I think the main concern, the reason why I made this a blocker before, was because we can't unmount NFS while these temp files are being written to, and I was concerned about disk space on the VMs.
D
The first concern is taken care of, because I'm gonna shut down, or I'm shutting down, Sidekiq before I do the unmount, one node at a time. So that's not an issue. And for the second concern, I'm convinced that it's not a problem. So let's give it like another day, but I think we can probably remove the migration blocker label from this issue.
D
Yeah, it isn't, and it's something we deal with in other places as well. We already deal with these temp files; we already use an emptyDir for, you know, scratch space for Kubernetes. So it's not that big of a deal.
D
Okay, so Amy, it sounds like we need to go through that and update it. It sounds like the evaluation is done; once we remove the NFS mount, there's nothing more to evaluate, and the only remaining issue is just to move the remaining workers to Kubernetes, and we're done. And then, Skarbek, I assume after that we can reduce this huge config we have in values for selected individual queues; it will just be catch-all, and that's all, right?
C
Item D, the epic for the remaining queues. Nice.
B
Thanks, awesome, nice. Anything else we want to talk about on that one, on unmounting NFS?
C
One other thing: we might need to do some public service announcements as soon as NFS is completely removed. We need to check the documentation. I know that we have something somewhere, I think in the development documentation, where we are saying, you know, write services with Kubernetes first in mind, or this is what you need to care for. So someone needs to go and find that, and then we need to go and remind people, for nodes or other services.
A
Yeah, actually the other day I was looking at something, and I noticed we still had some development documentation that specifically suggested using a shared NFS mount. So I've got an MR there, which is with Alessio, to not do that anymore.
E
Yeah, I'm kind of a little bit, not totally, but a little bit blocked on these labels again, and my main question that I kind of wanted explained was: we have two of the labels. We have a type label and a tier label on our ingresses, so we know that it's the api type and it's a service tier, which is, kind of, everything's service tier, so it's not that useful.
E
But what I really need is a stage label, because that's part of what we use to define all of our metrics, and without it, it just kind of goes to /dev/null. And it seems to be more complicated to add that label than the two that we already have, and I don't understand, and I'm not asking this in a facetious way, I'm asking it in a genuine way: why is this more difficult?
E
You know, kube-state-metrics has given us a bunch of labels, and in there we've got api and service, so we're almost there, but we just need one extra label that's really important to me, and then I will be able to do so many more things. So if we can put those two on, can't we put the third?
E
The question: so when I raised it, there was a whole question about basically the nginx fork and whether we want to merge off. But I mean, I noticed that the nginx base charts let you give them labels, right? So we could give it any arbitrary set of labels.
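For reference, the upstream ingress-nginx chart does accept arbitrary labels through its values. A sketch like the following could put stage alongside the existing type and tier labels; the label values shown are assumptions, not the actual configuration.

```yaml
# Sketch against the ingress-nginx chart's values.
# The stage/type/tier values here are assumed for illustration.
controller:
  labels:            # applied to controller resources
    stage: main
  podLabels:         # applied to controller pods
    type: api
    tier: sv
    stage: main
```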
F
No, I know that label is going to be missing on the HPA because I haven't touched our chart configuration yet, but...
F
One: instead of bringing in that person's pull request that's on the upstream, I'm just going to create a merge request that targets the HPA only, because that's the only thing that I know we're missing, yeah.
E
Okay, cool, okay, yeah, because obviously, well, this one we've got kube pool max nodes. Some of these are there, but some, yeah, here: desired replicas, component saturation. This is missing because of that as well. So that's kind of the thing that I was looking at. Okay, I didn't realize that; I thought that there was some discussion around whether this would push us further away from the upstream of the fork, and that's why I was asking the question.
F
So now we just need to figure out why they're not showing up like I would expect them to when you're doing the metrics. So...
C
And I'm aware that my Zoom froze, so if you didn't hear everything, tell me.
B
So it's just as, yeah.
F
And we're applying them to both the pod labels as well as the service itself. So from what I recall, that is supposed to do what we wanted. If we go down to, say, canary or pipeline, for example, we should see that we did indeed add labels. Our diff doesn't show the entire object, but we should see, okay, so template metadata labels: we should be seeing shard and stage, and this is on the ingress controller deployment, so we should be seeing it somewhere. So let's hop onto that cluster and see what's up. kube...
F
Is that going to be the ingress for our web services?
E
I mean, so yeah, the web service api would be the most useful one, but all ingresses would ideally have them, you know, just consistently.
F
We have the ability to deploy labels in, or we have the ability to deploy the web service in, two fashions. One is where we define an individual deployment, where we have the api and the git fleet like we do today, and there's...
F
We don't separate that out at all, so there's just a web service deployment called default, I think. And there's two ways that we need to merge in all these labels. So if we have multiple web service deployments, we're going to feed this object here into a web service value; if we're not doing that, everything's going to be in the default deployment thing that's going to go into the deployment object. And if I go to our helper...
F
Stop, where's the web service helper, I believe. And we're gonna have common labels. So here we're just, right, we're just merging those two objects together; that way we get rid of any repetitive labels. Okay, then we just spit out the labels in key-value form. That's all that does.
F
Yeah, yeah, but hopefully that would fix it, so I'll try to work on that later today. Okay, yeah.
F
Appreciate it. All right, anything else, Andrew? No? All right, jarv, if you want to, you've seen more of that failure than I have, you want to start that conversation?
D
Sure. So what happened, or what I saw so far, was that we started the upgrade and then we hit the CI job timeout, and the problem there was that the helm timeout was set to an hour and the CI job timeout was set to an hour. So I believe what happened was there was a failure of some sort, and then helm did a rollback, but that hit the CI job timeout. So we don't really know; I mean, we didn't see anything from the CI output about what happened after that.
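One way to avoid the two timeouts racing is to keep the helm timeout well under the CI job timeout, so a rollback can complete and its output still reaches the job log. A sketch, with the job name, chart path, and durations assumed:

```yaml
# .gitlab-ci.yml sketch; job name, chart path, and durations are assumed.
deploy-production:
  timeout: 90m        # runner-side limit on the whole job
  script:
    # --atomic rolls back automatically on failure; 45m for the upgrade
    # plus time for a rollback still fits inside the 90m job timeout.
    - helm upgrade gitlab ./gitlab --atomic --timeout 45m
```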
D
Yeah, let's see where we are. And I forgot these aren't done simultaneously anymore; we do them one at a time, so we're on the last zone. And so zone C was successful on the first time, right?
D
I can't see it either. So, first of all, on the previous failure: Skarbek, have you seen this before?
F
Yeah, and we don't have control over those particular API nodes, so you know, we don't know if they're healthy or not; that's up to GKE to tell us what's going on with them.
D
Yeah, I mean, I feel like we're kind of getting into the habit of just retrying these failures.
F
So this is something new that we need to tackle, then.
D
It might be. Maybe we could just enable it on a single zone to start, so we have a point of comparison and we can compare the zones. Does that make sense?
F
Faster: we can make it faster by adjusting the deployment strategy. I would need to validate that configuration, whether it impacts just the api or if it's going to impact all of the web service deployments at the same time, because if that's the case, I'd shy away from that option.
D
So if we increase that to fifty percent, I guess the risk there is that, one, we probably have to spin up new nodes to accommodate those pods, right, and that's going to take time; and also we're creating a lot more pods that are making a lot more connections to Redis, the database, et cetera.
D
Well, in this case, I don't think this is a timeout issue, unless I'm misunderstanding the error; it sounds like there was an actual error here.
F
It shouldn't fail. Our ability to talk to the cluster, though, is the problem, because helm still needs access to talk to the cluster, deploy objects, and roll back, for that matter. As far as I know, we've only been blocking access to GitLab.com domains, like gitlab.com, the registry, and the pages domain.
D
What if this hook fails? It's not going to roll back, right? Like, I can see that it's enabling gitlab.com access, and then it's pinging gitlab.com as a validation step, and what happens if this validation fails? Because if gitlab.com is actually down, this validation will fail, and then is it possible we would roll back in that case?
H
Right below the ping thing there is, yeah, the line. So I'd say it's just telling us where it failed. Yeah. Okay, can we try to see if, because maybe this is the line where we are defining those hooks, and yeah, I mean...
B
Yeah, I think that's a good one, and if it isn't, we should probably put it behind one, because it definitely sounds like there'll be times where we might want to control that. Okay, what actions have we got? So does somebody have the original MR, you know which one added that?
B
Okay, would you mind dropping that in Slack or somewhere, Alessio or jarv, and one of you, or Jonah, take a look through that and see if there's anything we want to review on that.
B
I'll open an issue to switch the Kubernetes deployments back to being simultaneous, how we used to have them. I think you're right, jarv, we should probably review how we want to schedule things in the future, but I think our reason for switching it previously was just to work around IP availability.
D
But I think that's just an issue we need to look into. I think you saw that in Slack, because I see traffic on the API canary nodes.
C
It doesn't disable the hook; it just basically doesn't go into the case, right? So it doesn't do...
C
Yeah, when you enter the hook, it just doesn't do...
C
So here's where the hook is defined, yeah, and if we set the variable above... so hooks are defined here, and they're calling these scripts, and the script above says: if I detect the variable, I'm just going to...
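The variable-gated hook being described could be sketched like this. All names here, including the hook phase, the variable, and the health endpoint, are assumptions for illustration:

```yaml
# Helm hook Job sketch; every name and value here is assumed.
apiVersion: batch/v1
kind: Job
metadata:
  name: validate-gitlab-com
  annotations:
    "helm.sh/hook": post-upgrade
    "helm.sh/hook-delete-policy": hook-succeeded
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: validate
          image: curlimages/curl
          env:
            - name: SKIP_VALIDATION   # set via chart values to skip the check
              value: ""
          command: ["sh", "-c"]
          args:
            - |
              # if the skip variable is set, succeed without doing anything
              [ -n "$SKIP_VALIDATION" ] && exit 0
              # otherwise ping gitlab.com as the validation step
              curl --fail --silent https://gitlab.com
```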
F
Based on something that jarv found, it sounds like we are waiting for Kubernetes to spin up new nodes in order to put new pods into place; without that, we can't move a deployment forward. So I think we need to revisit how many nodes we're running, maybe set a new minimum, so that we always have some nodes available for pods to be scheduled onto.
D
I think it would be really helpful for me, maybe, to enable the api and to see a deployment done live while we're viewing the cluster in one of the zones, so we can see the timing of things and what's taking so long. I don't have a good way to get this from logs right now; I think I'd have to see it done in real time. Maybe we could just do that.
D
And I really need to go to the bathroom before my next meeting, so I need to run; just too many meetings. So I'm gonna drop off, I'll see you guys.