From YouTube: 2020-12-10 GitLab.com k8s migration EMEA
Description
Discussing the progress of current blockers and planning next steps for the GitLab.com Kubernetes migration
A: It's a whole other day still, but it's coming, it's coming soon.
A: There's a bunch of noise.
C: Oh hello, hello, sorry, Zoom is still loading.
D: Awesome, thanks for adding in all the updates, super helpful.
D: First up, we're reviewing the blockers. First, let's go back to your comment. It's closed, which is great.
D: So next up we've got the logging across Cloud Native GitLab installations, so the way I've interpreted this is...
B: It sounds like, because we decided to revert this, Distribution needs to revisit how they want to perform the logging work as a whole.
B
I
personally
don't
want
them
to
stop
working
on
this,
because
I
think
it'd
be
highly
valuable
if
we
could
at
least
contribute
to
the
logging
data
that
comes
out
from
them.
So
if
they
can
continue
working
on
this,
at
least
for
sidekiq,
I
think
that
would
be
beneficial
and
we
could
just
enable
and
disable
as
needed
that
allows
us
as
they
make
the
changes
we
can
test
it
out
and
see
if
it
meets
our
needs,
make
sure
we
don't
blow
up
our
inelastic
search
cluster,
for
example.
E: Skarbek, I would call it something else, because it shouldn't be a feature flag. I think we talked about it when we were reverting; the whole problem was that there was no control over what is happening, whether it's a feature flag or not.
E
I
guess
it's
a
philosophical
discussion
more
than
anything,
but
what
I
would
expect
is
to
have
a
chart
configuration,
because
that
is
where
you
can
actually
configure
things,
because
you
can't
control
anything
inside
of
the
image,
otherwise
right,
so
whatever
they
end
up
choosing,
however,
they
want
to
call
it.
Yes,
I
think
that
would
be
a
good
option
to
go
with.
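A chart-level toggle along these lines is what's being described: the structured-logging output controlled from Helm values rather than baked into the image. A minimal sketch; the key names below are purely illustrative, not the actual GitLab chart schema:

```yaml
# Hypothetical values.yaml fragment; key names are illustrative only,
# not the real GitLab chart settings.
gitlab:
  sidekiq:
    logging:
      format: json    # structured output from the new logging mechanism
      enabled: false  # off by default until we've validated the log volume
```

A toggle at this level would let us enable it per environment and back out quickly if it overloads the Elasticsearch cluster.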
D
Awesome,
I
also
wasn't
from
just
scanning
through
I'm,
also
not
super
clear.
So
robert
has
a
comment
which
about
the
starting
point,
potential
starting
points
and
then
he's
got
another
comment
further
down
which
goes
on
about
the
four
approaches
which
I
thought
were
about
the
four
issues
that
are
scheduled
in
13.7,
but
I'm
not
100
sure,
but
that
was
how
I
interpreted
it.
E: Yeah, there is a larger story here, if I understand you right, Amy. If I understand correctly, this is where Distribution actually needs to work with the application teams and figure out what the actual general story is. This is not up to us in the infrastructure department. We should participate, but on the same level as everyone else, because we are adding feedback; someone actually needs to be operating the story across all of the groups, projects, and services.
E: But I think there are a lot of requests we are putting forward with them.
B: I agree. Maybe I'll try to set up a meeting with a few people, if possible, to see if we can't at least provide some guidance or pointers.
E: Or just continue; there is a discussion in one of those issues that are linked in the epic. I invited Mikael and Igor, so maybe we can coordinate a bit there.
D: Oh, I know, I didn't remove it already. Do you happen to know (this is probably for you, Andrew) these comments that you added and your requests, do you know if they're somewhere other than this doc?
A: Sorry, I haven't been paying attention because I was trying to help Hendrik with the production incident.
A: Let me try to find it quickly.
A
No,
I
don't
think
I
put
that
in
anywhere
else.
I
will.
I
will
do
that
now
and
and
and
put
something
in
place
right
now,
because
yes,
it
needs.
It
needs
a
better
home.
Thank
you.
Sorry.
D: Great, and this is why we derailed you last time as well, Andrew, because we've got a request for you straight after that one as well, which is about the labels.
A
Yes,
so
I
did
some
work
this
morning
and
jeff
put
the
mailroom
and
the
plant
uml
labels
on,
and
I
I
basically
tried
to
bring
them
in
line
like
especially
plant
your
l,
because
it's
not
even
it's
not
neither
of
them
were
really
the
matrix
catalog
and
plant
ul
plant
uml
was
not
in
the
service
catalog
either.
So
I
did
some
work
around
that
and
when
I
did
that,
I
could
see
that
the
plant
uml
stuff
was
not
working
still,
so
something
needs
to
be
done
there.
A: But I pinged Jarv on that. The most important thing for me is still the deployment labels for the horizontal pod autoscaler; I don't know if there's been any work on that, because that will help us quite a lot.
A
Like
when
we
gave
graham
that
demo
the
other
day,
the
first
thing
he
said
was
yeah
we're
really
good
to
get
the
hpa,
and
we
can't
do
it
until
we
got
the
labels
and
there
was
also
a
minor
incident
the
other
day
where
it
would
have
helped
so
yeah.
That
would
be
those
would
be
great.
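For context, the HPA being discussed targets a Deployment by name, so the missing deployment labels block both the targeting and the metric queries behind it. A minimal sketch, assuming a Sidekiq Deployment and the autoscaling/v2beta2 API that was current at the time; the names and thresholds are illustrative, not the production configuration:

```yaml
# Minimal HorizontalPodAutoscaler sketch (illustrative names/thresholds).
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: gitlab-sidekiq
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: gitlab-sidekiq  # the Deployment whose labels are under discussion
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```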
D: Hey Jeff, we're just going through the labels. So actually, Andrew, the mailroom one wasn't done, I think, 1375.
A: And then, that's okay, what's this 524? Is this still open?
A
Alright,
merge,
okay,
so
this
adds
okay.
So
maybe
this
has
been
done.
If
this
is
done,
then
I
will
start
working
on
those
on
those
things.
So
I
will
open
an
issue
for
that.
I'll
put
that
in
here.
D: Cool, and then Pages is progressing along quite nicely, actually.
D
Testing
and
preparing
next
steps
in
the
migration
are
being
planned
out.
So
looking
looking
hopeful.
D: Blockers: awesome, okay. So going down to discussion, Skarbek.
B: So I guess the main question is: what do we want to start picking up next? Currently I'm kind of sifting through some of the lower-hanging items that we've got in our tech debt epic, and also just pulling some of the P1s that have come across us in the last few weeks due to some incidents and such. I know Helm version 2 is no longer supported; we've surpassed end of life as of November, so it would be wise to pull that one in.
D
Yeah,
that
makes
sense
so
the
old,
the
overall
goal
for
the
quarter
is
the
apis.
So
what
do
we
like?
What's
what
would
be
blocking
the
apis
today?.
D
With
websockets,
is
that
a
we
need
to
do
it
in
terms
of?
Does
it?
Does
the
api
have
dependencies
of
some
kind?
That
would
help
us,
or
is
it
a?
It
is
small
enough
that
we
think
we
could
just
get
through
it.
F: I mean, it's small enough if we just keep websockets as they are now, but the idea here was also to enable Action Cable, and for that we need to go through a readiness review and talk to developers to see whether we think we're ready. But yeah.
F: As far as where I stand right now, I feel like the API should probably be blocked by the nginx upgrade as well as the Helm upgrade, and I think possibly the websockets work should also be blocked by the Helm upgrade. I think we should just focus on that and get it finished before we do anything else, because this is probably going to be like the traffic split, where we're going to have to do a coordinated cluster upgrade, and since it's going to touch the regional cluster, it might be even...
B: Part of that was just due to not testing thoroughly enough. Distribution has documented what the blockers are. So I think it'd be wise if we retest with a GKE cluster; our previous testing method was just using Minikube, which wasn't an accurate representation of what we were running at the time.
B: I don't want to break pre-prod. We could potentially use that cluster; I just didn't want to break it entirely if something went wrong. I guess it's not hard to rebuild it, though, so maybe that's sufficient.
F
I
would
just
like
use
pre-brought
as
it
is.
Okay,
I
think
we
can
easily.
I
mean
worst
case
we're
down
for
like
an
hour
or
two,
and
I
don't
think
that's
a
big
deal
for
pre-prod
and
it's
only
down
for
good
https,
and
you
know
sidekick.
F
We
could
even
actually
we
could
even
actually
like
divert
traffic
to
the
vms.
We
may
even
I
think
we
still
have
the
git
vm.
There
was
like
one
git
vm
for
pre-prod
and
then
for
a
sidekick.
We
could
even
it's.
Let
me
just
say
it's
probably
faster
for
us
to
spin
up
a
sidekick
vm
than
it
is
to
spin
up
a
new
cluster
project
and
deploy
to
it
right.
D: Okay, so what I'm hearing is: we've got the API work, which we believe is blocked by an nginx upgrade; we think we should do websockets before that, and that's blocked.
D
We
need
a
ready
notice
of
you
and
we
also
have
the
helm3
work
yeah
right,
okay,
so
helm3,
I
believe
marion
correct
me.
If
I'm
wrong,
you
want
this
completed
before
we
do
the
next
service
migration.
E: Because I want to get it over and done with, with all of the breakage that we get to see, before we move. I mean, I'm not saying the traffic that we have right now is not impactful, but it's going to be less impactful now. Another thing that worried me in Jarv's statement is the mention of the regional cluster and how that's going to get complicated with an upgrade.
F: The regional cluster doesn't have an nginx ingress at all, except for canary, but that's in the gitlab namespace, and canary we can just drain and forget about. So for the main namespace...
F: We don't have any ingress at all since we moved registry to the zonal clusters; nothing. So it's all Sidekiq, and Sidekiq is tricky. We could spin up a new cluster and then activate Sidekiq jobs, but then you would have jobs running on both, and some of our queues are throttled, so we don't want to run them in two clusters simultaneously.
E
So
that
that
would
actually
get
my
vote
like
helm
tree.
Obviously,
right
like
we
want
to
get
that
over
and
done
with,
but
it's
obvious
that
there
isn't
some
preparation
work
that
needs
to
be
done
because
karbek
did
that
what
was
it
a
year
ago
skarvik
or
something
like
that,
so
things
have
moved
on
it's.
It
would
be
good
to
actually
start
on
that
and
then
also
figure.
D: The only thing I was wondering: do you know, Jeff, have you already spoken to developers about the Action Cable readiness review?
D: Cool. One other thing I was going to ask about was this observability epic; I'll just share my screen. What I would like is to give it a bit of a scope, so that at some point we can close it off, rather than it just being the epic that's going to be open forever.
D: So, within this epic, how much of this stuff do we need to get completed before we do the API? Because what I think is: we could maybe frame this around cluster observability and troubleshooting to support API traffic, and then we could have...
F: I added those items in the description under exit criteria, and this is what I was thinking at a minimum we should complete before we do the next migration. They should probably be linked to specific issues, but this was just off the top of my head.
D: Could we get that list then? So could we get these issues to be the ones we want, to match that list? Yeah, in some way. Awesome, cool. And then what I'm hoping is that will mean, on our epic, we'll be able to have observability and troubleshooting wrapped up for the API stuff, with Helm 3 on there as well, and...
D
Then,
oh,
I
need
to
add
in
the
api
one
that
you
created
skybeck,
so
we're
looking
at
roughly
this
sorry.
E: Okay, good. Skarbek, a question for you before we continue further: nothing is preventing us from... no, this is not a question.
E: It's a statement. So let me ask a question: are we even able to upgrade to Helm 3?
B
We
could
run
our
tooling
needs
to
support
that
it
currently
does
not
so
no,
but
we
also
need
to
take
care
of
that.
We
both
have
our
own
helm,
repo
for
the
gitlab
com
stuff,
and
we
also
have
the
components
for
gitlab
helm
files,
so
the
tooling
upgrade
is
going
to
enable
us
to
run
both
home
versions,
but
once
we
upgrade
one
cluster,
we
need
some
way
to
tell
our
tooling
to
use
helm3.
For
that.
E: Okay. The reason why I ask is that there is a natural pause point in this effort, which would be upgrading the zonal clusters and having them run on 3, and leaving the regional cluster running on 2, using it as a feeding point and making it simpler to upgrade the regional cluster as a separate exercise.
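The release-by-release conversion that a zonal-first approach implies maps onto the official helm-2to3 plugin. The plugin and its subcommands are real; the release name and the dry-run wrapper here are illustrative:

```shell
# Sketch of converting one Helm 2 release to Helm 3 storage with the
# helm-2to3 plugin. Real prerequisite commands (run once):
#   helm3 plugin install https://github.com/helm/helm-2to3
#   helm3 2to3 move config    # migrate local Helm config/data
# DRY_RUN defaults to echo, so this prints the command instead of running it.
DRY_RUN="${DRY_RUN:-echo}"

migrate_release() {
  $DRY_RUN helm3 2to3 convert "$1"
}

migrate_release gitlab-sidekiq   # release name is illustrative
```

After all releases on a cluster are converted, `helm3 2to3 cleanup` removes the leftover Helm 2 release data.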
D: Cool. And then, for the epics that we have in progress, I'll consider Helm 3 in progress, and this testing one. Could you both get the issues that you'd like for your next round ready and onto the board?
D
Cool
is
there
anything
we
need
to
do
to
help.
E: I do want to ask: do we want to see if we can involve Graham in this effort, but from day zero?
D: He's out until late December; he's on PTO.
F: Sure, but we're going to crank this out before New Year's, right? Yeah, maybe. Well, okay, how about this: I think there are two major things that we need to focus on. One is the Helm 3 upgrade and one is the logging stuff, just because I feel like we're stalled right now, with our feedback saying that we definitely can't turn this on. So I think we need a DRI for the logging stuff and for the Helm 3.
F
Maybe
I
should
start
with
the
helm3
since
you're
busy,
with
rmming
and
and
for
the
logging
stuff.
Did
we
discuss?
I'm
sorry,
I
missed
it
earlier
in
the
meeting.
Did
we
discuss
what
the
next
steps
are
there
I
saw
that
andrew
is
giving
them
their
feedback
and
it,
but
it
looks
like
it's
completely
stalled.
B: ...as necessary for at least one component, but they're going to push it behind a configurable option to enable or disable logging as a whole, well, logging using the new mechanism. That way we can turn it on and off and test as needed, to help them and provide feedback as they continue to work on it.
D: Okay, okay, is that a plan?
F
That's
a
plan,
I'm
gonna
start
flashing
out
the
helm
out
pick
then
and
scrub
back.
I
think
what
I'm
gonna
do
is
like
move
traffic
over
to
vms
in
pre-prod,
so
that
we
can
mess
around
with
that
cluster
and,
let's
just
see,
if
maybe
it'll,
be
a
breeze,
maybe
it'll
just
work.