From YouTube: 2021-10-20 GitLab.com k8s migration EMEA
A: So, one second. So I think it may just be us today. That's okay, so.
B: I do not have anything to showcase. I think Graham showed off pre-prod working in the last demo.
B: The only thing I found in pre-prod that looked a little weird was that we were not deploying a specific version of the Pages application. That was fixed today, so now, at least in pre-prod, we're running the same version of the GitLab application that we need to be using. Going forward, when we go into staging and production, we should be deploying the auto-deploy versions that we want, instead of the master image that gets created; I'm not even sure how CNG tags master appropriately.
A: Awesome, that's great work. What are you thinking about the rate limiting, the additional piece? It's very limited right now; it's kind of...
B: It's just a matter of prioritizing that part, but I know that should work, because all that we need to change is a minor configuration change in our Omnibus configuration for Pages, plus a minor HAProxy change, and then testing to roll it out smoothly through to production. For our Helm chart, we do not support the flexibility we need, so we have to make sure that configuration works as desired for the way that we utilize the Helm chart.
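The "minor configuration change in our Omnibus configuration" being described would look roughly like this; a sketch only, assuming the Pages source-IP rate-limit settings (the exact key names and values here are illustrative, not taken from the meeting):

```ruby
# /etc/gitlab/gitlab.rb on the Pages VMs -- illustrative keys and numbers.
# Enable source-IP rate limiting for GitLab Pages.
gitlab_pages['rate_limit_source_ip'] = 100.0       # requests/second per client IP
gitlab_pages['rate_limit_source_ip_burst'] = 200   # burst allowance per client IP

# Applied with: sudo gitlab-ctl reconfigure
```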
A: Awesome, okay, that makes sense, cool. Let me follow up there, then, and try and find out what the priority of the rate limiting is, and then we'll do one of those two options: we'll either do it as part of this migration, or we'll do the migration and then do the work, so we won't do it on the VMs.
B: ...of the service itself: the desire for the API rate limit, or rate limiting of Pages, and then canary. So I'm trying to figure out...
A: Yeah, so, I'm completely... like, a lot of our delivery projects are quite overlapping. I don't really mind where the work sits. I guess in terms of the Pages canary stuff: do you need a Pages canary to help with the migration?
B: The work that jarv did to HAProxy should enable us... where I could create a canary deployment in Kubernetes, and that could help us with the migration procedure entirely, because we could roll this out through canary, and then, once we validate things are working as desired, we could just shove it into the main stage and turn off the virtual machines. So that in itself is beneficial.
B: I just need to make sure it's going to work appropriately, because there are some differences in the way that we configure HAProxy, or configure our Omnibus installs, versus the Kubernetes installation. I did bring one of those concerns up yesterday, and it looks like jarv addressed that either this morning or last night. So it's just a matter of determining whether it will work and what other changes I need to take into account, which I have not done yet.
A: Yeah, that's totally fine, and I think this is kind of where Graham is going to be a little bit similar as well; Graham is probably in an almost identical situation there. At some point, the mixed deployment testing reorder stuff will require a Pages canary, but it doesn't have to be step one.
A: So what I'm guessing is that both the Pages migration and the mixed deployment reorder work can continue on. At some point either one finishes, and then we can pick up the extra work needed for Pages canary; or one or the other may actually genuinely need the whole thing, and we can add it into that project.
A: Yeah, okay, yeah. I mean, that's totally fine. Like I say, I think in a way you're coming at these things in slightly different ways, but the solution will be the same. For the migration it's: does it make it safer to migrate? And in Graham's work it'll be: having all of these canary pieces in place means we have better test coverage. But it's totally fine if we started with, say, 50% of the pieces and added from there.
B: I did see that we have weights available in HAProxy now, so I can now make sure that is set up correctly. So we should be good to go on that front, at least.
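The HAProxy weights being referred to shift a chosen fraction of traffic to the canary; a minimal sketch, with backend and server names invented for illustration:

```haproxy
# Illustrative HAProxy backend splitting Pages traffic between the
# existing VM fleet and a new Kubernetes canary deployment.
backend pages_http
    balance roundrobin
    # Most traffic stays on the VMs...
    server pages-vm-01 10.0.0.11:80 check weight 95
    # ...while a small weighted share goes to the Kubernetes canary.
    server pages-k8s-canary 10.0.1.20:80 check weight 5
```

Raising the canary's weight (and lowering the VMs') shifts traffic gradually; setting the VM weight to 0 completes the cutover.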
B: It's just a matter of testing to make sure it works as desired. And then I guess the other thing we need to consider is observability for canary, because I don't know if that's set up appropriately, but it should be.
A: This is a good question, yes, and also...
A: Yeah, exactly, exactly. And then, in terms of people: we talked a little bit yesterday, but you've got Pages canary... sorry, Pages migration. Graham has just started looking at the mixed deployment testing, which has changes from next week. Henry is back; he's got the steps to wrap up registry, so registry should be going to production next week, and then the steps to wrap up removing nginx from API. But following that, Henry is available to pick up new things.
A: Yeah, exactly. And as you get into, at least, management as well: if you want to step back from some projects, we can reshuffle things in the next week or so.
A: Awesome, great, yeah. And then, our five...
B: So, I was just answering a question that jarv had last week; since he's not here, he doesn't get to hear the answer. When I deployed Pages into pre-prod, I was just using our Helm chart defaults for resource requests and limits. I figured when I get it to staging...
B: ...we can figure out in time what we want to do with that. I have not looked at any metrics, and I have not performed any load testing yet, so right now it's just using the Helm chart defaults. I plan to do the same thing when I implement into staging, and it'll be after we start taking traffic that I'll start looking at that stuff, because that's when I'll start doing the necessary testing, and also looking at metrics to figure out what those values ought to be.
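For context, overriding those chart defaults later would be a values-file change along these lines; the key path follows the GitLab Helm chart's Pages subchart, and the numbers are invented placeholders that would come from the metrics and load testing just described:

```yaml
# values.yaml override for the Pages deployment -- illustrative numbers only.
gitlab:
  gitlab-pages:
    resources:
      requests:
        cpu: 500m        # scheduler reservation per pod
        memory: 512Mi
      limits:
        memory: 1Gi      # pod is OOM-killed above this
```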
B: I think it'll still be relatively easy. I think the only thing that's making it more difficult is just the clash of all these other things that are coming into play. Which is fine; it'll still go relatively smoothly. It's just a matter of making sure that we don't step on top of each other, or, you know, screw each other over by trying to force one priority over the other. That, importantly, being the API, right, the rate limiting thing, so it...
B: ...that support in our Helm chart. We just need to make sure we prioritize all this appropriately, but otherwise I think it'll be relatively fine, just a matter of testing and rolling through the work. The one change I know that I couldn't test in pre-prod was, what's it called, the profiling, because we have that configured inside of staging and production.
A: Brilliant, that's great, nice. I mean, it certainly seemed to come through pre-prod pretty easily, so that's certainly a good sign. Awesome.
B: Now, we've still got that one blocker for health checking, which would prevent us from going into production, but you copied, or you tagged me on that, because there's an update on it. So it's still moving along, it looks like.
A: It's definitely moving along, yeah. In fact, it should be moving reasonably fast, because last week it was still expected that it would be within 14.4. I mean, that was also going to be a little bit ambitious, given Family and Friends Day and stuff, but I'm hoping that means it's actually not too far out from there. So hopefully in the next few days.
A: Cool, okay. Well, one thing on the Pages stuff: let's check in next week and work out what we want to do about bringing other people in, or handing off parts of it, so that you've got a manageable workload and other people can be involved if they want to. So let's make sure we cover that next week, but otherwise: great work, keep going.
A: One thing I had a question on was registry in the Helm chart. Registry is currently on schedule to enable the connection to the new registry metadata database next week, and then they have feature flags which control things being migrated, so they'll be controlling that and running those through. But one thing they were a little bit unsure about, or a little nervous about, I guess, is Helm chart changes whilst they're running the migration.
B: If we detect that they're running for lengthy periods of time... you know, we don't want to unnecessarily kill those jobs. The logic that we created that removes those jobs isn't going to look at whether or not the job is still running; it's just going to say: hey, it's time for you to go, delete yourself.
B: No, I mean, Sidekiq, yeah. But these jobs are kind of special, because we'll remove them entirely. Sidekiq at least has the benefit of: let's at least attempt to put the job back into the queue, such that the next...
B: So if the job fails, or the pod fails for some reason... well, okay, from the Kubernetes aspect: if the pod crashes, for example, it will try to restart the job, because that's part of the job definition; we want to make sure it runs to completion. So Kubernetes will take care of that.
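The run-to-completion behavior being relied on here comes from the Job spec; a minimal sketch, with the name, image, and command invented for illustration:

```yaml
# Illustrative Kubernetes Job for a migrations task. With
# restartPolicy: OnFailure, a crashed pod is restarted in place,
# and the Job controller keeps retrying until the Job completes
# or backoffLimit is exhausted.
apiVersion: batch/v1
kind: Job
metadata:
  name: gitlab-migrations
spec:
  backoffLimit: 6              # retry a failing pod up to 6 times
  template:
    spec:
      restartPolicy: OnFailure
      containers:
        - name: migrations
          image: registry.example.com/gitlab-rails:latest  # placeholder
          command: ["gitlab-rake", "db:migrate"]
```

Note that this guarantee only covers failures; it says nothing about what happens if an external actor deletes the pod mid-run, which is the concern raised next.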
B: So that part we're safe with, because Kubernetes is doing its job and the migrations code is doing its job appropriately, theoretically. But yeah, if we delete that pod and it's still running that migration, I'm not sure what's going to happen, because we may stop the migration in the middle of it. Or, I guess, it depends on how the migrations run, because if it's just a query that we send to Postgres, and it just runs until that query completes, maybe we're okay.
A: Yeah, let me spin up an issue and ping Henry, because I know he spent quite a lot of time doing failure testing, so it's possible he went through some of this. But it would be good to just double-check that before we start. I just don't think we can guaran... we can't guarantee we're not going to do it, right?
A: Yeah, okay, that makes sense. Okay, cool! Well, let's give Henry some time to think that through.
B: So today we have a release called "gitlab" that deploys everything to a given cluster: the registry, all the web services, etc. But maybe we'll have a release that says: this release is specific to Git over HTTPS, this release is specific to the container registry, this release is specific to GitLab Pages. Something like that. I wonder if we'll get to that point, because that itself would have some interesting challenges. You'll...
B: That gets us closer towards a more component-style release method, which I know Alessio is very keen on. It might be something to think about, but I'd like...
A: Right, yeah, right, yeah. Okay, that's a super good point, then. So what are the kind of rough pieces that we will need to do to adopt the Operator?
B: ...the piece that they are already aware of that needs to be adjusted in some way, shape, or form. That's, theoretically, the only thing that's missing.
A: Okay, interesting, okay. I...
A: Yeah, definitely, definitely, yeah. I mean, both of those things would definitely be things we will need to adjust for in the future.
A: And some of that might tie in. So, one of the things I've mentioned, kind of in the Q4 OKR stuff (and thank you for inputting into that), is that I think we're really at the stage where we need to work out how K8s Workloads fits in with release tools, and perhaps it kind of touches on some of this stuff around registry: how do all those pieces pull together? What gets code-deployed? What gets config-deployed? How does it track it? So I think we could probably just do a blueprint or something to start, but there could be some fairly hefty changes involved in that. Yeah, the good ones, right? Hopefully we can solve some of the problems we have. But yeah, I don't know: would you recommend a year to get rid of deployer? Three to six months?
B: We support database migrations inside of our Helm chart, but can we execute those at the right time? Can we orchestrate that appropriately? That's the part that's missing, yeah.
B: I don't believe so. I think the only thing that might be worth talking about is the use of GitLab Shell, in transitioning to the gitlab-shell daemon versus the OpenSSH daemon.
B: So, let's talk about that very quickly. I spun up an issue that said: hey, how do we want to do this? And both jarv and Igor have already chimed in and said: hey, we should do it this way. So it sounds like what we're probably going to do is create a canary variety of the gitlab-shell daemon.
B: And then, after we reach 100% of the traffic inside of canary, we would just flip the flag inside of our zonal clusters and say: hey, you start using the new daemon instead of OpenSSH. And we'll just remove the weights sending so much traffic into canary. So I think that's a fine method; I just need to fine-tune the implementation details and spin up the necessary issues to get that work started.
A: Yeah, that would be awesome. Yeah, you're totally right, there's definitely some outstanding bits. It's not actually on the plan, but there's kind of some observability gaps as well that the developers would pick up.
B: The one thing we could probably start doing, whenever the GitLab Shell team is ready, is enable this in pre-prod as a validation step, because it's such a small environment.
B: ...if something went wrong, we could turn it off. For staging, it'll be good to use that as a test bed for it: let's validate our procedure of pushing this into production.
A: Possibly. I think Quality will probably help advise on that, but, I mean, yeah, maybe. So I don't know what testing exists already, if any, for this. So yeah, I think you're totally right: great to make progress here, but I don't think this is going to be, you know, turned-around-in-the-next-couple-of-weeks type of work.
A: Yeah, I think Sean mentioned on the sync that he was thinking about a month, but Dov has asked as well for some kind of timelines. So I think at the moment I'd assume unknown, but you're right: it's certainly not high urgency. It was high urgency to have the discussion before Nick went out, but I don't think it's high urgency to land this in production. So, yeah.
A: Right, yeah, exactly, yeah. That would be good, that would be good. Cool, so yeah, hopefully we'll get someone involved there. But great, great. I'm glad everyone came back and agreed on the same option; that makes it easier.
A: Awesome. Is there any other stuff you want to go through?
A: All righty, well, thank you for the discussion. I hope you have a good rest of your day, and enjoy tomorrow.