From YouTube: 2022-04-20 GitLab.com k8s migration EMEA/AMER
A: All right, so it's just three of us. Amy, at some point, I know you've asked me to take over this meeting, but I'm trying to figure out what to put on the agenda; I don't know how to do that.
D: We review this. Like, I love you hosting, I think you're a better host, but yeah, should we maybe figure out how to.
A: Anyway, so welcome to... oh, it's 4/20. How fancy. Welcome to April 20th.
B: Yes, tomorrow we shall roll out gitlab-sshd to production, and hopefully it goes well. Let's see; last time we did it we had some problems. They are now resolved, so hopefully everything is going to be fine tomorrow. That's it for gitlab-sshd. Actually, I have another item, which is not gitlab-sshd, if you don't have any questions.
B: I don't think so. What I remember is that for the canary environment it's there, but the weight is zero, so the canary environment doesn't get any traffic. We will increase the traffic and we will proceed from there with the other zones. So I think everything should be fine.
B: Yeah, for Camo proxy: just today we deployed it to the pre environment. So it's running, it's working, and I worked with Vlad to enhance the secrets a bit, because it was not working before. Now it's working. So far so good for Camo proxy.
B: So the next step is to basically clean up the chart a bit, and to decide where to actually put it, because currently we are putting it in the releases directory, which I don't think is the appropriate place for it. But yeah, Camo proxy is working on pre, yay, and the next steps, as I said, are to clean it up and push it forward.
B: And we added it in Kubernetes, not in VMs, because I think in the other environments it's in VMs.
B: We didn't tackle this yet; we would love to.
B: Because I think, like, Vlad and I are currently looking at metrics, to also extract metrics. So I think this is the next point we will tackle, basically.
B: So, next up, Redis. I worked with Igor to basically integrate the announce-IP-from-hostname option (I don't remember the exact name), but we added it to Omnibus and it works. We tested it on the VMs, and the configuration works, the validation works, everything looks fine. So I actually added Balu, and I think just Balu, to the MR to basically review it, to integrate it with Omnibus.
D: I'm just following up on that. I was just in the Distribution Slack channel; they've unassigned themselves per their process, so someone will pick it up and assign it. The question, though, is what's the priority? And actually, when I look at the issue, I'm still not sure. So: what's the priority of this issue and MR?
B: So what we did, basically: we built a package with the fix we made to Omnibus, we installed it on the machine, and we updated the gitlab.rb file, basically setting announce-IP-from-hostname to true, because this is what we are consuming right now within the package itself, and yeah, effectively.
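For reference, a minimal sketch of what that gitlab.rb change could look like. The speaker hedges on the exact key name in the conversation itself, so treat the setting name and all values below as assumptions rather than a verified copy of the production config:

```ruby
# /etc/gitlab/gitlab.rb -- illustrative sketch only; the option name is the
# announce-by-hostname setting described above, as best recalled, and the
# master name/password are placeholders.
redis['announce_ip_from_hostname'] = true   # announce this node by hostname, not IP

redis['master_name'] = 'redis-example'      # placeholder Sentinel master group name
redis['master_password'] = 'REDACTED'       # placeholder

# Applied with `sudo gitlab-ctl reconfigure`, which is the step described next.
```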
B: So it's working. I think this will take a bit of time to actually finish running, but it's going to run the reconfigure on the machines. Igor, just jump in if you feel like you want to add anything. So this will run the reconfigure on the machines, and it runs the validation. I don't really know what it does internally, but I think it will announce the hostnames shortly.
A: So, a question while this runs: I know that this is something that we will need moving into the future for all Redis clusters. But moving all the clusters into Kubernetes, we're multiple months away from getting that accomplished. In terms of reconfiguring this aspect, meaning the hostname capability of our Redis clusters: are we going to try to tackle that for all clusters ahead of time, or are we just going to tackle all of this when we want to migrate an individual cluster, or, I guess, Redis deployment, over to Kubernetes?
A: I think the reason why I ask is because I feel like we're introducing a configuration disparity between our Redis deployments. But by the same token, we already have a configuration disparity, because one example cluster has Sentinels on different nodes than the actual Redis service. So we've already got some awkwardness in our Redis configurations. So yeah, I think.
F: Just looking at it also from a risk-management perspective: containing the changes that we're doing, and keeping it within this one category, allows us to validate the whole process on this one cluster. So I kind of see both sides, and I do think that having more config drift is a risk in the long term, but I don't think we need to deal with that now.
A: So, something that I always find difficult to figure out how to prioritize is the various changes. In this particular case, we need this hostname change before we can consider migrating to Kubernetes, and we know that we want to, at least currently; we know we want to migrate all of our clusters into Kubernetes.
A: You know, especially for people that are not us performing migrations, just the reliability teams who need to manage our clusters, keeping the configurations as similar as possible is beneficial for them, just in case an incident were to come about. It's hard to argue that, though, because you know we still have Redis cache acting slightly differently than any other Redis cluster.
F: I don't know yet; we'll deal with that when the time comes, I guess. I'm very much trying to push away any additional scope and say: here's the goal we're focusing on, let's get this one done and deal with the other stuff later, because if we're going to try now to cover all of those use cases, we'll never get it done, ever.
D: It could be worth us just getting an issue or something, to sort of help point to the fact that we haven't updated this on all the Redis instances, though. And then, if someone notices and has concerns or whatever, then we have an issue that could be prioritized if needed.
A: We might want to ask the team what reliability wants to see out of this, because if we're introducing a configuration disparity, it might be something they're aware of, and as long as they are okay with it, it might just be something we don't need to worry about, because they understand where we are in the Kubernetes migration process, and they understand where we are as a whole in terms of how Redis works in the first place.
F: Yeah, so this is kind of showing the successful reconfiguration. I mean, I think it was already in this state before we ran reconfigure, but just to show what the result looks like: we have everything migrated over to hostnames, and all of the Redis and Sentinel instances have the config that we want.
F: Yeah, it's ridiculous how much work it takes us to make this seemingly simple change, but with Omnibus and so many different layers of abstraction in between, it's kind of tricky.
B: And that's it. Questions? Any questions, please: we'll go to Igor, because he knows how to answer.
F: And it should basically be the same time frame as what we just saw. I don't think there's a huge difference between that and production; maybe slightly longer, but on the order of half an hour.
F: It's async replication, right? You're going to lose some writes on failover; that's how that works.
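As background for that point, a minimal sketch (assuming the redis-rb client and hypothetical hostnames) of how a client finds the current primary through Sentinel. Because replication is asynchronous, a write acknowledged by the old primary but not yet replicated when a failover happens is simply lost:

```ruby
require "redis"

# Hypothetical Sentinel endpoints, addressed by hostname rather than IP --
# which is the point of the announce-by-hostname work above.
SENTINELS = [
  { host: "sentinel-01.example.internal", port: 26379 },
  { host: "sentinel-02.example.internal", port: 26379 },
]

# redis-rb asks the Sentinels for the current primary of the named group
# ("mymaster" is a placeholder) and re-resolves it after a failover.
redis = Redis.new(name: "mymaster", sentinels: SENTINELS, role: :master)

# A write acknowledged here can still be lost if the primary dies before
# the asynchronous replica catches up.
redis.set("key", "value")
```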
A: So we've got WebSockets. I'm actually going to touch on this in a second, but WebSockets is soon to become more important; they're trying to ramp up that feature. So we're going to want to make sure that we note this in our readiness review for this project.
F: Yeah, I also don't remember which Redis instance was affected by that. I think it might have been the persistent one. Oh okay, so it's not super relevant to this epic per se, and I do believe there is an infradev issue, or at least there was at some point an infradev or other issue, about making Action Cable, which is what WebSockets uses, more friendly in case of Redis failovers.
F: Yes, so we can look at the epic, and this kind of gives us a high-level view. We're currently doing the Sentinel hostnames and the observability stuff in parallel, and once this is out of the way, we've unblocked the hybrid deployment on pre. So that's going to be the next one.
A: Okay, any other questions related to Redis?
A: Okay, so the next item is from me. WebSockets, like I mentioned, is wanting to scale up. Currently there's a feature flag, and this feature flag is being utilized to add real-time behavior to the labels feature set of the GitLab UI. Right now it's not leveraging WebSockets at all, so it's not real-time information when you modify an issue, that's right.
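For context, a hedged sketch of how a rollout like this is typically ramped from a GitLab Rails console with the Feature API; the flag name below is a made-up placeholder, not the actual flag being discussed:

```ruby
# Rails console sketch. :realtime_labels_placeholder is a hypothetical name
# standing in for the real-time labels feature flag mentioned above.

Feature.enable_percentage_of_actors(:realtime_labels_placeholder, 10) # ramp to 10% of actors
Feature.enabled?(:realtime_labels_placeholder)                        # inspect current state
Feature.disable(:realtime_labels_placeholder)                         # roll back if saturation recurs
```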
A: We were saturating both the memory usage as well as the number of nodes that were running, which was surprising to me, to see the nodes being saturated. But after doing some research, and Igor has done a lot more research than I have...
A: Technically, the team is waiting for someone in infrastructure to take a look and work on this, and I'm curious as to whether or not this is something... Right now this is labeled as for team reliability, and this is kind of a question for you, Amy, as I wonder where the priorities should realistically be. I think, Igor, you were kind of asking a similar question, as to, like.
D: Yeah, that's what I was going to start with. So it's in reliability at the moment. Does it sit more closely with... like, is reliability expected to work on this? Are they going to work on this? Or does it sit more closely with, I guess, some of our Kubernetes resource-usage types of issues?
D: Well, let me... I mean, we certainly don't have excess capacity in Delivery either. But let me chat with Anthony about priorities for this and figure out what the urgency is, and what gets de-scoped if needed, or where this fits.
D: What I don't want to accidentally push too far down the road is the work that we need to do to recreate clusters, and I think that's the sort of thing that we don't have solidly locked onto our roadmap, but we should have. So that's the sort of thing I'll be prioritizing this against. So let me follow up with Anthony and see which team this is considered to sit with.
D: Generally, I'm assuming that if something is in reliability, reliability will own it. So if we open an issue and we're kind of thinking "probably we should do some work on this," let's put it in one of our team's issue trackers and move it if needed. And if we pass that over to reliability, I think, once someone's engaged on it, I'm hoping that means they're prioritizing it. But I can follow up.
A: Depending on the direction that we go... I'm kind of leaning towards: one of the items that Igor highlighted was introducing a different node pool instance type, rather, and that's going to require a little bit more juggling, which would extend the time it takes to resolve this problem before the team is able to ramp up the feature flag again. And I think that would be the preferred option from my standpoint, just because I know that we are getting close to saturating the number of nodes that we can run in that cluster. Which, unless anyone has any further questions, is a good segue into my next item.
D: That would be our recommendation for the options.
A: So this is a good segue. Bullet point number nine: Graeme is taking over the capacity planning around saturating the number of nodes that we run for our git and api workloads.
A: So we've already identified that there are a few different routes we could have gone, and we decided to go the short route initially: we just expanded how many nodes we allow to run per cluster for those two workloads. So instead of, I think it was 50 and 60, it's now 55 and 65, if I recall correctly. But my concern is, and this is about...
A: Per zone, yes. My concern with this change is that we now enable ourselves to get closer to saturating the number of nodes available to run in a given cluster. We increased it by roughly... was it?
A: I can't do the math that quickly in my head, but with WebSockets also coming down the pipe, potentially requiring more nodes, I think we're going to hit saturation in the network space faster.
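To make the saturation concern concrete, a back-of-the-envelope sketch. Only the 55/65 per-zone maximums come from the discussion above; the three-zone regional cluster is an assumption for illustration:

```ruby
# Rough node-capacity arithmetic for the two workload pools discussed above.
zones         = 3                   # assumed number of zones in the regional cluster
new_max_nodes = (55 + 65) * zones   # new per-zone maximums  => 360 nodes total
old_max_nodes = (50 + 60) * zones   # previous maximums      => 330 nodes total

puts "headroom added: #{new_max_nodes - old_max_nodes} nodes"  # => 30
```

Every additional node also consumes addresses from the cluster's network ranges, which is where the "saturation in the network space" concern comes from.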
A: To do the rebuilds, I would agree. And hypothetically... I'm not going to say that doing that kind of method is going to reduce costs, but it would theoretically reduce the number of components that are running, which will reduce costs in other locations: the amount of logs that we generate, because we have fewer nodes, and the amount of metrics that we're maintaining, because, again, there are fewer nodes.
A: So there are fewer metrics overall, and I know those are very costly; you know, Prometheus has historically been falling over for the past half a year and change. So, Amy, again, I'm seeking advice.
D: Back to you, Scarborough. So what this is... sorry. What I would like to see is some projects around this: one around the cluster recreation. I know we have a few issues around various pieces.
D: I know we have slightly differing opinions: I would love to see an automated cluster recreation, because that gets us closer to blue-green deployments, but I'm fine with going with a manual cluster recreation as a kind of first epic. But maybe let's get that set up and have the issues in there around what we would actually... what are the pieces we need to do, so we can prioritize that. And then maybe, alongside that, we can review: do we have a separate epic which actually has the pieces of...
D: ...what do we need to do to increase the node size? And if we need to, we can pull that one in and start to actually prioritize those. But at the moment, I think there are a lot of issues, and it feels big enough and important enough for us to have an epic and actually have some sort of longer-term goals, versus just "how do we patch it up for today?"; how do we patch it up for today and also set ourselves up for later in the year as well?
D: I mean, I'm thinking kind of in terms of what we have to deal with, like, right now. Yeah, I do think the cost stuff will come in. Once we've got these other epics ready and sort of in progress, we should probably start reviewing that. It would be good to get that cost review one as... what's the first piece? Like, how do we not have to park everything for a quarter to do one wholehearted thing, but find other pieces that we know we could start with?
D: But I think knowing we have to rebuild a cluster to fix the IP stuff would certainly be good to know, so that we can do that and don't get caught out.
A: That makes sense. Okay, all right; like you mentioned, let's work on that at some point.
D: Yeah, let's run through it. Let's run through the pieces tomorrow, or in our next 1:1, and figure out what bits there are. Then it's just a case of organizing them: seeing what we need to write up, and getting them labelled so they appear in the right places.
A: Well, I'll mention something. Steve Azzopardi has been working on a very positive change to the Kubernetes workloads: it was recently discovered that gitlab-workhorse was not shutting down cleanly. This was something that I had known about for a long time, but I never investigated it. I think, Igor, maybe you highlighted this in our last meeting. Oh no, you highlighted it in one of the Scalability demos, Igor.
A: If you could find that demo and drop it into our agenda, because that showcase was awesome. Effectively, we found out how to make Workhorse shut down cleanly, and this is turning out to be very nice in terms of how we respond to the customers whose requests hit Workhorse: we're actually not failing them, which is wonderful. So I greatly appreciate that, and it's slowly being rolled out into production today, which I'm super excited about. So big thanks to both Igor and Steve for rocking that out.