From YouTube: 2022-02-02 GitLab.com k8s migration EMEA/AMER
C
Yep, that's totally fine. I know, lots of side projects turn into main projects and distract from this one, for sure.
B
Well, let's go ahead and get started. So, welcome! It's the second of February; it's my birthday month.
B
So the first item on the agenda is me potentially doing a demo of a Redis cluster joining a virtual machine cluster. This is something that Igor has already done for the Scalability demo, either one or two weeks ago. So I'm not sure if it's worth my time and effort to repeat the same experiment here in this meeting, but I'm open to feedback from both of you.
A
I didn't see it, or rather I didn't watch the Scalability one, so I wouldn't actually know how this would work, but since I'm a bit detached from the project, I'm not sure if it's worth the effort. Also, I...
C
I think it's worth the effort. I think it's always interesting to see, and I...
B
Okay, so here is my configuration for our deployment. I've got some stuff commented out because I'm testing other things, but the important piece here is that we are telling Sentinel that we're going to connect to an external master. We need to enable it, because by default it's false, and I'm telling it to join this specific node. In this particular configuration we need to do the same for the replica configuration.
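The settings described here can be sketched as chart values roughly like the following (a hedged illustration: the `externalMaster` key names follow the Bitnami Redis chart's external-master support but may differ by chart version, and the hostname is a placeholder):

```yaml
# Sketch of values.yaml for joining an existing VM-hosted primary.
sentinel:
  enabled: true
  externalMaster:
    enabled: true                        # defaults to false, as noted
    host: redis-vm-01.example.internal   # placeholder node name
    port: 6379
# The same has to be repeated for the replica configuration:
replica:
  externalMaster:
    enabled: true
    host: redis-vm-01.example.internal
    port: 6379
```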
B
This is just how this Helm chart works at the moment. Whether or not this can be made easier in the future is something that we may iterate on, but this is what we're testing out. On the right side I'm just printing the logs from the first pod. I have not deployed the pod yet, which is why we're getting the "not found" messages. At the bottom you'll see all the pods that are running, as this is just a mock test cluster.
B
It is currently the primary and it's got two secondaries, which are the other two virtual machines participating in this cluster. I'll also connect to the other virtual machine: it is a secondary, or rather a replica, and it's connected to the primary that I noted above; same for the second VM. So now I will proceed to install a Redis cluster inside of Kubernetes and, if all goes well, it will automatically join this cluster.
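The role check being run against each node corresponds to `redis-cli INFO replication`; as an illustration, here is a minimal parser for that output (the sample text is hypothetical, but the field names follow the documented Redis `INFO` format):

```python
def parse_replication_info(raw):
    """Parse the `# Replication` section of Redis INFO output into a dict."""
    info = {}
    for line in raw.splitlines():
        line = line.strip()
        # Skip blank lines and section headers like "# Replication".
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition(":")
        info[key] = value
    return info

# Hypothetical output from one of the VM secondaries described above:
sample = """# Replication
role:slave
master_host:redis-vm-01.example.internal
master_port:6379
master_link_status:up"""

info = parse_replication_info(sample)
print(info["role"], info["master_link_status"])  # slave up
```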
B
Normally what would happen is that we would query the service endpoint that this Helm chart creates for us. That's the default behavior of this Helm chart, and this is what we are attempting to modify here.
B
So
it
created
a
virtual
machine.
So
it
now
knows
who
to
talk
to-
and
we
see
later
on
that
it
connected
to
that
and
it
performed
the
sync
which
is
precisely
what
we
want
and
down
here.
We
see
that
it
now
knows
about
all
the
other
virtual
machine
replicas
as
well,
so
the
cluster
is
still
coming
up
so
I'll
give
that
a
second
longer,
let's
say
a
second
I'll,
probably
mean
a
minute.
B
I do not have my external-dns configured properly, which is why we're seeing CrashLoopBackOffs; that's just because the pods do not come online with their appropriate DNS entries yet. And I think we may see, I'll probably scroll the screen up, that when the pods come online they still try to talk to themselves, and that DNS name is simply not resolvable yet. One of the things that Igor was working on was trying to reduce how long that takes, and he did find out that external-dns does have a configurable option for it.
C
Do you know, Scarborough? I seem to remember Sean might have told us this some months ago, but I can't recall the answer. You may not either, and that's fine, but I'm wondering: do you know if the rate-limiting cluster is one that we're expecting to do much scaling on? Like, is it spiky?
B
That's just the fruit flies in my house. The reason that we don't want to scale is that any time a new Redis pod comes online, it's going to sync all of that data. That's less important for rate limiting and more important for the Redis cache, for example, because there's a lot of data that needs to be synchronized across all the replicas, and that synchronization is going to put a little bit of pressure on the primary as the data gets replicated. And I think Redis cache is our heaviest cluster.
B
So I think adding pressure like this is quite unnecessary, because of the way GitLab itself works: GitLab only talks to the primary at any given point in time. It never talks to our replicas; the replicas are simply there for redundancy.
B
If we ever fail over, GitLab will automatically fail over to the new primary, but otherwise we don't utilize the replicas for anything at all.
B
Our virtual machine is a secondary, our second, or rather our third, virtual machine is also a secondary, and then, lastly, all of our pods. It's hard to see this, I should format this output, but it's redis node two, redis node one, redis node zero: they're all secondaries, all pointed at our virtual machine as primary.
B
Proposal
that
we
have
as
a
pr
for
the
bitnami
helm
chart,
I
we
do
need
to
make
a
little
bit
of
improvements
to
this
pr
which
I'm
currently
working
on.
B
But
if
this
gets
accepted,
this
will
make
the
creation
of
a
redis
cluster
slightly
easier,
which
I
think
will
benefit
us
when
it
comes
time
to
performing
the
migration
one
of
the
things
that
we
may
or
that
we
will
do
probably
is
modify.
B
This is what Redis will use as a weight when it comes time to vote for a new member to be promoted to primary in the case of a failover. And I think, while we are building these Redis clusters inside of Kubernetes, we want to make sure we avoid electing them as primary, just so that we don't have any surprises when these clusters come online.
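In stock Redis this per-node weight is the `replica-priority` directive: the lowest non-zero value is preferred in an election, and 0 means the replica is never promoted, which matches the goal of keeping the Kubernetes pods out of contention. A toy sketch of that preference (Sentinel's real election also considers replication offset and run ID, which this deliberately ignores):

```python
def pick_failover_candidate(replicas):
    """Pick the replica Sentinel would prefer as the new primary.

    replicas: dict of name -> replica-priority (the redis.conf setting).
    Priority 0 means "never promote"; otherwise the lowest value wins.
    """
    eligible = {name: prio for name, prio in replicas.items() if prio != 0}
    if not eligible:
        return None  # no promotable replica at all
    return min(eligible, key=eligible.get)

# Hypothetical names: VM replicas keep a normal priority, while the
# in-cluster pods are set to 0 so they are never elected primary.
print(pick_failover_candidate({"vm-1": 100, "vm-2": 100, "redis-node-0": 0}))
```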
B
I did update our epic that's tracking all of this work.
B
I'm not going to share my screen for this, I'll just describe it verbally, but currently our goal is that we need Redis 6.2 running in Omnibus before we try to start this migration. The other item we need is a way to make sure that all the configurations are the same as far as advertising a fully qualified hostname versus an IP address.
B
I don't know how disruptive that is going to be, so we'll be testing that at some point.
C
Awesome. Did we figure out what the timeline and process are, then, for us getting changes into Bitnami? Like, how long did it take for the previous one to get merged and then published?
B
It's been published. It took about three weeks from the initial opening of that pull request to get fully reviewed and merged in, and about two days after the merge before I saw it published, but I don't know if it was published the same day and I just didn't notice, because that was near the end of the week. So I'm trying to finish up this pull request: Igor started it, I just need to make a few improvements.
B
We've got Sentinel and Redis on separate nodes; we've got six total nodes operating that Redis cluster effectively.
C
...go ahead and actually migrate. So I know...
C
Okay, yeah. So Redis cache is one that we would hope we can move into Kubernetes and into Redis Cluster in one go.
B
We deployed a Redis cluster on virtual machines in pre-prod, but we don't use them yet. I was initially using that as a test bed, but testing was kind of sluggish, so I resorted to creating my own cluster and virtual machine setup for this kind of thing. So I want to reset that cluster so that we can start pointing the GitLab application running in pre-prod at it.
C
Hopefully the Redis upgrade will get unblocked, maybe this week. If it is this week, it'll be late this week, but hopefully from next week we'll have the nightly builds running again and that will be unblocked.
B
Not at the moment. I think we still have plenty of work that we are able to accomplish in parallel, and except for the Redis upgrade being blocked, well, there's not much we can do about that. So, exactly.
B
Okay, so gitlab-sshd: as a reminder, we're in staging. As far as I know it works; I know the review by Sean and...
B
Goodness, recently they performed some performance testing in staging and it showed some pretty good stuff, like I...
B
I restructured the epic just a wee bit to make sure that we understand where we are status-wise, and so did Shawn. So at this point we in Delivery are pretty much ready to start creating the change request for production, and then we just need to wait for the readiness review to complete, and...
B
I don't know if you have anything else that you want to comment on, because I know that you're going to be working on that change request.
A
I started the change request, but I still have to fill it in with details, so I opened it as in-progress, basically.
C
Awesome. Also, I was going to say: I know Sean proposed running this in EMEA.
C
I would suggest that we figure out, ideally, a way to do this a little bit later in the day so that you're around, Starbuck, because I'm worried about you being a release manager. Not that you can't do this, but I'm more worried that if there's an incident or something to do with release management, you're going to be a bit conflicted. If they are desperate to do this in EMEA, I know it means we've got a little bit less traffic, and it gives us a few more hours in the day.
C
It might be that we could ask Jarv if he's around as a shadow, just in case he's needed, in case we need to do some stuff. So see what you think about that. But I think if there's a better time for us, we can certainly propose that.
B
How much overlap do either of you have with Graham?
C
Yeah, a bit; it depends on his day. He is quite often around for, let me see, the first couple of hours of my day, so for Europe he's sometimes around for about three hours. It is evening time for him, though, so we'll probably just need to check with him in advance that the plan is going to work, but certainly that is a possibility. It's a good point: if he's done the readiness review, maybe that fits quite well. So, yeah.
B
I imagine the process will take a few hours to complete, just because of modifying the weights and waiting for the traffic flow to change.
C
Henry's about next week, so if we do... I mean, let's just assume this week is finished, right? There's probably enough going on with the other things. If we pushed into next week, then you'd have Henry, and then you'd have cover at any time. So, yeah.
B
So the important part that we need to take care of is that when we shift traffic around and off the cluster to make the configuration change, we want to make sure we don't overload canary, and all of that's going to take time. When we start shifting traffic back to the cluster with this new configuration, we need to make sure that we've got the capacity, because the HPA will slowly scale things down while the traffic is off of it. So we want to make sure the HPA catches up over time as well.
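For reference, the scale-down and catch-up behavior described here follows the standard Kubernetes HPA formula, desired = ceil(current * currentMetric / targetMetric); a toy illustration (the numbers are made up, not GitLab's actual configuration):

```python
import math

def hpa_desired_replicas(current_replicas, current_metric, target_metric):
    """Standard Kubernetes HPA scaling formula:
    desired = ceil(currentReplicas * currentMetricValue / targetMetricValue)."""
    return math.ceil(current_replicas * current_metric / target_metric)

# Traffic shifted off the cluster: utilization drops from the 70% target
# to 10%, so the HPA gradually scales 50 pods down toward 8...
print(hpa_desired_replicas(50, 10, 70))   # 8
# ...and when traffic returns, scaling back up from 8 pods takes time too.
print(hpa_desired_replicas(8, 300, 70))   # 35
```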
B
But you know, that's going into this change request, and this is something that we can't easily test in staging. So it would just be an incremental rollout, similar to the traffic shifting we did when migrating a service into Kubernetes.
C
Awesome. And yeah, I think that's pretty safe: as long as we go steady, we've always detected things pretty quickly in the past and been able to recover. So that should be pretty similar.
C
What level of change request are you putting this in as? Is this a C2?
C
Pretty badly. I was thinking as well, following the kind of near-miss on the RSA keys, that it may warrant some extra visibility, not necessarily from outside, but it would be good for that. So when you get to the point where you need approval on that, just give me a ping and I can get that sorted out.
B
Because the majority of our CPU load is from OpenSSH having to create a new process for every connection, and that gets completely eliminated with this daemon.
C
Interesting, yeah, okay, nice. Oh, that'd be a super good thing to track then; that's the sort of metric that's often quite invisible. So if we do see a drop there, let's shout about it. That's great.
B
Yeah, that'll be a fun metric to point at: "hey, look at this efficiency we got out of this." Yeah.
C
Awesome, okay, and we'll probably see knock-on effects from that as well. So awesome, that's great, nice work. I'm trying to stay pretty hands-off on this one, aside from when I get direct pings from Sean, but just give me a shout if you need any help, particularly around coordinating on time zones, and if that means we need to shift any project or anything like that to make space, just let me know.
C
Awesome. Oh, one thing I haven't written on the agenda but was going to mention: tomorrow morning is the APAC/EMEA demo. I'd love for you to join if you're available and working, but if you're busy, that's also totally fine. I was chatting with Graeme this morning, and one thing he's been thinking about is cluster recreation.
C
So I suggested to him that, rather than us talking about it in r11, we do it in the APAC demo, even if it's just the two of us. Graham has just been thinking kind of theoretically about what might be involved, which I think could be a great recording for us to have as we start looking into this stuff. So if you have any comments or questions, feel free to leave them and we'll try to answer them. There are kind of two things, I guess, about cluster recreation.
C
One is that it would just generally be a great idea if we knew our process for doing it and had an automated way. But then, on a more practical note, to expand our IP range we're going to need to be able to do this. I have also got an issue I just opened; I'll link it after this.
C
I've asked Jacob to try and figure out at what point we're going to run out of IP addresses based on scaling, so that we can hopefully avoid having to scramble on this stuff and instead get it prioritized well ahead of when we're going to need it.
B
Insert that into, what's that thing that Andrew created that alerts us about saturation prediction?
C
Awesome. So yeah, I will get the recording up tomorrow on this stuff, but if you have any ideas you'd like to hear Graham talk about, or you want to put something in there or ask questions, feel free to stick them in the agenda and we'll go through as much as we can. The idea, of course, is that what comes out of it will be an epic and a proposal.
C
We can figure it out, but it's often a good way to just get a lot of information in one go.
C
I know there's a few bits that they won't be so familiar with, or may want to update us on, so I'll mention that if they want to drop in on demos, they can. But I wonder as well, as we're trying to figure out this year how we can make this stuff less manual: I know breaking changes are hard, but our process at the moment seems to rely on Graham opening an issue, and I was kind of panicking a bit and trying to get it prioritized, which is probably not the best way.
B
Is
there
any
harm
in
going
ahead
and
bringing
the
I
know
this
meaning
is
kind
of
geared
towards
migrating
things
into
kubernetes,
but
could
we
use
the
same
time
to
have
those
discussions
with
that
squad?
I.
C
I think so, yeah, I think so. I'll certainly invite them, because I think there's some stuff that we will definitely want to pair on; we'll hold the knowledge, basically, so it'd be great to pass that over. We can do that as a pair, so yeah, I'll get them invited over.