From YouTube: 2022-01-12 GitLab.com k8s migration EMEA/AMER
B: Let's go. So this is the Redis 6.2 upgrade testing in production. We're currently running 6.0, and we want to upgrade to 6.2 as part of the Kubernetes migration, because the chart only supports 6.2, and, well, we also want to upgrade at some point anyway, so this kind of makes sense as the next step. Since this is not a bug-fix upgrade but a minor point upgrade, that does mean we need to be a bit more diligent about testing, and that includes upgrade and downgrade compatibility testing.
B: So this is the redis upgrade harness project. It uses Docker Compose and a lot of Bash to test different scenarios, and those scenarios then run through CI, so we can take a look at one of them. There's a whole bunch of setup where it's setting up several containers, and then we're actually starting the scenario.
B: This particular scenario is the Redis upgrade/downgrade one, so we start out with three Redis instances on version 6.0, and you can kind of, well, I guess you can see it here: yeah, so starting 6.0 with three instances, right, and in this case we're actually starting the Sentinels already on 6.2.
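The starting state described here (three Redis 6.0 instances, Sentinels already on 6.2) could be sketched in a Docker Compose file like the one below. This is a hypothetical minimal sketch, not the harness's actual setup: the service names, the shared data volume, and the Sentinel config path are all assumptions.

```yaml
services:
  redis-01:
    image: redis:6.0
    command: redis-server --port 6379
    volumes:
      - redis-01-data:/data   # data dir later shared with the 6.2 replacement
  redis-02:
    image: redis:6.0
    command: redis-server --port 6379 --replicaof redis-01 6379
  redis-03:
    image: redis:6.0
    command: redis-server --port 6379 --replicaof redis-01 6379
  sentinel-01:
    image: redis:6.2          # Sentinels upgraded ahead of the Redis instances
    command: redis-sentinel /etc/redis/sentinel.conf
volumes:
  redis-01-data:
```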
B: So there are a few different scenarios here, but the idea being: if we were to upgrade the Sentinels ahead of time and then upgrade the Redis instances, that's kind of what that would look like.
B: I don't remember why we have this extra wait-for-connected. No, we do: so it steps down, it becomes a replica, we then stop that container, we start the 6.2 container (it shares the data directory with this one), and so then we wait for it to come up and connect as a replica. And so we've got 01, 02, 03.
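The wait step described here, polling until the replacement instance reports itself as a connected replica, could look roughly like this. The helper names and the timeout are assumptions; only the `redis-cli ... info replication` command and the `role:` / `master_link_status:` fields are standard Redis.

```shell
#!/bin/sh
# Succeeds when the given INFO replication output describes an instance
# that is a replica and whose link to its primary is up.
is_connected_replica() {
  printf '%s\n' "$1" | grep -q '^role:slave' &&
    printf '%s\n' "$1" | grep -q '^master_link_status:up'
}

# Poll a local instance until it connects as a replica, or give up.
wait_for_connected() {
  port=$1
  tries=60
  while [ "$tries" -gt 0 ]; do
    if is_connected_replica "$(redis-cli -p "$port" info replication)"; then
      return 0
    fi
    tries=$((tries - 1))
    sleep 1
  done
  echo "timed out waiting for replica on port $port" >&2
  return 1
}
```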
B: Since we had this background process that was continuously writing to a list, we go back and check that list (that's what this does), and we see we lost only 50 of our writes, which is expected, because we do lose some writes during failovers. And since all of these scenarios are back to back, the overall time frame is relatively short, and so, as a percentage of the couple of seconds it took to run through the scenario, you know, about 50% of the time we were doing failovers anyway.
B: So that's why that number is going to be relatively high on these short scenarios. And yeah, I mean, that's basically how it works. All of these passed without any changes needed from my side, and they do model the upgrade and downgrade paths pretty well already. But yeah, I will look into that case that you mentioned, Skarbek, and make sure that we have the tandem upgrade covered as well, if it's not there yet.
B: Yep, and then the last point that I had was: here's the MR to bump the version in Omnibus, which was fairly straightforward to do as well. It was actually deleting more code than adding: we've got an old backported patch that we were applying that we can now delete, so that's always nice.
B: Yeah, yeah, I mean, I did see there's this trigger-package job and then there's a GitLab Docker job, so it does push something here, this one right here, onto your GitLab mirror, so I might be able to just run this directly. I'll play with it.
B
Yeah,
that's
that's
all
I
had
for
the
demo
cool.
A: Cool. Before we move on, given that we don't have any discussion items: does anyone have any questions?
C: I do have one. Actually, I have two questions. So when you showed the Omnibus merge request, you were mentioning that we are removing some patches, right? So we were patching stuff. So does this mean that the tests that you showed us in CI are running a version of Redis which is not exactly the same as the one we are running in production, because we patched our own production environment with a non-standard package?
B: Those patches, I mean, you know, they could have an effect, of course, but they seem fairly unrelated to the specific stuff that we're looking to do during the upgrade specifically. Yeah, and also, the one that we now eliminated leaves us with only one patch left, and that patch is only, well, in quotes, a build-flag addition.
C: Thank you, yeah. I had another question. Yes, yeah, right, so my question here is about how we deploy this thing, right? Because, if I'm not mistaken, even though we are talking about the Omnibus build, this is not part of a regular deployment process, because there's a special fleet that runs those packages. So is this going to happen with a Chef upgrade, or how are we going to roll out this type of change?
B: Yes, so we pin the Omnibus version for these hosts, or for these roles, in Chef, and so we've got a way, way, way outdated Omnibus running on those, which is kind of problematic for some other reasons as well. But the general process (and we have extensive change issues that we've used in the past for this) is basically to stop Chef on all of those boxes.
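Pinning a package version per role in Chef, as described here, might look roughly like the following role fragment. This is a hypothetical sketch: the role name, attribute path, and version string are all illustrative, not GitLab's actual chef-repo contents.

```json
{
  "name": "redis-cluster-node",
  "default_attributes": {
    "omnibus-gitlab": {
      "package": {
        "version": "13.12.15-ee.0"
      }
    }
  }
}
```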
B: Then do...
C: Okay, so this means that this may be the last time we are doing something like this by hand, because the goal is to move to Kubernetes. Fingers crossed, yeah, yeah, obviously, right. But the thing is that, even though... well, using outdated packages will no longer be a problem, because we will not run the GitLab package in Kubernetes; we will just run the correct...
A: One of the questions I had is: I did see that Omnibus MR; what's holding us back from removing that one build flag?
B: Actually, I think... this is pretty vague, but I think it's related to some of the platforms that we build on. So it's like, to support some Raspberry Pi something.
B
At
this
point,
a
backlog
of
quite
a
few
sort
of
investigation,
type
issues
where
it's
like.
We
saw
some
weird
behavior,
so
I
guess
I'm
gonna
try
and
split
my
time
between
that
and
supporting
everyone
else
working
on
the
project,
so
skype,
let's,
let's
collab.
A: I see something that looks strange and I start documenting and researching that, instead of accomplishing the goal that I set out to do for that period of time. So I've kind of reorganized the issue that was kind of a dumping ground for my notes at that point, and I created some new issues that Igor just mentioned. Igor also has a setup, or a repository, that could be used for some extensive testing.
A: So I'm going to start using that to my advantage and perform some better testing, and create kind of a procedural task of: what is it going to look like to take an existing cluster, merge the Kubernetes cluster into it, and vice versa, remove the old VMs, and such, so that we just have a repeatable test process that I can spin up and spin down as necessary.
A: That way I'm not trying to deal with the various current implementations, where we have lengthy passwords and I can't find the paths for various things; it just takes me a long time to test. So I'm trying to make it easier to test. That's my current goal right now, along with building the procedure associated with what we need to accomplish for transitioning.
A: So I'm still hedging my bets on whether Bitnami is going to accept my pull request, where we are leveraging the use of external-dns for setting our pod names and such. This does have the requirement that we are running 6.2 ahead of time, and it does require that we probably shift from using the announce IP address to announcing the hostnames of the virtual machines.
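The 6.2 dependency mentioned here lines up with the hostname support Redis added in that release: Sentinel only learned to resolve and announce hostnames in 6.2. A minimal sketch of the relevant settings (the hostnames are illustrative):

```conf
# redis.conf on each VM: announce a resolvable name instead of an IP
replica-announce-ip redis-01.example.internal

# sentinel.conf: hostname resolution and announcement, both new in 6.2
sentinel resolve-hostnames yes
sentinel announce-hostnames yes
```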
A: Okay, Ahmad, I did have a question for you, if you're paying attention. Okay, so, question: what's the status of gitlab-sshd at this moment in time, where we have...
D: So, after we merged the fix for the chart, the chart one, we are now waiting for the CR, the change request, to deploy it to staging and then pass it to QA to test, or the team... actually, I think Source Code came to test it, or... yeah, I don't remember the team name, but we were passed to them also, to test with the backend developers. So basically we're just waiting for the CR to be ready.
A: Perfect, and I think you pinged me on that, so I'll review that later today, and hopefully we can... let's see: do you think we'll be able to execute that sometime this week?
D: Yes, I mean, if you review the CR and deem it good, we can execute it. I can execute it tomorrow.
A: Perfect, okay, I'll take a look. Alright, there's nothing else on the agenda. Does anyone have any questions or comments before we end?