From YouTube: 2022-01-05 GitLab.com k8s migration EMEA/AMER
C: I guess we could get started. Am I still the only thing on the... yeah? I'm the only thing on the agenda today, so I wanted to chat a little bit about Redis. I've got a cluster stood up both in virtual machine land as well as inside of Kubernetes.
C: That at least gets me what I want, without Tanka being in the way, which...
C: If you all disagree, I would love to chat more, but I wanted to share my screen and showcase what I've got going on. Is the font size suitable for everyone?
C: I'll make it a little bigger. Okay, so over here in this panel I've got the FQDNs for our virtual machines, and I just did a kubectl get pods with the name and the IP address. So we have our cluster inside of Kubernetes, and we've got our virtual machines, and the IP addresses are listed as such in this panel. I'm connected to...
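The exact kubectl invocation shown on screen isn't captured in the transcript; a plausible sketch of a name-plus-IP listing (the namespace is an assumption) is:

```shell
# List the redis pods with just their names and pod IPs.
# The namespace "redis" is illustrative, not confirmed in the transcript.
kubectl get pods --namespace redis \
  -o custom-columns=NAME:.metadata.name,IP:.status.podIP
```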
B: One question, sorry for interrupting: this Kubernetes cluster, where is that running? Is that running in pre-prod?
C: Precisely, precisely. Yeah, cool, thanks. It shouldn't matter because, you know, it's pre-prod; we do run stuff in pre-prod, but I thought, hey, you know, let's test. So I am connected to node 3, which, for some reason, thinks it's primary, which I don't think is correct.
C: I think this kind of confirms that Redis and Sentinel are able to talk to each other. And this is where, Igor, I kind of need your guidance on further validation, to make sure all of this is working as desired, because I don't know how to validate whether or not a secondary node is up to speed with the primary node, and I also don't know how to query anything over here in this window. I'm connected to the Redis...
C: This is the Sentinel... rather, a pod. I connected to Redis node one, the one that was supposed to be noted as the primary, and if we query the replicas we get one, two, three, four, five, and six. So we have all six sentinels available, and you can see the IP addresses are noted here. So 16.251 is our pod.
C: 232 4102 is going to be one of our nodes, so that's node two, and then we've got... see, this is the strange thing here: sometimes we see the FQDN and sometimes we see the IP address. But here is pod zero, and here's the IP address for node three.
C: So after I had both of these clusters running, running independent of each other, I basically logged into a virtual machine and said: hey, be a replica of Redis node 0, using that FQDN.
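The exact command isn't in the transcript; on a recent Redis this would be REPLICAOF (older releases call it SLAVEOF). A sketch, with the FQDN as an assumed placeholder:

```shell
# From one of the VM nodes, point this Redis instance at the in-cluster primary.
# The hostname below is an assumed example published via external-dns.
redis-cli REPLICAOF redis-node-0.example.internal 6379

# Confirm the replication link came up.
redis-cli INFO replication | grep -E 'role|master_link_status'
```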
C: For me to investigate? Yeah, yeah. But in the meantime, I thought maybe, Igor, if you could help me: what's a good way to test that things are working? I don't even know; I'm not familiar enough with Redis to be able to say, oh, let's set a key and make sure we can read it from a secondary on some other virtual machine or some other pod, for example.
C: So I guess over here, let me connect to just Redis.
B: Reasonable. And then the replication offset, we can probably compare that with what the primary is saying. And... are there any clients talking to any of these Redises at all? No?
B: It's that number that starts with two five seven. On the top left panel, first, right under master, we've got kind of what the master thinks its replication offset is, yeah, and then for each of the replicas we've got how up to date that replica is. And one of those looks like it's kind of behind, right? Yeah.
B: That is node zero's replication offset... I don't know.
C: So let's do an example failover. It might take over a minute, because I didn't change that one configuration item for my test cluster, yeah.
B: It will wait for up to a minute by default, or at least 10 seconds once we modify it. But if we say SENTINEL failover, then it'll go ahead and do the failover.
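For reference, a manual failover through Sentinel looks roughly like this; the master name "mymaster" and the default Sentinel port 26379 are assumptions (check the sentinel monitor line in your config for the actual name):

```shell
# Ask any one sentinel to force a failover for the named master.
redis-cli -p 26379 SENTINEL failover mymaster

# Poll until the advertised primary address changes.
redis-cli -p 26379 SENTINEL get-master-addr-by-name mymaster
```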
C: Okay, what's another good, quick test that we could do for demo time?
B
Get
try
getting
a
non-existing
key
on
a
replica
then
setting
that
key
on
the
primary
and
then
getting
it
on
the
replica
okay.
So
how
do
we
get
a
key
get
space
key
name
so
get
foo.
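That round trip, spelled out as commands (the hostnames are assumed placeholders, not the actual node names):

```shell
# On a replica: the key should not exist yet.
redis-cli -h replica.example.internal GET foo

# On the primary: write the key.
redis-cli -h primary.example.internal SET foo bar

# Back on the replica: the write should have propagated.
redis-cli -h replica.example.internal GET foo
```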
B: Yeah, I was under the impression that it's actual mutations. And I do know that the sentinels use Redis for coordination as well, so this could actually be the sentinels that are, you know, having a chat, and that that is what's driving this number up. But I don't know that for a fact, yeah. But I think this is good enough for a first demo. This is really cool.
C: I guess I should go into a little bit of the detail as to how I set this up, because I haven't.
C: Maybe I'll just talk about it. If we have questions about what changes I made to the Helm chart, we can dive into my pull request. I'm currently leveraging the external-dns project to accomplish this goal.
C: We use this already for Prometheus and the Alertmanagers, and this is just me tagging on to the Bitnami Helm chart to add certain capabilities to make external-dns more friendly and more configurable. Because right now we suffer a problem where, if we have no Helm chart changes at all, we get the appropriate DNS entries, but what we don't get from Redis itself is the appropriate advertisement for which IP address or which FQDN we want to connect to.
C: The Sentinel service name? No... excuse me, it's either going to be the headless service or it's going to be the pod's FQDN that's internal to the cluster. Either of those options is not beneficial to us, because no node outside of the cluster is going to understand how to connect to either of those fully qualified domain names at all.
C: Like, there's no way for us to say: hey, Redis node one, the physical virtual machine, in order to resolve this internal-only DNS name, go talk to this Kubernetes cluster in some way, shape, or form. I don't know how we would accomplish that today. And that's gonna be even worse when it comes to clients having to do the same thing, because they're going to live in different clusters and they're going to have different clusters to talk to, and I don't know how to resolve that problem. So external-dns seemed like a suitable solution.
C: So to configure this, you need a few things. One is just a simple annotation. Well, first, you need the external-dns project usable and configured, and you know it works; for us, we already use it, so, perfect, we already got that checked off. Then we need an annotation added to the headless service of the Redis Helm chart, which we could do out of the box.
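The annotation itself isn't shown in the transcript; external-dns discovers services through its hostname annotation, so a sketch of what that might look like (service name, namespace, and domain are all assumptions) is:

```shell
# Tell external-dns to publish a DNS record for the redis headless service.
# Service name, namespace, and domain below are illustrative placeholders.
kubectl annotate service redis-headless --namespace redis \
  "external-dns.alpha.kubernetes.io/hostname=redis.pre.example.internal"
```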
C
There's
a
set
of
bash
scripts
that
are
inside
this
helm
chart
that
are
stored
as
configuration
maps
that
get
caught
upon
when
reddit
starts.
These
are
responsible
for
setting
up
the
announce
ip
addresses
or
announce
fqdns
and
there's
a
little
bit
of
logic
as
to
how
that
pass
script
determines
which
ip
address
or
which
fqdn
to
advertise.
C
So
my
pull
request
adds
on
to
that
logic
to
leverage
the
external
dns
entries
that
we've
created,
so
the
pull
request
itself
is
not
very
large.
It's
just
whether
or
not
bitnami
is
willing
to
accept
this
pull
request
and
whether
or
not
this
is
going
to
be
a
usable
solution,
I
kind
of
think
it
is,
but
I
would
love
further
input
from
other
people
that
are
sitting
here
on
this
call
to
look
at
that
pull
request.
C: I guess I'll link it into the agenda for other people to look at; I'll put it under the discussion item.
C: So, logically, it's not a very large pull request. It's mainly just making things a little bit easier.
C
So
that's
kind
of
the
next
steps
that
I'll
be
looking
into.
So
the
project
is
move.
From
my
perspective,
it's
moving
a
little
sluggish
just
because
I'm
not
very
familiar
with
redis
and
all
the
various
things-
and
you
know
connecting
to
redis
is
also
not
the
easiest
thing
in
the
world
because
of
authentication.
So
I
may
disable
authentication
for
testing
temporarily,
but
we'll
see
how
it
goes
so.
Yeah,
that's
where
I
stay
in
so
questions.
B: Just wanna say again: this looks really great, and looks like a very promising step for our migration, which is one of the pieces that I have been worrying about quite a bit in the back of my head. I still am, yeah, but it's looking good so far.
C: I think, logistically, no matter what our end solution is: if we're able to pull off a clean migration, where we have both clusters, or, excuse me, both infrastructures, running together with Redis, and we transition to Kubernetes and turn off the virtual machines, it's probably blog-post worthy. This is an interesting problem to solve, probably.
C: Let's get it done first, obviously. But one of the other solutions I had spoken about in our last demo meeting was the use of Consul. And just to kind of catch everyone up to speed with what testing I did over what everyone else took as a break, but I was still working:
C
I
was
trying
to
get
console
to
work
and
I
couldn't
figure
out
why,
but
I
could
get
it
to
work
inside
of
mini
cube,
but
as
soon
as
I
try
to
take
the
exact
same
implementation
inside
of
our
pre-cluster,
I
could
not
get
it
to
work
like
console
would
just
not
do
its
job,
and
I
could
not
understand
why
phillip
got
back
to
me
and
said:
hey,
there's
another
option
inside
a
console
that
we
could
leverage
that
it
just
operates
differently.
C: Yeah, yeah. I think it's worth looking into, just in case this external-dns approach kind of falls apart. So I think, at this point, what I would like to do is continue my testing with this, you know, two-clusters-linked-together kind of situation. I also need to learn Redis, just to figure out what kind of failure scenarios we're going to run into. So I think I might have a bunch of questions for Igor and Sean over the course of time.
C
As
I
look
into
this
further
and
I'm
going
to
keep
an
eye
on
my
pull
requests
that
I've
got
opened.
If
you
all
have
feedback,
please
try
to
provide
it.
That
way,
I
could
try
to
get
ahead
of
the
maintainers,
reviewing
it
as
well,
because
I
only
targeted
my
pull
request,
only
targets.
What
I
needed
specifically,
you
know-
and
I
may
need
to
do
some
work
on
stuff-
that's
unrelated
to
sentinel,
for
example.
D: And it could be worth asking Jason if he has any feedback on that pull request as well.
D: Can I ask a question about using this Bitnami chart? You mentioned that it looks like it's only going to work if it's packaged. Is that correct? Do I understand that right?
C: I don't know how to do that. Helm allows us to do this, because in Helm you could specify: hey, just use this directory where I've got the Helm chart configured and set up, and it'll do its job like it's supposed to. I don't know if Tanka has similar functionality. So if the pull request does get accepted, we would have to wait for Bitnami to perform a release.
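For reference, pointing Helm at a local chart checkout instead of the published package is just a path argument (the release name, path, and namespace below are assumptions):

```shell
# Install or upgrade from a local working copy of the chart rather than the
# packaged release, so unreleased changes can be tested.
helm upgrade --install redis ./charts/redis \
  --namespace redis --create-namespace
```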
C: But if we could figure out a way to customize that in a way that works for us, then we could wait for upstream to accept our pull request as necessary, and then revert back to using the native Helm chart when they perform a release. The problem I ran into with that is I couldn't figure out a way to get Tanka to make the changes necessary, and this could just be my very limited knowledge of Tanka and Jsonnet.
B: What are your thoughts on automating a bit more of this setup? Maybe repurposing, actually, some of the sandbox code to spin up, in a dedicated GCP project, a set of VMs, a GKE cluster, and a Redis in that cluster, so then we can sort of have a repeatable test case for the setup. And I think it would also pay off in being able to tweak something, rerun the whole thing, and see, like, you know, iterating towards getting FQDNs to show up and not IP addresses.
B
There
may
be
a
lot
of
manual
steps
to
repeat
if
you
want
to
like
manually
rejoin
the
cluster
every
time.
So
what
are
your
thoughts
on
investing
in
automation?
For
that
now,.
C: I agree. I think my holdup is just that I'm also learning Redis as a whole, so I kind of want to focus my efforts on learning that first. But, Igor, if that interests you, I would totally encourage it.
B: I think there's already quite a bit of stuff there in that sandbox repo. We already have some Redis VMs managed by Terraform, we have GKE, we have some basic Helming going on, so I think we're already kind of 80% of the way there.
C
Can
you
think
in
the
yes.
C
The
slack
channel
that
I
should
check
out,
because
that
would
be
beneficial
deep
into
the
future
as
well.
B: Observability; deep-diving into some of the strange behaviors that we saw and really understanding some of those things better; testing, a lot of testing.
C: I certainly have not concentrated any effort on doing that, but I know that we're probably going to run into... there's probably some configuration that we set on virtual machines that we can't carry over to our Kubernetes clusters, because of, maybe, a limitation of the Helm chart. So, at...
B: ...has already been kind of done, so we're, I think, already pretty close.
A: Because we often tend to prepare things by learning how to do things, then we implement them, and then we write the readiness review, right? And maybe, if we've learned enough to understand what we need to do, maybe we already start in parallel with the readiness review, bringing in the points that we learned. And especially, as Evo mentioned, things like testing restores from a backup and things like that would be good to have already noted somewhere.
A: I agree. I mean, we have a structure like this already for, I don't know... so this: we just have the topics that we want to mention and review, right, and the idea is to just link them from there to the results we have noted somewhere else. But this gives us some kind of structure: have we tested for durability, security, performance, and all of these concerns, right? And starting with that from the beginning maybe brings us to things that we forgot to think about.
B: I think so. And I guess a lot of the roadmap that has been laid out so far covers a lot of the stuff that will come up in the readiness review, but I think it does make sense to kind of go through the readiness review template, see what is missing, what have we not yet addressed or not yet planned to address, and then just generate a whole bunch of issues out of that. I can take that as a task.
D: Yeah, okay. I just want to add a comment, though: Skarbek, you mentioned that this has been a bit sluggish for the last few weeks. I just want to say, like: holidays, and this is fine, right? Like, I think, as long as we're making progress and we feel like we are focusing on the most important stuff, figuring these things out is super valuable, and, you know, we can't go faster than we can go. So as long as we stay focused on what matters, then it takes as much time as it takes, right?
D: I'm definitely at the stage where I want to revise our OKR, because we currently have on there to do rate limiting and one of the other, I can't remember which one, one of the other Redis clusters. We're definitely not going to do two, right? But that's okay, because what we've learned in this time, about how to do the migration, how to set up the chart, how to do deployment, is going to help us in all the future ones. So it's not a problem.
D: Awesome. Thank you so much for the demo and discussion. It was a super great way for me to start my year; I feel like I'm at least up to speed on this one project. So that's great. Thank you very much, and I hope you have a good rest of your day.