From YouTube: 2021-12-22 GitLab.com k8s migration EMEA/AMER
C
I could try. I feel like things changed overnight and things have either broken or new things have popped in, so I'm just kind of scatterbrained at the moment. But I figured I would try to give an overview of what we are currently chasing and the options that we've started to explore. But, you know, we haven't finished exploring, or we haven't finished walking through all the use cases here.
C
So I've got two: this is a minikube environment and I've got two Redis clusters running right now, just for the purposes of testing. The Redis cluster that's prefixed with "a" currently has Consul tied to it, so I'll get to that in a second. The Redis cluster that's prefixed with the letter "b" is the one that's just plain Redis. Our goal here is to create a Service for each pod that gets created; you'll notice in the pane.
C
You'll notice these contain every single Redis node. However, our application does not support the ability to speak to every single Redis node to make the necessary write or read requests appropriately. We kind of have to talk to just the primary, but because these two services expose all of them, that's going to create a problem: if we initiate a write request and it by chance does not land on the primary Redis node, that write will get denied, and that will be problematic.
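A per-pod Service of the kind being described might be sketched roughly like this (the name, labels, and ports here are illustrative, not the actual chart's):

```yaml
# Hypothetical sketch: one Service per StatefulSet pod, selecting a single
# pod via the statefulset.kubernetes.io/pod-name label that Kubernetes
# stamps on StatefulSet pods. Names and ports are made up for illustration.
apiVersion: v1
kind: Service
metadata:
  name: b-redis-node-0
spec:
  selector:
    statefulset.kubernetes.io/pod-name: b-redis-node-0
  ports:
    - name: redis
      port: 6379
      targetPort: 6379
    - name: sentinel
      port: 26379
      targetPort: 26379
```

On the write problem above: with `replica-read-only` at its default, a write that lands on a replica rather than the primary is rejected by Redis with a `-READONLY` error, which is the denial being described.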
C
I just got the services created not terribly long ago, but you'll notice they point to specific pod IPs. So, for example, redis-node-0 has an IP ending in 41 for some reason, and you'll notice this one has an IP ending in 41, so I know that the service is directly connected to a pod, and these are IP addresses that we feed into our Helm chart.
C
10.10? That's a Consul one! No, it's 236! Okay, so I don't know where it's grabbing that IP address. So we're doing good so far. I'm still trying to figure this out, but we do have the ability to connect through one of these services, which is good, and which is what we are shooting for at this moment in time. So for that part we're A-okay. There are still more things we need to figure out and work through, but at least we can connect to it.
C
Oh yeah, well, this is minikube, so I don't really have a quick and easy way, outside of, say, port-forwarding, to connect inside this cluster. That's why I was just using this random pod, just to prove that I could at least hit the external-facing IP address, make that connection, and prove that I could query Redis in that particular case.
C
Obviously, when I get to a point where I can test this out, say inside of pre, for example, we would get to that point. But these upstream Helm chart changes only exist on a branch somewhere, so we don't have this deployed inside of any environment for me to test quickly yet.
C
Here we've got a service.
C
Which is expected, precisely what we want, and there are two changes to make that happen. One, we had to modify a Redis configuration inside the ConfigMap so that that IP address is what gets exposed, and not, say, the pod IP from the minikube realm; stuff like that is going to change often, any time a pod gets rotated. Whereas if we control the IP addresses that we assign the service, and we control what IP addresses get added to our own configurations, we will always be connected to a pod: even if the underlying pod gets rotated, the IP address doesn't change. Therefore, when a pod gets rotated, we don't need to make further changes to the Rails application.
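A rough sketch of those two changes, with made-up names and IPs (the real chart and addresses differ):

```yaml
# Change 1 (hypothetical values): pin the Service to a fixed clusterIP so the
# address survives pod rotation; this is the stable IP handed to the Rails app.
apiVersion: v1
kind: Service
metadata:
  name: b-redis-node-0
spec:
  clusterIP: 10.96.100.10   # chosen by us, stable across pod restarts
  selector:
    statefulset.kubernetes.io/pod-name: b-redis-node-0
  ports:
    - port: 6379
---
# Change 2 (hypothetical): in the Redis ConfigMap, announce the Service IP
# rather than the pod IP, so replicas advertise the stable address.
apiVersion: v1
kind: ConfigMap
metadata:
  name: b-redis-conf
data:
  redis.conf: |
    replica-announce-ip 10.96.100.10
    replica-announce-port 6379
```

`replica-announce-ip`/`replica-announce-port` are the stock Redis directives for overriding the address a replica advertises; whether the chart uses exactly this mechanism is an assumption here.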
D
Can you please go back to the endpoints? You had a window with the list of endpoints. Before, there was a cute dog that appeared for a moment, and you wrote something... endpoints. Yeah, this one. Oh.
D
So yeah, there was something here. Yeah, when you were mentioning that headless one, that one is interesting to me. Oh, but this one is exposing internal addresses. Because, if I remember how Sentinel behaves, you basically connect to any node; it's part of the protocol itself. So when you connect to any node, it should tell you, "No, you should redirect this connection to this one, which is the current master." But...
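That handshake can be sketched in raw protocol terms. The following is a minimal, illustrative client that asks any Sentinel for the current master over RESP; the master name `mymaster` and the single fixed-size read are assumptions for the sketch, and a real client (for example redis-py's Sentinel support) handles errors, partial reads, and failover properly:

```python
import socket

def encode_resp(*args):
    """Encode a command as a RESP array of bulk strings (what redis-cli sends)."""
    out = f"*{len(args)}\r\n".encode()
    for a in args:
        b = a.encode()
        out += b"$%d\r\n%s\r\n" % (len(b), b)
    return out

def parse_master_addr(reply: bytes):
    """Parse Sentinel's reply to SENTINEL get-master-addr-by-name:
    a RESP array of two bulk strings, [ip, port]."""
    lines = reply.split(b"\r\n")
    assert lines[0] == b"*2", "unexpected reply shape"
    return lines[2].decode(), int(lines[4].decode())

def ask_sentinel(host, port, master_name="mymaster", timeout=2.0):
    """Connect to any Sentinel node and ask who the current primary is.
    Naive single recv(); fine for a sketch, not for production."""
    with socket.create_connection((host, port), timeout=timeout) as s:
        s.sendall(encode_resp("SENTINEL", "get-master-addr-by-name", master_name))
        return parse_master_addr(s.recv(4096))
```

The application would then open a second connection to the `(ip, port)` returned, which is exactly the "connect to any node, get redirected to the master" behaviour described above.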
C
Yeah, so again, this is a work in progress. Both Sean and I are making changes to the Helm chart, so this is still subject to change, but for now I'm accepting this style of change. I don't know if this is the right way to go about it, but it seems like the most logical, because we are still in the majority of control, where we don't have service discovery running rampant everywhere. Which leads to my next demo: Consul.
C
I've got Consul running inside this cluster. This differs significantly from how we run Consul today, just by the fact that we currently run Consul inside of virtual machines. The only Consul piece we've got running out of our clusters is for the purposes of DNS resolution.
C
I enabled Consul just to, you know, get something running for local testing, and enabled what they call the connect injector. What this enables you to do is create a special annotation inside of your Deployment that says, "Dear Consul, do something to this Deployment for me." You'll notice that the b Redis nodes only have two pods running, whereas the a Redis nodes have three pods running.
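The annotation in question is Consul's connect-inject one; a minimal sketch (the Deployment name, labels, and image are illustrative, not the actual chart's):

```yaml
# Hypothetical Deployment snippet: the consul.hashicorp.com/connect-inject
# annotation asks Consul's connect injector (a mutating admission webhook)
# to add a sidecar proxy (Envoy) to each pod it admits.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: a-redis
spec:
  replicas: 1
  selector:
    matchLabels:
      app: a-redis
  template:
    metadata:
      labels:
        app: a-redis
      annotations:
        consul.hashicorp.com/connect-inject: "true"
    spec:
      containers:
        - name: redis
          image: redis:6
```

The extra sidecar the injector adds is presumably what accounts for the difference between the a and b Redis pods observed above.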
C
So this is something new that we need to learn, but if we can figure this out, what we could do with it is create the necessary configuration where... here, I'll actually share my other screen, where I've got Consul showing certain things. Share... services. So you'll notice that there's a service called a-redis, and it contains each and every single one of our pods. I don't know why it names them in this goofy way, but theoretically we would have the ability to query Redis via Consul, and have our application connect doing DNS resolution through Consul.
C
There's still more for me to learn here. So this is what I actually wanted to showcase in this particular situation: if I go back down to my fake client and do a redis-cli on... whoops... on host, let's see, a-redis... let's just connect the way I would have guessed. I don't precisely know how this would work, but, just guessing here, something is wrong with my ability to talk to Redis, and it actually doesn't matter who I talk to or which port I talk to.
C
Look, there's an I/O error. So with the Consul stuff there's something new I need to learn, where I think Envoy is getting in the way and preventing certain traffic flows from happening, which I'm kind of surprised about, because my minikube environment doesn't have anything like network policies that would block traffic.
C
So there's that. Okay, so still learning, but theoretically the concepts of either connecting via Consul or this service thing would kind of change how we implement this entire thing. I'm still needing to learn this, but there's also another option, which I can't demo, because I know it's not working.
C
So we see Redis in here, there's the TXT record, right, and we also see... this is pre-prod, where we have two Redis deployments: redis-cache and redis-rate-limiting. So I was able to create DNS records for these, and if we select one, we'll get the IP address of... something. I don't know what that thing is just yet.
C
But
hypothetically
this
does
not
work.
Currently
I
suspect
it's
probably
a
firewall
rule,
that's
blocked
me,
but
theoretically
we
should
be
able
to
connect
to
this
connect
to
the
sentinel
port.
And
you
know
our
application
would
say
dear
sentinel
who's,
the
primary
and
it
will
return
with
an
ip
address
of
you
know
some
pod
that
we
should
be
able
to
have
the
ability
to
connect
to
and
it'll
have
all
the
necessary
information
such
that
it
can
connect
to
those
redis
nodes,
and
you
know
the
application
will
work
so.
A
Skylak, I didn't fully get what we're aiming for with Consul, because, I mean, there's service discovery built into Kubernetes, right, with, I don't know, etcd, I think. And for Redis itself, I mean, I guess they have Sentinel in there for a reason, right, to use it. So I was a little bit confused by what we're trying to achieve with Consul in this case.
C
So I looked at Consul because someone suggested it, and, you know, I have no problem with at least looking into it, because Consul is an interesting project to me; I just wish we spent more time on it. The service discovery is going to be important because we have clusters that do not exist in the same... or, rather, we have applications that do not exist in the same cluster.
C
So we've got a regional cluster and we have zonal clusters, right. I don't know where we're going to deploy Redis. I assume we're probably going to deploy it in our regional cluster, because that's where we have, you know, the most address space available for newer nodes and such, but our zonal clusters need to be able to connect to this Redis instance.
C
You can have a service proxy, yeah. One of the problems I have with Consul, despite the fact that we heavily depend on it with our Patroni and Postgres connectivity...
B
If we go down the route of needing to specify IPs... so I think that's for your first demo: how much work do we have to do in the future to handle scaling? Would it be easy to expand that range?
C
With the current proposal... or, excuse me, let me back up a little bit. From what I've learned, and someone on this call would need to correct me, and, you know, please help me, but to my knowledge so far, the way GitLab utilizes Redis is that we are unable to speak to multiple Redis nodes in a cohesive manner.
C
So
from
my
knowledge,
we
always
connect
to
the
primary
and
say
hey,
give
me
or
here's
some
data
for
you
to
have
there's
theoretically
at
this
moment
time
no
need
to
really
care
about
the
secondaries.
In
fact,
the
secondary
sit
there
for
the
sole
purpose
of
being
able
to
fail
over
to
otherwise
they
are
a
complete
waste
of
resource,
which
is
unfortunate,
given
how
large
some
of
those
redis
nodes
are.
C
So I don't think we need to worry about scale at this moment in time. I think we'll need to worry about scale when it gets to the point where our application is able to handle that situation better, because then our secondaries could be better utilized, and then we could spread out what Redis is actually doing and accomplish a little bit more with it. And I think, from what I've been taught so far, the need to add additional IP addresses should not be an immediate problem, because if we deploy this in our regional cluster, we have plenty of address space.
B
Yeah, that's fine, and I suspect they may not either. So it might be a good one where we may want to figure out what we're evaluating on. It may not be an obviously perfect solution, so we might want to figure out which downsides we definitely want to avoid, and which downsides we can live with for the short term or, you know, offset by doing something else.
C
And I think Igor will be a good person to rely on for helping us make that decision, because he worked closely on the Redis upgrades that occurred, I think it was last year, and he's also been one of our primary persons adding additional clusters as we move along. So I think he's got some pretty good knowledge of how our application leverages Redis and how Redis actually gets used.
C
Alessio, no, I do not have intentions configured. This is one...
B
I guess: what's left in your Redis investigation plan?
C
Lots of learning. I still want to... our initial plan was to try to go down the route of modifying our Helm chart, and there's still a little bit more work to make that a little more friendly as a potential pull request to the Helm chart.
C
Yeah, so GitLab Pages: the migration is done. The only thing holding this issue... or this epic... open is an issue related to profiling not working correctly, and Jacob's been helping me out with this. Do we want to just close the epic?
B
Well,
it
depends
what,
on
I
mean
it's
fine,
to
keep
the
epic
open
right
like
it
would.
It
would
be
super
to
have
it
closed
like
this
year,
because
it
makes
our
metrics
look
super
shiny
right,
but
let's
not
close
it
just
because
you
know
just
because
of
metrics
and
things,
but
I
guess
I'm
just
trying
to
find
stuff.
So
what
is
the
current
status
of
this
of
the
issue?.
C
We're
going
to
add
logging,
debug
logging
to
lab
kit,
which
will
require
lab
kit
to
be
updated
and
tagged
and
then
get
lab
pages
needs
to
be
updated.
With
that
new
version
of
lab
kit,
we
then
need
to
deploy
that
new
version
of
pages
and
then
flip
whatever
flag
that
might
be
added
for
lab
kit.
If
there
is
a
flag
that
needs
to
be
added
for
lab
kit,
so
not
a
quick
task:
okay,
not
hard.
Just
not
quick,
okay,.
B
Yeah, I mean, I kind of feel like, removing all timing considerations, this issue sits with this epic because it was one of the kind of bugs that we had following the migration. We do want to fix it. So...
B
I'd say let's leave it open and maybe see what we can do to speed up the fix.
C
I wouldn't say it's blocked; it's more like waiting for other people to do things. Okay.
C
My
primary
person
and
then
I've
been
kind
of
opening
up
an
issue,
gitlab
pages
and
whoever
plots
it
has
been
nice
enough
to
pull.
I
think
jamie,
if
I
recall,
was
the
last
person
to
help
me
out
with
the
last
release.
Okay,
I
just
opened
up
an
issue
saying
hey:
can
we
release
and
they're
like
yeah
sure.
B
Okay,
great
yeah,
let's
see
if
we
can
keep
this
moving
forwards,
because
it
would
be
super
good
to
get
this
wrapped
up
and
done
and
know
that
post
migration,
like
everything,
is
working
as
it
was
okay,
so
yeah,
let's
see
if
we
can
keep
moving
this
forward,
I
will
I'm
happy
to
help
coordinate
like
if
we
are
waiting
on
people
in
other
teams
and
like
we
wait
more
than
you
know
a
couple
of
days,
then
let
me
know,
and
I'm
happy
to
help
coordinate
people.
B
Yeah... I might be getting ahead of myself here, but without having the logging in place already, have we got any closer to working out why this works on Workhorse but not Pages?
B
Yeah,
I
think,
jacob's,
probably
the
same
okay.
Well,
yeah.
Let's
keep
it
on.
Let's
keep
it
on
the
on
the
table.
Keep
this
epic
open,
so
we've
got
kind
of
focus
and
see
if
we
can
actually.
B
Yeah, I think that's absolutely right. I'm going to put the P3 label on, because I think it's something we don't want to live with forever. People are not going to scream right now, but it would be good to get it resolved as soon as we can.
B
Cool, okay, great. Well, good luck. Is there anything... so, Jacob's got the MR open. Henry, are you able to give that a review?
B
Fair enough. All right, well, that'll be a fun one for us to look forward to in January, getting that wrapped up. Awesome. Is there any other stuff we could do, like, is there anything else we can do to help there, or any other things anyone wants to go through today?
B
Nope? Okay, awesome. In which case, I think we're done. So, well done, everyone. I'm just writing up the delivery 2021 review stuff. I haven't yet got through to digging into the project, but I'm already excited about that stage, because this year, particularly on the Kubernetes side, we've migrated some massive pieces, and we did it pretty quietly; I'm not sure a lot of people actually realized. But it's really interesting to see it in the metrics; you can really clearly see it, like, I know exactly when... in fact, it was funny.
B
I
was
like.
Oh,
this
is
really
strange
like
what
what
happened
in
april-
and
I
said-
oh
yeah
graham
joined
the
team,
so
that's
that
mttp
switch
and
I
was
like
oh
what
happened
in
in
july,
like
oh
yeah,
that
was
the
api
and
you
can
see
that
and
then
august
you
can
see
web
and
it's
huge,
so
brilliant
progress
on
migrating
on
the
kubernetes
side.
B
What
we
did
quite
seamlessly,
in
fact,
totally
seamlessly
since
pages
and
readers
are
both
running
in
parallel-
was
one
of
our
kind
of
strategy
goals
for
this
year
was
to,
by
the
end
of
by
the
end
of
q4,
to
define
plan
and
execute
on
migration
of
stateful
services.
So
I
think
when
we
wrote
that
early
in
the
year
we
were
pretty
unsure
like
how
you
know.
How
would
this
work?
What
would
we
have
to
do?
How
would
we
pick
a
service,
so
I
think
you
know.
B
I
know
we
still
have
a
lot
of
bits
to
figure
out,
but
we're
making
great
progress
on
redis
and
really
happy
that
we
get
to
pair
up
with
scalability
and
kind
of
all,
try
and
figure
this
stuff
out
together.
So
great
great
progress
to
share
lots
of
massive
stuff
migrated
and
lots
of
stuff
like
well
set
up
for
next
year
as
well.
B
So
with
that,
I
hope
you
all
have
a
fantastic
break.
Well
done!
Thank
you
for
all
the
hard
work
this
year
and
looking
forward
to
seeing
what
we
can
smash
out
next
year.