From YouTube: 2021-10-27 GitLab.com k8s migration EMEA
B: Welcome, everyone, to the Kubernetes demo. Is there anything anyone would like to demo or discuss today?
C: I don't have anything to demo; I'm struggling a little bit with HAProxy.
C: We just have this special configuration for how the PROXY protocol v2 stuff works, and our HAProxy configuration was not previously as flexible as required to let us configure the necessary ports that Kubernetes is going to use. So I've been battling that a little bit this morning. I hope to have my final merge request in place; I'm just waiting for our automation to kick in and allow me to push the final change out. Hopefully with that I'll have a working canary instance with Pages, so GitLab Pages is fully deployed inside of staging.
C: So far I've only configured HAProxy to be aware of the canary deployment, and thus far I've been trying to figure out how to get that working appropriately.
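"Aware of the canary but not yet taking traffic" can be expressed in HAProxy by registering the canary server with zero weight; a minimal sketch (backend and server names here are hypothetical, not GitLab's actual configuration):

```haproxy
frontend pages_https
    bind :443
    default_backend pages_main

backend pages_main
    # Canary server is registered and health-checked, but weight 0
    # means it receives no traffic until the weight is raised.
    server pages-vm-01   10.0.0.10:443 check
    server pages-cny-k8s 10.0.1.10:443 check weight 0
```

Raising the weight later (e.g. via the runtime API) would start shifting a share of requests to the canary.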
C
Running
as
in
it's
available,
but
it's
not
taking
traffic,
yet
that's
what
I'm
trying
to
tease
out
so
once
I
get
that
figured
out.
I
think
my
next
step
is
to
start
looking
at
the
performance
parameters,
because
I'm
still
currently
using
the
helm,
chart
defaults
for
resource
requests
and
limits
and
such
which
is
not
going
to
be
very
safe
for
production.
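Overriding chart defaults as described usually means setting explicit requests and limits in a values file; a hedged sketch against the GitLab chart layout (the key path and numbers below are placeholders, not recommendations):

```yaml
# hypothetical values.yaml override for the Pages subchart
gitlab:
  gitlab-pages:
    resources:
      requests:
        cpu: 500m
        memory: 1Gi
      limits:
        memory: 2Gi
```

Sizing these from observed usage on the existing VMs, rather than the chart defaults, is the usual path to production-safe values.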
B: Do you need... well, I guess you don't know yet, but do you need any help with the proxy?
C
Not
at
the
moment,
jarv
has
been
reviewing
my
merch
request
this
morning
and
he's
been
helping
me
out
pretty
nicely.
So
it's
just
a
matter
of
like
our
automation
just
runs
whenever
it
feels
like
it
seems
like
I.
I
thought
it
ran
as
soon
as
you
hit
the
merge
button,
but
no
like
it
waited
20
minutes
before
it
kicked
off
the
next
job.
I'm
like,
okay,
whatever.
B: One thing I was going to mention, and I had a question about: on Monday or Tuesday it was agreed that we could do the rate limiting change for Pages after we've completed the migration, which is great; we only have to do that once. But what I wanted to check in on was whether we got to a decision with Distribution about how we would actually be able to do this. Is there any work that could be happening in parallel?
C
We
can
be
making
changes
to
the
helm
chart.
I
did
not
follow
up
to
see
if
that
work
started
to
get
pulled
at
all.
So
I
don't
know
if
anyone
has
started
to
work
on
the
helm
chart
at
all.
B: Well, I didn't hear... I know you had a discussion. Was it with Jason? Let's go back in Slack. Did all of that stuff go into this issue?
B: So would you mind just checking on that? Because I'd quite like to get the charts changes moving.
E: We are using a clustered installation of Redis, right? So it's not... it's HA at the moment, so it's...
C: I haven't looked into what it's going to be like to migrate from Redis on virtual machines to Redis inside of Kubernetes, but we're going to have to solve a few problems.
C
One
is
pointing
all
of
our
virtual
machines
and
all
of
kubernetes
to
the
new
redis
cluster
and
then
also
syncing
those
two
together
for
a
period
of
time
until
you
know
the
appropriate
primary
takes
over,
neither
of
which
I
know
how
to
accomplish
at
this
moment
in
time.
So
that's
going
to
be
an
interesting
problem
to
solve.
C
The
other
thing
I'm
curious
about
is
like
maintenance
procedures.
Like
you
briefly
highlighted
this,
and
you
know
we
could
solve
this
with
stateful
sets,
but
I'm
still
curious
as
to
what
we're
going
to
do
with
node
pools
and
what
kind
of
nodes
we're
going
to
end
up
running,
because
redis
is
not
small
for
some
of
our
clusters
using
80
gigabytes
of
ram
and
for
one
or
two
of
them
that's
going
to
be
a
massive
pod
that
has
a
lot
of
data
stuck
inside
of
it.
B: Challenge is the right word, right? It's definitely an ambitious OKR, especially for Q4. I know Scalability have already done some thinking around this. One thing we do know is we have different types of Redis instance.
B: Some of them should be able to migrate as-is, some of them will require some work before they're ready, and some of them we definitely shouldn't touch, so there'll be some kind of initial discussion to try and identify which instances fall into each category. What I'm hoping is that we'll find one that's suitable for an as-is migration that we can start working on and figuring out a plan for, and alongside that, Scalability can be picking up some work on what needs to change on one of the other instances to make it suitable for migration.
B: We could also contribute, right? Both Scalability and Delivery have engineers who can contribute, so we could certainly factor that in and contribute that back ourselves if we want to go that way. Much to be decided, I think. We haven't had an objective like this before, but this objective will be a kind of encouragement to us to try and figure some of this stuff out.
B: So, it's not often we have a good idea of what the solution is before we set these objectives, but in this case I think we'll try and figure out what's needed in order to achieve the objective. That probably means the outcome is less certain, but even if we can just make some progress, have a plan, and perhaps get some of the instances migrated, that would still be a huge win.
E: And really, I mean... okay, I'm going to make a bold statement here: we should not use Redis the way we are. We should have moved to a proper queuing system like RabbitMQ a long, long time ago. Redis is great when you want to store a cache on it; if you lose it, we will survive. We may take a hit in terms of performance because we have to recompute caches and things like that, but nothing there should be persisted.
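The "safe to lose, just recompute" property described here is what read-through caching gives you; a toy Python illustration of the idea (not GitLab's actual code):

```python
class ReadThroughCache:
    """Toy cache: losing the store only costs recomputation, never data."""

    def __init__(self, compute):
        self._compute = compute  # authoritative (slow) source of truth
        self._store = {}         # stand-in for Redis

    def get(self, key):
        if key not in self._store:           # miss: recompute and fill
            self._store[key] = self._compute(key)
        return self._store[key]

    def flush(self):
        """Simulate losing the cache entirely."""
        self._store.clear()


cache = ReadThroughCache(compute=lambda k: k.upper())
assert cache.get("a") == "A"
cache.flush()                  # cache lost...
assert cache.get("a") == "A"   # ...the answer is recomputed, not gone
```

Persisted state (queues, job payloads) breaks this property, which is the crux of the speaker's argument.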
E: And then, if you have a lot of stuff and you actually want... I think the key point is that readers are independent, so you can basically produce events, point readers back to a point in time, and process the stream from a different place. Then you go with Kafka. I've never worked with Kafka, but I did with RabbitMQ, and yeah, out of the box...
E
It
doesn't
really
support
rails,
you
need
to
work
with
sneakers
and
it
completely
change
the
way
you
interact
with
asynchronous
job
processing,
but
it's
extremely
powerful,
because
basically
you
can
implement
the
processing
mechanism
in
any
other
language.
E
So
the
big
switch
is
that,
usually
you
put
things
in
a
queue
and
there
may
be
multiple
programs
and
multiple
applications
that
are
just
reading
processing
and
putting
things
into
another
queue
and
the
reason
you
may
lost
something
is
because
the
queueing
system
is
a
routing
system.
So,
if
you
misconfigure
it,
your
message
will
end
up
into
the
dead
letter
mailbox.
I
think
it's
named
basically.
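A toy model of the routing behaviour described above (this is not RabbitMQ's API, just the concept: messages whose routing key matches no binding are not lost but land in a dead-letter queue):

```python
from collections import defaultdict


class ToyExchange:
    """Minimal routing-system model: misrouted messages are not dropped,
    they land in a dead-letter queue instead."""

    def __init__(self):
        self.bindings = {}              # routing key -> queue name
        self.queues = defaultdict(list)

    def bind(self, routing_key, queue):
        self.bindings[routing_key] = queue

    def publish(self, routing_key, message):
        # Unmatched keys fall through to the dead-letter queue.
        queue = self.bindings.get(routing_key, "dead-letter")
        self.queues[queue].append(message)


ex = ToyExchange()
ex.bind("pages.deploy", "deploy-workers")
ex.publish("pages.deploy", "deploy site 42")  # routed normally
ex.publish("pages.depoly", "typo'd key")      # misconfigured -> dead letter
assert ex.queues["deploy-workers"] == ["deploy site 42"]
assert ex.queues["dead-letter"] == ["typo'd key"]
```

In RabbitMQ proper this is configured with a dead-letter exchange on the queue, but the routing-not-storage framing is the same.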
B: Awesome, okay, yeah. I think it's going to be kind of a fun one. I don't think we've had quite such an architectural design challenge for a little while, so it'll be fun to work with Scalability and figure some of this stuff out. But I think, as I said, it kind of phases off what we...
B: What we haven't really deeply analyzed is what the different needs of the Redis instances are. I know there are different types of Redis instance running, and often when we have a request for a new Redis instance, we don't cookie-cutter them; we just make it specified for what that particular problem needs. There's probably some benefit there.
C: I know we've also got some interesting configuration inconsistencies: we've got some Redis instances that have Redis and Sentinel all in one box, and we've got some where Redis and Sentinel are on two different boxes, for some reason. Yeah, some historical context; I don't know why we set stuff up that way.
C
Not
gonna
we'll
have
to
come
up
with
a
singular
method
of
a
unified
method
of
deploying
stuff
like
this
in
kubernetes.
B: Yeah, exactly, so it should be a fun challenge. One thing I do want to say about this: I think this is a really objective-worthy service, because lots of people care about it and it's super important to us, but that doesn't mean we don't get to do other things. So I'm definitely expecting us to continue picking away at the K8s workloads tooling, improving that, and even things like the Camo proxy; I'm totally fine with that.
C: I guess that's a good segue into the conversation about our Q4 OKRs. What other questions do we need to answer to finalize what we want to do for that quarter?
B: Skavik is our kitten adopter... foster, sorry; they're pretty much adopted though, right? So there's always a steady stream of new kittens, which I appreciate, at least. So, back to Q4: I think the outstanding question is whether people agree with the general concept of the Redis work, and figuring out a future approach for K8s workloads and release tools. The other outstanding one is: what do we want to do with the K8s workloads tooling?
B: Now, we could either just keep chipping away at this as we've done through this quarter, or, if you'd rather have it as an actual key result, we could put in something like "two or three meaningful workload improvements", or something like that.
C: That way, we could focus our efforts on trying to figure out what we want to do with Redis and also just finishing up the Pages work.
B: Yeah, I think that makes sense. I said to Graham yesterday that I'm really happy for us to be prioritizing these things. I think the challenge I see right now with taking our tech debt bucket and mapping it to an objective is literally that we haven't broken it down into small tasks.
B: I think the epic Graham just completed, upgrading Helm, was a nice one: it had a clear outcome and it had five or six little pieces inside it. So if we can just break those bits down, we can put those alongside, and I don't think they have to be in an objective as long as people feel happy and believe, I guess, that we will still be prioritizing them.
C
I
did
see
that
apparently
version
122
of
kubernetes
is
going
to
be
kind
of
harsh.
That's.
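For context, the "harsh" part of Kubernetes 1.22 was the removal of many long-deprecated beta APIs; manifests have to move to the GA versions. For example, Ingress (resource names below are hypothetical):

```yaml
# Removed in 1.22: extensions/v1beta1 and networking.k8s.io/v1beta1
# From 1.22 on, Ingress must use the GA API:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example
spec:
  rules:
    - host: example.gitlab.io
      http:
        paths:
          - path: /
            pathType: Prefix       # required in the v1 API
            backend:
              service:
                name: pages        # hypothetical service name
                port:
                  number: 443
```

Other removals in 1.22 included the v1beta1 versions of CustomResourceDefinition, admission webhooks, and RBAC, so auditing all deployed manifests is part of the upgrade plan.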
B: Yeah, absolutely; we're going to have to make a plan for that one and figure out what we need to do to actually get that upgrade. End of March is the sort of deadline for that, but I reckon by, say, maybe end of November we should try and make sure we know what pieces we actually need to get through to handle it.
B: Okay, so how about we don't include the K8s workload tooling improvements as a key result, but we make sure we actually have some pieces cut out and we continue to prioritize those. Awesome. And what I'm kind of expecting is, once we do have the... so, I'll start up a blueprint, and we can actually work out what the future of K8s workloads looks like and how it ties into release tools. You know, is there a point where you can just ditch the deployer, and K8s workloads steps in and just does the deploy?
B
If
it
really
needs
to
you
know
and
can
change
all
that
and
then
off
the
back
of
that
we
can
make
a
kind
of
road
map
of
like
here
is
what
we
actually
need
to
do
to
cate's
workloads
to
like
make
it
the
I
mean
it's
going
to
be
our
kind
of
like
main
deployment
piece
for
the
future
right,
so
we
can
figure
out
on
that.
E: So I was going to mention this: Pages is the last VM in the fleet section of our deployment, and we were discussing with Graham this morning that once the migration is completed, we can tear the K8s workload apart from the deployer and bring it straight into release tools, which is something that will unblock a lot of work on the single pipeline, on reordering of deployments, as well as the removal of post-deployment migrations.
E
Instead
of
having
it
in
kate's
workload,
but
that's
not
a
topic
and
then
when,
once
we
are
back
from
that
thing,
we
can
decide
if
we
want
to
move
to
the
next
environment
or
run
post-deployment
migration
or
trigger
qa
whatever
so,
depending
on
what
would
be
the
shape
of
the
feature
deployment
pipeline
but
yeah,
that's,
I
think
it's
an
important
goal.
So
when
pages
will
be
moved,
a
lot
of
things
will
be
unblocked.
E: I would like to extract... I'm going to get a bit into the details here. I would like to extract the environment locking feature, which is just a Python or shell script, I don't remember, out of the deployer and into release tools, so that I can lock an environment at the release-tools level. Okay, so I put the lock on the environment.
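A minimal sketch of what an environment lock at the release-tools level could look like; this is an assumption of mine, not the actual script being extracted. An atomic lock file per environment is the simplest shape:

```python
import os

LOCK_DIR = "/tmp/release-tools-locks"  # hypothetical location


def lock_environment(env):
    """Take an exclusive lock on an environment; atomic via O_EXCL."""
    os.makedirs(LOCK_DIR, exist_ok=True)
    path = os.path.join(LOCK_DIR, f"{env}.lock")
    try:
        fd = os.open(path, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
    except FileExistsError:
        return False       # someone else already holds the lock
    os.close(fd)
    return True


def unlock_environment(env):
    os.remove(os.path.join(LOCK_DIR, f"{env}.lock"))


assert lock_environment("gstg") is True
assert lock_environment("gstg") is False   # second attempt is refused
unlock_environment("gstg")
```

Holding the lock in release tools, rather than inside the deployer, is what lets the orchestration layer hand control to the deployer and take it back afterwards.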
E: Then I will trigger the deployer. Basically, the new way we trigger the deployer is just "prepare me this environment", which means regular migrations, Gitaly, Praefect, stop, assets, yeah, everything up until the fleet point. And then the pipeline is done, it's completed, so control goes back to release tools, which can do the Kubernetes part.
E: We can do Kubernetes at that point, and depending on where we are on those other epics that I mentioned, we may decide to trigger QA from release tools, or post-deployment migrations, or... I mean, it depends, because we are moving several parts at the same time. But the point is that we can basically trigger back to release tools and say "complete the deployment". Everything after the fleet is kind of bookkeeping stuff, like sending notifications that we already moved, so probably there's nothing there.
C: Like a Rake task of the coordinated pipeline, for example? Yeah, that's the goal. I'm curious what that's going to look like, because what I'm initially imagining is that our coordinated pipeline becomes a lot larger in scope and larger in size, and I'm just kind of curious how big it's going to become.
E
Yeah
I
mean
there
are
things
that
got
triggered
that
are
already
there
to
just
expand
on
the
side
and
basically
those
things
that
expand
on
the
side
are
going
to
be
shorter
and
the
main
thing
going
to
be
larger.
We
can
even
consider
triggering
children
pipeline
in
the
same
project.
I
mean
it's
it's
an
option
if
the
visualization
around
this
is
not
that
good,
probably
we
can
reorganize
things
around,
but
yeah
I
mean
we
have
many
things
in
flight
on
this
for
one
year
now.
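Child pipelines in the same project are triggered in GitLab CI with the `trigger:include` keyword; a minimal sketch (file paths and job names are hypothetical):

```yaml
# .gitlab-ci.yml (parent pipeline)
deploy-k8s-workloads:
  stage: deploy
  trigger:
    include: .gitlab/ci/k8s-deploy.yml   # child pipeline definition
    strategy: depend                     # parent job mirrors the child's result
```

`strategy: depend` makes the parent wait for the child, which keeps the coordinated pipeline's overall status meaningful while moving the detail off to the side.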
B: Brilliant. One thing also, Skarbek: Henry and I chatted yesterday, a kind of general overview of the team. Registry is on track for heading to production, hopefully tomorrow, so we should be through the bulk of the work for that pretty soon. The NGINX removal for API is planned in for afterwards.
B
Backup
game
has
started
work
on
the
auto
deploy
rescheduling,
which
I
expect
will
take
him
some
weeks
at
least,
but
what
we
can
do
at
some
point
next
week
is,
if
you
want
to
discover
if
you
want
to
like,
depending
on
how
you're
going
pager
it's
like
once
henry's
through
the
api
work,
removing
nginx
from
api,
I
should
say
if
you
want
to
redistribute
any
of
your
tasks,
give
you
more
release
management
time.
We
can
do
that.
C
I
think
when
it
comes
to
pages,
what
I'll
do
is
pretty
much
similar,
what
I've
done
in
the
past,
where
once
we
get
to
the
point
where
we're
like
close
to
running
things
into
production,
so
long
as
not,
everyone
else
is
terribly
busy
I'll,
probably
start
doing
handoffs
like
hey
we're
here.
If
you
could
help
me
get
this
across
rock
it
out,
you
know
just
to
try
to
nudge
it
along
a
little
faster,
but
I'm
not
going
to
do
that
until
I'm
not
going
to
consider
that
until
production
at
least.
B: Awesome, okay. We've talked about so many random things, but, and this may be an unfair question, just in case: Ahmed, have you got any questions you want to ask us? Feel free; there are no stupid questions here. Or if there's another thing that you'd like to cover, that works for me as well.
B: Awesome. All right, if there is nothing else, then thank you very much, everyone; enjoy the rest of your day. I'll follow up with the prioritization of the charts changes for the rate limiting on Pages and see if we can get that moving. Good luck with the HAProxy work; give us a shout if you get stuck there. All righty, all right, take care everyone, have a good rest of your day.