From YouTube: 2021-11-17 GitLab.com k8s migration EMEA
A: So just give me one sec, I'm just... aha! You guys, I was just going to post an update. Cool, so welcome, everyone, to the 17th November k8s migration demo meeting. I put a few prompts on the agenda, like: is there any Pages demo or discussion you'd like to go through, Skarbek? No.
A: Awesome. And then on the Redis stuff, if you have any other demo or discussion items that you'd like to go through as well, Igor, please go ahead and add them in. I wonder whether this would be a good time for us to chat a bit about the GitLab chart, and what it would take for us to be able to use the GitLab chart.
C: Well, this would be the Helm chart that we ship with, the Helm chart that GitLab inherits for the Redis stuff. So there wouldn't be anything GitLab-specific about the Redis image. It'd be more like: can we bundle, or can we utilize, the Redis chart that we bundle within the GitLab chart?
B: Do we actually bundle a Redis chart? Because I know we used to have an in-house Redis chart, redis-ha or something like that it was called, but one or two years ago we migrated to recommending the Bitnami Redis chart, and we're including that as a dependency of the GitLab Helm chart, as far as I'm aware.
C: Yeah, that's probably accurate, I don't know. As far as I know, since we have the need to deploy like five or six Redis clusters, our Helm chart is not going to have any sort of support to break that up currently. So I think, to answer Amy's question: we would need to figure out some way to take a single Redis deployment and create X number of deployments, give them specific names, and give them the necessary configurations that we could all link to from within our application.
B: Yeah, so if you look at that Distribution issue that I linked, this is one of the topics that came up there as well, and I don't think they gave a definitive answer on how feasible it is to have multiple copies of the Redis Helm chart as Helm dependencies, and I don't know enough about Helm to know if that is at all feasible.
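For reference, Helm's dependency mechanism (Chart.yaml apiVersion v2) does allow pulling in the same chart several times under different aliases, which is one possible shape for this. A minimal sketch, assuming the Bitnami chart; the wrapper chart name, instance names, and version are made up:

```yaml
# Chart.yaml (sketch): the same upstream Redis chart pulled in three
# times under different aliases, one per logical Redis instance.
apiVersion: v2
name: redis-split-example   # hypothetical wrapper chart
version: 0.1.0
dependencies:
  - name: redis
    version: "15.x.x"                              # illustrative version
    repository: https://charts.bitnami.com/bitnami
    alias: redis-cache
  - name: redis
    version: "15.x.x"
    repository: https://charts.bitnami.com/bitnami
    alias: redis-queues
  - name: redis
    version: "15.x.x"
    repository: https://charts.bitnami.com/bitnami
    alias: redis-shared-state
```

Each alias then gets its own block in values.yaml (redis-cache:, redis-queues:, and so on), so the instances can be named and configured independently.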
B: I think, if that's easy enough to do, then we should probably aim to do that. But if it's too difficult to do for whatever reason right now, it might also be okay to kind of run Helm directly on the Redis chart, without using the GitLab one, and defer the integration side of that, which I think Distribution was open to deferring for now, if needed.
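Running the upstream chart directly would just mean one Helm release per instance, each with its own values file. A rough sketch of such a values file, assuming the Bitnami Redis chart's documented layout; the release names, Secret name, and sizing are made up:

```yaml
# values-redis-cache.yaml (sketch) for a standalone Bitnami Redis release,
# e.g. one release per logical instance (cache, queues, shared state, ...).
architecture: replication              # one primary plus replicas
auth:
  enabled: true
  existingSecret: redis-cache-secret   # hypothetical pre-created Secret
sentinel:
  enabled: true                        # Sentinel for failover, if desired
replica:
  replicaCount: 2
```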
C: They also make it clear inside of that issue that they don't recommend the other solution that we currently inherit as production-worthy. So, you know, that's probably going to be step one for the Distribution team, prior to them getting to the point where they could start splitting things up for our use case.
D: But is it really possible with Helm to have multiple instances of a single dependency? Because, I mean, if we're just going to install the Bitnami Redis chart, then it's easy, right, because you just tell Helm to give you several deployments, and each deployment is a single thing; that's how Helm is supposed to work, right? But here we are saying that we have GitLab, which bundles other things, and then you really want to have several options for telling it: I want one, two, three, four, five, seven, eight different Redises. I mean...
D: I don't think this is a real use case for a GitLab installation. It is more about: I can give you something that works out of the box with a single default instance, so that you can do a small installation and it just works. But then, if you want to split and configure, I think you have to disable the Bitnami Redis in the chart, provide your own charts, and provide the necessary configuration in the GitLab chart to link to these other deployments.
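The GitLab chart does document settings along these lines: disable the bundled Redis, then point each functional group at an external instance. A sketch, with hypothetical hostnames, assuming the chart's redis.install and global.redis keys:

```yaml
# values.yaml (sketch) for the GitLab chart with the bundled Redis
# disabled and separate external instances per functional group.
redis:
  install: false            # skip the bundled Bitnami Redis dependency
global:
  redis:
    host: redis-default.internal.example.com   # fallback instance
    cache:
      host: redis-cache.internal.example.com
    queues:
      host: redis-queues.internal.example.com
    sharedState:
      host: redis-shared-state.internal.example.com
    # password/auth settings would also need to reference the
    # appropriate Secrets; omitted here.
```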
B: Yeah, I don't know if Helm supports that, but I guess we can make a task and follow up on that async. But I do think we could potentially have an if condition in our Helm chart, where we have three options: disable Redis, install one shared Redis, or provision N Redises. We need to see how configurable that is, but feasibly we could then tell it which Redises we want, right?
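As a rough illustration of that three-option idea, a Helm template could branch on a values key. Everything below (the redis.mode and redis.instances keys, the bare-bones StatefulSet) is hypothetical, not the GitLab chart's actual layout:

```yaml
# values.yaml (sketch):
#   redis:
#     mode: split                  # one of: disabled | shared | split
#     instances: [cache, queues, shared-state]
#
# templates/redis.yaml (sketch):
{{- if ne .Values.redis.mode "disabled" }}
{{- $names := ternary .Values.redis.instances (list "shared") (eq .Values.redis.mode "split") }}
{{- range $name := $names }}
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: redis-{{ $name }}
spec:
  serviceName: redis-{{ $name }}
  replicas: 1
  selector:
    matchLabels:
      app: redis-{{ $name }}
  template:
    metadata:
      labels:
        app: redis-{{ $name }}
    spec:
      containers:
        - name: redis
          image: redis:6.2
          ports:
            - containerPort: 6379
{{- end }}
{{- end }}
```

With mode set to disabled, nothing is rendered; shared renders one instance; split renders one named instance per entry in redis.instances.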
A: I think that makes sense, yeah. We should also keep an eye on the balance between that and them getting this stuff production-ready, because I know we've made a lot of improvements to GitLab over the last few years by needing to put stuff in to support the migration, rather than sort of solving a problem and then coming back. So there may be some pieces that we could do that way, and there may be some bits we can't.
A: So, okay, yeah, let's see about this issue. In terms of just wrapping up on 1391, Igor, do you want to sort of set some expectations for Jason around how we're going to try and handle this, just so they kind of know what to expect, and whether there's anything that we want them to be open to receiving from us, or investigating?
A: How about we just leave a comment with where we are going to go next? So, like, I think it sounds like the next steps would be to figure out what we need to do to support the N Redis instances, like whether we can do that with a single chart or not, and then, based on that, we perhaps have a proposed approach for this migration that we can go back to Distribution with.
A: Also, just a little bit of context, and perhaps not everyone has awareness of this: every other Monday, and so the Monday coming up, I'm in the Kubernetes migration working group, primarily with Distribution, but also Development as well. So that's quite a good place where we sometimes discuss how to unblock this sort of stuff, or get people aligned on an approach, and things like that. So if we need extra help from either of those two groups, then that's also a good place to ask. Awesome.
A: Okay, great. And then I guess just a little context for everyone else on the call who wasn't in the kickoff on Monday: we have kicked off the Redis migration. Igor has put together a whole load of stuff, thinking through how we'll go about doing this.
A: You know, we will be pairing up. This is a shared Q4 OKR, so I know Henry, Skarbek, and also Graeme, you're quite deep in various other things at the moment; as you become more available, as other projects wrap up, we'll all be joining in on this Redis migration, because this is definitely not a single quarter of work, so we will all pair up to complete this.
A: Awesome. So, Henry, over to you for some fun NGINX insights; I'm not sure "fun" is the word.
E: Fun, yeah. You can see this one, maybe. Yeah, so, I just wanted to report the latest state of this. We still don't have the reason why we see TCP connections terminated unexpectedly at those 15-second intervals. I reached out to Google support, and there was a little bit of back and forth in the support issue, and the last suggestions I got from the support engineer, Ciao, are kind of interesting.
E: So one thing he noticed was that when we are using the internal load balancer as the service endpoint, then we are seeing these TCP connection terminations, so we suspected it has to do with it. And the support engineer mentioned that this ILB is set up to use an external traffic policy called Cluster, which means that traffic will go from the ILB to a node, and the node then, via iptables rules, will do the routing to the right pods, which can also be on another node.
E: So it could be that it goes to one node and from there traffic is going to another node, right, and I think that's something we should be very aware of; Graeme was also talking about this. So this is maybe an issue, because I could imagine that maybe between the nodes and the iptables routes we get terminations of connections. So it doesn't need to be the load balancer necessarily. One idea was to maybe switch this to using a Local external traffic policy. This is the other setting you can use. For instance, on the internal load balancer that we use for the NGINX service endpoint, we use this setting, so for NGINX we always route traffic to one single node and it stays there.
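The difference being discussed is a single field on the Service. A sketch, assuming a GKE internal LoadBalancer Service in front of the NGINX ingress; the name, selector, and ports are made up:

```yaml
# Service (sketch): externalTrafficPolicy Local keeps traffic on the node
# that received it, instead of possibly hopping to another node via
# iptables, which is what the default Cluster policy allows.
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-internal            # hypothetical name
  annotations:
    networking.gke.io/load-balancer-type: "Internal"
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local            # default is Cluster
  selector:
    app: nginx-ingress
  ports:
    - name: https
      port: 443
      targetPort: 443
```

One trade-off worth noting: with Local, only nodes actually running a backing pod pass the load balancer's health check, so pod spread across nodes matters more.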
E: So that would be one idea, to switch this over to Local. And the other finding and suggestion from Ciao was, he was saying that Kubernetes' regular load balancers don't have the capability to perform health checks on pods. That was surprising me a little bit; I'm not sure, I need to validate this.
E: I need to validate what he's saying and mentioning here, but what he is saying is we should try to use NEGs, network endpoint groups, which is a feature that is also supported by Google Cloud GKE. This has extended capabilities for routing traffic, is aware of pods, can do a lot of things, and can also health-check endpoints and stuff like that. And that's a small change, as an annotation.
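On GKE this is indeed driven by a Service annotation. A sketch, assuming the standalone-NEG form of the annotation; the Service name and port are made up:

```yaml
# Service annotation (sketch): asks GKE to create network endpoint groups
# whose members are the pod IPs behind port 443, so the load balancer can
# route to and health-check pods directly instead of going through nodes.
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-internal              # hypothetical name
  annotations:
    cloud.google.com/neg: '{"exposed_ports": {"443": {}}}'
spec:
  type: ClusterIP
  selector:
    app: nginx-ingress
  ports:
    - port: 443
      targetPort: 443
```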
A: Bleeding edge; it'll be fun. Graeme will certainly have opinions and thoughts on this; I think this is very close to the stuff that he spends a lot of time thinking about, so yeah, it'd certainly be worth it. I'll add the recording onto this straight afterwards, Henry, but yeah, definitely put a summary in and ping Graeme on this.
E: In staging we still have the new setting enabled, like bypassing NGINX. Because the traffic is so low, we very rarely see them; like every few hours you see two or three of these TCP connection terminations in the HAProxy logs.
A: Would you be able to sort of put together some thoughts on the plan for how we'd do that? Because staging is always a bit difficult, because we have so little traffic that we don't get to see great results on what the scaling will look like, and some of those sort of side effects. It'd be good to understand what steps we would want to go through and what that might look like, yeah.
E: I think the first step is to figure out how to set this at all, because if it needs changes in our charts, then it needs more work, right. So we'd get these changes in, and then in parallel research and investigate what it exactly means to switch to this other way of load balancing. We would hope to get some input from people like Graeme, maybe, or anybody else who has some info here.
E: Yeah, sure, but the thing is, this is something officially supported, and it's something which is used in production, right, so it should be viable. And it's a way to do it that Google is telling people about, right; they say: do it like this, if you can and your clusters are new enough. So I think it should be working, but yeah, we need to test and set this up and do some research.
A: Cool, okay. Well, let's see how we go on this one. I'd say, let's make a call on Friday as to whether we know enough to be able to go ahead or not. Next week we have quite a long PCL coming in for Thanksgiving, and then, you know, we're pretty much into December, and then before we know it, it's the holidays as well. So let's find out what we can and then see how far we can get on this testing.
A: Nope, awesome. Okay, thank you, everyone, for the chats. I hope you have a good rest of your Wednesday. I'll speak to you soon. Take care.