From YouTube: 2020-09-11 GitLab.com k8s migration EMEA
Description
No description was provided for this meeting.
A
Good, all right. So, Sidekiq inside of Kubernetes: we are creating relatively large merge requests. For example, let me quickly grab one and just toss it in as a link in the agenda.
A
Our merge requests are getting really, really long, and this is because we are putting in the literal queue name for every queue that we are migrating. So my thought was to just move to a queue selector, just like we do with everything else. This would entail making a contribution to GitLab, and I'm just wondering if this would be a good idea. I don't know what the implications are or what downsides I'm going to immediately run into, so I'm asking for input while we're still in the middle of this migration.
E
Yeah, basically we added the tags for you folks, so go ahead and use them; that's what they're there for. The part that's going to be slightly annoying for you is that you don't edit the YAML file directly: you edit all the worker classes, add the tags to those, and then regenerate the YAML file. So that's going to be a bit annoying, but it's something.
E
It's like one line in a bunch of files rather than a bunch of lines in one file, basically. Gotcha. But yeah, no, please do that. Let me know if you need any help with that, but that's exactly what the tags are for: to not have to list out the dozens of queues that we have in the catch-all shard individually by name.
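For reference, a minimal sketch of what the selector approach could look like; the key names and the attribute query are illustrative assumptions, not the actual gitlab-com configuration:

```yaml
# Hypothetical sketch: Sidekiq deployments selecting queues by worker
# attributes instead of enumerating dozens of literal queue names.
# Key names and attribute syntax here are assumptions.
sidekiq:
  pods:
    - name: catchall
      queueSelector: true
      # match by attributes that the worker classes are tagged with
      queues: "feature_category!=continuous_integration&urgency=low"
    - name: urgent-other
      queueSelector: true
      queues: "urgency=high"
```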
A
Perfect. Okay, unless there's any other feedback, Amy, you're next.
C
Jarv, you've got some stuff here. Yeah, curious about the takeaways, so.
F
Sure, I can just speak to these. So the first one was fluentd for the Kubernetes logging. This was just so that we can get our event logs into Elasticsearch. They said to dig into how GKE is doing this with fluentd, so I can do that and I'll loop back with them if we have any problems figuring it out. Worst case, we can always just forward Stackdriver logs to Elastic like we have done before; we can do that for just the Google logs, which is the main thing.
F
The main logs that I feel like we're missing right now in Elasticsearch are the kube logs. This includes all the events that happen; we just have very little visibility currently. We're actually using kubectl to look at events, because it's kind of a pain to go to Stackdriver for this. So I'm hoping to get this into Elasticsearch, ideally in its own index, and then we'll have a bit more visibility.
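One common pattern for getting Kubernetes events into a log pipeline, assumed here rather than confirmed as what GKE's fluentd setup provides, is to run a small exporter that watches the events API and writes them to stdout, where the cluster's log shipper picks them up like any other container log:

```yaml
# Hypothetical event-exporter Deployment. The image and names are
# placeholders; the point is that events land on stdout and flow through
# the normal fluentd -> Elasticsearch pipeline into their own index.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: event-exporter
  namespace: logging
spec:
  replicas: 1
  selector:
    matchLabels: {app: event-exporter}
  template:
    metadata:
      labels: {app: event-exporter}
    spec:
      # needs a ServiceAccount with RBAC to list/watch events
      serviceAccountName: event-exporter
      containers:
        - name: exporter
          image: example.com/kubernetes-event-exporter:latest  # placeholder image
```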
F
Good point. Next was the multi-cluster network egress. We were talking about whether we should split the cluster to avoid additional costs for network egress. They said, oh yeah, this is a very typical thing; in fact, most people use zonal clusters. And then they kind of showed us, in the service mesh demo, how you can use zonal clusters and how that's typically organized by project, and that was an interesting presentation.
F
I think we have a link for that, because I think it's a public presentation that they have. But we're not going to go to service mesh right away. So in the meantime, I think what we're probably going to do is what we were discussing before, which is to split the cluster into three: probably one regional cluster and then two zonal clusters. One regional cluster, because we definitely want to keep the workloads that only run on one pod.
F
We were talking a bit on the call about how we have services that require only one pod, like the throttled shards, and they said, hey, this would be a good use for service mesh with Istio. You can abstract that away: you basically define your service in a declarative fashion to say it should only run on one pod, and then that pod would be able to float between the three zonal clusters. So that was interesting.
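As a rough illustration of the kind of singleton workload being described (names are hypothetical, and this is plain Kubernetes rather than the Istio abstraction they demoed):

```yaml
# Hypothetical sketch of a singleton worker such as a throttled Sidekiq
# shard: exactly one replica, with a Recreate strategy so two copies
# never run at once during a rollout.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sidekiq-throttled   # illustrative name
spec:
  replicas: 1               # the "only one pod" constraint
  strategy:
    type: Recreate
  selector:
    matchLabels: {app: sidekiq-throttled}
  template:
    metadata:
      labels: {app: sidekiq-throttled}
    spec:
      containers:
        - name: sidekiq
          image: registry.example.com/sidekiq:latest  # placeholder image
```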
F
I think this is probably the direction we're going to go, but maybe it's not the first thing we do.
A
If we wanted to rebuild a cluster, for example, automating that through CI, it sounded like they had a vested interest in helping us develop that. I don't know if we want to go down that route, but I feel like if we did, it might give us some interesting benefits. I had a very short conversation with Amy and Jarv that this might be a good, interesting marketing opportunity for us as well. I don't know if that's something that we're interested in, though.
A
I think, for us to do this, we'd have to have a conversation with other people, because this would take us away from other work related to the migration that we're currently concentrating on.
A
I couldn't tell if he was just using that for testing, something he demoed in some way, or if it was something they are legitimately using. I can't imagine them using GitLab CI, because Google is such a large organization; they've got Borg sitting out there that does all this stuff for them. So I don't know, but I think.
B
That's why I would assume that it's not impossible for them to use, because they're such a large organization. But we should figure that one out, because that sounds really, really interesting, and not only from the marketing perspective. Actually, if they can help us by providing someone who will sit down and work with us, that will speed up some things. Maybe not necessarily the migration right now, but things that are supposed to be at the tail end will come closer, and we can be safer while we migrate.
B
So let's stay on that. Let's maybe get back in touch with them and see what they actually meant and how they are seeing this. Unless partnering means you do all the work and we take all the credit, I'm interested.
B
Some other partnerships work like that; that's why I'm saying this. So if that's not what they mean, then let's see what we can do to get some help.
F
...to customers, to say, you know, how they can use GitLab on GKE. They had one interesting question, because they've actually taken a look at our chart and they've installed GitLab; maybe their installation is using our chart. Why did we decide to use the nginx ingress controller?
B
Other ingress controllers. The only thing I remember about that time is that I was pounding the drum of: please don't introduce any new components that we don't know. We are using nginx, so let's use nginx; we have too many other things to think about. So that was the story at the time, and don't forget, this was 2017.
B
There was not a lot out there to use.
B
TLS termination is something that Workhorse doesn't have, and we were thinking at that time as well that this was something good to offload to nginx and not think about anything else. And it provided parity between Omnibus and charts. We were trying to wrap our heads around migrating this installation, or rather creating this installation method from scratch, when there was not much out there. Again, 2017: there was not much out there to reference.
B
First of all, helping us figure out some basic things about Kubernetes, but then also hardening our setup. If you remember Vic, Vic Iglesias, he was with us for almost six months, back and forth, working together on this, and he was one of the people who said: well, this is super complex, but this is also super awesome, what you're managing to pull off.
C
Okay, and yeah, we'll get the notes into something a bit more usable, because I think not just from the conversation but from the earlier back and forth we had with them, there are quite a lot of useful links to articles and stuff in there as well. Cool. So, off the back of that, I guess, was kind of what I was thinking: next steps for this AZ issue.
C
So we're saying that one is option two, and we're just rejecting option one and the sidecar option.
F
Sidecar really doesn't get us to where we need to go, just because it only reduces one of the cross-AZ egress connection points. And I see it as also being, I mean, it's not as much work, I would say, but it also makes our chart very messy to have to support both. I haven't really fully fleshed it out or talked to the distribution team too much about it, but it sounds like, from my conversation with Jason about this.
F
It's
kind
of
going
to
suck
because
we're
going
to
have
to
like
support
both
and
we're
going
to
have
to
keep
those
configurations
aligned.
Even
though
nginx
really
doesn't
do
a
whole
lot,
we
can.
We
can
talk
to
them
again
about
it
to
see
like
if
they've
had
any
new
thoughts
about
doing
the
nginx
sidecar,
but
it
really
does
it
does
actually
in
the
short
term.
It
doesn't
help
us
as
much
right,
because
we
still
have
this
extra
cross,
crossability
availability
zone
point
you
know
between
the
gcp
internal
lb
and
nginx.
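For what it's worth, a sketch of the standard Kubernetes knob for that extra hop; whether it applies cleanly to the setup being discussed is an assumption:

```yaml
# Hypothetical Service fronting the nginx ingress pods. With
# externalTrafficPolicy: Local, the cloud LB only sends traffic to nodes
# that actually run an nginx pod, avoiding a second, possibly
# cross-zone, hop inside the cluster.
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-controller   # illustrative name
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local
  selector:
    app: nginx-ingress
  ports:
    - name: https
      port: 443
      targetPort: 443
```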
C
So, some possibilities. What I was going to say is: if there are options and it's a case of evaluating the best one, then the next step is, let's evaluate them. Because I thought that we had option one and option two, and I thought Google talked us through a third option which was similar to option one but slightly different.
F
Oh, that, yeah. I think they were referring to, we were talking about how we have workloads that span all of these zonal clusters, and they were like, well, typically what we see is that customers create an ops cluster, which we actually have already, and this is where you would run things like your Prometheus exporters, which I think we're doing right now anyway. So I think.
F
I think the only difference they were highlighting is that they see customers have an ops cluster dedicated to their production environment, where our ops cluster right now is spanning multiple environments, and we also have some things that other people would run in the ops cluster running in the actual production cluster. But I think that's, I mean, it's related, but in some ways orthogonal to the discussion, because in both cases we would still need to build out these separate clusters.
F
That's what I'm thinking. I mean, there are two challenges with this. One is Terraform, and I've already started to do a PoC of this refactor. It just means that we're going to have to put things that are outside the GKE module into the GKE module, so that it is a bit less hairy to duplicate it multiple times.
F
I don't think it's going to be too bad, but, like I said, it's a bit hairy. And then the other thing is getting our Helmfiles, not our Helm files, our k8s-workloads/gitlab-com project that uses Helmfile to deploy to different clusters, to basically deploy three times with different configuration. And, Skarbek, what I'm thinking, and I think we need to talk to Graham about this too, is that we have.
F
We
have
two
environments,
because
the
environment
is
so
overloaded
already,
but
I
think
we
have
to
create
another
type
of
environment.
It's
going
to
be
we're
going
to
have
gprod
we're
going
to
have
jeep
prod,
let's
say
beta,
charlie
and
gamma,
or
something
like
that
right
for
the
different
availability
zones.
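A minimal Helmfile-style sketch of that idea; the names are purely illustrative rather than the actual k8s-workloads/gitlab-com layout:

```yaml
# Hypothetical helmfile.yaml fragment: one logical release deployed once
# per "environment", i.e. per cluster (base plus per-zone variants).
environments:
  gprod:
    values: [env/gprod.yaml]
  gprod-beta:
    values: [env/gprod.yaml, env/gprod-beta.yaml]
  gprod-charlie:
    values: [env/gprod.yaml, env/gprod-charlie.yaml]
  gprod-gamma:
    values: [env/gprod.yaml, env/gprod-gamma.yaml]

releases:
  - name: gitlab
    chart: gitlab/gitlab
    values:
      - values/common.yaml
      - values/{{ .Environment.Name }}.yaml  # per-cluster overrides
```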
F
I think it's region, but yeah, we can add this. And then this gives us a little bit of sharding for free, which is nice, because we'll have three Prometheus operators, and then the aggregation happens at Thanos. So I don't think there's any impact there.
D
Well, but the thing is that if you've got three clusters and one of them is behaving really badly, then with the way it's aggregated up you might actually just miss that, you know, 25 percent of the requests going into that cluster are failing. So it's not hard, because we generate all this stuff now, right.
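One standard way to keep the per-cluster signal from washing out in the aggregation, assumed here rather than taken from the actual config, is to give each Prometheus distinguishing external labels that Thanos preserves, so queries can still break out a single cluster:

```yaml
# Hypothetical prometheus.yml fragment for one zonal cluster. Thanos
# keeps these labels on every series, so a dashboard can aggregate
# across clusters or filter down to the one that is misbehaving.
global:
  external_labels:
    environment: gprod
    cluster: gprod-zone-b   # illustrative per-cluster value
```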
F
Yeah, that's the idea. I think maybe we'll do the regional cluster first and then do the zonal clusters, or we'll figure out what the best way to do that is.

F
Yeah, so maybe it wouldn't be good to do the regional cluster first; maybe we should do a zonal cluster, as a smaller blast radius, and then, when things look good for Git and API and web, move on to Sidekiq. But who knows; we have to think about it.
B
So, is there something in these diagrams that you created that we can remove? As in, can we remove nginx ingress to simplify this? Can we remove some of the load balancers that we have? Can we remove something to simplify this? And don't limit yourself with worries that this is going to be complex because we need to go through product and so on; just humor me here.
F
I don't know what the effect would be of removing the nginx ingress and just connecting directly from the TCP load balancer to Workhorse.
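As a sketch of what connecting directly might mean in practice; the port and annotation are illustrative assumptions, not a vetted design:

```yaml
# Hypothetical Service exposing Workhorse directly behind a GCP internal
# TCP load balancer, with no nginx ingress in between. TLS termination
# would have to move elsewhere, since Workhorse does not do TLS.
apiVersion: v1
kind: Service
metadata:
  name: gitlab-webservice-direct   # illustrative name
  annotations:
    networking.gke.io/load-balancer-type: "Internal"
spec:
  type: LoadBalancer
  selector:
    app: webservice
  ports:
    - name: http-workhorse
      port: 8181        # Workhorse's conventional listen port
      targetPort: 8181
```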
B
Okay, I need to leave in a minute. So can we entertain that thought and just maybe play with it a bit, to figure out whether that's an option? Maybe it's not on equal footing with the proper solution of three zonal clusters, but maybe it actually crystallizes a different set of options for us. So let's investigate that. And then this one is more for, well, I guess, Amy.
B
Is there a way for us to parallelize this discussion with Google on what kind of support they want to offer us with the PoCs and testing that we want to do, and maybe put a timeline on it and say: after this much time, we're going to have to make a decision? Because, given how complex this can end up being, it can just completely take over all of our time in the next month. So I would like to see a time box on both, or all, approaches that we are taking.
B
Well, thanks everyone. I have to leave, but this was great.
F
I can start an issue for discussing it. I think we'll need to loop in the distribution team to talk about whether we can simplify things a bit. In the meantime, we'll probably just try to scope out the work a bit more for splitting the cluster into three.
A
I don't have an estimate. You know, batch 2 I'm hoping to complete next week, and then it's going to be just that slow churn of what to migrate next. That will take less effort on my part; it'll just take longer in terms of time, because all of these will require some form of deeper investigation to determine where writes are happening, and the necessary issues, and then for a lot of queues we'll be waiting on engineering.
A
So I think I could easily parallelize some of the effort to work on some multi-cluster stuff without negatively impacting the Sidekiq migration effort, just due to the nature of us knowing that we need to wait on development effort to get rid of some of the work. I don't see us completing Sidekiq for quite a while, just due to the fact that there are quite a few queues left over.
F
Okay. Amy, maybe we can see if we can get time from Graham. I don't know what his workload looks like, but it'd be helpful to get him, because we're going to have to do some refactoring of Helmfile in the k8s-workloads/gitlab-com project to support multiple clusters.
F
It'd be good to have him on board with that, but I don't know what his other tasks are.
C
I can find out, for sure. In terms of the Git stuff, whether we should go ahead: I think we should try and answer this as soon as we can. Is there a risk, if we leave that Git stuff just sitting on canary, that things move on beyond it?
C
Cool, okay. I'd love us to at least work out what approach this looks like, and some time frames on this stuff, so I think we should focus on this first. Okay, cool. Okay, great, that sounds good.
A
I tried to capture what we just discussed in the agenda, but both you and Jarv, feel free to edit as necessary.
C
Oh, okay. So, before we go into the ad stuff, I guess you've got a task first, right: the load balancing fix, Jarv.
F
Yeah, so I'll drive it. I think Andreas did a really nice job getting all the MRs prepped for it, so it's just a matter of pushing them through. I haven't caught up with what he has prepped for charts, or maybe for charts it was just an issue; I don't know if there's an MR for that yet. But it sounds pretty simple: we just need to get the configuration change in to add the database timeout to database.yml, and we'll be done, I think.
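A sketch of the kind of change that implies; the exact key and value are assumptions, since the MR itself isn't quoted here:

```yaml
# Hypothetical database.yml fragment adding a connection/statement
# timeout. Which timeout the actual MR sets is not specified above.
production:
  adapter: postgresql
  host: patroni.internal      # illustrative host
  connect_timeout: 5          # seconds to wait when establishing a connection
  variables:
    statement_timeout: 15s    # per-statement server-side timeout
```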
G
Does it work now? Can you hear me? Okay, sorry, I was trying to say something before, but I was just speaking and nobody was hearing me. So I have a question, because maybe I have the answer to Marin's question, but I'm not sure if I got the question right. The idea is: is it possible to remove nginx from our chart installation, so that we rely directly on, let's say, the GKE ingress or things like that?
G
When I was working, and maybe this is tangential to it, but when I was working on the operator for the registry, I had exactly this kind of problem: I was not able to run the provided nginx because, now I do remember, I was building two clusters; I wanted to have a cluster with only the operator inside it and another one with just everything else. So I actually have a Terraform configuration that does this.
G
It basically installs, in that case, nginx, because I didn't want to pay for the extra ingress thing, but I have nginx and cert-manager deployed externally, in another namespace not handled by our own charts, and then our charts are configured with the right annotations, so that the other namespace can pick up the service definition and route things internally. So in my installation, nginx is not part of the charts, basically.
G
Terraform project. So this is my old Terraform project; let me show you very quickly. It was designed by installing a GKE cluster, then installing some base services on top of it, and then just installing the GitLab chart. These are the Kubernetes services that I was running, and if you read from the top, it's basically installing and configuring cert-manager, the requirements for cert-manager, Jetstack, and at the end there's the nginx ingress: it was creating a namespace, and then, basically, here you go.
G
So
ssh
was
coming
to
nginx
and
then,
when
you
go
through
this,
this
was
my
configuration
for
the
chart
and
when
you
go
to
the
chart,
basically
you
say
that
you
don't
want
the
nginx
ingress
enabled,
but
at
the
global
ingress
you
have
to
configure
things
so
that
it
will
so
that
your
your
service
will
be
picked
up
by
the
externally
provided
nginx.
So
this
is
yeah,
I'm
sure
it
used
to
work.
Then
I
just
changed
the
focus
and
started
working
on
something
else,
but
yeah
yeah.
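For reference, a values sketch of the setup just described, disabling the bundled controller and pointing the chart's ingresses at an externally managed nginx; the key paths follow the GitLab chart's conventions but should be treated as illustrative:

```yaml
# Hypothetical GitLab chart values: skip the bundled nginx-ingress and
# let an externally deployed controller (in its own namespace) pick up
# the Ingress objects via the class annotation.
nginx-ingress:
  enabled: false
global:
  ingress:
    class: nginx              # class of the external controller
    configureCertmanager: false
    annotations:
      kubernetes.io/ingress.class: nginx
certmanager:
  install: false              # cert-manager also managed externally
```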
F
I think, for SSH, for Git SSH, I'm really surprised that it's going through nginx. It just seems kind of weird to me.
G
For services, you need something that declares the service and something that provides the ingress, and so maybe nginx can be used for this, and they just prepared the configuration.
F
I see, yeah. I guess so. We may want to investigate whether we can just use a different ingress controller, if nginx is a bit heavy. But yeah, I don't know.
A
It is, but I wonder if there are any benefits there that we could utilize.
F
Yeah, so I added some more content today and addressed a bunch of comments. Specifically, the content was on this network egress stuff, which I thought was a good thing to add, and I moved some things around. So if you have time, I would love for you to have another look through, and I think we're planning to post it next week.
C
Cool. One thing before we finish, actually: I'll just stop recording, because this is not especially relevant.