From YouTube: 2020-09-17 GitLab.com k8s migration EMEA
A
Anyway, I'm going to move some of the blocker issues that we're listing into a header called "projects" (you can rename it whatever you want later). For example, removing or changing Pages completely is not going to happen immediately, although we did remove the NFS dependencies, so that should help. So I think it makes sense to track these things, but not necessarily keep them in focus.
C
The next item here is mapping services for cloud native. This still doesn't have a clear solution, and we probably need to put our heads together with Distribution to figure out what we're actually going to do for this.
C
We're fine, like I've been saying, for now for git HTTPS. Obviously we're going to be fine for GitLab Shell, because that is a separate pod for git SSH. But after that it becomes an issue, and it'll be an even more pressing issue because, as far as I know, after the build logs work is done we're unblocked on the front end.
C
So we'll definitely want to start migrating web and API immediately. And the next item is proxy request buffering. It's still unclear; this may not be an issue if we just disable it globally. We haven't tried that yet on the VMs, but we think it's probably okay because we have Cloudflare in front. We're going to need to investigate that, though.
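For context, `proxy_request_buffering` is a stock NGINX directive, so disabling it globally is a one-line change. A minimal sketch of what that would look like in plain NGINX config (illustrative only, not GitLab's actual configuration):

```nginx
http {
    # Stream request bodies directly to the upstream instead of
    # spooling them to disk on the proxy first.
    proxy_request_buffering off;
}
```

In a Kubernetes ingress-nginx deployment, the per-Ingress equivalent is the `nginx.ingress.kubernetes.io/proxy-request-buffering` annotation.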
C
Traffic after migration: I think the NGINX sidecar is off the table now, so I'm going to remove that. That's the highest priority for me right now, or what I'm working on; maybe Skarbek will join once he frees up from Sidekiq. It's moving along.
C
I was hoping to have something in staging by tomorrow for this. I think we'll probably have the clusters spun up; I'm not sure if they'll be fully configured, but I'm hoping so. Then, tentatively, we'll be able to have the other clusters at least configured in production next week, maybe even running in production, but we'll see. Next: the unencrypted connection in Workhorse.
C
I don't think there's anything we're going to do about this. Maybe we should just take it off the list.
D
Yes; not yesterday, the day before yesterday, I created a lot of issues related to what's not even worth trying out in Sidekiq for the evaluation before we move into Kubernetes, based on the feedback that we got in our spreadsheet.
D
I created all these. I was going to mention them later, but yeah: these are all the queues where we received feedback that they have a dependency, and I wasn't even going to bother putting them inside of the catchall NFS set for evaluation.
A
Can someone post a link to all those issues? I saw them and marked them all as read, but I need a link, because I need to inform people of this; this is adding quite a lot of issues.
B
So are these the catch-all queues that have an NFS dependency?
C
I mean, the cron jobs were already like this; that's kind of expected, right? These are the cron jobs that do cleanup. Maybe we should take a look at those individually, but...
A
I know it's hard to understand, but I can repeat it again.
A
Let's focus on those couple first, just to figure it out, before I go and explain further why I added 20 more issues to infradev that are high priority, if possible.
A
Can I ask you one thing right now: can you please remove infradev from those issues that you created, just for the time being until we figure this out? I'm afraid this is going to overwhelm everyone. It's not going to be clear what they need to do, and it's going to drag along for too long, so we need to do some organization before we go into it.
C
Yeah, I don't know why security scans depend on shared storage. I'm hoping they don't.
C
I don't know; I don't think this is a blocker, so I'll remove it. Proxy request buffering: that remains to be seen. NGINX: it sounds like we're just going to close that one and remove it. application.log: similar to unstructured logs, I guess we can remove this as well, with unstructured logs turned off by default, or at least remove it from the blocker list. So I'll update those issues.
C
So from what I see, it looks pretty good. I think the summary is: besides the queue work, it really comes down to breaking out the services (builds, build logs, and pages), and that's pretty much it.
C
Okay, Skarbek, I think you just added these items here. We're in the discussion section; do you want to talk about them?
D
Yeah, so the first one was precisely what we just discussed. I guess we don't have anything else to discuss there, because I'm going to have some action items.
D
Here I've also compiled batch three, which I'll start putting together the necessary merge requests for. I'll probably add mailers to this before doing that; it looks like a fix for the mailer has already been merged and gotten into production, so it should not depend on NFS anymore. But besides that, I'm ready to start rolling with batch 3 of the Sidekiq migration, unless anyone else has any comments about it.
D
I haven't observed any problems yet; I haven't seen anything, and no one's reported anything, so I'd like to keep it that way. I still occasionally check Sidekiq on Sentry for any new errors that just don't seem right, and I haven't come across anything new. Occasionally I'll see a new error because of something like a connection failure to some random thing that some code path can reach, but that's completely unrelated to what we're doing here. I haven't seen anything, so I've been happy so far.
C
Yeah, so the WebSockets work for the interactive terminal is complete, though there isn't a whole lot of traffic there. We're now discussing Action Cable. There were some comments made on the epic; this epic was originally all about Action Cable, but it has kind of expanded its scope to include everything that the git fleet handles, so I broke that out into a separate issue just now.
C
I just enabled it in staging for testing. We need to figure out how we're going to slow-roll this in canary and production to ensure that we don't put too much pressure on Redis. Comment on the issue if you have anything to suggest, but I think we'll just do the normal change issue, where we do a slow roll and make sure that it doesn't cause any problems.
A
Okay, I'll read up on the concerns, but this is one of the services where I would be happy to just YOLO it in canary, if nothing else, or probably even production, given that it's a new feature and no one should be using it anyway. So I'll read that up. Thanks.
C
Yeah, on to zonal clusters. This is moving along: we have the zonal cluster config being reviewed now for Terraform, and we have some decisions on how we're going to divide things up in Terraform, because we want to avoid copying and pasting a lot of config four times. I broke it down into the things that are repeated for each cluster.
C
Well, actually, the first one is just subnets, and this is just the fact that we have to allocate many more subnets now than we did before. There's no way around this: we don't want to let GCP just allocate these subnets dynamically, because we want to avoid overlap between VPCs for peering. So this means that we're going from basically three configured subnets to 12. It's not terrible, but it is what it is.
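Planning those non-overlapping ranges up front can be checked mechanically. Here is a small sketch using Python's stdlib `ipaddress` module; the environment names and CIDR blocks are illustrative, not the real allocations:

```python
import ipaddress

# Illustrative per-environment supernets (not the real GitLab ranges).
env_supernets = {
    "gprd": ipaddress.ip_network("10.208.0.0/13"),
    "gstg": ipaddress.ip_network("10.216.0.0/13"),
}

def carve(supernet, new_prefix, count):
    """Take the first `count` subnets of size /new_prefix from a supernet."""
    return list(supernet.subnets(new_prefix=new_prefix))[:count]

# Four zonal clusters per environment, two /16s each (pods + services).
allocations = {env: carve(net, 16, 8) for env, net in env_supernets.items()}

# Peering only works if no two environments' ranges overlap.
flat = [r for ranges in allocations.values() for r in ranges]
for i, a in enumerate(flat):
    for b in flat[i + 1:]:
        assert not a.overlaps(b), f"{a} overlaps {b}"

print(allocations["gstg"][0])  # prints 10.216.0.0/16
```

Carving every range from one reserved supernet per environment is what makes the no-overlap guarantee easy to keep as more clusters are added.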
C
So
yeah,
so
how
much
space
do
we
have?
Yes,.
C
For production and staging we have plenty of subnet space reserved. For pre-prod I had to steal a bit from legacy Azure, but we're not using that anymore, so it's not a big deal. I can send you a markdown document that has all the subnets. We have plenty of room for now. When we go to multi-large, which I'm sure is what you're thinking about, we may want to start thinking about that. It's only an issue when we peer, so it's not like...
C
So you can see, here are the legacy Azure slots, and there's a lot of IP space here in legacy Azure. I just allocated this one for GKE pods, so this was given to pre-prod.
C
This is production: here's the production span, and here's the staging span. It has a decent amount; it runs from 216 to 223.
C
I can look to see how much of that we're using, but I think what we might need to do is take another block for production, and maybe take another block for staging. We have plenty of these slots here. Like I said, we don't necessarily need to worry about overlapping subnets for multi-large.
A
Okay, can I then ask, not as a favor but very nicely, for both you and Skarbek to sit down and think about how we move away from this? I don't want a problem with addresses in Kubernetes.
C
Yeah, like I said, this becomes a problem when we start peering. You peer at the VPC level, and unfortunately we made a decision to have a single VPC per environment.
A
Let's say for some reason we decided in 2021 to double the size of infra, which is not impossible, and everyone starts spinning up their new things, because that's the next best thing: you justify the money you spend by creating the new things that you need. I don't want things stomping on each other.
C
Yeah, I think part of what's difficult for us right now is that a cluster requires two /16s, one for the pods and one for the services, so that's a lot of IP space to give. That's why I needed to grab more for pre-prod, and we might need to grab some more for production.
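Those two /16s correspond to the pod and service secondary ranges in GKE's VPC-native (alias IP) mode. A sketch of the Terraform shape, with illustrative names and CIDRs rather than the actual config:

```hcl
resource "google_compute_subnetwork" "cluster" {
  name          = "gke-zonal-example"   # illustrative name
  network       = "example-vpc"         # one VPC per environment
  ip_cidr_range = "10.200.0.0/20"       # node range

  secondary_ip_range {
    range_name    = "pods"
    ip_cidr_range = "10.216.0.0/16"     # one /16 for pods
  }
  secondary_ip_range {
    range_name    = "services"
    ip_cidr_range = "10.217.0.0/16"     # one /16 for services
  }
}

resource "google_container_cluster" "zonal" {
  name       = "gitlab-gke-us-east1-b"  # cluster name with zone suffix
  location   = "us-east1-b"
  network    = "example-vpc"
  subnetwork = google_compute_subnetwork.cluster.name

  # Pin the secondary ranges instead of letting GKE allocate them,
  # so ranges can never overlap across peered VPCs.
  ip_allocation_policy {
    cluster_secondary_range_name  = "pods"
    services_secondary_range_name = "services"
  }
}
```

Naming the ranges explicitly is the lever discussed above: dynamic allocation is what risks overlap between environments.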
C
You can tell GKE to dynamically allocate that, but then you could have overlap between two environments, and then we wouldn't be able to peer them.
C
Yeah, naming: our main regional cluster is called gitlab-gke, and I'm just going to slap the zone name on after gitlab-gke, so we'll have kind of a long name for these zonal clusters. But I think that's better than coming up with something clever.
A
Yeah, the only reason I don't like that is that we typically move from general to more specific when it comes to names. When you're sorting, for example, you would see these groups separately from the gitlab-gke cluster. I don't know, man.
D
I think another thing, though it may not even be worth talking about, is that GKE prefixes each node name with the cluster name.
A
(Playing with my hardware mic.) The only problem I have with these things is that the static parts are going to be in your face more than the non-static parts, so you'll continuously see "gitlab-gke". Why do we even name it gitlab-gke? Why are we not naming it gprd or something, right?
A
Okay, I'll just direct everyone to jarv for a history lesson on gprd and gstg.
C
I thought it was brilliant: gprd, gstg; no one knows what the g stands for. I think it's great. Okay, let me do this: we're not close to doing this on prod yet, so I'll do it on staging and pre-prod. We're not going to be sending traffic to these new clusters for a while, so we'll see what it looks like, and then we can always change it.
C
One NAT or four NATs, that is the question. It looks like the pricing is not that different. Yeah, I saw your notes; I'm fine with it, you can do four NATs. Maybe move this into the GKE module, Skarbek.
C
Okay, IP addresses: we pre-allocate a lot of IPs for k8s workloads. It sounds like we're kind of on the same page here. Maybe create a new module; we just need a name for it, so maybe gke-reservations or gke-resources.
D
I wonder if we could... well, I don't want to start reserving IP addresses inside of the DNS module, but...
C
I can do that, but moving IPs into a module means you have to do Terraform state surgery as well, right? You can't just move them, and if the names change, name changes are usually destructive, so it means recreating them, which could have other implications. It's something we need to just figure out now and do, I think.
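The state surgery itself is what `terraform state mv` exists for; a sketch with hypothetical resource and module names (this rewrites state addresses only, so nothing is destroyed or recreated):

```shell
# Move an existing reserved address under the new module without
# destroying and recreating it (resource/module names are illustrative).
terraform state mv \
  google_compute_address.gke_nat \
  'module.gke-reservations.google_compute_address.gke_nat'
```

Without the state move, a plain rename in the config reads as destroy-and-create on the next plan, which is the destructive path mentioned above.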
C
Yeah, I kind of like gke-reservations, just because it's specific to GKE, so fewer people will be opinionated about it. If I create a new module called gcp-reservations instead, then what, are we going to put all of our IPs there, for everything?
D
Yeah, because I also forget where we created our reservations, the ones where we got those blocks for the cloud stuff that's sometimes using them.
C
Okay, next is the firewall; that's obviously going to go in the module. And service accounts: it sounds like we're on the same page for that. That's it.