From YouTube: 2020-10-01 GitLab.com k8s migration EMEA
B
Right, so should we kick off with a run-through of the blockers and see where we're up to?
B
Cool, okay. So hopefully you can see the demo agenda. The first one, build logs, is making good progress. It is rolling out slowly, so you can see it's pretty much all done as far as the incremental rollout goes; it's actually being shared like that. It's happening now.
B
So the only update I think I saw, unless you have one, Jarv, was that this bit's been split out: allowing the traffic splitting across scalable deployments has been split out, but...
C
Sure, so yeah. Jason opened up this issue yesterday. It's along the lines of exactly the kind of thing that I was hoping we would do, which is to split up the web service, similar to how we split up Sidekiq.
C
I left a comment on this issue about how I think we may want to have a default at the end, maybe some list of rules that are in order of precedence, because we're going to have to do path matching here. But overall, if we can specify min and max replicas, and we can specify labels and things like that, it'll be super. This will allow us to create arbitrary services in the monolith, because we want to do this for Action Cable too.
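The configuration shape being described might be sketched like this; every key name below is an illustrative assumption, not the actual chart schema:

```yaml
# Hypothetical chart values -- key names are illustrative only.
gitlab:
  webservice:
    deployments:
      api:
        ingress:
          path: /api          # path-matching rule for this deployment
        minReplicas: 5
        maxReplicas: 20
        labels:
          service: api
      websockets:
        ingress:
          path: /-/cable      # Action Cable traffic
        minReplicas: 2
        maxReplicas: 8
      default:
        ingress:
          path: /             # catch-all, lowest precedence
        minReplicas: 5
        maxReplicas: 30
```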
C
I was talking to Andrew in our one-on-one this week about this as well. He's of the opinion that no, we don't want to go crazy with services. We really need to be thoughtful and deliberate about what is a service.
C
We were also talking about how maybe the internal API is a service, because that would have different SLOs than the public API, and that's something to think about as well. But I don't anticipate it going much further than the services we already have defined, but yeah.
C
So I'm really looking forward to seeing where this goes, and the most exciting thing for me is that once we have this, we can really consider getting rid of HAProxy. That's going to be a cost savings for us, as well as a complexity savings: it's one less network hop and one fewer component.
C
No, no, no worries. I just put a note that, maybe with this case, or maybe with this one, ordering is important, because you're going to have a bunch of different routing rules and you may want a catch-all at the end. This is similar to what we do in HAProxy, where, if request paths don't match api, git, https, and the others, then eventually you end up on web, and we may do something similar to that.
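The HAProxy behavior being referenced is, roughly, ordered ACL matching with a default backend; a minimal sketch, where the ACLs and backend names are illustrative assumptions:

```
frontend https_in
    # hypothetical ACLs -- evaluated in order, first match wins
    acl is_api path_beg /api
    acl is_git path_end .git
    use_backend api if is_api
    use_backend git if is_git
    # catch-all: anything unmatched ends up on web
    default_backend web
```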
C
We'll probably want to... yeah, we'll have to think about the things that we want to configure per pod. Like, for example, the blackout window, and the grace period duration for draining. That's one thing that we'll definitely want to be able to configure per pod, and I'm sure there are other ones as well.
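The draining knob mentioned here maps onto standard Kubernetes fields; a minimal sketch, assuming a hypothetical deployment name, image, and preStop hook:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webservice-api                   # hypothetical name
spec:
  selector:
    matchLabels: {app: webservice-api}
  template:
    metadata:
      labels: {app: webservice-api}
    spec:
      # how long Kubernetes waits for in-flight requests to drain
      terminationGracePeriodSeconds: 60
      containers:
        - name: webservice
          image: example/webservice:latest   # placeholder image
          lifecycle:
            preStop:
              exec:
                # hypothetical hook: pause before SIGTERM so the pod
                # can be removed from endpoints first
                command: ["sleep", "10"]
```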
E
Right, and I expect that there will be certain properties that we'll need specific feedback on: yes we do, or no we don't. What I want to avoid is just literally making all of the properties configurable, if at all possible.
E
Great. So essentially it's going to try and take the longest match it can find first, based on a certain number of rules, if those paths are very similar. So the slash route would actually be what ends up at web normally, and that would be the last thing that actually gets caught, because it's the least-matched item.
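A sketch of the matching behavior described, where the host, service names, and ports are assumptions: the most specific path wins, and the bare slash is matched last.

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: gitlab-webservice            # hypothetical name
spec:
  rules:
    - host: gitlab.example.com
      http:
        paths:
          - path: /api               # more specific, matched first
            pathType: Prefix
            backend:
              service:
                name: webservice-api
                port: {number: 8181}
          - path: /                  # least specific: the catch-all
            pathType: Prefix         # that lands on web
            backend:
              service:
                name: webservice-web
                port: {number: 8181}
```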
C
Yeah, I see. So it really depends on the regex and the path, I mean. This is just standard NGINX config, I guess, and that makes sense to me. Does this put a strange coupling between our NGINX chart and our webservice subchart that we don't currently have?
E
None of the ingress usage patterns we generate are NGINX-specific. Technically, when you define paths in the ingress for a given host, they would operate the same in NGINX or HAProxy.
E
We
just
currently
us
have
certain
annotations
that
are
in
there
by
default,
because
we
understand
the
nginx
behaviors
and
that's
what's
currently
in
place.
I
will
chuck
out
there.
We
are
not
against
changing
nginx
out
for
something
else.
If
it's
deemed
necessary.
Okay,
the
original
choice
of
nginx
was
very
strictly.
This
is
what's
in
omnibus.
C
Yeah, I think for us NGINX seems like the least common denominator for an ingress decision, but we could also entertain other options. Before you joined, or maybe just as you were joining, we were saying how this is particularly exciting for us, because we can remove HAProxy once we have these routing rules down in NGINX, or at least entertain that idea. So that would be a really huge benefit of this.
C
Possibly we would have to create an external IP on GKE so that it can talk to Cloudflare, assuming that Cloudflare can only talk to us over the public internet, which is the case now. That could change if we do some sort of peering with them, but I don't know if that's something we can do. But yeah, that being said, this is probably the biggest benefit for us: getting rid of HAProxy, if it's possible.
E
I can understand that. It's just that I know that we, infrastructure-wise, have been using HAProxy for a long time. Do we have any reason to particularly dislike it, or do we just think that it's an additional component right now?
C
Yeah, it doesn't really make sense right now, especially with the addition of, you know, two additional load balancers. We have NGINX, and we have the GKE load balancer now in play for GKE.
C
So it's just a lot of load balancers. We have HAProxy, the GKE internal load balancer, the NGINX ingress, Workhorse, and then finally Rails, so yeah, it would be nice to eliminate one.
E
Let me provide a small bit of insight, and this is one of those things that we need to say, but that we need to better document. The way these things work with Kubernetes is: when you give a load balancer an IP address, you're basically giving it a virtual network IP, and that in turn gets balanced, whether it's through the GKE load balancer, etc., to whichever machines have those pods running.
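In standard Kubernetes terms, that hand-off is a Service of type LoadBalancer; a sketch, with an assumed name, selector, and IP (the annotation shown is the real GKE internal-load-balancer annotation, but whether it applies here is an assumption):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-controller     # hypothetical name
  annotations:
    # ask GCP for an internal (VPC-only) load balancer
    networking.gke.io/load-balancer-type: "Internal"
spec:
  type: LoadBalancer
  loadBalancerIP: 10.0.0.10          # the virtual network IP handed to the LB
  selector:
    app: nginx-ingress               # forwarded to nodes running these pods
  ports:
    - port: 443
      targetPort: 443
```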
C
Yeah, you absolutely need to have that, because that's where the nodes are attached to the GKE internal load balancer, and then NGINX comes after that. Basically, I think having HAProxy in between is just an extra layer that we shouldn't need. In general, sounds great. So I think that's pretty much all. If you think this has legs... well, I guess we'll know after we get some more people to look it over. But yeah, sounds great to me.
E
That would be great to have noted ahead of time. That way, we're not adding, or rather eliminating, properties late; I should say properties instead of features, since we're not eliminating any features themselves. We're just making sure that you have everything you need to be able to configure it appropriately.
A
Per group of behaviors, rather than per feature, so to speak, because we could split the API even further. I think that was one of those discussions some time ago, based on the volume, right, like CI jobs and so on. I don't think we need to necessarily go all the way down there, but it would be nice, like Jason mentioned, to at least log it as a "hey, we thought about this at a certain point in time."
E
I do see something with what Erin is coming up with, and that's how to actually differentiate between external API traffic and internal API traffic. I'll clearly throw out there right now that, because Workhorse and the web service (whether it's Puma or Unicorn; it's Puma) aren't actually decoupled safely yet, the ability to balance internal versus external API is moot until such time. The best we can do right now is tell Gitaly to talk to an internal fleet that isn't on the external load balancer.
C
The internal API right now is only used for GitLab Shell and it's used for Gitaly. Those two services rely on it because we don't run Rails on the Gitaly servers, so having a separate SLO would be nice. That's why we're saying maybe it should be its own service, because, you know, it's a service that we want to be more reliable than the public API, but yeah.
C
I think the internal API can come later. For now I would just focus on what we have, which is api, web, git-https, and then Action Cable, I guess, might be a new one, right?
A
You write a merge request that lasts for a month, and then we see the end result, and then we need to go back. Like, how do we get included in this sooner? Sorry.
A
Fast-forwarding Skarbek's time here, but I think it's important that we participate as early as possible.
E
So I'm going to be talking to DJ here in a few minutes, and the intent is that we can get started on this one and can do an early work in progress that shows "look, this is how it does it." We don't expect to be able to do an upgrade in the first revisions.
E
So
it'll
be
one
of
those
scenarios
where
you
may
need
to
test
it
on
a
test
cluster
just
for
your
crew
to
actually
turn
around
and
bang
on
it
and
see
what
happens
and
if
all
the
traffic
is
flowing.
The
way
you
want
it.
So
what
we
really
need
is
exact
feature
not
feature
property
requirements
that
you
desire
out
of
the
box
and
anything
you
have
foresight
on
and
two
when
we
paying
you
for
review.
B
Cool, okay. So is that going to be enough, then? That will unblock us on 2212?
E
I did specifically split 2212 out, because there's more than one discussion going on there, and this was raised in the multi-large: that there are multiple points here we need to discuss. Is the route that I am proposing actually the best route to go? The reasoning behind Prometheus, and how to scale that, and how to do fleet division. There's a number of other discussions that are actually wrapped up in there that need to go by.
C
Yeah, I think once we... if we do decide to settle on this approach, then I think the parent issue can probably be closed. There are alternatives to this approach, but so far I'm not really enthusiastic about any of them. One alternative would be just to deploy the chart multiple times, in multiple namespaces, one for each service, so that you would have multiple NGINX ingresses, one per service.
E
You can share one ingress deployment, so there can be one NGINX installed with one incoming load balancer, and then you can actually define multiple ingresses on the same host but with different paths, and it will actually combine those together. So you can deploy the chart three or four times, just providing the path argument to the ingress definition itself, and it will spread that load across the various namespaces. All you have to do is set the flag that it should be global.
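A sketch of the pattern described: two Ingress objects in different namespaces, same host, different paths, merged into one routing table by a single shared controller. The names, namespaces, and class value are illustrative assumptions:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: webservice-api
  namespace: gitlab-api                           # hypothetical namespace
  annotations:
    kubernetes.io/ingress.class: "gitlab-nginx"   # shared controller
spec:
  rules:
    - host: gitlab.example.com
      http:
        paths:
          - path: /api
            pathType: Prefix
            backend:
              service: {name: webservice, port: {number: 8181}}
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: webservice-web
  namespace: gitlab-web                           # hypothetical namespace
  annotations:
    kubernetes.io/ingress.class: "gitlab-nginx"   # same shared controller
spec:
  rules:
    - host: gitlab.example.com                    # same host, different path
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service: {name: webservice, port: {number: 8181}}
```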
C
And
that
would
work
across
namespaces
like
you
could
have
the
nginx
ingress
in
the
gitlab
nem
space
and
then
have
well
okay.
Maybe
that
would
be
an
option,
but
I
still
don't
like
it
as
much,
but
I
guess
we'll
see
where
this
new
issue
goes
and
if
we
have
to
entertain
other
options,
then
we'll
do
that.
B
Nice, thanks a lot. Cool, so the next blocker on the blocker list is the cross-AZ network traffic. How's that going, Jarv?
C
It's going well. Over the last two days we've made a lot of progress. Monitoring looks like it's complete on the new clusters; we have Prometheus endpoints that are now finally working. We're basically just waiting for Skarbek to hit the approve button; he did one last round of review, and then we're complete. I'll do a status update at the end of today on that issue, but next we'll be installing the GitLab chart in the clusters.
B
Great, nice, cool. And then next on the project we've got Pages. So I'm assuming no major update on this; it's sort of continuing on.
A
It's continuing on, and it's actually running in production right now. One of the biggest changes they made: if you go to docs.gitlab.com, you're going to get your pages served from object storage instead of from shared storage. But there is a lot of work to go through still to remove this blocker, so it's in progress, if nothing else.
B
Okay, that's good, cool. Okay, so, Jarv, over to you.
C
I think we've already kind of discussed the first item, and I don't think there's anything left to discuss there, so I'll go on to standardizing labels. I brought this up with observability already.
C
I guess we need to decide what we want our labels to look like for these new zonal clusters, in both logging and metrics: whether we overload region, overload zone, use both region and zone, or create a new label named location. I think those are the options we've explored so far.
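The three options being weighed could be written as Prometheus external_labels; a sketch, where the example values are assumptions:

```yaml
global:
  external_labels:
    # Option 1: overload the existing region label with the zone value
    region: us-east1-b
    # Option 2: carry both labels
    # region: us-east1
    # zone: us-east1-b
    # Option 3: introduce a new "location" label
    # location: us-east1-b
```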
C
I think right now we haven't really decided yet, so we're not doing anything, but we need to decide soon. Maybe.
C
Probably the best thing, you know, is that we should talk this through with observability, since they're the ones that are most affected by this. Maybe location is good. Like, I like this; we can use location, but then do we want to have a location-class label, you know, that says whether it's regional or zonal? I'm not sure.
C
Yeah, so the thing is, you could base it on whether it ends with a letter or not, you know. It's... it's...
A
You should definitely bring it up with them and get them to supply options if we cannot.
C
Yeah, so maybe I should open up... I can open up an issue specifically for this, so we can just make sure it's resolved. Maybe we'll just go with location, something like that.
C
Yeah, that's definitely doable. Thursday, yeah, Thursday. Well, yes, it's like, you know, it feels like Friday for some reason, but it's not, it's Thursday, so that'll work.
F
Exciting, but it's Sidekiq; it's boring, but it's going well. I don't really have anything to add to the conversation, you know. The catch-all stuff is slowly chugging along, and the git SSH work was just started, so it's slowly chugging along as well. So that's the update from where I'm working.
B
Awesome, all right! Well, thanks everyone, enjoy the rest of your Thursdays.