From YouTube: 2020-12-17 GitLab.com k8s migration EMEA
A
Awesome, so welcome. This will be the final Kubernetes demo of this year. So let's start with some blockers.
A
Cool, so the first one we've got is around structured logging, unified structured logging.
B
So for this one, Distribution is making some progress. They're continuing the work on the GitLab webservice container.
B
I don't know where that falls; I'm trying to figure out where, because we've got that feedback on one of the issues that's outside of the scope of the actual work that we're trying to get completed. So I asked the question whether they have an issue tracking specifically for that feedback, because it's related to how GitLab Logger works versus just implementing GitLab Logger. I haven't heard back yet, but it is progressing.
B
They've got a merge request that's ready for review that contains the enablement and disablement of GitLab Logger. That would allow us to test enabling this, watch the logs, and make sure they are up to the standards that we need prior to us enabling it in production. At this moment in time I've evaluated the merge request and, you know, it works as desired, but we wouldn't want to enable it in production yet.
A
And is that based on wanting the additional stuff, or based on just what you see so far, that it's just not complete?
B
Yeah, there are some improvements that we need to have implemented inside of the GitLab Logger project; otherwise we're duplicating logs and the messages field currently.
B
API work and the web fleet for WebSockets: I think we talked about it, I can't remember what we decided in the last meeting, but I think we're fine to continue the WebSockets work without the logging work. But we still need the nginx upgrade first; we really need to finish that nginx upgrade and the Helm 3 upgrade first.
A
Yeah, okay, that makes sense. Perfect, okay, cool. Is there anything else anyone wants to talk about on that structured logging work?
C
You added to reiterate our request; I thought that was the comment that he made too.
A
Okay, makes sense, cool. Okay, next up: labels for infrastructure.
B
I received some initial feedback from an initial review, so it's just a work in progress. I got sidetracked because I'm trying to help jarv with the Helm 3 upgrade, so I'm trying...
D
Sorry, sorry to interrupt: Craig Furman is trying to instrument Thanos in the same way that we've done everything else, and Thanos has got some other objects like StatefulSets, and so we had a very productive call this morning where we were like, how are we going to do this? And we sort of settled on something.
D
So it's probably just worth you being aware that he might make some changes, so he might send some merge requests out, and it's probably worth him bringing you in on that as well; I'll let him know. I don't know what timeframes, and if he's off or anything like that. At the moment we've got the kube deployments in the metrics catalog, and now we'll basically have StatefulSets in there as well. So he's going to do some work on that, so just to be aware of it. Cool.
A
Do you need any help, Skarbek, with that, upgrading the Helm chart?
A
Cool, okay, sounds good. And then Pages is still ongoing.
A
Heard, cool. So it's got a due date of January, so hopefully, hopefully not too far away. Awesome. Are there any new blockers?
E
Yeah, sure. So just to let everyone know, we've switched from using the master branch of charts to using the gitlab.com branch, and the reason for this is because the nginx upgrade was merged to master and we're not quite ready for it, and we had a charts change that we needed to take last night, which merged this morning after switching us from master to the gitlab.com branch. So we're going to be working from that branch.
E
For now, until we finish the Helm 3 upgrade, and then we can also upgrade the nginx ingress controller. I think this is the right thing to do. The only alternative would be to revert the nginx ingress controller work that was merged to master until we're ready for it, but that doesn't seem fair, because maybe other people want to take it. So I think we're okay in this state.
A
Does it mean we need to track anything? Like, do we need to make sure we're picking up any other changes or anything like that as well?
E
The Helm 3 upgrade is, you know, asymptotically approaching completion, I would say. It feels like we've made a lot of progress, but we are making smaller increments of progress and not quite getting to the finish line.
E
We fixed a bunch of issues. Currently we're blocked on gitlab-helmfiles. It looks like the gitlab deployment is good; there's one potential issue there which requires us to change a flag, or change an option in ops, when we run Helm for just the Helm 3 stuff, so we're probably going to have to fix that. But I think in general, gitlab is good.
E
gitlab-helmfiles involves upgrading the Prometheus Operator chart, which was deprecated, so we had to go through many chart upgrades. Where we are now with that is: we ran into a bug with the latest version of Helm that we have, which isn't fixed yet, so I just reverted right before I hopped on the call.
E
I reverted to an earlier version of Helm 3 which doesn't have the bug, and linked to the GitHub issue, and now we'll see where we are after that. I'm really, really hoping that we can wrap up pre-prod today, which will position us to move to staging January 1st. But I don't really see us doing staging next week, because of the release and because we need to block all deployments while we do the Helm upgrade.
E
I don't know if that falls on, you know... yeah, I mean.
A
Cool, okay, sounds good, yeah. Let's work out a schedule for when we could actually do the upgrade around deployments, jarv. Maybe let's take that into the delivery Slack, so we can coordinate around other things.
E
Maybe, I mean, if you have time we can pair up again and see what happens. I think, I think we're on a good path to get helmfiles, the helmfiles branch, merged for pre-prod. But if you don't have anything after this meeting, we can Zoom and work together.
D
Yeah, this isn't really anything, but it just occurred to me, and I don't know if you have discussed it as a team, but it never crossed my mind before that a possibly good potential candidate for migration to Kubernetes is Praefect, and that is because it's a fairly straightforward service. It doesn't have any disk or anything like that.
D
It has a database that's provisioned in Cloud SQL, and one of the other things that's quite nice about it is that the traffic is pretty low at the moment, and, you know, we could kind of get in while it's low and low risk and move it forward. I don't know if it's been discussed, but I'd never really thought of Praefect as being a potential candidate, and it could be an interesting one.
E
Yeah, I just don't know if we have chart support for it yet, so maybe we can get ahead of that. If there isn't, we should open up an issue to put it on their radar.
D
That is, that is one of the things... what I was thinking is, you know, if you look at the services, the nice, clean, stateless services that have been running, like... I think it's kind of akin to Registry, it's sort of similar: it's a Go-based single container, and that's been running so well, and, you know, we're sort of starting to become more comfortable. So that's not a full answer, but that's sort of...
D
I share your concerns, but that is sort of... maybe we're getting to the point where we can start working on those kinds of things.
G
Yeah, another problem I have with this is that I still haven't perfected my recipe to clone jarv and Skarbek. That is the main issue, so it's kind of hard to fit that in. Yeah, but I have a long period over the holidays to get to that.
G
I would be very happy to consider it, yeah. I have a couple of concerns, but maybe, actually, maybe we could do both, as in we don't have to choose, right? Like, rather than moving the whole of Praefect right to production, maybe we have something smaller, like a couple of pods that are going to be serving that traffic for a smaller project, and have both the VMs and the cluster serving the traffic.
A
But I think your point, jarv, is also a good one. So timing-wise: API, internal API next, and then we can start working out this one. Does that make sense?
E
And that's the next point, I think. I really like the idea of doing internal API by itself, which could mean that we define a new service that we don't have now, called internal API, and I don't know, Andrew, whether we want to catalog this under API and have it as a shard or something, or what. But right now...
E
GitLab Shell is running in the zonal cluster, and then it uses the service endpoint, so it talks directly to the service without going through nginx. What we're thinking is that, with the new feature in charts that allows you to shard traffic, we can just define another service called internal API. That will create the Service in the chart, and it will allow us to create separate pods with their own, you know, HPA configuration, own labeling and everything, and then we can just use that service name as the internal API for Shell, for Gitaly.
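For illustration, here is a minimal sketch of that shape: one extra group of webservice pods gets its own Service and HPA, and in-cluster clients address it by service name. All names, labels, ports, and autoscaling numbers below are assumptions for the sketch, not values taken from the GitLab chart, which would normally render these objects itself.

```python
# Hypothetical sketch: a dedicated Service and HPA for "internal API" webservice
# pods. Names, labels, ports, and replica counts are illustrative assumptions,
# not the GitLab chart's actual values.
import json

selector = {"app": "webservice", "deployment": "internal-api"}  # assumed pod labels

service = {
    "apiVersion": "v1",
    "kind": "Service",
    "metadata": {"name": "gitlab-webservice-internal-api", "namespace": "gitlab"},
    "spec": {
        "selector": selector,  # route only to the internal-api pods
        "ports": [{"port": 8080, "targetPort": 8080}],
    },
}

hpa = {
    "apiVersion": "autoscaling/v2beta2",
    "kind": "HorizontalPodAutoscaler",
    "metadata": {"name": "gitlab-webservice-internal-api", "namespace": "gitlab"},
    "spec": {
        "scaleTargetRef": {
            "apiVersion": "apps/v1",
            "kind": "Deployment",
            "name": "gitlab-webservice-internal-api",
        },
        "minReplicas": 2,
        "maxReplicas": 10,
        "metrics": [{
            "type": "Resource",
            "resource": {
                "name": "cpu",
                "target": {"type": "Utilization", "averageUtilization": 75},
            },
        }],
    },
}

# In-cluster clients (Shell, Gitaly) would then point at the Service's DNS name,
# e.g. gitlab-webservice-internal-api.gitlab.svc.cluster.local (assumed).
print(json.dumps(service, indent=2))
print(json.dumps(hpa, indent=2))
```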
E
It's a little bit more complicated because we're coming from outside the cluster, and we have to come in through something. So we would have to come in through the nginx ingress for that, but we could also maybe use nginx routing to route it, because I think all paths to the internal API are something like /api/internal or something, I don't remember, but maybe we can use nginx path routing for that.
E
But that's the case now, right? Like, there's no... the only difference between the internal and public API, the only difference, is whether you enter through an internal IP address or a public IP address, but the HAProxy layer is the same for both internal and external. The requests are routed the same; there's no difference, except maybe that we bypass the rate limiting in HAProxy. But actually, what are we doing for rate limiting, the application rate limiting that we're just about to turn on?
E
Is there a whitelist for all internal IPs? Well...
D
What about, what about if we just had, like, a second ingress, or like a load balancer that has no external access? You can only get to it from inside the VPC, because that's kind of how I imagine an internal API to be, right? It's something that only has internal access and it routes everything, you know, directly to the API nodes.
E
But I think we can make it internally accessible, yeah. So we can do that by setting an external IP for the internal service that uses an internal IP address. This is how we do it for Registry, which means that we won't use the nginx ingress at all for the service, and we'll just have an external IP that's an internal IP address. Yeah, this is confusing because of the terminology, but basically yes, we can do that. I think that would be the best way to go.
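For illustration, a rough sketch of that pattern on GKE: the Service asks for a load balancer, but one that is only reachable from inside the VPC. The annotation, names, address, and ports are assumptions for the sketch, not necessarily how the Registry service is actually configured.

```python
# Hypothetical sketch: a Service whose "external" address is an internal (VPC-only)
# load balancer. On GKE this is typically requested with an annotation; the exact
# annotation, names, address, and ports here are assumptions for illustration.
import json

internal_api_service = {
    "apiVersion": "v1",
    "kind": "Service",
    "metadata": {
        "name": "gitlab-webservice-internal-api",
        "annotations": {
            # Ask GKE for an internal load balancer instead of a public one.
            "networking.gke.io/load-balancer-type": "Internal",
        },
    },
    "spec": {
        "type": "LoadBalancer",
        "loadBalancerIP": "10.0.0.42",  # assumed reserved internal address
        "selector": {"app": "webservice", "deployment": "internal-api"},
        "ports": [{"port": 443, "targetPort": 8181}],
    },
}

# Clients outside the cluster but inside the VPC (e.g. VMs) would hit 10.0.0.42
# directly, bypassing the nginx ingress entirely.
print(json.dumps(internal_api_service, indent=2))
```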
D
Yeah, and then we know that, like, even if you request the internal API from externally, you actually go into the external API, and it's not path-based or anything like that, yeah. Which is fine, because we have no evidence, we don't know, that external users could, you know, flood our internal API or anything like that.
F
I can give you an update on where we're at on observability for the Kubernetes migrations.
F
So we have the front-end and storage components for Thanos, so basically the query layer and the newly sharded back-end storage for the Thanos components. That's now up and running, and we're working on productionizing it and getting the readiness review done; that should be done, I'd say, mid-January. We're a little behind on getting Grafana, so the dashboards at dashboards.gitlab.net and dashboards.gitlab.com.
F
That will also hopefully migrate in January, and then, after those components are done, we have a few more bits of the Thanos stack to move over: the rule service and the compaction service. That will hopefully all get cleaned up in January, and then we'll be done with that. And then the next plan, which has been mostly designed but needs a lot more testing, is that we're planning on moving the Prometheus monitoring component that monitors the Chef infrastructure out of Chef and into Kubernetes.
F
And part of the reason, part of that switchover, depends on Consul. So what we'll be doing is: right now Prometheus discovers Chef components using Chef directly, so it executes a Chef search every 30 minutes and populates a bunch of files, and this causes a whole bunch of async problems for moving Chef stuff around. We want to switch that to using Consul, so that Consul controls the discovery, and that allows us to be much faster in terms of discovery.
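For reference, a minimal sketch of what Consul-based discovery looks like in a Prometheus scrape configuration, written here as a Python dict and dumped to YAML. The Consul address, service name, and relabeling are assumptions for the sketch, not the actual production configuration.

```python
# Minimal sketch of a Prometheus scrape config that uses Consul service discovery
# instead of files generated from Chef searches. Requires PyYAML for the dump;
# server address, service names, and relabeling are illustrative assumptions only.
import yaml  # pip install pyyaml

scrape_config = {
    "job_name": "postgres-exporter",
    "consul_sd_configs": [{
        "server": "localhost:8500",          # assumed local Consul agent
        "services": ["postgres-exporter"],   # only targets registered under this service
    }],
    "relabel_configs": [{
        # Carry the Consul service name over into the job label.
        "source_labels": ["__meta_consul_service"],
        "target_label": "job",
    }],
}

print(yaml.safe_dump({"scrape_configs": [scrape_config]}, sort_keys=False))
```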
E
Do you think that the Consul... because we've been running with this, like, janky Chef-based discovery for a while on VMs, shouldn't we just wait until we move to Kubernetes? Because if we are planning to move most of the front end in the next two quarters, is it worth the time to invest in Consul?
F
There's actually already a Prometheus running in gstg that is doing some Consul discovery, so I've already done all the proof-of-concept work. The thing that needs to happen is that all the monitorable endpoints need to register themselves in Consul so that Prometheus can do the discovery work, and that's actually all done: all the standard tier, type, and stage stuff.
F
It's mostly a matter of doing the actual migration one service endpoint at a time: so the Postgres exporter is now registered in Consul, Prometheus grabs that out of Consul, and we're doing the flips. I was planning on doing that after we finish moving Thanos out of Chef.
E
Cool. While I have you here: I'm very worried about cardinality for metrics as it pertains to pods, and I hear about a lot of people having this problem with Prometheus and Kubernetes, where you include the pod as a label and you run into problems. How do we know, how can we be sure, that we're going to be okay as we move the front end, and where are we going to hit problems? Like, how can we predict that?
F
So yeah, the real problem is the total metric load on Prometheus. Right now, the biggest thing that has actually helped has been moving to the zonal clusters, so that we have Prometheus sharded per zonal cluster. The other thing that helps is moving to slightly heavier pods: moving from the one-core to the four-core pods has helped quite a bit. The actual pod churn as we do deployments, that's not a big deal at all for us.
F
We're not churning pods every, you know, every 10 minutes, and actually, if you look at the Prometheus servers running in CI, those have tons of churn and it's just fine.
F
In the last nine months of Prometheus development, they've significantly iterated on the churn handling.
F
So churn's not a concern, but the total number of pods is a concern, and part of the reason that we're planning on moving from the Prometheus Helm chart to Tanka is that the Tanka code, using the operator code directly instead of using the Helm chart, allows us to implement horizontal sharding. That is now supported in the Prometheus Operator, and it'll be fairly easy. What we'll be able to do is spin up, say, 10 Prometheus servers, each one monitoring 10% of the pods, distributed by a hash function.
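As a conceptual sketch of that hash-based distribution (not the Prometheus Operator's actual implementation), each shard keeps only the targets whose hash lands in its bucket:

```python
# Conceptual sketch of hash-based sharding: each of N Prometheus shards keeps only
# the scrape targets whose hash falls in its bucket, so every pod is scraped by
# exactly one shard. This mirrors the idea behind hashmod relabeling; it is not
# the operator's actual code.
import hashlib

NUM_SHARDS = 10

def shard_for(target: str) -> int:
    """Stable bucket in [0, NUM_SHARDS) for a scrape target address."""
    digest = hashlib.md5(target.encode()).hexdigest()
    return int(digest, 16) % NUM_SHARDS

# Fake pod addresses, just to show the even spread.
targets = [f"10.0.{i // 250}.{i % 250}:9090" for i in range(1000)]

my_shard = 3  # which shard this hypothetical Prometheus instance is
mine = [t for t in targets if shard_for(t) == my_shard]
print(f"shard {my_shard} scrapes {len(mine)} of {len(targets)} targets")
```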
F
Thanos will basically take care of that; Thanos understands how to distribute all that query load. And actually this will also improve query load, because Thanos will fan out the data requests to each of the Prometheus shards.
F
That's part of the reason that we're moving the Thanos components off of Chef and onto Kubernetes. We've already implemented horizontal sharding of Thanos storage, so for the Thanos stores there are now 10 pod pairs in gprod, each one serving 10% of the object storage. Cool.
F
So you can already try it out: thanos.gitlab.net is the new query front end. It also now has memcached on the query side, and it has time-based query chunking. So basically, if you query for, say, a year of data, it will break that up into per-day queries.
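A rough sketch of that chunking idea (the real query front end's split size and alignment may differ):

```python
# Sketch of time-based query chunking: a long query range is split into day-sized
# sub-queries that can be cached and fanned out independently. The split size and
# alignment are assumptions; the real query front end may differ.
from datetime import datetime, timedelta

def split_range(start: datetime, end: datetime, step: timedelta = timedelta(days=1)):
    """Yield (chunk_start, chunk_end) pairs covering [start, end) in `step` slices."""
    cur = start
    while cur < end:
        nxt = min(cur + step, end)
        yield cur, nxt
        cur = nxt

chunks = list(split_range(datetime(2020, 1, 1), datetime(2021, 1, 1)))
print(len(chunks), "per-day queries for one year of data")  # 366 (2020 is a leap year)
```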
F
And so we're moving toward a much more horizontally scalable Thanos front end, and you can give that a try. It's up and running, but, you know, we don't have any monitoring on it yet, so if you break it, I don't care.
E
So for the short term, Ben, I think what we're looking at, because my concern is that we increased the number of Puma workers from two to four to halve the number of pods, is that we're probably going to be right back there in January or February when we bring on the API, because we're going to be looking at a 2x, 2 to 3x, pod increase, yeah.
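The back-of-the-envelope math behind that concern, as a tiny sketch (the worker counts and multipliers are placeholders, not the real fleet sizes):

```python
# Back-of-the-envelope: for a fixed total number of Puma workers, the pod count
# (and so the per-pod metric series Prometheus has to track) scales inversely with
# workers per pod. All numbers are placeholder assumptions, not real fleet sizes.
total_workers_needed = 2000  # assumed capacity target

for workers_per_pod in (2, 4):
    pods = total_workers_needed // workers_per_pod
    print(f"{workers_per_pod} workers/pod -> {pods} pods")

# Bringing the API fleet into the cluster multiplies the pod count again.
api_multiplier = 2.5  # midpoint of the "2x to 3x" mentioned above, assumed
print(f"with the API on board: ~{int((total_workers_needed // 4) * api_multiplier)} pods")
```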
A
Cool, take care. All right, thanks everyone, enjoy the rest of your year, and see you, see you next year. Cool, take care.