From YouTube: 2020-11-05 GitLab.com k8s migration EMEA
C: So, cool, things are progressing. The first one, 22... sorry, 2334, is scheduled at the moment for 13.7.

A: Yeah, we need to wait for the next one on that list, I think.

C: Yes, so 2334: hopefully Jason's just about to pick it up, so hopefully we'll see this coming in the next week or so.

C: Cool. And then on the Prometheus metrics, this one has a possible solution identified, but...
D: The ones I speak of all the time are the SLI metrics; I don't speak of any other ones. So for that we've got something called GitLab Shell, and we measure RPS, or actually all we have is RPS. So there you go, but yeah, those are the only ones that I always focus on, and that is based on... let me just check that quickly. So for that we're actually just using HAProxy as a proxy.
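For illustration, a minimal sketch of what an HAProxy-derived request-rate query for GitLab Shell could look like, assuming the stock haproxy_exporter series and a placeholder backend name; neither is confirmed in the meeting:

```promql
# Hypothetical GitLab Shell RPS, using HAProxy as a proxy for SSH traffic.
# haproxy_backend_sessions_total is the standard haproxy_exporter counter;
# the backend name "ssh" is an assumed placeholder.
sum(rate(haproxy_backend_sessions_total{backend="ssh"}[5m]))
```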
D: So there's a component called GitLab Shell, and then if you go down to the GitLab Shell component detail, it gives you kind of the raw metric rather than the recording-rule version of it. Okay.
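To make the raw-metric-versus-recording-rule distinction concrete, a sketch; the recorded series name here is hypothetical, not taken from the meeting:

```promql
# Raw metric, what a component detail panel would evaluate directly:
sum(rate(haproxy_backend_sessions_total{backend="ssh"}[1m]))

# Recording-rule version: Prometheus periodically evaluates the same
# expression and stores it under a precomputed name, e.g. (hypothetical):
# gitlab_component_ops:rate{component="gitlab-shell"}
```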
A: I proposed that as a sidecar container; I just haven't done any research or any work towards that solution.

A: It doesn't seem terribly difficult off the cuff, but I think I'd rather focus on what we get out of our logs, create a dashboard there, and move forward. That way it's less stuff that we need to worry about and maintain inside of our Helm charts, plus a secondary container that we'd need to have built somewhere that holds a configuration file.
C: I don't think so. I think we discussed this a few weeks ago, and we were saying that one of the big things is unmodified Helm. So, precisely.

C: Okay, so does that give you enough of a way forward, Skarbek, for that one? Yes? Yeah, great, great. Maybe...

B: Maybe we should revisit this after we get to canary, because we're blocked on the logging stuff anyway. Until that's resolved we're not going to be able to go past canary, so we could just revisit it then.
B: We used to; I mean, that's back when we had canary.gitlab.com. Is that not around anymore? I don't know if the DNS has been deleted.

B: So what does that... how would we... I guess you could tell GitLab QA to run against it instead.

C: And then we also have the structured logging. Yeah, so this one is not even scheduled yet.
C: So I have to ask if we can get that done. So Jarv, you've added a comment.

B: Yeah, I mean, Andrew, I think my understanding was that the logging from sshd is almost doubling our log volume, but I could be wrong. Isn't that what we were saying, Skarbek?

A: From Mikel: it's the size of the events that we care about most; the SSH logs are significantly smaller, but there...
B: ...them separately, right? Because we're not collecting those logs now on the VMs, but it might be useful for us to send them to Elasticsearch in a separate index.

B: I would say let's keep it on the blocker list for now, because I think there are other reasons why we want this.

C: Cool, I think that makes sense. And the one I was going to ask about was this one, Jarv; I saw you mentioned it. Should we be adding this onto the list as well?
B: Yeah, it probably should be. I mean, it's a blocker; I think all of our monitoring issues are blockers at this point. We've typically only been using the blocker label for stuff that's external to what we're working on. Since Andrew's working on this, supposedly, since he's the assignee, we could just say that he's a blocker.
D: Let's take an example. At the moment... let me just get a quick copy of the sort of link that we're talking about, because I think people will probably understand it a bit better if they can actually see what we're talking about.

D: But what would be much better is if a computer could associate this metric with a running service and say, well, this is the Workhorse container for the git service on the main stage, and, you know, memory is blowing up. Then we can do that in our graphs, and also do it without really horrible regular expressions that try to match things inside here. That's kind of the problem that I've been thinking about. And then, you know, on the dashboards...
D: We can just put graphs in to show what's happening with these things, you know, whether they're getting evicted and all of that, into the graphs that we generate for Grafana. But I think the one thing that would really help is if this container label could be unique across stages.

D: You know, all of our labels, like stage and shard, or at least type, because stage we can kind of map with the namespace, I guess.
B: Yeah, but not every one of these... this metric is also associated with nodes too, and I think that's where it gets tricky, right? Like, if you do this query without the container label, let's do like... yes, it...
D: So what I was thinking, and tell me if this is wrong, because it could well be: for the service metrics, we only care about the containers that we specifically know are part of the service. So we have things like this over here, which is just... because by default cAdvisor, I think, just exports every single cgroup on a box, right? So you get quite a lot of junk in here, and for GitLab...

D: What we've done is filter it specifically down to the GitLab cgroup that we run on the Gitaly nodes. But so there's...

D: ...a lot of stuff in here, like this ID; this is like the root cgroup or something, I don't know. So it gives you a lot of junk, but we know what some of them are. You know, here's the Docker service; we probably don't need to monitor that ourselves.
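As a sketch of the filtering being described, assuming the standard cAdvisor/kubelet series and label names; the VM-side id pattern is illustrative only:

```promql
# Keep only real, named containers. container="" covers the bare cgroup
# slices cAdvisor exports for everything on the box, and container="POD"
# is the pause container; both are the "junk" mentioned above.
container_memory_working_set_bytes{container!="", container!="POD"}

# The VM-side equivalent filters down to one cgroup, something like:
# container_memory_working_set_bytes{id=~"/gitlab.*"}
```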
D: Yeah, I think that'll work. The one question I had is... do pods that are... sorry, do containers that are running in different namespaces... will the canary containers run in the same node pool, or do they run in a separate node pool? They run in the same pool? They do? Okay, so it's just the namespace that separates them.

D: So we could also map namespaces to stages, because that's a consistent sort of one-to-one mapping. We don't have any way to map shard, but we can probably get away without that, unless we just make it so that the container name is almost generated, and it's always like type-stage-shard; kind of an automatically generated thing like that.
B: I have an idea for shard, which is that we also have node labels in addition to pod labels, and if we could add node labels to all of these metrics, then that would give us the shard, because we run shards in separate node pools.
A: True, what's inside of here is a pod label. I wonder if we could leverage... I don't know how to do this, because I'm not familiar enough with it, but we could leverage a join query and supplement this information with what comes out of kube-state-metrics, which should have all the labels that we've been tracking.
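A sketch of that join-query idea, assuming kube-state-metrics' kube_pod_labels series; label_stage and label_type are placeholder pod labels, not confirmed names:

```promql
# Many-to-one join: copy pod labels from kube-state-metrics onto a
# cAdvisor metric. kube_pod_labels has value 1, so multiplying leaves
# the left-hand values unchanged while group_left pulls the labels in.
container_memory_working_set_bytes{container!=""}
  * on (namespace, pod) group_left (label_stage, label_type)
    kube_pod_labels
```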
D: So let's say we want to associate some of these cAdvisor metrics with a pod. So... sorry, no, not with a pod; with an application.

D: That's running, you know, so say the git service, canary stage, Workhorse pod. And so I think that what Skarbek just mentioned is, like, super... that sounds like, if this...
F: Because, if we think that this information is already available in those metrics, just scattered around, part of it in the host name, part in other metrics or other labels, or things like that, we may consider using Logstash, so that we collect everything at the Logstash level. We run the metrics through regular expressions and allow Logstash to change the information, dropping things that we don't want, so that when we reach Elasticsearch it's in the format that we prefer.
D: Right, but we specifically want it in Prometheus, because we want it for our alerting and Grafana. Okay. Okay, so this looks... this is the most promising thing I've seen so far, Skarbek. I mean, I have to figure it all out, but can we get extra labels on this? Or was this kind of fixed?
B: Look at the two queries I just sent you, Andrew: one for pod labels and one for node labels. I think that's what you want.
D: Oh, this is super cool. Okay, and does this have type on it? Yeah? Okay, then. I think... wait. Yes. It might be a really nasty big query, because we'll have to have one outer join for each label that we're trying to move across, but that doesn't matter, because we can kind of generate it or whatever. But yeah, this is okay. I can move forward now. Thank you.
B: Yeah, I mean, if we want to do this join, I guess it's more expensive than doing the relabel, but if it's okay... Also, the node label query, Andrew, also gives you the shard name, which could be used, right? Unfortunately, the naming is unfortunate: we used the name "type". So if you look at label_type for the node label, you'll see the shard name there. We could change that, but then we'd have to rebuild the node pools, where...
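A sketch of the node-label variant, assuming kube_node_labels from kube-state-metrics and that the cAdvisor series carry a node label (that part depends on scrape relabeling):

```promql
# Pull the shard onto container metrics via the node's labels;
# label_type is the node label that, confusingly, holds the shard name.
container_memory_working_set_bytes{container!=""}
  * on (node) group_left (label_type)
    kube_node_labels
```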
B: You'll see it there. I don't know if it makes sense to use this, I mean, for staging.

B: Yeah, for staging and production we have node pools dedicated to shards, so that could work, but not for pre-prod. So maybe that's not so useful, but...
D: Look, even if this doesn't get us all the way, it gets us a whole lot further, and if we've got label and stage, and, you know, a few of them, that's a really good start, because even on most of the service overview dashboards we don't actually do anything with shard.
B: Yeah, I still would like to see if we can use relabels, though. That would be ideal, right? For metrics that have a pod, I'd like to pull out the pod labels, as well as even the node labels; and getting the node pool name would be super nice, because then we could start looking at metrics by node pool.
D: I've only ever used relabeling with regular expressions, like fancy regular expressions where you're pulling out subgroups and stuff like that. But if there's a way you can get the labels from some other Prometheus metric, then you could do that. I've just never done that; I don't know if that's a thing. It could well be, I just don't have any experience with it, yeah.
D: Yeah, we can, yeah. I feel like I need to go and play with this, and then, you know, maybe next week we can do another demo. Well, maybe even before that we can have something to play around with. So, just to kind of explain: is this blocking you because you don't feel like you have enough information during an incident to dig into things? Is that where the blocker is?
B: Yeah, the biggest problem we have now is that we don't have a good view, by service, into memory and CPU requests and limits. We can look at that by doing Thanos queries, and we can look at it in the Google console, but I think...
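A sketch of the kind of per-service view being asked for, assuming kube-state-metrics' resource series; the exact metric names vary across kube-state-metrics versions:

```promql
# Memory working set as a fraction of the configured limit, per
# namespace and container; swap resource="cpu" for the CPU view.
sum by (namespace, container) (
  container_memory_working_set_bytes{container!=""}
)
/
sum by (namespace, container) (
  kube_pod_container_resource_limits{resource="memory", unit="byte"}
)
```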
D: Yeah, and we can put these in as saturation metrics and do all of those cool things as well. Just knowing about the kube_pod_labels has kind of freed me up to be able to move forward much more now, so I'll see what I can do and I'll put some MRs together, but I've been feeling super blocked until now.
C: Okay, easy to see that blocker label is so effective. Okay, and then the other one we have is projects: we've got Pages still going on, so that's progressing as expected, I believe. So I will... so, this: we need to do a quick retro on the Q3 OKRs. Oh, we can do that async, though, so I'm happy to leave it to the end or move it async as needed. So let's move on. So Jarv, does that answer all of the things you need for service labels?
B: Yeah, I just wanted to check with Skarbek to see if there's anything we need to update with regard to charts.

B: Okay, all right. Maybe I'll just open up an issue to make sure of that.
A: I could just give an update on what I've been working on. I plan on migrating, or starting the evaluation of, batch six for Kubernetes, the Sidekiq catch-all. So this is all the stuff that was part of the cloud build logs.

A: Hopefully I can start doing that today. And then, like I said earlier, for the GitLab Shell stuff I'm going to start creating the necessary dashboards in Kibana for metrics, and, you know, making sure we have all the necessary data prior to trying to start taking a sliver of traffic inside the Kubernetes realm.
C: Cool, that's fun. Anything else, Jarv? I feel like you should get to shout about removing the NFS store.

B: Yeah, but that was easy stuff, just removing the mounts. It's all done now, so that's, you know, a happy thing. We just have Pages left now.
C: Nice, big milestone, now, awesome. So yeah, we have a quick retro for the Q3 OKR, so literally just a few minutes. What I thought might be useful: the format is basically what was good, what was bad, and what do we want to try next? I'll just drop it in this issue so we can do it quickly here.
C: So I think, for sort of context on this: we knew it was going to be a stretch goal as we went into the quarter.

C: We obviously hit a few unexpected things; in particular, I think needing to move to multi-cluster and the alt-ssh stuff are probably the two that stuck in my mind, but I'm sure there were others as well. So yeah, it'd be good to get people's thoughts on what in the last quarter we think went well. Actually, before that: are we still unmodified on Helm? Have we had to modify anything?
C: That's absolutely fine. Yeah, it only has to be a few minutes, so take this... I think that's absolutely fine. The "try" is the bit that I think might be interesting, so maybe we can review that in next week's demo, because it might be interesting to think about how we try things going into this next quarter, where we're going to have a very similar OKR.
A: I guess a quick question regarding the process for this: this issue is closed.

A: All right, should I just comment on the issue, and eventually it gets pushed up into the issue description at some point?
C: Yeah, you can do, or in this doc, whichever's easiest. Just go ahead and do that, and I'll write them into the description once we have everything. Awesome. Is there anything else anybody wants to cover today?