From YouTube: 2021-06-23 GitLab.com k8s migration EMEA
B: ...because of packing it's all a little bit hectic, but everything else, I think, will work out. I'm also still working on the observability stuff, to finish it all up today, but I think it will be a little bit later, because I need to do something else in between, and I will put a summary and everything in.
A: Perfect. Yeah, with the incident going on, I know that will definitely affect things, so just leave a handover on things, and, if you could, make sure the epic has a clear status or something. And someone else may be able to pick things up whilst you're out. Although I'm actually not sure; it may just be us today, with discover being sick. So let's, let's begin: you've got the demo, luckily.
B: I wanted to show some interesting improvements in observability that we have now, compared to what was missing before. One interesting thing is the node scale-up rate and scale-up error ratio dashboards, which were just merged yesterday.
B: Actually, this is exactly the corrective action for the things that happened with the SSD problems, right: when we couldn't scale our node pools, we didn't have alerts for that, and we couldn't really see it nicely. With this dashboard... I can just share my screen.
B: It's in the general kube service dashboard: you just scroll down, then you see both dashboards here, the scale-up events and the error ratio, and you see errors popping up sometimes. You'd need to research why that's happening. It's fairly new, so I haven't really used this metric yet to look into what's happening right now, but...
B: That should show us, for all the scale-up events, how many of them failed; it's the error ratio. So if you look shortly... maybe let's make this a little bigger.
B: The other option is to look into the other one, the scale-up rate, because that one shows you how many node scale-up events we had, right. So you can see here we scaled up probably four or five nodes, and you see it going up and down. And we could, of course, use Thanos to look into finer-grained metrics, to see which cluster or which...
B: ...instance. If I filter this down to api, for instance, then... oops.
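(For reference, the queries behind panels like these could look roughly like the sketch below. It assumes the upstream cluster-autoscaler metric names, cluster_autoscaler_scaled_up_nodes_total and cluster_autoscaler_failed_scale_ups_total; the actual dashboard definitions aren't shown in the meeting.)

    groups:
      - name: node-scale-up-observability
        rules:
          # Node scale-up rate: scale-up events per second, averaged
          # over the last hour.
          - record: cluster:autoscaler_scaled_up_nodes:rate1h
            expr: rate(cluster_autoscaler_scaled_up_nodes_total[1h])
          # Scale-up error ratio: fraction of scale-up attempts that failed.
          - record: cluster:autoscaler_failed_scale_ups:ratio1h
            expr: >
              rate(cluster_autoscaler_failed_scale_ups_total[1h])
              /
              rate(cluster_autoscaler_scaled_up_nodes_total[1h])

(Filtering by cluster or environment, as demonstrated via Thanos, would just add label matchers to these expressions.)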
B: This should be canary here... no, not that one, yeah. And another one which is nice, or interesting: we previously already had good memory saturation metrics for our VMs, but we were missing this for Kubernetes. But now we...
B: ...now have this in Kubernetes too, based on container limits. And because in most cases this covers things like Workhorse, which runs in its own container, I think we can drastically reduce the memory requests there, for instance. So this is a nice signal for tuning memory requests. It's not problematic; it's just that we could save a little bit of memory. We're mostly CPU bound anyway, so it's not really an issue right now, but it gives us some visibility at least.
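(A sketch of what such a per-container memory saturation query could look like, assuming the standard cAdvisor and kube-state-metrics 2.x metric names; the meeting doesn't show the actual definition.)

    groups:
      - name: kube-container-memory
        rules:
          # Working-set memory per container as a fraction of its memory
          # limit; containers sitting far below 1 are candidates for
          # lowering their memory request, as described above.
          - record: container:memory_limit:utilization
            expr: >
              sum by (namespace, pod, container) (
                container_memory_working_set_bytes{container!=""}
              )
              /
              sum by (namespace, pod, container) (
                kube_pod_container_resource_limits{resource="memory"}
              )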
A: Yeah, it's great! That's nice.
B: And another one, which I'm still working on, so it's not finished yet, but I have a demo dashboard for it: how well we are using requests on our nodes. Let me see... let's go to saturation... where is it... overview, right. Those two panels here are new, and they take a little bit of time to load: node CPU request utilization and node memory request utilization. You see them for each of the nodes in the api dashboard. We saturate the user nodes mostly on CPU and don't make much use of the memory that we have.
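(These two panels plausibly divide the requests scheduled onto each node by the node's allocatable capacity. A sketch, again assuming kube-state-metrics 2.x metric names:)

    groups:
      - name: node-request-utilization
        rules:
          # Fraction of each node's allocatable CPU requested by pods.
          - record: node:cpu_requests:utilization
            expr: >
              sum by (node) (kube_pod_container_resource_requests{resource="cpu"})
              /
              sum by (node) (kube_node_status_allocatable{resource="cpu"})
          # Same for memory; low values here alongside high CPU request
          # utilization match the CPU-bound user nodes described above.
          - record: node:memory_requests:utilization
            expr: >
              sum by (node) (kube_pod_container_resource_requests{resource="memory"})
              /
              sum by (node) (kube_node_status_allocatable{resource="memory"})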
A: So if we tuned the memory requests per node, then things would go up, and that'd be looking healthier.
B: Yeah, yeah. That's it for the new things. One big thing is still pending: the one where we need to set silences for alerting. It's for the CPU request saturation: we could merge it, but then we would get alerts, and so we need to...
A
And
then
we
just
complete
the
whole
thing
right,
because
I
don't
expect
anyone
will
have
time
to
to
change
that
to
like
deal
with
that
stuff,
whilst
you're
out
and
since
you've
already
got
all
the
mrs
open
like
it's
only
like
correctly
wrong
sounds
like
we
know
what
we
need
to
do.
It's
just
a
case
of
dedicate
some
days
to
actually
you
know
moving
those
strings
through
monitoring.
It's.
A: Yeah, exactly, yeah. Because there's a couple of other things... I'm just looking through... in fact, I'll just show my screen, so you can stop sharing. So, just looking through this, the epic, 340.
A: And so there's a couple of other issues we have on here which, I think, may not block us from making some progress on the web migration, but we'll want them before too long: that's going to be the NGINX dashboards, and the other one, I think, will be Consul, simply because those were two things that bit us on the api stuff. So I think we might make progress on those two things.
A: I want to focus this stuff on what we actually need for the web migration. And then there's probably some stuff, like revamping the Kubernetes metrics, that feels like a mini project in itself, and we should prioritize it, but, you know, maybe we do that after the web migration. Fixing the Helm logs as well: that's probably a reasonably large piece of work.
A: But the other thing is, we don't have to do this alone, right? So we're also going to start pairing up and see where we can coordinate with Observability, because there's so much overlap on things like Consul, NGINX, even Helm, logs and things, that there may be some of these things they can help us with.
A: It's difficult for other people to get involved in this stuff, because we're changing so much all the time, but I think having the issues land in the right place is probably the great first step, even if we end up being the ones who do a lot of the work. But yeah, we absolutely need to start working out what is Observability, what is Datastores, what is core infra, and what is migration work, because they're not always the same thing.
B: Yeah, I mean, it's really about getting knowledge and experience with how we are working with it, right? Because it takes a while until you get into it and understand it. Like the thing with the node pool issues: that wasn't related to any Kubernetes migration or anything, it truly was a GCP issue, and we need to have the skills to fix those, right. And that's...
A: I totally agree, yeah. Even if it's just pairing up and having people kind of following along. So yeah, I totally agree. On that incident, I'm curious to know whether we saw any impact from switching over to HDDs.
B: We normally write to some temp directories, and I think we have some emptyDir mounts in Kubernetes for that, so that really goes to the node disk there. That's the only thing we should maybe actively look into, if we see something that could look like a performance degradation anywhere, right. But nothing alerted, so I don't see anything immediately.
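(An emptyDir mount of the kind mentioned here is declared as in the sketch below; unless medium: Memory is set, it is backed by the node's disk, which is why the SSD-to-HDD switch could matter for it. The pod name, image and paths are made up for illustration.)

    apiVersion: v1
    kind: Pod
    metadata:
      name: example-webservice      # hypothetical pod, for illustration
    spec:
      containers:
        - name: app
          image: example/app:latest # placeholder image
          volumeMounts:
            - name: tmp
              mountPath: /tmp       # temp files written here...
      volumes:
        - name: tmp
          emptyDir: {}              # ...land on the node's disk (now HDD);
                                    # "medium: Memory" would use tmpfs instead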
B: Here we have issues, right? Let's look into that one. Yes... over time, so let's zoom out to 24 hours.
A: In this case, it's true, yeah. The other thing: do we see any change? Are there any pod startup times, or is there anything else that we've noticed, or that could potentially be affected by switching?
B: Yes, this is the other thing: I think node startup times could be affected, so that it's taking longer to scale up nodes, because they run from HDD instead of SSD, yeah. For pod startups, I have the feeling that it shouldn't be as bad, because I think what happens is that you download container images, right, and then they are spun up in some containers, but as soon as they land on disk, they are in the cache, right. So...
B: ...and I think in most cases they really should come from memory instead of being fully read from disk when they are used. So I hope it's not really slowing us down much, and anyway, the startup time for our pods is mostly related to Rails taking a long time to start up.
B
This
then
it
would
be
just
a
small
percentage
of
the
real
startup
time.
I
think
I
think
the
more
something
really
is
a
note
scaling
time
is
a
little
bit
longer
and
that
maybe
sidekick
disc
rights
could
be
affected.
A: Okay, yeah, that's the same. So what I'm kind of wondering is what we should do next. Do we stick where we are? Should we be reverting any of these changes? What would be, in your opinion, the preferred next steps?
B: I think we should stay where we are. I mean, there was this discussion that we switched over to using the new node pool names, instead of the type label, to assign our pods, but I think it doesn't make a big difference for us. It doesn't help much with the scheduling bug that we found, but now that we are using the node pool names, it's easier to assign pods to a certain pool. So I think it's better as it is at the moment.
B: Yeah, so, and I think that can stay like this. And we plan anyway to go over to using taints to assign pods to nodes later, so I think that would be the way forward, which is not related to this incident at all.
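(A sketch of the two assignment mechanisms being compared, with a hypothetical pool name "api". Today pods are pinned by the GKE node pool label; the plan described is to additionally taint the pool so only pods carrying a matching toleration can schedule there.)

    # Current approach: select nodes by the GKE node pool name label.
    spec:
      nodeSelector:
        cloud.google.com/gke-nodepool: api   # pool name is illustrative

    # Planned approach: taint the pool (via Terraform / gcloud) and give
    # the pods a matching toleration; usually still combined with the
    # nodeSelector so the pods also land only on that pool.
    spec:
      tolerations:
        - key: dedicated            # hypothetical taint key
          operator: Equal
          value: api
          effect: NoSchedule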
B: And there was another thing I wanted to say... yeah, we need to delete the old node pools, and, yes...
B: ...I don't see issues right now, so I think we can do that, but it's not really urgent, right. So...
B: Yeah, and we still have a dirty Terraform plan and [unclear], which is ugly.
B: I have this MR; I need to see if it was approved. Meanwhile, I linked it in the incident channel. Let me have a look.
B: This was that one. Once this MR is merged, we can do another Terraform run to use the new module version here, and that should hopefully fix it. Skarbek, three hours ago... nice, I didn't see that he was already approving it.
B: So I can probably get this resolved today as well.
A: Awesome, that'd be great. If there's any kind of follow-up, or, you know, recommendations for future things or issues we should think about, just drop them all in on that issue, and we can see what the next steps look like.
B: Yeah. By the way, it was so interesting that all of this happened and, meanwhile, we released the 14.0 version, right? That was really cool. I didn't even notice much of it; it just worked.
A: Yeah, yeah, absolutely, very fortunate. The timing was great, and we had everything tagged and ready, so yeah, it was good. 14.0 was super, super smooth, and there was also great collaboration on this within our team. It would have been even better if we could have had some non-delivery engineers seeing some of it, I think, but it's super hard, right: lots of changes, and fast-paced, so that's also fine. But yeah, no, it was great having you and Graham and Skarbek and Jarv on this stuff. So...
B: And you need to deal with the next patch and security release.
A: That's right, yes. Today it was quiet, so it was a good start; at least the first day was good, which eases you in, right? So yeah, we'll start patching tomorrow, so it should be good.
B: Okay, yeah. I hope this works out fine for you and you have a quiet month with that.
A: Yeah, thank you. Alright, and I'll hand over to you when you come back, so you can join the fun. Awesome. Is there anything else we need to go through on any of this stuff?
B: No, I will just finish as much as I can. I guess some of the MRs will get reviews from people like Andrew, and maybe some will be rejected or need changes, so I can't finish them all today, but I will link all of them in the issue and the epic, and we can find out together what is left.
A: Awesome, sounds good. All right, excellent! Well, good luck with that stuff! Thanks for the work this week, and enjoy your time off.