From YouTube: 2020-06-26 GitLab.com K8s migration EMEA
B: Cool, so let's just see how it goes. Well, we've been there and we'll see if it's as useful. For the demo I thought maybe we can look at the urgent-other shard, but it's kind of a boring chart to look at, because we're not doing any replica scaling and there isn't a lot going on in general, because we're, you know, a bit over-provisioned.
B: The first thing is that we're fairly underutilized on this. Well, I don't know what this big CPU utilization spike is here, but you know, we're seeing these spikes that I think are happening during deploys, because that's when we're bringing up all-new pods. But generally speaking, it feels like we're a bit over-provisioned; we could probably go with a smaller instance type.
B: We were looking in the last meeting at these interesting disk utilization spikes. If you look over, like, the past six hours, maybe you'll see it more clearly. Yeah, so this is curious, right? Like, what's going on here? This is sda1; is this disk being utilized by the pods?
D: It's a read volume, and this is what I was talking about. This was in the VM world, not in the Kubernetes world, but we put monitoring on it because we wanted to see if disk utilization on all of the disks was beyond the spec that Google gives us. It was a super noisy alert and it wasn't very actionable, because we saw lots of breaching of the thresholds, but it didn't actually make very much of a difference, so we just eventually turned it off.
F: Now, I mean, it really depends on how the nodes are handled, because this could be the underlying disk where, for instance, the images get downloaded and stored. So during a deploy we do download images of the things that we run, for instance. But also what I'm thinking, and I'm not sure if this behaves the same on Kubernetes, but in plain Docker basically every disk is an overlay filesystem layer.
F: So if you start writing to something that is not declared as a volume in the image, you're basically writing to the underlying disk, because it creates snapshots; every command just writes the state of the filesystem after the command is executed. I don't know, maybe it's completely unrelated, but it really depends on what is on those disks.
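The point about declared volumes versus the container's writable layer can be sketched in a pod spec (a hypothetical example, not our actual manifests; all names and images are made up):

```yaml
# Hypothetical pod spec illustrating the volume-vs-container-layer point.
apiVersion: v1
kind: Pod
metadata:
  name: scratch-demo                 # hypothetical name
spec:
  containers:
    - name: app
      image: registry.example.com/app:latest   # hypothetical image
      volumeMounts:
        - name: scratch
          mountPath: /tmp/scratch
      # Writes under /tmp/scratch go to the emptyDir volume.
      # Writes to any other path inside the container land in the
      # container's writable (overlay) layer, which lives on the
      # node's root disk and can show up as sda1 I/O.
  volumes:
    - name: scratch
      emptyDir: {}
```

With something like this, heavy scratch writes can be pointed at the emptyDir rather than silently landing in the overlay layer on the node's boot disk.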
B: But we should come up with a scheme that we use inside and outside, and there's a whole discussion. It's definitely related, because if we switch over to using the pod name, for example, in the instance label, that'll make it a lot clearer than using IP address and port, which is what we're using at the moment, and which makes it kind of difficult to figure out what's going on. Yeah.
D: Well, outside it'll be node name colon port, and then inside it will be pod name colon port. But at the moment in Kubernetes it's IP address colon port, and that's, like, no good. And then what I would also prefer, and I don't think Ben and I see eye to eye on this yet, is a label that defines where something is running.
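For the instance-label idea, one common way to do this, assuming Prometheus with Kubernetes pod service discovery (a sketch with a hypothetical job name, not our actual scrape config):

```yaml
# Hypothetical Prometheus scrape config: replace the default
# IP:port instance label with pod-name:port for in-cluster targets.
scrape_configs:
  - job_name: sidekiq                # hypothetical job name
    kubernetes_sd_configs:
      - role: pod
    relabel_configs:
      # __meta_kubernetes_pod_name and the container port number are
      # provided by pod-role service discovery; join them with ":".
      - source_labels:
          - __meta_kubernetes_pod_name
          - __meta_kubernetes_pod_container_port_number
        separator: ":"
        target_label: instance
```

With a relabeling like this, the instance label reads pod-name:port instead of IP:port, so a graph can be tied straight back to a pod.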
D: Scopic and I just had a call about that, which was about making sure that we apply the same label taxonomy. You know the issue I was thinking about.
B: You know, I guess Craig Barrett was taking us up on Rails console access, and maybe we can work together with reliability on that. But yeah, I think we need to... I hate to just add it to the agenda, because I'm not sure if that will go anywhere. It might be better, Amy, maybe we should get together in a smaller group with reliability to discuss it, because I would like to share the work, but I'm not sure how.
B: The idea we were talking about, or that was mentioned, is that kubectl will soon have proxy support, and we were discussing, maybe keeping with the tunneling idea, having an ephemeral proxy, or even an ephemeral SSH console server, that you would spin up temporarily to connect to production. Having it be ephemeral would solve the problem that you don't want to keep it running.
A: I don't know if I'll have anything to demo, but I do have some goals. Today we are moving one of the known NFS-requiring worker queues into catch-all, so I'm hoping by the end of the day I'll have a good evaluation as to whether or not we're still writing anything to NFS with the urgent-cpu-bound shard.
B: For me, well, I was really hoping to have this console thing demoed by next Friday, but I think I might be in corrective-action land for the next day or two, or early next week, so I'm not sure how much work I'm going to be able to get done. It's semi-blocked right now; I actually marked the issue as blocked, because we don't have a way to regenerate keys, because of the CA. We're currently using expired certificates in production for consoles.
B: At some point I hope to decide... I really hope to. I think for the key regeneration I'm hoping reliability can take that up; I don't particularly want to do that, but if we have to be closely involved with it, we'll see. I think Dave has already put the reliability label on it, and I think it was added to a milestone, so maybe that'll be fine.
C: For Scalability, I'm just trying to get the latest status. They're still working on optimizing database connection pool usage, and they reckon by the end of today, hopefully, they will have managed to configure the database connection pooling in Kubernetes to be similar to how it's configured on the VMs.
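As an illustration of matching pool settings between the VMs and Kubernetes (a hypothetical Deployment fragment; the variable name and value are made up, not GitLab's actual configuration):

```yaml
# Hypothetical Deployment fragment: set the Rails DB pool size via an
# environment variable so Kubernetes pods match the VM configuration.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webservice                   # hypothetical name
spec:
  selector:
    matchLabels:
      app: webservice
  template:
    metadata:
      labels:
        app: webservice
    spec:
      containers:
        - name: webservice
          image: registry.example.com/webservice:latest  # hypothetical image
          env:
            - name: DB_POOL          # hypothetical variable read by database.yml
              value: "10"            # match the per-process pool size on the VMs
```

The design point is just that the per-process pool size should be identical in both environments, so total connections to the database scale the same way.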
C: It was, or should have been, in yesterday's infrastructure meeting that they reckoned on the end of the week. Sorry, let me say that again: the change requests for adding headroom on production are due for action by the end of the week, and then they'll also be configuring those database connection pools. So, coming soon. Yeah.