From YouTube: 2021-06-30 GitLab.com k8s migration EMEA
A
So what do you want to go through today?
B
I myself don't really have anything fun to showcase, so I'm not sure. I'm working on trying to add charts for Consul and charts for NGINX, and also trying to figure out what to do with logging for NGINX. I've got a proposal out there that no one has commented on yet, which was surprising.
A
I think Graham sounded like he would have comments, but it was a little bit late in his day when I flagged it to him, and there have been a few incidents that might have distracted Jeff. I think Graham's kind of initial thought was that this will mean a lot of additional logs, and I wonder whether that's a concern.
B
Oh yeah, it's a concern, so my proposal is to send this data to BigQuery to control costs. Obviously it hasn't been a huge thing for us to take care of in the last two years, because we've got issues going back that far regarding NGINX as a whole. My goal is to separate the logs into two places, starting with logs that are considered access logs, meaning you make a request to a web server.
B
The amount of errors that we receive so far looks pretty minimal. For one, I did only look at preprod, so that might be significantly different when we push this into staging and production, but I figured I'd start with getting the proposal out there and see what people think. So yeah, Jarv, thanks for joining; take a look at my NGINX merge request.
A
Yeah, I think that's fine. Would it be worth running this past the observability team? I can ping Ken on this and see what they think.
B
Shortly before this meeting I assigned this merge request to Jarv and Igor.
Well, I did ping... well, I didn't ping anyone specifically. It was more like a broadcast message to the infrastructure team as well as the observability channels in Slack, just to get some visibility on it, because it is a lot of log volume. I don't know how much it's going to cost, but I think having it in a place where we can at least see the logs is beneficial in case we need to troubleshoot something.
D
I mean, I know we have a process for importing data into BigQuery from object storage. Why wouldn't we do the same thing for NGINX, or why don't we do this for HAProxy?
Yeah, because the last time I did the runbook for this, I thought I was importing from object storage.
B
Yeah, that might be the case. I don't know if there's another place the HAProxy logs go to, because the only sink I see is for the haproxy-prefixed logs.
D
Yeah, no, I know. The last time I did this it was for Workhorse or something; it wasn't for HAProxy, but I had to import it from object storage. So maybe for HAProxy we're doing enough volume that it goes in directly. I mean, I don't know; we need to figure that out.
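For reference, a minimal sketch of what that import-from-object-storage path could look like with the google-cloud-bigquery Python client; the project, table, and bucket path below are hypothetical placeholders, not the actual pipeline's names.

```python
from google.cloud import bigquery

client = bigquery.Client()

# Hypothetical destination table and GCS path; the real pipeline's
# names and layout would differ.
table_id = "my-project.logs.nginx_access"
source_uri = "gs://my-log-archive/nginx/2021/06/30/*.json"

job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.NEWLINE_DELIMITED_JSON,
    autodetect=True,  # infer the schema from the JSON log lines
    write_disposition=bigquery.WriteDisposition.WRITE_APPEND,
)

# Kick off the load job from object storage and wait for it to finish.
load_job = client.load_table_from_uri(source_uri, table_id, job_config=job_config)
load_job.result()
print(f"Loaded {client.get_table(table_id).num_rows} rows into {table_id}")
```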
B
But outside of logging, I'm going to pair with Andrew tomorrow to see if I can figure out how to get some charts going for Consul and NGINX. Consul was missing some metadata, so I'm currently working on upgrading our Consul release so that it includes that new metadata. I've had to do a chart upgrade to accomplish that task, so I'll be creating the merge request for staging and production today for that one.
We did okay with tuning; you can see where we started for NGINX.
We were maxed out: we had configured our HPA to max out at 100 pods, and once we started sending traffic to the API, the NGINX ingress immediately went up to its maximum pod count. Jarv actually did the tuning for this one, where we just adjusted the HPA, and we got it to a much better spot.
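As a rough illustration of that kind of adjustment (not the actual change that was made), patching an HPA's bounds with the official Kubernetes Python client might look like this; the namespace, HPA name, and replica numbers are made up for the example.

```python
from kubernetes import client, config

# Load credentials from the local kubeconfig (in-cluster config also works).
config.load_kube_config()

autoscaling = client.AutoscalingV1Api()

# Hypothetical names and bounds; the real HPA and values would differ.
patch = {"spec": {"minReplicas": 16, "maxReplicas": 50}}
autoscaling.patch_namespaced_horizontal_pod_autoscaler(
    name="nginx-ingress-controller",
    namespace="gitlab",
    body=patch,
)
```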
So now, instead of running 100 pods, we're ranging between 16 and 25, which I think is perfectly fine. On nodes, it looks like we're about the same for the default node pool, which is a little strange, but I didn't look into that; I focused more on the API.
On the API we saw a tremendous improvement. When we first enabled it, we set our minimum to 48 pods, which would have roughly equaled the number of workers we were running on virtual machines. When we first enabled traffic, we were running between 50 and 60 pods in each zone, which was around 840 workers (assuming 80 pods), about 30 more than we were running on VMs, so not a terrible amount. But the number of nodes was tremendously higher, upwards of 45 nodes per zone.
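As a back-of-the-envelope check of the pods-to-workers math being described, here is a small sketch; the workers-per-pod figure is an assumption for illustration and isn't stated in the discussion.

```python
# Rough capacity math for the API fleet, as described above.
# workers_per_pod is assumed for illustration; the actual worker
# count per pod isn't quoted in the conversation.
zones = 3
pods_per_zone = 56           # "between 50 and 60 pods in each zone"
workers_per_pod = 5          # assumption

total_pods = zones * pods_per_zone
total_workers = total_pods * workers_per_pod
print(total_pods, total_workers)  # 168 pods -> 840 workers
```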
This is still 810 workers, just shy of 30 more than what we were running on virtual machines, but as we know, we were already under-provisioned on the API service just a wee bit, so I think that's fine. Our node pools now vary between 14 and 25 nodes, which is a significant drop versus the 45 nodes we were seeing. At low traffic times this is right around where we wanted it to be in the first place, but during high traffic periods we're seeing 50 more nodes than we did when we were on virtual machines, an artifact of autoscaling, because we didn't have that in the past. But this is also compounded by the fact that, due to scheduling, we simply can't fit an extra pod onto a node.
A
Nice, yeah, I agree that looks good. I'm kind of imagining that once we come through the web migration, that will be a great time to do all the taints work, and we're surely going to have similar tuning issues there.
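As a sketch only: if the taints work ends up dedicating a node pool to a workload, doing it via the Kubernetes Python client could look roughly like this; the taint key/value and the node-pool label selector are illustrative assumptions, since the actual taint scheme isn't specified here.

```python
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

# Hypothetical taint: reserve nodes in a pool for the web workload.
taint = client.V1Taint(key="workload", value="web", effect="NoSchedule")

# Apply the taint to every node in the (assumed) node pool.
nodes = core.list_node(label_selector="cloud.google.com/gke-nodepool=web")
for node in nodes.items:
    existing = node.spec.taints or []
    core.patch_node(node.metadata.name, {"spec": {"taints": existing + [taint]}})
```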
A
Awesome, it's great that we've got that API stuff fully closed out. One thing I would like us to do, and it's good that you're here, Jarv, because I'd love to hear your thoughts on this: there was an incident recently.
So at this point, now that we've just wrapped up the API, where should we be maintaining, or are we already maintaining, a place that engineering on-calls could look at to actually answer that question?
B
And we do have to keep that up to date, because once we turn down the virtual machines, we want to remove all the extra dashboards, panels, and alerts associated with those nodes from our runbooks, dashboards, and such. We update the service catalog a little bit, but that's not a definitive answer, I would say.
D
To get this information out, I would say there are multiple thoughts here. One is to get every manager to put it in the one-on-one document with every report. So that's one option.
Yeah, I was going to suggest that too, but the problem is that not everyone is going to read it. One idea would be to put it on the dashboard itself; maybe, Skarbek, tell me what you think about this: we could change the title to say, in parentheses, Kubernetes, so that when they go to the API service it says Kubernetes right there.
If that's easy for us to add, then I think that would be the most prominent place they would see it during an incident, because this is what you look at if you had an API problem. So that's one option. For the web service we could even put something like "not yet" or "VMs/Kubernetes", but yeah, that would be the best.
A
You'll see it in the APAC demo, but just as a heads-up: there's a new web migration blocker that is in the process of being looked at. It's probably not a quick fix, so I mention it because, depending on what comes of it, we may not want to push the web migration onto staging, and certainly not canary, depending on how long we think this thing might take to resolve.
If that's the case, then obviously we have lots of other work. If we finish off all the observability stuff that you want to do, we could do the Helm upgrade, we could do the Pages migration, and we can just reorder things if we need to.
B
That's something we can work on. Alternatively, if it's going to take a long time, and if we can figure out where those files get written to inside of our Kubernetes infrastructure, we can make an emptyDir volume specifically for the web service and just create a mount for it.
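A minimal sketch of that emptyDir idea using the Kubernetes Python client; the volume name and mount path are hypothetical, since where those files get written hasn't been pinned down yet.

```python
from kubernetes import client

# Hypothetical scratch volume for the web service; the real mount path
# depends on where the files are actually written.
volume = client.V1Volume(
    name="web-scratch",
    empty_dir=client.V1EmptyDirVolumeSource(),
)
mount = client.V1VolumeMount(
    name="web-scratch",
    mount_path="/var/opt/gitlab/scratch",  # assumed path
)

# These would be added to the web Deployment's pod spec:
#   spec.volumes                     += [volume]
#   spec.containers[0].volume_mounts += [mount]
```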
A
Yeah, right, okay, all right; that might be worth us pursuing in parallel. I'm hoping to hear back from the package teams; we've got the multi-large working group helping to get this work prioritized, so we'll see what comes back on that one. But if we need to, we've obviously got lots of other stuff we can do as well.
Henry's back next week, and I chatted with him before he went away about finishing off those saturation metrics and actually rolling out the retuning that needs to happen in order to enable all of those alerts. I'm intending to get him focused on that next week, so hopefully we'll get all of those dashboards as well.
There was actually one thing in the observability epic: before Henry went away, he did leave an MR, which I'm wondering about. I know Andrew hasn't got to it, but I don't know if it has to be only Andrew. So, that MR 3667, I don't know, does anyone feel that they could take it?
B
The one worry I have about this particular set of dashboards is that they're based on the node pool, and I don't know when we'll pull in the work, but if we decide to pull in the work related to node taints, that could change things.
A
Yeah, no, it's a really good point, and I guess it also indicates that we should get an iteration of these up to help us with the web stuff. Maybe it doesn't have to be a great iteration, because I do want to try to do that taint stuff quite soon; we've talked about it quite a lot, so it'd be good to prioritize that.
Yep, okay, awesome, nice! Thank you for the demo and discussions, and I hope you all have a great rest of your Wednesday. Stay tuned.