From YouTube: 2021-05-26 GitLab.com k8s migration EMEA
C
I don't have anything to demo. I got distracted yesterday with release management tasks. I was hoping to do some improvements on the HPA configurations for both nginx and api, but I just got too distracted to really put some effort into it, so I'm going to spend today doing that, and hopefully I'll have something fun to show next week.
A
Awesome, okay. And I appreciate that release management is a full-time job. Cool, so a couple of things I had then. So, api service migration: I was just thinking, ahead of the group conversation next week, about whether we saw any nice benefits on deployments, but, Jarv, you're about to crush my hopes.
D
I didn't look at how long it's taking now, so correct me quickly if I'm wrong, but it's still around 30 minutes, right?
C
We store this, but I couldn't generate a chart, so I want to look at that as well. I'm concerned that Kubernetes takes longer because we deploy to all four clusters synchronously instead of...
C
We did some testing and, if we're looking at just the api, it takes less than 10 minutes to complete a deployment per cluster, and that's from the start to the stop of the deployment happening. Now, this testing was performed prior to us upping the termination grace period window because of the issue we had with Workhorse.
C
But in terms of what takes the longest in Kubernetes, it's going to be the git fleet, because we have a termination grace period of 260-ish seconds, I believe.
C
...happening, that's just a fact of life at this moment.
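
For reference, this is where that setting lives: a minimal sketch of a Deployment pod spec carrying the termination grace period mentioned above (the workload name and image are placeholders, not the actual manifests).

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: git-webservice              # placeholder name for the git fleet workload
    spec:
      selector:
        matchLabels:
          app: git-webservice
      template:
        metadata:
          labels:
            app: git-webservice
        spec:
          # Long drain window so in-flight git operations can finish before the
          # pod is killed; roughly the 260-ish seconds mentioned above.
          terminationGracePeriodSeconds: 260
          containers:
            - name: webservice
              image: registry.example.com/webservice:latest   # placeholder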
A
If we don't get to it, that's also fine. We have group conversations every month, so if we don't get to this task before next week, that's fine; we can push it out, but let's open it initially.
A
So, clean up and take that stuff. Super, thanks for the ideas. Graham did a brain dump, and Scarborough, you've got ideas; Henry, if you have additional ones, feel free to also just drop them in here. One thing I was kind of relieved about when I went through them is that we've got a lot of overlap, so that was reassuring: we don't actually have, like, 20 things we want to do.
A
They actually converge quite well: there's cost-savings stuff, there's observability stuff, Helm logs, and the nginx ingress stuff ahead of web were kind of the big themes I think I saw.
A
So what do you think: would it be useful if I drop this into a spreadsheet and we do a similar kind of ranking thing like we've done previously with OKRs, and literally try to think about what's the smallest useful iteration and whether it goes towards cost saving or, you know, web or whatever? So we actually get a bit more visibility of what these tasks are, and then we could each just go through and say, hey, I'd love to do this, or it'd be super...
A
Okay, awesome. I will do that today so that we've got that, and then we've got the APAC k8s demo tomorrow, so we can run through it with Graham as well in that time. Awesome. Okay, I'll do that. These were all great things, and I think there's definitely some stuff it looks like we can make good progress on in June, so...
A
So what is happening with those is I'll copy these into the agenda; actually, I should open a different meeting agenda. I raised three potential blockers in the multi-large working group on Monday. One was the Workhorse graceful shutdown. This has been prioritized by...
A
...by Source Code for 14.1. Now, I'm hoping they're going to do it early in that milestone, so that we'll get it early-ish July, but they are going to prioritize making that change, so we will get that.
A
The second one I raised was around the nginx ingress, which Graham's also been working on. We asked Distribution to have some input into that, which they have done already. Now, Graeme's got a bunch of ideas, and I asked him this morning to rewrite that issue into a problem statement and a proposed solution, because I think he's at the stage where, depending on some assumptions that he's made, he has a good idea.
A
But I think if we have that clearly written out, it'll be quite easy for everyone else to pile in, so he's going to do that, and we'll see if we actually have a way forward for the nginx ingress. And then the other one was around the environment variables, and us needing to set those as environment variables versus them being application settings. So we talked about this a little bit, and what we're going to do is open these all up as issues in the various development teams to get these things moved into application settings.
A
So, in the short term, Distribution have implemented the proposed method for env-from-file.
A
I should copy this whole chunk, actually, so I don't have to read it out. So they've already done the short term, but that doesn't help us in the super long term, because it still means we have to mess around with things. So I don't think it'll be too big a problem for web, hopefully, but it's certainly not the long term that we want, so they will all become application settings.
C
So, regarding the env-from-file: that is specifically for the situation we ran into where we had what I thought was secret data being populated. That's not necessarily for the configurations that we could only set via environment objects. Okay, either way, I think we're fine, because realistically what needs to happen is that we need the configuration built inside of the GitLab product itself.
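
For context, the core Kubernetes mechanism behind the secret-data case described here is envFrom with a secretRef: every key in the Secret becomes an environment variable in the container. A minimal sketch (all names are placeholders; the Helm chart wiring that Distribution implemented sits on top of this):

    apiVersion: v1
    kind: Pod
    metadata:
      name: webservice-example
    spec:
      containers:
        - name: webservice
          image: registry.example.com/webservice:latest   # placeholder
          envFrom:
            - secretRef:
                name: webservice-env   # Secret whose keys become env variables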
A
Cool, okay, yeah, that makes total sense. Great, that sounds good. And as we come across any others, or if we suspect there are others that we want to investigate, or, you know, we find out are definitely looking like web blockers, feel free to let me know, because one thing we're seeing, I think, like for that Workhorse one, which is the most clear, is because we raised it on Monday.
B
Yeah, so today I looked into the issue that we want to enable Action Cable on the web fleet, and whether that could have any kind of impact on the web fleet if we do this. The background on this is that we want to enable the real-time assignments feature, and for that we need to use WebSockets, and we accomplish this by using Action Cable.
B
What we have running right now is a dedicated fleet of WebSocket servers in Kubernetes, which is nice, which will deal with the traffic for the real-time features. But to enable the feature in our web fleet, to really make this happen, we need to enable Action Cable in the web fleet as well, which will just mount the necessary Ruby code and will not run any workers, because the traffic will be routed to the WebSockets fleet.
B
We need this fleet, but this is the only way to make it enabled, so we enable it on the web fleet but don't use it there, because the traffic is going elsewhere. So we wanted to know: does this have an impact, like increasing memory or CPU or something like that? And from looking into that, it doesn't look like we get into any problems there. I just wanted to get more eyes on that, in case I maybe missed something. I see Jeff already also looked into this one.
B
The only thing I'm a little bit uncertain about is, if we really enable this and we get more requests at the scale of GitLab, that will also cause more Redis requests and things like that. So this is not really a problem for the web fleet then, but for the feature as such, if...
B
I am not sure, I asked myself this question, but I think Action Cable is a framework where you get the server and client components, and I think the client component is, I guess, JavaScript that the clients then get, down to the Action Cable connection, and I guess our web servers are just so...
A
We've got... it's so...
B
So what's happening is that we have a feature flag, and that will be set to enabled by default, and as soon as we add the Action Cable configuration, it will be enabled on the web fleet then.
D
Yeah, maybe we can see if there's a percentage rollout for this on the feature flag, and we can turn off the feature flag before we enable the environment variable. Otherwise, what we could do is set the environment variable on two or three nodes first, and that will make it so that only, you know, five to ten percent of traffic... Or we could do it on canary, right? Do it on canary first and then go from there, to make sure we don't add extra load to...
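
One hedged way to express the "canary first" idea in chart terms: set the enabling environment variable only in the canary release's values, so only canary pods pick it up. Both the values path (extraEnv) and the variable name below are assumptions for illustration, not the confirmed mechanism.

    # values-canary.yaml (sketch; keys and variable name are assumptions)
    gitlab:
      webservice:
        extraEnv:
          ACTION_CABLE_IN_APP: "true"   # assumed name of the enabling variable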
A
So there have been various... like, the canary one's a great point, yeah. They have.
A
Yeah, so it's been tested on the 10k architecture. It was inconclusive, within the margin of error. But yeah, canary sounds sensible. So, awesome, I will find the issue that goes with this one, which is the actual setup to enable, but let's do it on canary. Brill, thanks for doing that, Henry.
A
So, is there anything else I wanted to go through today? Like, I know we're just doing the tidy-up, right, for the api service? I saw you removing... did you remove the VMs, or are you preparing for removing the VMs?
C
I've got merge requests out there ready for review (awesome), and I'm pretty excited about one of them, because it removes a whopping 90-something objects out of Terraform, so, you know, Terraform will be slightly faster.
C
I think it's pretty cool. The removal already happened for deployer and patcher, so no deploys are happening on the api fleet anymore. And then, once I finish that up, I'll just be going through the rest of the epic to figure out what still needs to be completed, because I know we've got some documentation updates that need to happen.
C
I would love, since we do have a little bit of time... I'm curious as to what Jarv may think about tuning requests and limits. Right now we're limiting ourselves to CPU and memory for determining how far we're saturated or how well things are performing, but I don't think that's the right metric we should be using to scale our workloads. And I say that because we know that there can be an upper limit on, say, CPU usage because of how Puma may work.
C
We are using saturation based on what we configured inside of our resource requests and limits, which doesn't make sense, because that's not tied to the workload associated with, say, Puma. We're not saying we're 100% saturated because all four Puma workers are doing work; we're saying we're saturated because we're using 1600 millicores of CPU, and I think that's a little bogus. It's not an accurate measurement, in my opinion.
C
So, when I was looking at the nginx ingress yesterday, I don't think we have a good view into what we could constitute as saturation, and how to fine-tune that HPA appropriately to ensure that we're not harming ourselves.
D
So, if we're talking about nginx, what did we do so far?
D
So far... and after doing that, did we see... oh yeah, I remember you said the number of pods is still at the max, so that didn't change, right? Yeah.
B
...calculation: if we just use 10% of the CPU, that means we can use, let's say, 10 times more, and so we could roughly set... say we request 10 times more memory, to reduce the number of pods to a tenth, right? I mean, I wouldn't go that far, but something like this; maybe just go with five times as much, so that we have more alignment on CPU utilization.
C
Yeah, I think so. I've got two concerns. One is, I don't know the appropriate way to tune, and, you know, this is a good discussion about that. But the second is, when it comes time to define what we tune, it's trial and error, and it takes forever to get through the process of: let's create the merge request, let's evaluate, let's revisit. You know, the lead time for all of that is excruciating.
B
Some kind of simulator for this, right? That would be great.
D
I guess the concern would be that if we take our memory request, which is at 500 megabytes right now, and, let's say, we change that to a gigabyte, we double it, the risk there is that we'll reduce the number of pods, and then we'll be consuming more CPU per pod with fewer nodes, and then we'll blow out the amount of CPU that we have, because we're not doing any target CPU utilization.
D
We could... again, I'm not 100% clear on what Kubernetes does if you have both target CPU utilization and target memory utilization specified. If that works like you would expect, then what we could do is keep the target CPU utilization at, like, 70%, memory utilization at 70%, bump the memory request to, I don't know, like a gigabyte, and then, if we become CPU-bound, I assume the target CPU utilization will save us, right? Given all that, could we do this?
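
On the "both metrics" question: with autoscaling/v2beta2 (available on 2021-era clusters), an HPA can list several metrics, and Kubernetes computes a desired replica count for each metric and scales to the largest of them, so becoming CPU-bound would still scale the fleet out even if memory-based scaling alone would not. A sketch with both targets at 70% (names and replica bounds are placeholders):

    apiVersion: autoscaling/v2beta2
    kind: HorizontalPodAutoscaler
    metadata:
      name: gitlab-webservice          # placeholder
    spec:
      scaleTargetRef:
        apiVersion: apps/v1
        kind: Deployment
        name: gitlab-webservice        # placeholder
      minReplicas: 10                  # placeholder bounds
      maxReplicas: 100
      metrics:
        - type: Resource
          resource:
            name: cpu
            target:
              type: Utilization
              averageUtilization: 70   # percent of the CPU request
        - type: Resource
          resource:
            name: memory
            target:
              type: Utilization
              averageUtilization: 70   # percent of the memory request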
C
I agree. I guess my concern with that, though, is what is the saturation goal that we're aiming for? Because if we're looking at only CPU usage, it's going to be based on the amount that we've requested, and I don't think that's fair, because nginx could theoretically use as much processing power as necessary to perform its needs before something else breaks. Yeah. But...
C
The latest change was to modify the threshold for the HPA. We changed it from sixteen hundred millicores as an average across pods to twenty-two hundred, and I have not evaluated that yet. This morning I've been reading up on various blog posts and published data from other people about how to tune HPAs and such, so I've been trying to read and learn.
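
The threshold change described here corresponds to an absolute per-pod CPU target rather than a percentage of the request; in HPA terms that is a resource metric with an AverageValue target. A sketch of just that stanza, using the new value (everything else as in the earlier HPA sketch):

    metrics:
      - type: Resource
        resource:
          name: cpu
          target:
            type: AverageValue
            averageValue: 2200m   # was 1600m, averaged across pods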
D
Yeah, I think... and I was going to do this today, I just didn't get to it... but we should have a timeline in the description which has each change and a count of the number of nodes for nginx, I guess the default pool for api, like the zonal pools, and the number of pods, so that we can kind of see how those fluctuate. I'm sure it's going to fluctuate during the day.
D
Maybe we can pick a time, or at least link to Thanos graphs that can get us the information quickly.
D
Yeah, ultimately, yeah. We may also want to state what our goals are, which, I like to think, is... I don't know how many nodes are acceptable, but I assume we shouldn't go more than 20 percent over what we had before for the git fleet total.
C
Yeah, realistically, I would agree. Realistically, I would love to see us running the same number of Puma workers per node that we had, since the node sizes are precisely the same, right? And then see what we need to tune to drive down compaction a little bit more, and see if we could tune it even a little better to get closer to 36 nodes overall. Right now we can't stuff more than three...
D
Yeah, okay. Well, I think our pods only have four... it's four workers per pod, right?
D
Well, yeah, I think, because...
B
You will not see the memory sharing that is possible.
D
I mean, we did a lot of analysis on this for the git https migration to determine how many workers per pod, and I think initially we landed on two and then four, and I did my testing... I was seeing problems going beyond four at the time. So we need to re-evaluate that: if we really want to go up to 16 workers per pod, we're going to have to do some testing to see whether that's okay.
D
Yeah, but I think if we do that, then we need bigger nodes, right? Because I think there's just a lot more going on on our c2 nodes in Kubernetes than there was on our c2 api nodes in VM land, right? And...
D
I'm just generally concerned that, even if we do that, we have DaemonSets running on the Kubernetes nodes that we weren't running on the VMs, and we just have a little bit more running on Kubernetes, so keeping the node size the same between VMs and Kubernetes, I think, is dangerous if we keep the same number of workers.
D
Yeah, we could go to eight; we could go to eight and see. We'll, of course, have to adjust our requests and things like that, but yeah, we could play around with this on canary. It could be a good...
B
The thing is that memory between processes is shared by the Linux kernel, and if you start up several Puma workers, then more than half of the memory they're using is the same shared libraries that they are mapping into memory, and Linux figures this out: okay, I don't really need to copy this into memory.
B
Again, I can just, you know, use the same pages. So if you see two workers in the process listing, let's say for Puma, both using 1.2 gigabytes of memory, in reality the first one loaded 1.2 gigabytes into memory, but the second one is reusing more than half of the pages that the other worker is using, so it's only adding, let's say, 500 megabytes more. And if we use more than two workers, that effect gets even better, right, because you are making more use of shared pages.
B
So four workers is great, eight is even better. And if we really are bound by memory on how many pods we can put onto a node, then increasing the number of workers could help to get more onto one node. We'd have to figure out whether that really helps a lot.
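
A hedged sketch of what bumping workers per pod might look like in the GitLab chart values; the workerProcesses key and the resource numbers are assumptions used to illustrate the trade-off, since copy-on-write sharing means memory grows sub-linearly with workers while requests still have to grow with the pod:

    gitlab:
      webservice:
        workerProcesses: 8        # assumed knob; was 4 workers per pod
        resources:
          requests:
            cpu: "4"              # placeholder: scale roughly with workers
            memory: 5Gi           # placeholder: grows less than 2x thanks to shared pages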
A
The unluckiest number as well. Henry, do you want to drop that under your suggestion... like, you maybe got it under your "3a: k8s workload tuning", but if there's something more specific, like, you know, canary, or a workers change, or whatever you want to summarize that into, then go for it.
A
Yeah, absolutely. So I think it's fine if that bit, you know, takes us some days, and you've got Henry around and Graham also, so, you know, if it's something which we pass through time zones as well, we can do that.
C
Time and effort... I've never done it before; I don't know how to tie Prometheus into Kubernetes such that we could scale based on custom metrics.
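
For the custom-metrics idea: one common pattern is to run a Prometheus adapter that exposes application metrics through the custom metrics API, and then point the HPA at a Pods metric instead of raw CPU. A sketch under that assumption; the metric name is invented for illustration:

    apiVersion: autoscaling/v2beta2
    kind: HorizontalPodAutoscaler
    metadata:
      name: gitlab-webservice             # placeholder
    spec:
      scaleTargetRef:
        apiVersion: apps/v1
        kind: Deployment
        name: gitlab-webservice           # placeholder
      minReplicas: 10
      maxReplicas: 100
      metrics:
        - type: Pods
          pods:
            metric:
              name: puma_busy_worker_ratio   # hypothetical per-pod metric from the adapter
            target:
              type: AverageValue
              averageValue: 700m             # scale out when ~70% of workers are busy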
C
I think that's a good long-term solution. Ideally... right now we just need to get ourselves to a much better cost-analysis situation, because right now we're using entirely too many nodes; we've effectively more than doubled the number of nodes we're running for the same service, taking the same...