From YouTube: Reference Architecture for Cloud Native GitLab
Description
Discussion between Grant & Jason, related to the self-managed scalability workgroup's design of reference architecture using the Cloud Native GitLab Helm charts.
We covered things like:
- Why *not* Omnibus in Kubernetes
- Separation of components by concern within the Helm charts
- Scaling workloads vertically and/or horizontally
- Pre-scaling: minimums at 50% or more of expected load, maximums at 110% (straight to 100% for tests)
B: Basically, what we've done over the past few days has gone down some good paths, but also some wrong ones, due to misunderstandings about what this task was. Still, there's been a lot of progress, and a lot of relearning Kubernetes, all of its quirks, and how to achieve things with it. The actual overall solution has kind of gone back to square one, but that's fine: square one isn't back to zero. Square one is that the vast majority of the environment is already running.
B: It's just literally me now trying to figure out how to set up the web nodes, the Rails and the Sidekiq nodes, as well as the load balancing for them, which comes with Kubernetes, and then correctly hooking them up to the back-end services. So it's just figuring out how to do that with the Helm charts. That's where we are.
B: The path I was going down until yesterday was that they were going to be on the same cluster but in different node pools. I looked up literature online about separating clusters, and it seems a bit much for what we need. We don't need it, but if there's another argument I'm missing, it wouldn't be too much of a change. Certainly clusters seem to be quite capable these days, so different node pools would have been enough.
B: It will be Omnibus to start with. One of the quick questions I had was whether it's going to be on Omnibus, because this has been driven by a customer's requirements and it sounds like they're going to go with Omnibus. However, if we have time, or in the future, it would certainly be interesting to explore a cloud service.
B: For the performance testing, I guess the goal there is just to validate that it can handle load. In theory it should; that's what these cloud providers are doing, I suppose. But the problem comes in that we will test Google's cloud service. We could do AWS, but we weren't able to cover the others, so that leaves the door open. Nonetheless, some customers will come to us and ask to use those, so we will eventually want to validate them. But we're focused on the Omnibus back ends for now, since we actually control them, right.
B: All the stateful services will be... well, we will be using cloud object storage. The current reference environments already use that on Google Cloud. I think there's one teeny bit that is still using NFS, and that's purely because the config there isn't automatable; it's a manual UI step.
A: Effectively, you separate out a single Sidekiq queue, so you have a Sidekiq cluster that has one queue configured, the pages job, that's it. Then you configure the Sidekiq pools inside of the Helm charts to just not include pages, so the only place the pages jobs get run is on the Sidekiq node or nodes outside that have pages on them. In my test environment, when I did this work, I effectively had everything in a k3s cluster and then a VM right beside it running Omnibus with Sidekiq.
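A minimal sketch of what that queue split might look like in values form (pod and queue names here are illustrative, not the speaker's verified config; exact keys vary by chart version):

```yaml
# Illustrative values.yaml fragment: one Sidekiq pod definition per concern.
# The in-cluster pods skip the pages queue; a dedicated pool (or an external
# Omnibus Sidekiq, as in the test environment described) picks it up.
gitlab:
  sidekiq:
    pods:
      - name: all-but-pages
        negateQueues: pages      # run everything except the pages jobs
      - name: pages
        queues: pages            # run only the pages jobs
```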
A: Basically, at this point, the only thing not present is the actual setting of the pages value inside of the Helm charts. Issue 37 has the straightest information on what needs to be changed to make that work. We do have Sidekiq queue negation in play, similar to the way sidekiq-cluster has that ability, so it's functional, but it needs explored and documented before we go really playing with it. Yeah.
B: I've always said since the beginning, since I started this work, and from previous experience as well: my recommendation is just that, a recommendation. It's a place to start. Every customer will have their own requirements and tweaks and things they need to do, so I expect most customers will vary; they'll look like the recommended environment but with extra tweaks or things relevant to their requirements. When it comes to Pages, I won't cover that in this work, but at the moment, if someone really wants it, that's probably the tack we'd take.
B: One thing I was wondering: for the reference architectures we have just the Rails nodes doing all the web stuff on multiple nodes, and those nodes are essentially identical. That's API, web, and Workhorse, and I suppose SSH, on each node. That was just based on previous work that others have done. The work I was doing was kind of tackling this the wrong way, in that we were taking the Omnibus environment that we've already done and validated and verified, and we were going to just try...
B: My understanding was that we were just going to try and switch a few things out to Kubernetes, so it kind of took that Omnibus approach. Whereas from discussions with yourself yesterday, and reading more, it's obvious Kubernetes isn't easy, and trying to do something quickly isn't the way to go. So the right approach would be the other way around, which is: use the Helm charts to build out against the Omnibus back ends.
B: And then the idea was to iterate on that, use a load balancer or ingress, and then eventually bring the Sidekiq nodes in as well; they were still in the back end, I was just doing it piecemeal. But the approach today would be, I guess, still a similar thing, but taking it as the first proper job: let's use the Helm charts for Unicorn and Sidekiq and get them correctly communicating with the back-end nodes themselves.
A: Actually, it's very heavy, because it contains everything; that's the nature of being an Omnibus. It literally has everything, and its minimum startup time is well outside the bounds of most default liveness and readiness checks. Basically, the container has to start up, it sets permissions, it then runs reconfigure, and then it tries to start everything that you just had configured. Configuring it is relatively easy, because it's very familiar if you've done a lot of Omnibus work; it's literally the exact same file.
A: Okay, and on top of that, worst case, you don't have anything that prevents it from coming up prior to its dependencies being prepared. So you try to start a Rails worker before the database is responding, or before the schema is up to the date that it needs, and it's going to restart, and unfortunately it's going to do it immediately. So you have one trying to start on a node, and it gets most of the way up, and we're like: okay, well, it seems to be working.
A: We're gonna start another one, and then both of these flip out because they've crashed, and now you've got into the fight of how many tries it takes before it comes all the way up. Now you're just eating CPU and churning, and you can actually starve other processes completely out of a node, just because the Omnibus is so big and trying to do so much. With the cloud native containers, which is what's deployed by the Helm charts, we've actually done some work to try and smooth all of that process out.
A: So we've made it so it's just Rails; it's effectively an installation from source. This does mean the configuration is a little bit different, but the primary behaviors are very, very similar. The containers, however, when they start up, one of the first things they do before they pass the init stage is actually check that they can reach Redis and that they can reach the database. Object storage is more transient, because it's an HTTP API, so you just retry. Postgres...
A: If you can't form a TCP socket connection: bad day. And if you're trying to start an application from, say, today, and your database is still from November of last year, you're actually going to have a big problem codebase-wise, because of possible changes to the data structures within ActiveRecord, and we check for that one too. So that process will actually run, and it will wait and retry until the database comes up, and then it will start the process, so you know if it makes it into the primary one.
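The wait-for-dependencies pattern described here can be sketched as a generic Kubernetes init container (this is an illustrative stand-in, not the charts' actual implementation; the host names are hypothetical):

```yaml
# Illustrative only: a pod that refuses to enter its main phase until
# PostgreSQL and Redis accept TCP connections, mirroring the dependency
# checks described above.
apiVersion: v1
kind: Pod
metadata:
  name: webservice-example
spec:
  initContainers:
    - name: wait-for-backends
      image: busybox:1.36
      command:
        - sh
        - -c
        - |
          until nc -z postgres.internal 5432; do echo waiting for postgres; sleep 2; done
          until nc -z redis.internal 6379; do echo waiting for redis; sleep 2; done
  containers:
    - name: rails
      image: example/webservice:latest
```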
B: Well, I guess that's all fair. I mean, I obviously didn't know that, but from when you discussed it yesterday I knew it would be something like that. I know Omnibus is a big package, it has everything, and in Kubernetes that's not the way.
B: So then you're deploying one pod each, which isn't great. No, that all makes sense. Like I say, I mentioned this in the issue already, but I was under the wrong impression that Helm was just a complete all-in-one thing and I could use it that way. Now I actually realize it's a completely separate method to Omnibus. As you say, it's effectively from source, and it's essentially a different install method with different configuration, which is fine; that's not a problem.
B: Give me a second: we don't actually have drawings of the reference environments per se these days, although I can grab one that's pretty close. Like I say, this has all changed from yesterday; I'd only been working on this a few days and the design of it was still very much up in the air. I had some general ideas, pencil drawings essentially, but I wouldn't have shared more until I actually had a more secure kind of...
B: ...kind of design. Okay, right, let's use one of the existing drawings, which is fairly close to what the current environments look like today.
B: Sorry, the drawings are quite large. If you actually look in here, this is the table for one of the reference architectures. This is the base, the 10k environment; this one's actually to be 50k, because a customer of that scale is asking for it, but the environments are scaled in such a way that it's the same kind of design. The only time it changes is when you go really small, and Redis drops down to just one node.
B: Instead of multiple, with things like Redis and Sentinels combined, Consul combined, while in the 10k and up everything's separated. That's the way it is. So, okay, that diagram, this one here, is fairly correct. Consul and Redis Sentinel are actually separated in our larger environments, but I think that's right. There might be something missing off the top of my head: you've got Sidekiq, Redis Sentinel, Consul...
A: We strongly recommend against doing that unless you have a very specific use case. Knowing that the cloud native charts only use Gitaly, we've actually completely disabled the path where you can use Rugged over shared storage, and that's on purpose. It's easy enough to orchestrate NFS for shared storage, yes, but it's not easy when the pod that's attempting to consume it is migrating between nodes, and now you have to worry about...
A: ...where is this NFS mounted, how many clients was it designed for, does it handle an unexpected disconnect, what happens when it reconnects on a different node, and so on. Even if it does work, we're not going to tell anyone to do it, because you don't want the headache. And if anybody else watching this has ever run an NFS server...
B: Yeah, no, I can't add anything more to that. NFS is very much unrecommended, though there are some cases where people still use it. So, this is actually a more up-to-date drawing; we might actually add this to the page. The design of the Helm environment, as we'll call it to begin with, will be that services tier in this image replaced with the Helm charts where appropriate, configured to speak to the back ends as shown there.
A: That's right. For this reference architecture we're primarily going to be working with the nginx ingress, with GCP's load balancer pointing directly at that; I should say IP redirection, right, done through layer four. It is technically possible to put a layer seven load balancer in front of Kubernetes clusters.
B: Yeah, that should be fine. With the reference environments on Omnibus we also had to handle that separately. There is nginx in Omnibus on the GitLab nodes themselves, but it's the load balancer node we had to spin up. For that one we actually used HAProxy, just as a popular, established load balancer, and it did give us a little bit of extra functionality to do with health checks. But with Kubernetes...
B: ...those are built in at the pod level, so that's not needed, so the ingress should be...
A: ...should be fine in theory, right. The biggest difference I know of, at least at the dot-com scale and possibly in the 50k architecture: we do actually have some separation between web workers and API workers. It's all whole Rails, but some paths are dedicated specifically to API and go to specific workers, separate from web. Okay, well, there's always going to be a mix of those responses.
B: For the reference architectures that was considered, but the 50k was performing well enough that we decided not to go down that path. It's still a path for advanced cases where a customer is expecting, say, heavy API traffic compared to web, and it'd be a benefit to separate them, though for most customers we expect the traffic to be mixed. But again, it comes down to this: these are just the bases, and you can tweak them as you require, right.
B: I doubt a single customer would ever need to do that, but like I say, it is an option. For the Helm environments I'd certainly be looking to go down the same path: there are just Rails nodes and they can be scaled, and obviously with Kubernetes you can automatically scale them as you see fit. If that ever becomes a problem, then at that point you'd say: okay, these nodes are for API, these for web, these for SSH, that kind of thing, right.
A: Unless you're using the... we have a way to use HTTPS for SSH, or I should say to share the port. I don't know that that's actually doable in the charts; I've never tried it, and I'm not 100% sure how exactly we do it on .com, so it might be an HAProxy-specific pattern. The way nginx works, it doesn't identify the protocol, so if you say this is an HTTP port, it's going to treat traffic like HTTP requests and it will drop anything that doesn't speak it.
B: I mean, that's one of the questions I have. I see there's a separate chart for the SSH traffic, but as I say, at the moment, for this scale, we're not expecting that to be separate. So I was wondering: is the Unicorn chart just Unicorn, or does it actually include SSH as well?
A: For every component that you want, you have to activate that section of the chart; well, out of the box you get the whole chart, and then we tell you: hey, for production, don't use our Postgres, configure one; don't use our Redis, configure one; don't use MinIO, configure real object storage. Otherwise, all components are turned on by default.
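As a sketch, that production guidance in values form might look like the following (key names as commonly used in the gitlab-org Helm charts, though they vary by chart version; host names are hypothetical placeholders):

```yaml
# Illustrative values.yaml fragment: disable the bundled, non-production
# Postgres, Redis, and MinIO, and point at externally managed services.
postgresql:
  install: false
redis:
  install: false
global:
  psql:
    host: postgres.internal     # externally managed PostgreSQL
  redis:
    host: redis.internal        # externally managed Redis
  minio:
    enabled: false              # use real object storage instead
```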
A: We do not turn on Grafana, but all operational components are turned on. So you have one Sidekiq, and then you can provide it a queues list and it will create the necessary set of jobs and separate autoscalers for those, so that you can individually manage each queue's number of pods and how many workers it's allowed to get to. Then you also have Unicorn, which, Unicorn is, well...
B: No, yeah: Puma. I believe it actually just went live on GitLab.com today, on a subset, so it's going to be the official web server, I think turned on later as the recommended one. Even so, it would be good to get that in, but Unicorn is also fine for now. It is today still technically the recommended web server, so that's what all those reference architectures are using, but that was done in anticipation that Puma would already be here. Right, it should...
A: ...be soon. The good news is that the difference in configuration for the charts, ideally, is literally going to be gitlab.webservice.webServer: puma or unicorn. You just say which one you want, and then you say how many worker processes it should have and how many threads it should have. That configuration is identical between Unicorn and Puma; you'd literally just have to change the name.
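The switch described can be sketched in values form like this (the webServer key is quoted from the conversation; the process and thread counts are illustrative, not a recommendation):

```yaml
# Illustrative values.yaml fragment: choosing the web server and its sizing.
# Only the server name should need to change between Unicorn and Puma.
gitlab:
  webservice:
    webServer: puma        # or: unicorn
    workerProcesses: 4     # worker processes per pod (illustrative)
    puma:
      threads:
        min: 1
        max: 4             # thread settings apply to Puma (illustrative)
```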
A: The shell's only job... it's literally just a container that has OpenSSH's sshd in it, configured effectively the same way we do inside of Omnibus, and they share host keys. So no matter which one you end up connecting to, just as you would with Omnibus, you've got one host key that matches everywhere.
A: So every component is isolated within the charts. If you just need web, that's what Unicorn does. If you just need jobs, that's what Sidekiq does. If you need SSH, you can even, and I have demonstrated this, take a chart that deploys just gitlab-shell SSH, put it right beside an Omnibus, and turn sshd off on the box.
B
A
B
A
B
B
A
B
B
Environment,
we're
up
with
all
the
big
backends
separated
which
are
built
with
omnibus
just
for
convenience,
but
coupies
switch
to
services
we'll
see,
but
anything
that
can
be
done
today
in
helm,
easy,
maybe
they
long
term,
but
anything
that
can
be
done
there.
We
will
you'll
bring
in
and
that's
fine
right
in
helm,.
B: Once it's out on GitLab.com, we'll probably add it into the reference architectures and then wider to the Helm one as well. But yeah, as you say, I don't see any way that that's going to be on Kubernetes, because that's a heavy back end, even though there's a front-end component. Well...
A: However, it doubles the resource requirements, and if in any way at all we don't make that perfectly clear, with all kinds of warnings and docs and a large number of things, it would be a very bad customer experience. That's why we still treat it as an external component that we happen to hook up to: you really don't want to have that fight. No.
B: I completely agree. If you're keen you could deploy Kibana in Kubernetes, that is a way, but that's the front end. Elasticsearch itself as a back end is, in my head, the same as Gitaly, if not more, because the sheer amount of data ES can store is incredible. So, right.
A: One thing I want to point out is that by default the scaling configuration is meant for small instances, right. You cannot install a proof-of-concept deployment and then just go "okay, have fun", and watch it fall over when you push 5,000 users at it. It's designed for like a hundred out of the box, and then, when you want more, you need to do some pre-emptive scaling. Kubernetes is very automated; it's not black magic.
A: The nice thing is you don't have to run your infrastructure at 110 percent all the time for service peaks, where it sits idle 90 percent of the time, but you still have to run it at a significant portion above 5%, right. There's only so much scaling we can do in an instant; there's only so much...
A: ...it can do. So you have to preemptively say: if I want to have, you know, a thousand RPS to the web nodes, then I need to have enough to actually support that. If I'm going to go from zero RPS to 1000 RPS, I need to have the ability to scale to that point before it falls over. If you go from zero to 500 to 800 to a thousand to 1200, it will catch up, but you can't just go BAM.
A: You can't go from 0% to 110 percent; it will fall over, and that is the nature of Kubernetes. You literally can't spawn that many processes that fast. So what you want to keep in mind is that you're going to want to set a minimum number of replicas for the horizontal pod autoscalers.
A: The HPAs: you're going to want to set a minimum number, and then set a maximum number, based, very similarly to how we say how many Rails nodes you would have and what their CPU cores are, on the nodes you have in the node pool for running web, API, etc. Those nodes you should be pre-defining with so many CPU and so much memory, and then you should know how many requests per second you want each pod to be able to handle. So, like, by default...
A: That's a lot of horsepower. By default, the Omnibus goes: oh, you have 64 cores, we're going to give you like 32 Unicorns. They're always there, always sitting idle, so when they ran a load-testing script against the APIs it didn't have a problem keeping up, because it had well over 32 processes running for Unicorn to answer those requests. Whereas...
A: However, it's entirely possible to say, you know: I want each one to have four, or even eight, worker processes. You need to make sure that the node can handle more than one pod at a time, because when you say "I want this many CPU", it's going to allocate you that many dedicated CPU as a minimum.
B: Thank you. So I said 15 here, but it's probably about 10. A separate task to work on is to review our current CPU recommendations; as I say, this was done basically six months ago, and the scaling today is better because the performance is better, so we could actually get away with 10, so that'd be 10 nodes.
A: It varies, yeah, it does. So basically what you do is you up your thread count and your process count, and you raise the resource requests to match. So you go from two processes to four processes, and if you double your processes, you double your memory request. You could probably fit six or seven pods in here comfortably and be close, but you could do it, and Kubernetes will try not to let you go bonkers, yeah, but you know, be careful. You can actually say "I want one gigantic pod that eats half of a node", and it will give it...
A: Generally speaking, I would say go horizontal, but don't be afraid to increase the individual size and reduce the total count. So what you don't want is 16 massive pods; what you want is maybe a whole bunch of four-, or at most eight-process pods. Now, if I'm going to do eight-process pods, that's eight gigs each, so you're going to fit maybe four of them on an individual node.
A: You should probably have like a hundred of these things, or allow it to scale to that point, and the nice part is it'll do it just like that, because SSH starts up stupidly fast. You'll actually have more delay in the scale-up from the platform that is Kubernetes than you will from the pod itself when it comes to Unicorn. You know: plan to be able to support 50% load at any given minute, which means set the minimums to the 50% load requirement, set the max to your 110% mark, and it will walk up as load goes up.
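That 50%/110% guidance could be sketched as an HPA like the following (the replica counts and names are illustrative, derived from the discussion above, not a verified recommendation):

```yaml
# Illustrative HorizontalPodAutoscaler: minReplicas sized for ~50% of
# expected peak load, maxReplicas for ~110%, so scale-up only ever has
# to cover the remaining headroom rather than a cold start.
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: webservice-example
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: webservice-example
  minReplicas: 50    # ~50% of the pod count needed at expected peak
  maxReplicas: 110   # ~110% of that pod count
  targetCPUUtilizationPercentage: 75
```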
B: Yep, makes sense. For all intents and purposes, for our performance testing, we need it to be ready for max, because that's what we test the products at.
B: Very much, yeah; it's a bit weird with that. So that's how it works. Okay, that's all been really helpful. The only other quick question: how about Consul? I know in Omnibus, Consul's done in the background, kind of guiding that and letting everything communicate with each other. I don't see any reference to Consul in the documentation I've seen so far, so I don't know how that works, how the Unicorn Helm nodes would communicate with the back ends behind it.
A: ...at all. So what you would do is just configure them to point at the Redis node, and if that means you need to tell them to go ask Consul for the node by DNS, you just point them at Consul's DNS naming, but we haven't actually tried using that. Right now we actually say: okay, well, here's Sentinel, go talk to it.
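A sketch of pointing the charts at an external Redis through Sentinel, as described (key names as in the gitlab-org charts' global Redis settings; host names are hypothetical placeholders):

```yaml
# Illustrative values.yaml fragment: the chart-deployed Rails pods talk to
# an external Redis via Sentinel rather than any Consul-based discovery.
global:
  redis:
    host: gitlab-redis          # the Sentinel master name, not a direct host
    sentinels:
      - host: sentinel-1.internal
        port: 26379
      - host: sentinel-2.internal
        port: 26379
```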
A: Kubernetes already has a DNS implementation, so we don't have to rely on Consul at all. If you want to know how many web/API nodes there are, you literally just ask the service. For Unicorn, you go: hey, how many endpoints do you have? And it doesn't pick one to go to per se; it round-robins. You say: mister service, can you point me at one of your pods? And it goes: cool, you talk to that one. Yeah.
B: Okay, that's fine; that's something to figure out. We only need it for light monitoring during performance testing internally. Externally, for customers, it depends on what monitoring service they want to use; Prometheus is built in if they want to use it, and not only in Omnibus, but they may use a third party or something else. So that's not so bad, but it's something I can maybe figure out to get working correctly. But yeah.
A: That's not a bad idea. When it comes to Sidekiq, base it on the actual application's performance patterns. Out of the box, Sidekiq comes with an HPA set with a single pod that will scale from, I think, one to three. Under load I would go a little bit higher: your architecture here is basically four by four, so sixteen, so you would probably want that HPA to have four pods by default, and then it would...
B: Yeah, that's fine. In this environment the Sidekiqs would all be doing the same thing; the idea is just to have them spread the load out between them, so that should be fine to do. I'm not so worried about Sidekiq; it's quite performant and, as you say, it doesn't need massive memory or so, so that's okay. It does, but...
A: Rails tends to be very heavy long-term because it's active all the time. Sidekiq, by its nature, unless you get a massive influx on the queue, is just going to tick along. It's when, say, five thousand users are importing ten thousand repositories and just go smash it that...
B: ...which we never see; thankfully that's hopefully an unlikely scenario, and Omnibus certainly wouldn't handle that well either. Unfortunately I have to go, I've got another meeting, but hopefully this has addressed any fears about what we're doing here. Mainly we'd been taking the wrong fork, but I want to take all this away and start building up a kind of new design, which is going to be based on this environment.