From YouTube: Cloud Native Live: How to right-size Kubernetes
Annie: Yeah, let's... okay, perfect! So are we taking it off right from the beginning, I presume? We're having some technical difficulties here. Lovely. No...
So, let's kick it off. The beginning is always challenging, yeah? Let's go. So yeah, welcome to Cloud Native Live, where we dive deep into the code behind cloud native. I'm Annie Talasto, I'm a CNCF Ambassador, and I will be your host tonight. Every week we bring a new set of presenters to showcase how to work with cloud native technologies. They will build things, they will break things, and they will answer all of your questions, so you can join us every Wednesday to watch live. This week we have Andy here with us to talk about how to right-size Kubernetes, with a live coding demo, so I'm very excited for that. And as always, this is an official live stream of the CNCF, and as such it is subject to the CNCF code of conduct. So please do not add anything to the chat or the questions that might be a violation of that code of conduct.
Andy: So, I'm Andy. Thanks for having me back on the show. I'm the CTO of Fairwinds. We are a Kubernetes-first company; we've been doing Kubernetes for six or seven years at this point. We run clusters for our customers, we operate those, and then we also have a lot of software that we provide in the Kubernetes space for a lot of different things.

Today I'm going to focus on setting your resource requests and limits. We've always told our customers: hey, set your resource requests and limits. This is what makes the scheduler work. This is how we're able to autoscale properly. This is how we introduce stability into our clusters. It's kind of the first thing everybody needs to do with all of their workloads. And all of our customers would come back to us and say: that sounds great, what do we set them to? And I said, you know, that's a great question, that's a very reasonable ask. So we would go look at graphs, dig through their workloads, and say: here's what we think you should set these workloads to for your resource requests and limits. But it's a bespoke, one-off process. It's not really something we can do continuously, and it's not something we can re-up on frequently. So we said: there's got to be a better way to do this.

So I went looking around at the various software available at the time; this was probably three or four years ago at this point. I really wanted to learn Go at the time, and I was like, all right, what can we do here? What can we write? And I found the vertical pod autoscaler project. The vertical pod autoscaler, if you're not familiar with it, introduces a CRD called a VerticalPodAutoscaler. You attach that to your workloads and it can automatically resize them based on how much CPU and memory they're actually using. That sounds kind of cool, but I don't really like the idea of automatically doing that: I'm a big fan of infrastructure as code, and it doesn't work well if we're using HPAs, horizontal pod autoscalers, with CPU and/or memory. So, you know, what else can we do here? And I looked at it and thought: well, it's providing these recommendations, and these are really great. Why don't we just operationalize the vertical pod autoscaler a little bit more? Instead of having to create a VerticalPodAutoscaler object for every deployment in my cluster and manage that separately, we will just do that for you: write a little controller that creates all these VerticalPodAutoscaler objects and surfaces all those results from the vertical pod autoscaler.

Out of that came a project called Goldilocks, which is one of our open source projects, and which is what I'm going to focus on today. So why don't we just go ahead and jump straight into the screen share. Thank you.
So this is what Goldilocks looks like at first glance, but under the hood there's a lot more going on. I'm going to start with the setup and talk about how you install Goldilocks and get it working. I have here a Kubernetes cluster; it's a kind cluster running on my machine, this one's 1.23, excuse me, and it's got a couple of things in here. The first thing that we have to do is make sure we have metrics available.
If we look in our metrics-server namespace, we have a metrics-server running. We can top pods and see the current CPU and memory usage for all the pods in our cluster. That's good, because we can't really make recommendations on usage without existing metrics in place.
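For reference, a minimal sketch of those checks; the metrics-server namespace name is an assumption from this particular demo cluster and may differ in yours:

```shell
# Confirm metrics-server is running (namespace name assumed from the demo)
kubectl -n metrics-server get pods

# Verify live CPU/memory metrics are available for all pods
kubectl top pods --all-namespaces
```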
The second thing that you have to install as a prerequisite for Goldilocks is the vertical pod autoscaler. I have done this already; let me find the command I used. I just installed the vpa Helm chart from the Fairwinds stable repository, so we have a vertical pod autoscaler controller running, and a recommender. If we look in the vpa namespace, we have a recommender; that's the only component of the vertical pod autoscaler that's required. The other thing is that the CRDs for the VerticalPodAutoscaler and the VerticalPodAutoscalerCheckpoint have to be on as well. So, kubectl get vpa, and that doesn't yell at us that the resource doesn't exist.
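A sketch of that prerequisite install; the repo and chart names are the Fairwinds stable ones mentioned here, though flags and versions may differ in your setup:

```shell
# Add the Fairwinds stable chart repository and install the VPA chart
helm repo add fairwinds-stable https://charts.fairwinds.com/stable
helm install vpa fairwinds-stable/vpa --namespace vpa --create-namespace

# The recommender is the only required component; confirm it and the CRDs
kubectl -n vpa get pods
kubectl get vpa
```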
So we have the vertical pod autoscaler installed, and the next thing we'll do is install Goldilocks. That can also be done via Helm, with a fairly straightforward command: helm upgrade --install, or a plain helm install. I've already installed it, so I'm using the upgrade --install form, in the goldilocks namespace. I'm going to create the namespace and then I'm going to set a flag called on-by-default, which changes how the controller behaves. Typically Goldilocks works by annotating or labeling namespaces to enable them for Goldilocks, so if you want to just test it out in one or two namespaces, you can run it the default way and then label a namespace. The other way to do this is to set this on-by-default flag, which tells Goldilocks: look at all the namespaces, everywhere, all the time, no exceptions. So Goldilocks is now installed.
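A sketch of both approaches; the namespace label is Goldilocks' documented goldilocks.fairwinds.com/enabled label, the my-app namespace is a hypothetical example, and the exact on-by-default value name may vary by chart version:

```shell
# Install Goldilocks and watch every namespace, no exceptions
helm upgrade --install goldilocks fairwinds-stable/goldilocks \
  --namespace goldilocks --create-namespace \
  --set onByDefault=true

# Alternatively, run it the default way and opt namespaces in one at a time
kubectl label namespace my-app goldilocks.fairwinds.com/enabled=true
```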
So in the goldilocks namespace we have two components: a dashboard and a controller. First we're going to focus on the controller, so let's look real quick at the logs on that controller.
Andy: (responding to a chat comment) I totally agree. That is a great way to talk about what Goldilocks is for and what Goldilocks is not for. Goldilocks is intended to give folks a starting point. When you're spinning up services in Kubernetes and you don't know how many resources you need to allocate to your different things, if you're not testing them locally to see how many resources your pod consumes, or you just need some place to start, Goldilocks is great for this. It's a great way to just see: hey, I'm roughly using this amount of memory and CPU. It's a great starting point. Is it 100% accurate? Is it the absolute truth of these values? Absolutely not, and I don't think you should just automatically copy recommendations from Goldilocks. But it is a great way to get started. Definitely review over time, always double-check, and do a sanity check to make sure that you're setting things reasonably. So, great comment, totally agree with you.
Annie: Yeah, and then there was a question regarding the logistics here. Mauricia missed the first minutes: will this be recorded, so they can revisit? Yes, it is being recorded, and you can access the recording on the CNCF YouTube channel really quickly after this. So if you missed a few minutes, no worries, you can always watch it later. All right.
Andy: So, coming back to our controller here, we can see that Goldilocks has run a few of what we call reconciles, and it has gone and created VerticalPodAutoscaler objects for all deployments in my cluster. This also works for other pod controllers: it will work for StatefulSets, and technically it'll work for Jobs and CronJobs if you enable the RBAC, but I haven't had great results with the vertical pod autoscaler and Jobs and CronJobs; that's something we need to look into a little bit further, and we're not going to talk about it today.
So I have a vertical pod autoscaler, and we see that it's generating recommendations for all of our workloads in this cluster. You'll see here that the VPA is in mode Off, which means it's not going to automatically update anything; it's not going to change anything in my cluster. It's just going to sit here and generate what I want from it.
I'm going to focus on the stress namespace for the moment, because there's actually load here. I'm running a stress container that's attempting to consume a lot of CPU, and I've set the CPU limit to 500 millicores, so it is being throttled very heavily right now, I would assume. So we can take a look at that VPA.
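A minimal sketch of a workload like that; the names, namespace, and stress image are assumptions about this demo, not its exact manifest:

```yaml
# Illustrative stress workload: tries to use more CPU than its limit
# allows, so it runs heavily throttled (names and image are assumptions)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: stress
  namespace: stress
spec:
  replicas: 1
  selector:
    matchLabels:
      app: stress
  template:
    metadata:
      labels:
        app: stress
    spec:
      containers:
        - name: stress
          image: progrium/stress
          args: ["--cpu", "1"]
          resources:
            requests:
              cpu: 100m
            limits:
              cpu: 500m
```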
All the VPAs created by Goldilocks are prefixed with goldilocks-, so that if you have existing VPAs, it plays nicely with those and doesn't interfere with them. We take a look here and we see that the VPA object has a status, and it has this recommendation object in it. So...
...the pod. So there's the pod, and we take a look at the resources block here, trying to...
So this is where I'm going to start moving a little bit towards the live coding that we're going to do today, because I have the Goldilocks repo open here, and I have a branch open for the live stream. I'm going to go ahead and build this and run the dashboard locally, so you can see the changes.
So let's go ahead and run this. We're going to run the dashboard command, and we're going to say on-by-default. This has the same behavior as on the controller: it ignores any sort of labeling or anything like that and just turns on Goldilocks for all of the namespaces. We start that up and we see it's running on port 8080, so we'll go back over here to our browser and give it a little refresh, and we'll see that we're running the dashboard locally.
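A sketch of running the dashboard from a checkout of the repo; the exact subcommand and flag spelling are assumptions based on what's said here, so check the project's help output:

```shell
# From the Goldilocks repo root: build and run the dashboard locally,
# watching every namespace (flag spelling assumed from the demo)
go run main.go dashboard --on-by-default

# Then browse to the locally served dashboard
open http://localhost:8080
```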
So that's great: I can make changes, and hopefully they'll show here. If we go take a look at our stress namespace, we're going to see we have a single deployment in this namespace with a single container, and we're going to see two different sets of recommendations; we'll talk a little bit about where those come from. Right now we're seeing that I don't have an explicitly set CPU request.
It has been set implicitly by Kubernetes, but it's not been explicitly set in the deployment, so I should probably do that. And then it's going to surface up the VPA recommendation here. So here we're saying: this is if we want to use guaranteed QoS, which means setting our resource requests equal to our resource limits.
There are some definitions down here. For guaranteed QoS, we're going to want to set our CPU request to 587 millicores according to the VPA, and our CPU limit to 587 as well, of course, and our memory request and limit to 105M. That's the recommendation from the VPA, and we have some YAML here if you want to just copy-paste that in. Then over here we have the burstable QoS, and this is going to be the topic of the day. Well, let's talk about this a little bit.
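Roughly what those two copy-paste blocks look like; the guaranteed numbers are the ones read out in the demo, while the burstable bounds below are illustrative assumptions:

```yaml
# Guaranteed QoS: requests equal limits (values from the demo)
resources:
  requests:
    cpu: 587m
    memory: 105M
  limits:
    cpu: 587m
    memory: 105M

# Burstable QoS: request below limit, so the container can burst up
# (these particular bounds are illustrative assumptions)
resources:
  requests:
    cpu: 494m
    memory: 100M
  limits:
    cpu: 671m
    memory: 115M
```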
This is where it gets a little bit confusing. If we take a look here, we have four different values that the VPA gives us: a lower bound, a target, an uncapped target, and an upper bound. For the guaranteed QoS recommendation, Goldilocks is going to pull this target for both values, for both the request and the limit. But then for the burstable QoS, where we're setting our requests lower than our limits, we're allowing the container to burst up from its requests.
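You can see those four fields on any Goldilocks-created VPA; the object name below follows the goldilocks- prefix convention but is an assumption, and the numbers are illustrative:

```shell
kubectl -n stress get vpa goldilocks-stress -o yaml
# status:
#   recommendation:
#     containerRecommendations:
#       - containerName: stress
#         lowerBound:     { cpu: 494m, memory: 100Mi }
#         target:         { cpu: 587m, memory: 105Mi }
#         uncappedTarget: { cpu: 587m, memory: 105Mi }
#         upperBound:     { cpu: 671m, memory: 115Mi }
```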
A
It's
going
to
pull
and
I
have
to
double
check
this
in
the
code
once
we
dive
into
it,
but
Luke's
going
to
pull
the
lower
bound
and
the
upper
bound
as
those
two
values,
and
so
the
dashboard
is
going
to
say
inside
of
that
upper
bound
and
lower
bound,
and
so
this
is
where
we
introduce
a
little
bit
of
confusing
behavior
in
Goldilocks,
and
this
is
a
decision
that
I've
made
randomly
years
ago
and
regret
it
all
do
when
we
write
code
I.
A
Imagine
So if we go take a look at a different namespace, let's just take a look at Goldilocks itself, we're going to see this here, where it says our CPU request of 25 millicores is "equal to" the burstable recommendation of 15 millicores. Now you may be listening and going: 25 is not equal to 15, Andy, that makes no sense. And I agree with you, it doesn't make any sense. We have an open issue on Goldilocks with quite a lot of discussion about this.
Really, the way it's intended to work is: hey, we're saying your lower bound is 15 millicores, your upper bound is 253 millicores, and you're currently set to 25.
I've gotten some feedback from co-workers on this behavior as well. So if we don't have any questions, which I don't think we do, I will go ahead.
Yeah, so let's go ahead. All right, we will carry on. So I have over here the Goldilocks code base; let me close this and we'll just start with the tree here so I can kind of describe what's going on.
(responding to a chat comment) Yes, I know, gorilla/mux is deprecated; we'll be moving off it eventually at some point, but we haven't had the time. So, we will take a look. I know for a fact that the container recommendation here, this blue box, is rendered by the container.gohtml template, so we'll take a look at our Go template. All right, so we've got a bunch of variables being defined at the top here: we're pulling in the CPU request, CPU limit, memory request, and memory limit for that container.
Those would be the existing values that you're set to. We've got the lower bound and upper bound for both memory and CPU; those are the VPA recommendation values that we're going to pass in. Then we have the CPU target and the memory target, which is what we're going to use for the guaranteed QoS, and then some other random stuff that's not super important.
So let's take a look: we've got current values for the guaranteed QoS class, but we want to go down to the burstable QoS class, and here is where we start to see the recommendations. For the CPU lower bound we're pulling our icon. Let me... let's do this.
That's the function that we're looking for. So let's go back to our router and take a look, and we'll dive through the dashboard namespace function. So we're here; this is sort of the dashboard... oh, I'm remembering now: in templates here we have a set of template functions somewhere.
Now we have found the source of our problems. We're passing in an existing value, a lower value, and an upper value, and those are all resource quantities. This is where things get a little bit fun: these are resource.Quantity values from Kubernetes, I believe from the API machinery package. So we have...
...a units issue to tackle in this dashboard too, because you may notice we typically specify megabytes, and I believe that's megabytes we're getting back; maybe a problem for another day, we'll see if we get to it. So we're going to take a look: if the existing value is zero, it's not set, so we're going to return...
...this Font Awesome icon called exclamation. We also have a concept of both text and icons, because we want to put in a text version for anybody using a screen reader instead of a Font Awesome icon. So we have two different ways that we return this: we return an icon and a text.
So that's if it's not set. All right, then we do a lower comparison, which compares the existing value to the lower bound, and we compare the upper bound to the existing value. So we have our two comparisons, and these come back as integers: the comparison function returns negative one if one quantity is less than the other, zero if they're equal, and positive one if it's greater. And if the upper comparison and the lower comparison both come back greater than or equal to zero, that is, the existing value sits anywhere between the two bounds, we return an equals. This is where our issue exists: if it's anywhere in that range, we return an equal.
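A minimal Go sketch of the logic being described, not Goldilocks' exact source; it uses resource.Quantity from k8s.io/apimachinery, and the names are illustrative:

```go
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/api/resource"
)

// getStatus compares an existing value to the VPA's lower and upper bounds,
// mimicking the old dashboard behavior described above.
func getStatus(existing, lower, upper resource.Quantity) string {
	if existing.IsZero() {
		return "not set" // rendered as an exclamation icon in the dashboard
	}
	lowerCmp := existing.Cmp(lower) // -1 if existing < lower, 0 if equal, +1 if greater
	upperCmp := upper.Cmp(existing) // -1 if upper < existing, 0 if equal, +1 if greater
	// Old behavior: anything inside [lower, upper] reported "equal", which
	// is what made a 25m request show as "equal" to a 15m recommendation.
	if lowerCmp >= 0 && upperCmp >= 0 {
		return "equal"
	}
	if lowerCmp < 0 {
		return "below lower bound"
	}
	return "above upper bound"
}

func main() {
	existing := resource.MustParse("25m")
	lower := resource.MustParse("15m")
	upper := resource.MustParse("253m")
	fmt.Println(getStatus(existing, lower, upper)) // "equal" under the old logic
}
```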
Annie: Yeah, we have a new audience comment, again from Visalia; again, amazing, thank you. They say: for the VPA to be useful in a non-prod environment, load tests should be run constantly; in a prod environment there will be resistance from app teams to deploy and use it.
Andy: Yep, great point. I'm assuming LT means load tests; I can't think of another thing it would stand for at the moment. So yes, if you're running this in your non-production environment, you need to be generating load to get accurate results: it's looking at existing VPAs and existing utilization, so obviously in your non-prod environment, unless you're running load tests, the numbers will be off. And then in prod, there will be resistance from app teams to deploy or use it.
I would argue that Goldilocks is not the responsibility of app teams to be deploying and using. I think operators, you know, cluster administrators, can run Goldilocks in the cluster and provide the results back. And because we're using the VPA in off mode, not in update mode, it's perfectly safe to run across your entire cluster to provide these recommendations in production. So I would say Goldilocks should be run in production, because it's perfectly safe to do, and by cluster operators. Thank you for the comment.
Let's say... let's figure out the percentage difference of the lower bounds. I never remember the exact formula, so we're going to have to work it out; I signed up to do math live today, probably a terrible choice, but we'll see how it goes. So we have the existing quantity and the lower quantity.
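For reference, the formula he's reaching for is usually written in one of two ways; which one Goldilocks should use is left open here:

$$\text{relative difference} = \frac{\lvert q_{\text{existing}} - q_{\text{lower}}\rvert}{q_{\text{lower}}} \times 100\%,\qquad \text{percent difference} = \frac{\lvert q_{\text{existing}} - q_{\text{lower}}\rvert}{(q_{\text{existing}} + q_{\text{lower}})/2} \times 100\%$$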
So in this case, we don't have to remove the equal sign entirely, frankly, or change this logic completely.
Annie: And now there's a question from the chat: how can we set the CPU limit for a pod to avoid CPU throttling?
Andy: That is a great question, and a very large and meaty topic that I don't necessarily have the time to go into today. CPU throttling is a very common issue and a very contentious one; there have also been several Linux kernel bugs around it in the past, so it's gotten a lot of noise, and it's complicated. There was actually a really great talk that somebody sent me from KubeCon, I believe the last KubeCon North America, that talks about...
...why we have trouble communicating about CPU: we're specifying in fractions of cores, but CPU is actually accounted in time, so we're doing this weird translation of quantities that makes reasoning about CPU requests and limits a little bit funky. Essentially, to avoid CPU throttling: turn up the CPU limit, or turn it off.
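A sketch of the "turn it off" option: keep the request so the scheduler can still place the pod, and simply omit the CPU limit (values are illustrative):

```yaml
# The request guides scheduling; omitting limits.cpu means no CFS throttling
resources:
  requests:
    cpu: 500m
    memory: 105M
  limits:
    memory: 105M   # keeping a memory limit is still common practice
```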
There are a lot of advocates out there for saying: don't set CPU limits. I haven't quite jumped over to that side at this point in time, but it is in some cases a valid way to do things. Generally, though, the answer is just: increase your CPU limit. And actually, I intentionally set this demo up with this stress container being throttled, to sort of show that the VPA will detect that you're at the high end of CPU usage; but I don't believe it actually takes CPU throttling into account.
So if you are seeing a lot of CPU throttling, this is one of those cases where the VPA may not give you the best recommendation, so go ahead and turn that CPU limit up if it's affecting your workloads. There's no harm in doing that in most cases, assuming you have the space to schedule that pod and all of those things. So, definitely a big, big topic, and something that we should all learn more about; I definitely need to do a little bit more research there. So, cool. Let's go back here to our code. I think what we want to do is...
Annie: There's another question in the comments as well, which is amazing by the way; thank you so much, everyone, for engaging. So, the question: let's say for a microservice the VPA recommendation is to change the CPU from two CPUs to 1.3. How can you assure app teams that there will be no performance degradation?
Andy: I can't; that's why we test. I think testing these changes in staging is the right way to go. As the same person mentioned earlier, load testing in staging is a very valuable tool, very valuable. If you're not doing that, then maybe turning down your CPU isn't wise; if it's a highly critical workload and things like that, then perhaps not changing it would be the right way to go. I always recommend best judgment; everybody's workload is different.
Everyone's workload has different requirements, and we have to take all of those things into consideration as operators when making these changes. It's a recommendation for a reason; it is not an absolute, for sure. All right, so back to this: I think what we need to do is just remove this.
Yeah, let's give that a shot. Let's restart our process here. So I just removed the statement that says: if it's in between the two, give us an equal sign. Which, I believe, means we're never going to get an equal sign at all, so that concerns me a little bit. Let's take a look at the effect that this has had on our dashboard.
Let's go take a look. What was that namespace? Goldilocks, Goldilocks... yep, there's the problem. Now we never get an equal sign, or anything at all, which is not quite right. So if we are in between, we get nothing, but we need a not-equal. So this is getStatusRange... what are we using? We must be using a different function for these. Let me make sure we generate an equal sign.
Let's... that was the controller, so let's edit the controller. I'm going to edit the resource requests on the controller to match the current recommendation.
Let's see. So we need to know if we're looking at resource requests or resource limits; let's call it resourceType. All right, so we're going to add a resourceType parameter, and then in our...
Oh, we're going to completely change the logic of this function; that's going to be exciting. All right, for our equals we'll do another switch on resourceType: we'll do a case for a request and one for a limit, and then the comparison. If it's a request, we want to compare against the lower bound, and if it's a limit, we want to look at the upper bound.
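A minimal sketch of the shape that change takes; the names here are illustrative, not Goldilocks' exact source:

```go
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/api/resource"
)

// boundFor picks which VPA bound an existing value should be judged
// against: requests against the lower bound, limits against the upper.
func boundFor(resourceType string, lower, upper resource.Quantity) resource.Quantity {
	switch resourceType {
	case "request":
		return lower
	case "limit":
		return upper
	default:
		return upper
	}
}

func main() {
	lower := resource.MustParse("15m")
	upper := resource.MustParse("253m")
	existing := resource.MustParse("25m")
	// A request of 25m is judged against the 15m lower bound
	fmt.Println(existing.Cmp(boundFor("request", lower, upper)))
}
```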
And hopefully, once Apple stops yelling at me, we get... oh, "error generating template data". All right, we broke it: "request" not defined. I probably need to... you know what? Now I know. All right.
Right, so now we're seeing: if we're exactly equal to the lower bound, we get the lower status; exactly equal to the upper, likewise. If we're off, we get a "not"; it should probably be a not-equal-to, but that's great. So let's go ahead and commit the change, because that was annoying enough as it was: "replace equals logic for burstable qos".
So that's kind of the base of the issue; that fixes the beginning of it. But we have 15 minutes left, and there's something else here: these values are going to fluctuate as you generate load. If we go look at our stress container... I'm gonna guess, well, okay, stress is pretty consistent, but this will probably bounce around 587, up and down a little bit, and we don't necessarily want to always suggest changing in those small increments. So I have two recommendations from co-workers, which I think are really good: either round the recommendation, or only say it's not equal...
...if we're outside, you know, 10% or something like that. And I think what we should first do is round the recommendation, so let's go digging for where we might do that. We should also fix our tests; you know, testing is boring, I'm not going to do that here today. I will fix them before I open the PR, but let's keep the live stream focused on features.
We can do all the testing work later. Testing is very important, though; I'm not saying you shouldn't have tests. But I think what we're actually going to need to do is modify my summary package. The summary package is what actually goes and gets all of the data from the VPAs. So if we run the...
...dashboard here, we'll actually get a big old gnarly JSON object that has all the data that feeds the dashboard.
That's the API side of things that's getting that information. So let's see where we go collect all of our VPA objects: NewSummarizer, GetSummary... where is the function where we're going to get the summary? An aptly named function. So we're going to filter by namespaces, we're going to look in a cache that we keep locally just to speed things up a little bit, and then we get-or-create the namespace summary.
Then the workload summary: the first thing we do is get the actual settings from the workload; we don't need to worry about that part here. But now here we are: if the VPA status is nil, here's what we do with that; and if the length is less than or equal to zero, blah blah; you get the excluded containers. Yes, I'm vaguely remembering this now; it's one big ugly loop. I'm looking for...
...where the quantities get formatted. There's a max allowable string length, like five: if the length of the memory string is greater than the max allowed length, then we round. We're rounding with RoundUp, which, per the docs, rounds up to the provided scale, and false is returned if the rounding operation resulted in a loss of precision. But I don't actually care if I lose a little bit of precision here.
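A small sketch of that rounding behavior with resource.Quantity; the input value is illustrative:

```go
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	// An illustrative raw byte value like a VPA memory target might carry
	q := resource.MustParse("105131000")

	// Round up to the nearest 10^6 (mega); RoundUp returns false if the
	// rounding operation resulted in a loss of precision
	exact := q.RoundUp(resource.Mega)

	fmt.Println(q.String(), exact) // e.g. "106M false": precision was lost
}
```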
Andy: (answering a chat question) Great question, great question. So, the recommendations can be based on historical Prometheus metrics; that's the VPA. And really, all of these questions about the quality of the recommendations and what the recommendations are actually doing are the vertical pod autoscaler's domain, not something Goldilocks itself is responsible for. If you do have deeper questions about how the vertical pod autoscaler functions, I definitely recommend going to that repository and maybe contacting that community.
And I do believe it actually takes out-of-memory events into account specifically. I have not dug through this code in a little while, and it has changed since then, but...
...when it sees historical out-of-memory events, it will increase the memory recommendation. But I can't promise that, so I definitely recommend checking with the vertical pod autoscaler folks.
That's in the kubernetes/autoscaler repository; under the vertical-pod-autoscaler folder is where all the code for this lives. They've actually done a lot of updates that we'll be incorporating into Goldilocks soon, probably within the next few months or so, and we may get some enhanced behavior from that, because I know there's been a decent amount of work done on that repository since our last update. But great questions, keep them coming.
Annie: Right, yeah, another question immediately here, which is great: can it be integrated with the Thanos gateway, which may be persisting Prometheus data for a longer period of time?
Andy: I have no idea; quite possibly, but I really don't know. From the VPA's perspective, if we go back to that repository, we take a look at the recommender package, and we look at the flags on the recommender, all we can give it is a Prometheus address, a Prometheus job name, and then a history length.
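The flags being referred to look roughly like this on the VPA recommender; the exact names and defaults should be confirmed against the kubernetes/autoscaler docs for your version, and the address here is illustrative:

```shell
# Illustrative vpa-recommender flags for Prometheus-backed history
vpa-recommender \
  --storage=prometheus \
  --prometheus-address=http://prometheus.monitoring.svc:9090 \
  --prometheus-cadvisor-job-name=kubernetes-cadvisor \
  --history-length=8d
```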
So, assuming you have a Prometheus-compatible endpoint available that it can query for the amount of time you've told it to, in theory it should work. But I am not at all familiar with the Thanos gateway and how it functions, so I can't answer that question directly.
Okay, we only have a few minutes left, so I'm not certain I'm going to have time to round the numbers, but I will be working on that. Keep an eye out for a PR on Goldilocks within the next few days to implement these changes and hopefully clarify the very confusing behavior in previous versions of Goldilocks of the equal sign telling you that large numbers are equal to small numbers, because I think we can all do that math very quickly. So, do we have any more questions?
Annie: We do not, not at the moment. But while we see if any come in: maybe let us know if there are any learn-more resources that we can check out after the session, or anything like that.
Andy: Yes, so we do have documentation for Goldilocks. There's plenty of information in there about how it functions, and there's a whole FAQ on how to use it. So if you want to use Goldilocks, check that out; that is at goldilocks.docs.fairwinds.com.
If you want to take a look at any existing issues, or file an issue, please go to GitHub: that's github.com/FairwindsOps/goldilocks, fairly easy to find. In our FairwindsOps organization we have a whole lot of other open source projects, so please check those out. We have Pluto for checking for deprecated API versions, we have Polaris for policy, and then Nova for checking for out-of-date versions of things, because we all know keeping the many things that we run in Kubernetes up to date is a nightmare. So, lots of great open source resources from Fairwinds there. I think that's it for resources.
Annie: Right, and I think we're starting to get close to the end. If anyone has any questions, they can still submit them, because we only have a few minutes left. But is there anything else that you wanted to finish us off with from your side, while anyone who's typing a question submits it?
Andy: Set your resource requests and limits, folks. That is kind of my thing; I talk about it all the time, and I work with Goldilocks a lot. So many problems can be mitigated by setting them, setting them properly, reviewing them over time, and load testing in your non-production environments if possible. So I highly recommend that.
Annie: It was great to have a session about how to right-size Kubernetes today, and we also really loved all the interaction and questions from the audience, as I mentioned before. As always, we bring you the latest cloud native code every Wednesday, and in the coming weeks we have more great sessions coming up, so tune in then as well. Thanks for joining us today, and see you next week.