From YouTube: Cloud Native Live: Cloud cost monitoring
A
Welcome to Cloud Native Live, where we dive into the code behind cloud native. I am Annie Talvasto, I am a CNCF Ambassador and marketing lead at VSHN as well, and I will be your host tonight. Every week we bring a new set of presenters to showcase how to work with cloud native technologies. They will build things, they will break things, and they will answer all of your questions. You can join us every Wednesday to watch live. This week we have Andy and Stevie here with us to talk about cloud cost monitoring; very excited for this. And as always, this is an official live stream of the CNCF and as such is subject to the CNCF code of conduct, so please do not add anything to the chat or the questions that would be in violation of that code of conduct.
B
Thank you very much. It's great to be back here. So, as the title says, we're talking about Kubernetes cost management. The title is a little bit of a misnomer, though, because we're actually here to talk about the thing that I care most about, which is resource requests and limits in Kubernetes. If you've listened to anything that I've done, you've probably heard me say this. I'm also joined by Stevie today, so I'll do a quick intro of myself: I'm Andy, the CTO at Fairwinds, author and maintainer of a lot of our open source at Fairwinds, as well as a long-time Kubernetes practitioner. And then I will hand it over to Stevie, who's going to do the majority of our demo today, to introduce herself.
C
Hi, my name is Stevie. It's very weird because you all are on a screen to the right, so I'm looking at my camera and I'm not seeing you; it's very weird to be just speaking into the ether. But my name is Stevie. I am an SRE technical lead at Fairwinds. I've been in the field for a number of years, have had many different roles, and am currently at Fairwinds helping customers with Kubernetes clusters and working on some of our open source stuff.
C
...kick it off. Although before I kick it off, I did have a quick question. Oh.
C
That was great, I love it. So yeah, we're here today to talk about, you know, it says cost optimization, and as Andy said, what we're really going to talk about is his favorite topic, resource requests and limits, because that is a big part of cost optimization.
C
So it's already difficult. If you just have a plain app running on, say, an EC2 instance or wherever, you've already got the challenge that every time you change your app, every time you add a feature or something that changes your app's profile in terms of the resources it needs to do its job, things shift. Then you add containerization and orchestration and the whole Kubernetes thing on top of that, and you have a bigger challenge in trying to understand the cost of your app: how much of the cost of your overall infrastructure is due to that app?
C
You've got things like the fact that if you're running a Kubernetes cluster, you likely have multiple teams deploying to it at any time, so you don't know who's throwing what into your cluster, how they're setting things, or not setting things, as the case may be. You get someone who's just like, "I'm gonna toss this thing in here and give it two gigs of memory, because that's what I think, so let's just go," right?
C
And then you have things like potentially running multi-cloud. You could be running in GKE, AKS, EKS, you know, so you have things running all over the place.
C
How do you keep track of the cost when it's spread across clouds? And then the biggest thing is the fact that workloads are dynamic in an environment like Kubernetes. Just in the general everyday operations of the cluster, your workloads will move around; they'll get rescheduled to different nodes or whatever. And if you've got any kind of autoscaling in place, a horizontal pod autoscaler or vertical pod autoscaler or the cluster autoscaler, then you've got these things moving around a lot, potentially. You never know where they are, so it's really hard to track them. They're like ninjas: it's really hard to track them across your cluster and figure out what exactly you need, and how much they're costing you at any given time.
C
You've got to dig through, yeah, exactly. It becomes very difficult to take a basic thing like a node, slice it up, and say: this application is costing this and this and that. So that's where we come in. How do you get to the point where you can start understanding what the cost of your workloads in a Kubernetes cluster might be? So, Andy again was talking about resources and requests; a resource request, because that is what the kube-scheduler...
C
That's one of the things the kube-scheduler uses to make decisions about where to schedule your workloads in a cluster. It's actually got a pretty complex algorithm, with multiple factors it takes into consideration, including what you tell it you need for your application. It uses that to determine how well and how efficiently it can put your workloads together on a node in your cluster.
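For readers following along, this is what a request looks like in a pod spec. A minimal, hypothetical snippet (the app name, image, and numbers are all illustrative, not from the demo); the scheduler only looks at the requests when placing the pod, while the limits are enforced at runtime:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app            # hypothetical workload
spec:
  replicas: 1
  selector:
    matchLabels: {app: demo-app}
  template:
    metadata:
      labels: {app: demo-app}
    spec:
      containers:
        - name: demo-app
          image: nginx:1.25
          resources:
            requests:        # what the kube-scheduler uses for placement
              cpu: 100m      # 100 millicores
              memory: 128Mi
            limits:          # hard ceiling enforced at runtime
              cpu: 500m
              memory: 256Mi
```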
C
Inside of your cluster, right. So you might be thinking: okay, great, so now I know I might be overspending in my cluster; my cluster may not be efficient in that way. How do I even get to know that? How do I get that information?
C
So one of the first things that... am I sharing my screen? Am I on my terminal?
C
So one of the first things you might want to do is just get an overall look at what your cluster looks like in terms of utilization, or in terms of what you're requesting. We like to use an open source tool called kube-capacity. It was actually written by Rob Scott, who used to be a Fairwinds employee.
C
So we're trying to sort of keep it in the family. kube-capacity is really neat because it essentially munges kubectl top and describe together and gives you some good information. I've already downloaded it, obviously, so if I just run kube-capacity, what it's going to show me is a high-level view of the resources in my cluster.
C
The top line is going to show me how much of the CPU and memory my cluster has that it can give me (this is not allocatable or anything, it's just in general), and how much of that am I asking for? So...
C
Yeah, I mean, that seems like what you want. You definitely want to get as close as possible while leaving a little bit of headroom for things like spikes and stuff like that, and so anyone looking at this would be like: oh yeah, that looks about right. But if you add, and this is the useful part here, a little flag, --util, it will add another two columns that show you what you're actually using.
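The invocations Stevie walks through look roughly like this. A sketch based on the kube-capacity README (the utilization columns require metrics-server in the cluster), run against whatever cluster your kubeconfig points at:

```shell
# Summarize cluster-wide CPU/memory requests and limits per node
kube-capacity

# Add utilization columns showing actual usage, the flag discussed here
kube-capacity --util

# Drill down to individual pods
kube-capacity --pods --util
```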
C
So you can request a certain amount, and the kube-scheduler will say: okay, this is the amount this person is telling me they need, at a minimum, to start up their workload, so I'm going to make sure I put them on a node that has that amount. And if there is no node that has that amount, I'm gonna...
C
You know, the cluster autoscaler: the scheduler is going to put this thing in Pending, the cluster autoscaler is going to see that, and it's going to pop up another node to accommodate it. But if you're not actually using the amount that you're requesting, then you're essentially over-provisioning your cluster.
C
...of like, "nah, I need this." So this is a sandbox cluster that we're running here at Fairwinds, and you can see, just in general from that top line, that across our entire cluster we're requesting 84%, but at this point in time we're actually only using four percent. This is a point-in-time representation, you know. So it's clear that there's some room for improvement here. Memory requests are also a little off, though not as wildly skewed as the CPU. So we're like...
C
Okay, we're gonna concentrate on the CPU requests. So what do I have to do now? I have to go look at some graphs; I gotta pull up Prometheus or something, look at a bunch of graphs, and try to figure out an average on my own, things like that. This is where Goldilocks comes in. Goldilocks is an open source tool that Fairwinds created that uses another tool that is also open source:
C
the vertical pod autoscaler (VPA). The VPA lives in your cluster and essentially makes recommendations, and it can also manually or dynamically change your workloads. We don't do that with Goldilocks; we just use the recommender. So I am gonna go over to Goldilocks and just show you. We have really great documentation (I'm very proud of our documentation), and here, for Goldilocks, the installation is very simple.
C
As you see, you just need the vertical pod autoscaler. You can install that separately, or you can use it...
C
If you install Goldilocks using our chart, which we recommend you do, you can enable a sub-chart, which we also maintain, for the VPA. So you can install Goldilocks in your cluster and it will help you visualize, and give you recommendations using data from the VPA recommender, to help you set your resource requests and limits in a way that is a little closer to what you're actually doing. That will then have a domino effect of allowing you to schedule more pods on fewer nodes, which will help your cost.
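For reference, the chart-based install she describes maps roughly to these Helm commands. A sketch based on the Goldilocks docs; the release name and namespace are our choices here:

```shell
# Add the Fairwinds stable chart repo
helm repo add fairwinds-stable https://charts.fairwinds.com/stable

# Install Goldilocks with the maintained VPA sub-chart enabled
helm upgrade --install goldilocks fairwinds-stable/goldilocks \
  --namespace goldilocks --create-namespace \
  --set vpa.enabled=true
```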
C
So the way we've installed Goldilocks for our demo: we use Argo CD in our environment, so we actually...
C
We do this thing where we use Reckoner, which is another open source tool that Fairwinds maintains that helps you manage multiple Helm charts in one file. So we have this thing called a course.yaml.
C
We point to a bunch of Helm manifests and include some values in there, and this is what it looks like the way we've done it with Argo: we've got the Goldilocks chart, we've enabled the VPA (which, again, if you did not enable it, you would need to install separately), and we've also pointed the VPA to an existing Prometheus installation that we have in our cluster, using the Prometheus stack chart.
C
Let me see if I can... yeah, how's that? Better? Yeah, thanks for that. So we have Prometheus running in the cluster, which means that right off the bat the VPA will have access to the historical data Prometheus has, depending on what you've set for retention and things like that in Prometheus.
C
These are a bunch of... you see, we are also setting resource requests, and we've enabled the controller and the dashboard to be on by default. If you did not do this, essentially you would want to go in and tell Goldilocks which workloads you want it to do the monitoring for, and I'll show you that when we look at the Helm command. So anyway, this is the rest of that, and this is how we deployed it into the cluster that we're working with here. We just... yeah, sorry.
C
It should be, but probably a lot of you use Helm, and this is essentially the equivalent of that. Let me actually do this so you can see the top. So this is the equivalent of that: it's a helm upgrade, installing Goldilocks. We're gonna put it in its own namespace and pass in the flag to create the namespace
C
if it doesn't exist already. We're pointing to our Goldilocks chart in the Fairwinds stable chart repo, and then I'm pointing it to a values file, which you may have seen on the side here. This is exactly the same thing as what you saw in the course.yaml; it's the values file that you pass in directly. Again, the same thing: enabling the VPA (so we're going to use our sub-chart), and we're going to disable the updater, because we don't want the VPA making actual changes to the workloads.
C
We are pointing it to an in-cluster Prometheus, and we're enabling the dashboard. You know, pretty much that same deal, just a different way. So these are just two different ways that you can install Goldilocks; they're both using Helm under the hood, just different approaches. If you install it this way, you notice the on-by-default flags are not in this values.yaml file.
C
So if you install it this way and you don't specify those flags, you'll just need to label the namespaces, or the workloads, that you want Goldilocks to watch. So you just do a... let's see if I...
C
So, for example, here is an example command for enabling Goldilocks on the karpenter namespace, meaning that Goldilocks will then create VPAs for the workloads that are in that namespace, and then we'll use that VPA information to give you recommendations, or to show you recommendations, for how to optimize your resource requests and limits for those workloads.
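The labeling command she refers to looks like this. A sketch per the Goldilocks docs; the namespace name is the one from the demo:

```shell
# Opt the karpenter namespace in; Goldilocks then creates VPA objects
# (in recommendation-only mode) for every workload in that namespace
kubectl label namespace karpenter goldilocks.fairwinds.com/enabled=true
```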
B
When you say workloads, what kind of workloads does Goldilocks support? Which kinds will it work with?
C
All right, so we've got the controller, we've got the dashboard, and we've got the VPA recommender. And again, we've set it to on by default for all the namespaces, so there should be VPAs for, like, everything in this cluster at this point. So yeah, as you see, there are VPAs for everything. Mode "Off" means, again, that it's not actually going to be doing any updating of the workloads; it's not going to patch the requests and limits dynamically.
C
So it's safe. So this is what our dashboard looks like. We set up Goldilocks, we set an Ingress in front of it, and we're hitting it here. These are all the namespaces that are in our cluster, and so these are all the namespaces that Goldilocks has, as we saw, created the VPAs for the workloads that live in them.
C
So if we click on any of these, let's click on Karpenter because, as you saw, I'd actually manually labeled Karpenter in a previous run-through. Here it shows you the details about Karpenter: the namespace, the top-level controller (a deployment in this case), and here's the container. And it shows you two different recommendations for how to set your requests and limits.
C
There's a guaranteed quality of service and a burstable quality of service, and we handily define these for you below, but the TL;DR is that guaranteed QoS generally means setting both your requests and limits to the same thing, and that affects how your pods get evicted from a node, or don't get evicted, in the case of the guaranteed QoS. It helps set a hierarchy for what happens when pods need to be evicted because of some resource contention.
C
Burstable QoS is exactly what it sounds like: it just means that you set the two differently, so that you're able to burst for a short amount of time (also depending, of course, on what else is running on that node), but you're able to burst above your request to handle short spikes of traffic and things like that. So if... yeah.
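In manifest terms, the two recommendations differ only in how the requests relate to the limits. Illustrative numbers, not the exact values from the demo:

```yaml
# Guaranteed QoS: requests equal limits for every resource
resources:
  requests: {cpu: 25m, memory: 226Mi}
  limits:   {cpu: 25m, memory: 226Mi}
```

```yaml
# Burstable QoS: limits above requests, leaving room for short spikes
resources:
  requests: {cpu: 25m, memory: 226Mi}
  limits:   {cpu: 100m, memory: 404Mi}
```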
C
Yep, that's a good point to make. So yeah, if you click here, we provide a nice little YAML block for you that you can use to update your workload in whichever fashion you typically update your workloads. In our case again, because we're using GitOps, we would take this, put it into our course.yaml file, run a reckoner plot on it, and change it. How are we doing on time?
C
Magic... where am I? Actually, my notes said that before you did this, to go into the right inventory so you wouldn't have to do this, and I ignored my notes. All right, so here's our course.yaml. Actually, I think this is my other course.yaml, but here, anyway. All right, so we were looking for Karpenter. All right.
C
While I try to figure out where my screens are... all right, so we're going to change this to 35 for CPU and 226 for memory, according to this. I can actually just do that; that was exactly the thing I said I could do: come here and copy this. Now here's the question... yeah, I hate that so much. I'm sure there's something I could turn off to make it not do that. It's being weird, like ignoring my indentation, so...
C
Requests, yes, all right! So now we have set the requests and limits for Karpenter to be the same. So we have this up here, so let's go over there (and I like to switch everywhere that I can possibly do it), and now we're going to stage the resource requests and limits.
C
I forgot to even create a branch; I'm on master. Whoopsies.
C
That is true. And then I have done something very goofy here, so: undo last commit, and let me do... I'm redoing my changes. I hate this.
B
Here we're templating out the actual manifests from the Helm chart into a directory, so that when we make the pull request, our diff actually reflects the full change set that's going into our cluster, rather than if we had just specified a Helm release going into the cluster.
C
Remember, I had to manually download the old version of Reckoner, or the newer version of Reckoner, so I have to run this command again, but this time using my Reckoner, which is in my Downloads folder.
B
You would appreciate this, Stevie. Somebody on LinkedIn said: "I reckon it's all going fine." So...
C
All right, so what we should see in this cluster once Argo has reconciled... should I go to Argo, or do you want me to just say where I am?
C
All right, so yeah, we're waiting for Argo to pick up the change and redeploy with our new resource settings, and then hopefully what we'll see in Goldilocks is that it does not have... it'll probably still show the burstable QoS, but it should be good on the...
C
So this is, you know, probably a bit of a slower version, but it's an idea of a workflow that you could have. You check your cluster first to get an overall view of your efficiency, in terms of cost and in terms of utilization, and then you use a tool like Goldilocks to get an idea of how you can adjust your resources to help the kube-scheduler...
C
...schedule, I should say, your workloads in a more cost-efficient way. But I'm going to turn it over to Andy now. He's going to share his screen, because he's going to talk to you about the Goldilocks cost feature, the direct Goldilocks cost feature.
A
And before we go over there, we had a comment, or a bit of a question as well, but mostly a comment, from the audience. Mark is saying: "No Argo CD webhook used? I'm in shock."
B
Good comment. I mean, it would have picked it up without me hitting refresh; it's just a little too slow for me. But we never did get around to enabling the webhook. It's a sandbox, so it doesn't get as much love as we might hope it does. So yeah, Stevie, thank you for showing us Goldilocks and how it can help you go find the right CPU and memory requests and limits.
B
I'm actually looking at the Goldilocks screen just like you were, so a seamless transition here. So yeah, you may be asking: "Hey, I came here to learn about cost, and you've just been talking about CPU requests and limits the whole time." I always love to reiterate this to folks. We get a lot of questions from our customers that are like: "Can you recommend what node size to use?"
B
"Can you recommend tools to help us save costs?" And really, the thing that drives all of the scaling and all of the bin packing and scheduling in Kubernetes is this setting right here: resource requests and limits. And so I will probably be saying this for another five years, and I've been saying it for the last five years, but I think this is probably the most important thing that we can all do to enable stability and cost control in our clusters.
B
That being said, the tie between these numbers right here, these 35 millicores and those 226 megabytes of memory, and cost is a little bit obscure. It is a portion of a node running in a cloud provider that's billing me by the hour for that node; that node has a certain amount of CPU and memory available; and there's a certain amount of overhead taken up by Kubernetes on each node. So how do we really understand what this is costing us?
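One rough way to do that arithmetic by hand is to attribute a share of the node's hourly price to a container based on its requests. A sketch only: the m5.large on-demand price, its 2 vCPU / 8 GiB shape, and the naive 50/50 split of node price between CPU and memory are all assumptions here, and this is not exactly how Goldilocks attributes cost:

```shell
# Illustrative assumptions: m5.large on-demand price (~$0.096/hr, us-east-1),
# 2 vCPUs, 8 GiB of memory, node price split evenly between CPU and memory.
node_price=0.096
cpu_rate=$(awk -v p="$node_price" 'BEGIN{printf "%.6f", p/2/2}')  # $/CPU-hour
mem_rate=$(awk -v p="$node_price" 'BEGIN{printf "%.6f", p/2/8}')  # $/GB-hour

# The Karpenter request from the demo: 35 millicores of CPU, 226 MB of memory
awk -v cr="$cpu_rate" -v mr="$mem_rate" \
  'BEGIN{printf "$%.6f/hour\n", (35/1000)*cr + (226/1024)*mr}'
```

Requests this small come out to a fraction of a cent per hour, which is why the real cost questions are about how many of these portions fit on each node you pay for.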
B
And so, in our commercial product we have a ton of functionality around this, but we wanted to bring some of it back to the open source. So here we wanted to take the data that we have from the cloud providers around the cost of various bits of infrastructure and expose that here in Goldilocks. You may have seen this banner up at the top of Goldilocks.
B
Basically, what this enables you to do is get some cost information right here in the Goldilocks dashboard. So I'm gonna put my email address in, and it's going to send me an email with an API key (it's a totally free API key), and we're going to put that API key in and hit submit here, and a bunch of numbers are going to start showing up. Well, actually, sorry, there's one more step here.
B
We have to tell it what our infrastructure is. We have AWS and GCP cost data in here, and we also have the ability to say "other." So if you're running on...
B
If you're running in a different cloud provider, or you just want to put in your own numbers because you don't trust us (totally fine), you can hit "other" and put in the dollars per CPU-hour and dollars per gigabyte-hour. I'm going to click on AWS, because I know this cluster is running in AWS, and I'm going to find our node size here, which is most likely... I don't actually remember, but it's most likely an m5.large, because that's kind of typically where we start with demo clusters.
B
And so we have a rough estimate here on... well, we have an actual number from AWS as to the on-demand cost of a CPU-hour and the on-demand cost of a gigabyte-hour for this node type. So I'm going to hit save, and then we're going to start to see some numbers show up on the dashboard that weren't there before. So now I have an idea of how much this container is costing me per hour to run, based on the current settings.
B
So, obviously, I could go punch all these numbers into an Excel spreadsheet and do all this myself; this seems a little bit more convenient to me. And then, if we go look at, say, a workload that is over-provisioned, we can take a look here at this demo app and see the recommendation to lower our CPU and memory requests, because it is over-provisioned, and we can see roughly how much this is going to save us in our cluster. And we can also do this across all namespaces.
B
Perhaps not, we shall see; live demos again. But we'll be able to look through all of our recommendations at once and see the cost of applying those recommendations, so we can start to target the things that we think will save us the most money. And so I've actually tuned a lot of these already, but...
B
Yes, yes, yeah. So these numbers are all really small because, frankly, this cluster, as a sandbox, is not doing a ton. But we could look through all of these and say: okay, this is probably our most expensive workload; let's go ahead and see if we can reduce the cost of that. So we're starting to enable some of that functionality here in Goldilocks. There's lots of opportunity for improvement, and we're happy to accept enhancement requests on the repo. But that's what I'm here to share.
C
When we look at this (and this is, frankly, a new feature for me, so it's cool to be seeing it in action), what we're talking about here is that if you change your workload to match the Goldilocks recommendation, we'll be able to see, almost in real time I guess, how much you'll save.
C
We would hope to see the numbers next to the QoS, and at the top under the container, match. I'd hope to see that container number decrease to what Goldilocks said it could get to, right? Yes.
B
Yes, so if we had looked at Karpenter beforehand and had cost enabled, it would have said something slightly higher than, you know, 18 cents an hour, right? Or... I don't think I'm doing the math right there, but...
B
...higher than that cost, and then we reduced it. Now, it's important to note that this is a recommendation, right? These numbers are not set in stone. They're recommended by the VPA, they're dependent upon you having actual usage, and they're averages across time. We are hooking it up to Prometheus, so we're getting a little bit more accurate data, but it is still a starting point. So you...
B
You need to think about the needs of your application when you're going to apply these. And it's also important to note that you may see increases, because sometimes the VPA is going to recommend: "Hey, you're always right at the top of your CPU limit; we think you need to bump this up," and that will increase cost.
B
You know, it's as much about efficiency as it is about cost. Higher levels of efficiency get you, ideally, less cost, but that may not be the case in all environments. I will say it's the case in, like, 90% of environments, but, you know.
C
And I think of Goldilocks the same way that I think of Google Maps, in a way. Google Maps will tell you where to go, but you know a bit more about what's on the ground. So, very clearly, if you see a barrier in front of you, like a police blockade, and Google Maps is like "continue straight"...
C
...you know not to continue straight. And I feel like using Goldilocks has some of the same things. If it says, "yeah, take this workload and decrease it," and you're like, "okay, but I know that my workload has some ridiculous spike that maybe got evened out over the aggregation in Prometheus or something like that," you know, it's just a little common sense along with the recommendation, because you know your app.
B
Yes, strongly agree, and I think that's a great metaphor. There's Google Maps: maybe you know that road's closed and they haven't figured it out yet.
A
...is when you ask for questions: they will come. So I think now is the perfect time to ask the questions, and then we can see. Mark has already kicked us off, so they ask: regarding cost for burstable configurations, does Goldilocks estimate cost according to the average use of the workload, or based on the resource limits?
B
I should know the answer to that, and I don't. I believe we would actually calculate based on the request. It's definitely not on usage, I can tell you that, because Goldilocks itself doesn't really have historical information about the actual usage; that's mostly piped into the VPA, and then the VPA provides the recommendation to Goldilocks. So it wouldn't be based on that.
B
My guess is that it would be the request, but frankly, I didn't actually write this piece of functionality, which is unusual, so I don't know exactly in this context. I'd have to go diving through the code to find that, but I can tell you it's definitely not usage.
A
If anyone wants to reach out to you with this answer, or with similar questions later on, is there a place where they should reach out, like a Slack or a social handle somewhere?
B
The Goldilocks repo; filing an issue is always a great place. We have a community Slack for all of our open source projects; that's a great place to get a hold of us. You can find a link to that in any one of our READMEs, or in any one of our documentation pages, which incorporate the README as well, so there's always a "join the Slack" button there. And then I am personally in the Kubernetes and CNCF Slacks as sudermanjr, and happy to respond there as well.
A
Perfect. And audience, please ask your questions now; now is the Q&A moment. And while we see if anyone else is going to send anything in, I would have a few questions. So, if someone's super excited about these topics right now, are there any really good "learn more" resources that you could share with our audience?
B
Ooh, that's a good one. I mean, our documentation has a ton of information; that's an obvious one. We have a ton of content on the fairwinds.com website about cost and about our open source: lots and lots of webinars and blog articles there that are authored by our engineers as well. And, you know, Stevie and I do a lot of webinars about this topic, so you can find a ton of content.
B
There's definitely a potential enhancement there to, you know, expose more of that information to the dashboard. And then we also have the free tier of our product, which allows you to actually incorporate your AWS billing data directly, so we get not only more instance types, but also your bill itself. So, options there, but...
B
That's a good question. There are a lot of possibilities and a lot of options going forward. There are a lot of different concerns that we have to balance on the open source side, being that we are also supported by a commercial entity, and so we have to keep in mind the tie-ins between those two things. So there are some potential changes around how we incorporate Prometheus metrics and how we use the VPA.
B
Those are a little bit more under-the-hood type things. As far as large feature enhancements for the dashboard and other things, we don't have anything planned at the moment, but we are always excited to accept community contributions, as well as suggestions, and we'll take those into account as we find more time and resources to dedicate. But that's always the trouble with open source.
A
Great. Now, if there are no new audience questions coming in, I'm gonna say a kind of last call for questions. So if anyone is typing away furiously and about to send a question in, send that question in and we'll get to it. But while we see if there's anything coming in: Stevie, do you have any kind of final words, any reminders to people, that you want to say?
C
I want to ask you to explain, because I'm sure there are people on this call who are curious, or thought about it but were like, "it's not important to ask": why is our stuff space-themed? Polaris, Nova, Goldilocks?
B
Great question, great question. So there is a space-related term called the Goldilocks zone, which is the distance from the star in a solar system that a planet has to be at to be habitable by humans. The Earth is in the Goldilocks zone, and when we're looking for planets that might sustain life, that's sort of the term that's used to describe that area around the sun.
A
Perfect. And while that good question was asked and answered, we had one more question from the audience (or if you have any more, you can still send them in): does Goldilocks provide cost for PVs or PVCs also?
B
Not at the moment, no. We're focused mostly on efficiency of workloads, which was the initial goal of Goldilocks, and so we're continuing on that theme at the moment.
A
Final, final call for questions right now. Yeah, anything else that you wanted to finish with before we...
B
No, I think I have said my personal mantra about ten times today, so I don't need to repeat that. And we really appreciate you having us on the show again. Yeah, it's always good to come on. It's...
A
But thank you, everyone, for joining the latest episode of Cloud Native Live. It was great to have a session about cloud cost monitoring today, and we really loved the interaction and questions from the audience. We bring you the latest cloud native code every Wednesday, and in the coming weeks we have more great sessions coming up, so tune in then. Thank you for joining us today, and see you all next week.