Description
Don't miss out! Join us at our upcoming event: KubeCon + CloudNativeCon Europe in Amsterdam, The Netherlands from 18 - 21 April, 2023. Learn more at https://kubecon.io. The conference features presentations from developers and end users of Kubernetes, Prometheus, Envoy, and all of the other CNCF-hosted projects.
The complexity of operating Kubernetes efficiently is real, but you already knew this. Kubernetes brings technical complexity, with higher-level abstractions moving you further from the underlying resources. These abstractions provide a standard interface for building and deploying software, but there are a lot of moving parts, and engineering teams are having to deal with lots of new concepts.
Luckily, help is available. OpenCost is an open source CNCF Sandbox project for measuring and allocating infrastructure and container costs in real time, built by Kubernetes experts and supported by Kubernetes practitioners. OpenCost shines a light into the black box of Kubernetes spend. OpenCost is both a project and a written specification for modeling current and historical Kubernetes spend and resource allocations. You can view your Kubernetes costs by service, deployment, namespace, and much more, right now. It supports Kubernetes clusters on AWS, Azure, and GCP through their on-demand pricing, and on-premises Kubernetes clusters are supported as well.
OpenCost is the engine for Kubecost's commercial product. I'm talking about OpenCost because it's a tool for digging into what's going on in our complex and dynamic Kubernetes infrastructure. We want to be able to slice and dice our cloud bills by Kubernetes primitives, seeing who's using what and which services are costing the most. This will be the foundation for future optimization.
An example of an asset cost would be a node, with its CPU allocation costs, which relate the cores, the duration of use, the price, and the total cost. The RAM allocation costs are also included. When a Kubernetes node is deployed on your cloud, it now has a total cost. Workloads are the actual applications and containers allocated and running on our Kubernetes nodes: the containers, pods, deployments, persistent volumes, etc. These are calculated by querying the Kubernetes API, cAdvisor metrics, the kernel, and billing data.
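As a rough sketch of how a node's CPU allocation cost comes together, here is the arithmetic; the core count, hours, and per-core-hour price are made-up illustration values, not real cloud pricing:

```shell
# cost = cores x duration of use x price per core-hour
# (4 cores, 730 hours, $0.0316/core-hour are hypothetical example numbers)
awk 'BEGIN {
  cores = 4
  hours = 730
  price_per_core_hour = 0.0316
  printf "CPU allocation cost: $%.2f\n", cores * hours * price_per_core_hour
}'
# prints: CPU allocation cost: $92.27
```

The RAM allocation cost is computed the same way from capacity, duration, and price, and the node's total cost is the sum of its resource costs.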
They may be usage costs or allocation costs, but they are directly managed by the Kube scheduler. We want to measure these at the lowest level possible, so we can track this data along any dimension. Any unallocated costs on the Kubernetes cluster are cluster idle costs: you're paying for them, but not directly using them to run your applications.
Workload costs are committed, allocated costs. This is what's happening inside your Kubernetes cluster: they've been requested from the Kubernetes cluster, so you're paying for them. From the cloud provider billing API we get the numbers on the left; these are the raw metrics that you're paying for. OpenCost allows you to view these costs by the workload aggregations on the right. You can see CPU usage by labels, GPU by deployments, however you want to query your costs. You can also stack these, so you could see things such as container per namespace.
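As a sketch of what querying these aggregations can look like over HTTP, here is the allocation endpoint; the port and path follow the OpenCost docs at the time of this talk, so check your version's API documentation:

```shell
# Make the OpenCost service reachable locally (names assume the
# default install in the "opencost" namespace)
kubectl -n opencost port-forward service/opencost 9003:9003 &

# Cost allocations for the last day, aggregated by namespace
curl -sG 'http://localhost:9003/allocation/compute' \
  --data-urlencode 'window=1d' \
  --data-urlencode 'aggregate=namespace'
```

Swap the `aggregate` value (label, deployment, and so on) to slice costs however you need.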
The OpenCost architecture is fairly straightforward. OpenCost uses Prometheus as both the source and destination for data. A Prometheus node exporter runs on each node in the cluster, exposing Kubernetes data, and OpenCost writes this out to the Prometheus data store. The OpenCost service queries the cloud provider's API for the cost of each service used.
OpenCost relies on metrics scraped by Prometheus. For an express installation of Prometheus, we use the prometheus-community Helm chart, but you can use an already existing installation. If you want to deploy OpenCost, the current recommended installation method is to simply kubectl apply the YAML manifest.
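For reference, the two installs sketched as commands; the URLs and chart names here follow the OpenCost docs as of this talk, so check the current install documentation before running them:

```shell
# Express Prometheus install from the prometheus-community Helm chart
helm install prometheus prometheus \
  --repo https://prometheus-community.github.io/helm-charts \
  --namespace prometheus-system --create-namespace

# Deploy OpenCost by applying the YAML manifest
kubectl create namespace opencost
kubectl apply --namespace opencost \
  -f https://raw.githubusercontent.com/opencost/opencost/develop/kubernetes/opencost.yaml
```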
Once you've deployed OpenCost, you'll start collecting data almost immediately. Showing costs by any dimension enables action, and fitting allocation, tagging, and mapping into developer workflows is very powerful. Cluster cost efficiency is a great starting point for Kubernetes cost optimizations; we'll get back to that in a second. The OpenCost API exposes cost allocations for Kubernetes workloads and the cloud infrastructure supporting them. There's a Swagger JSON in the OpenCost repository, and we're fleshing out additional API documentation.
The OpenCost community continues to expand, and we're always eager to help new folks. We're mostly in Slack and GitHub, with the fortnightly OpenCost working group calls. There's also a Google Group and LinkedIn, if you're so inclined. There are links there for the OpenCost calendar and the OpenCost meeting notes, if you want to get involved in the OpenCost working group.
The first step is to make sure you have a FinOps practice. FinOps is an evolving cloud financial management discipline and a cultural practice that enables organizations to get the maximum business value by helping engineering, finance, technology, and business teams collaborate on data-driven spending decisions. The FinOps Foundation is part of the Linux Foundation.
The FinOps Foundation provides guidance on cloud financial management through best practices, education, and standards. Their FinOps Framework is a set of organizational recommendations for building your financial practice. While it's not focused on Kubernetes, these guidelines help establish patterns for applying cloud cost savings in large organizations. Cloud costs are rarely isolated to a single team. Many of the cost optimizations at the cloud account level are more effective when centrally managed by a cloud operations or finance team, given their visibility into consumption patterns across the organization and their ability to quantify opportunities for savings.
We take a top-down approach to optimizing Kubernetes infrastructure. Improving the efficiency of containers affects the efficiency of pods, which in turn affects the efficiency of clusters. Once you've started optimizing your Kubernetes infrastructure, you can investigate higher levels of cost reductions like reserved instances and other commitment-based savings. This is an iterative process; you're never quite done.
Workloads are the applications running on your Kubernetes cluster. Workloads may be pods, deployments, ReplicaSets, StatefulSets, DaemonSets, Jobs, or CronJobs. Optimizations at this level relate to the resource requests made by the workloads to Kubernetes itself, and whether they're still in use. When specifying pods, containers may be assigned requests and limits for resources such as CPU and memory.
If these are not assigned, or are over-provisioned, containers may be allocated more resources than they actually need, costing you money. Under-provisioned containers may experience CPU throttling or out-of-memory errors, leading to poor performance. There are several options for working with these recommendations.
You may provide limits and/or requests for CPU and/or memory as necessary. Containers with limits and requests provided in their manifests will work as expected, but defaults will be provided for containers not explicitly setting those values. Container utilization is fairly straightforward to measure and verify with OpenCost: all the data is captured and exposed via the API, CLI, and UI.
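A minimal example of setting those values explicitly; the pod name, image, and numbers are illustrative only:

```shell
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: sized-example
spec:
  containers:
  - name: app
    image: nginx:1.25
    resources:
      requests:        # what the scheduler reserves for the container
        cpu: 250m
        memory: 128Mi
      limits:          # hard caps; exceeding the memory limit is an OOM kill
        cpu: 500m
        memory: 256Mi
EOF
```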
Optimizations at the Kubernetes level are related to the configuration of the Kubernetes cluster itself. When the Kubernetes cluster is over-provisioned or using inefficient node sizes, there are opportunities for improving the effectiveness of workloads on the cluster. An over-provisioned Kubernetes cluster may have nodes that are primarily idle, making them strong candidates for reducing the number of nodes in the cluster or decreasing the resources allocated to them. Workloads may be redistributed across the cluster to ensure availability while reducing overspending on unused capacity.
There may be situations where the capacity of the cluster nodes is over-provisioned, but a minimum number of nodes may still be required for workload durability. In this case, it's recommended to deploy smaller cluster nodes and then drain the over-provisioned nodes in favor of their more efficient replacements.
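The drain itself is standard kubectl; `<node-name>` is a placeholder for the node being retired:

```shell
# Stop new pods from scheduling onto the node being retired
kubectl cordon <node-name>

# Evict its workloads so they reschedule onto the smaller replacements
kubectl drain <node-name> --ignore-daemonsets --delete-emptydir-data

# Then delete the node, or scale down its node group in your cloud provider
```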
There are many use cases where ARM CPUs are highly capable replacements for x86 processors, at significant cost reductions. Not every workload is migratable, but most Kubernetes and open source tooling is ARM-compatible at this point. OpenCost makes it easy to compare workloads across nodes with different architectures.
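One quick way to see the architecture mix in a cluster, using the standard `kubernetes.io/arch` node label:

```shell
# List nodes with their CPU architecture (amd64, arm64, ...)
kubectl get nodes -L kubernetes.io/arch

# A workload can be steered onto ARM nodes with a nodeSelector, e.g.:
#   spec:
#     nodeSelector:
#       kubernetes.io/arch: arm64
```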
So, in summary, remember this is an iterative process. There are no silver bullets, but there may be low-hanging fruit. As you work through optimizations at each level, there will continue to be savings opportunities below: improving the efficiency of containers affects the efficiency of pods, which in turn affects the efficiency of your clusters. And coordinate your savings across the org.