A
Hello everyone, and welcome to Open Observability Talks. I'm your host, Dotan Horvits, and here at Open Observability Talks we talk about anything DevOps, observability, and open source. So may the open source be with you! I'd like to thank our sponsor, Logz.io, the cloud-native observability platform. Logz.io takes best-of-breed open source projects such as Prometheus, OpenSearch, and Jaeger, and offers them as a unified observability platform built for scale. For those joining the live stream on YouTube or Twitch, feel free to share questions and comments in the chat. It definitely makes things more interesting for us here on the fireside chat.

Let's move on to today's episode. Last episode I discussed the challenges of monitoring Kubernetes operationally, things such as configuration complexity, high churn rate, etc. Today I'd like to talk about the challenge of monitoring your Kubernetes spend. With the current financial climate, cost reduction is top of mind for everyone. IT is one of the biggest cost centers, and companies realize that they simply don't understand the cost of their Kubernetes workloads, or even have observability into basic units of cost. So we'll discuss this, and the FinOps discipline for addressing it. There's also a fascinating open source project, OpenCost, which aims to provide an open standard around that. For this topic I invited Matt Ray, who's the Senior Community Manager for the open source OpenCost project. He's also a veteran in the open source and DevOps communities, and a fellow podcaster. Let me invite Matt to the stream. Hey Matt, good morning!

B
Good morning! Thanks for having me, glad to be here.

A
Thanks for taking this live stream so early in your time zone. You're based in Australia, so it's like 6 a.m. now, right?
B
Yeah, yeah, it's early, but you know, monitoring never sleeps.
A
I know the challenges, but with Australia it can be even more interesting. And as I mentioned, you're also a fellow podcaster: co-host of the Software Defined Talk podcast. A small anecdote, by the way: last week I delivered talks in Belgium at FOSDEM and at Config Management Camp, and I ran into your co-host, Michael Coté. So I hope he's with us today on the live stream.
B
Sure, sure. So Software Defined Talk is a podcast that I formed with two of my friends, Brandon Whichard and Michael Coté. Each of us has kind of a different background in the enterprise software industry. Coté is currently in kind of marketing, but he's got a background as an industry analyst: he worked for RedMonk and the 451 Group. He also worked at Dell on mergers and acquisitions, so he's got an interesting industry background. Brandon has been a product manager for quite a while at different monitoring platforms. I think he worked at Boundary back in the day, one of the granddaddies of monitoring. And my background is engineering, community, developer relations, and kind of down that path. So the three of us have different viewpoints on how the industry works. We've been podcasting for about seven or eight years now, and I think we hit episode 400 last week, so we're still going strong. And we all live on different sides of the planet, which is fun to bring in: Coté lives in Amsterdam, I live in Sydney, and Brandon's back in Austin, Texas.
A
Amazing. This show has been around for, well, this is the third year, but looking at more veteran shows such as yours, there's definitely a lot to learn. So great to see that, and hopefully my followers that like podcasts will definitely find it interesting. I highly recommend it.

And let's talk about FinOps. That's a hot topic these days. As I mentioned in the opening, with the potential recession, many organizations these days are looking at what they're spending money on, and cloud and infrastructure cost is typically the second-highest line item after salary cost, I think. So it's definitely top of mind for everyone. Before delving into the details, maybe let's start with level-setting the basics.
B
So FinOps, well, there's the FinOps org, which is a foundation under the Linux Foundation. So they're open source, without being, you know, code-based. It's a group that came together to kind of talk about the intersection between cloud, finance, and operations. Not everybody, but most people are starting to run a lot of operations in the cloud, and it's a different cost model. Instead of going and buying a bunch of servers, waiting for them to be racked, and eventually deploying your stuff three months later, you buy on demand. And from the finance side of the house, that's a really different model. Instead of just buying a bunch of stuff and sitting it in your own data center, now you're renting by the hour, by the minute, by the second. Bringing together that understanding of how your costs run, how they escalate, how they're managed, and what development and operations need to do with your infrastructure, that's kind of where FinOps lives.

It's a new practice. I was talking to my kids the other day, and my son was like, "Oh, is AI going to put everybody out of business?" I was like, no, you just keep moving forward; you don't know about the jobs of the future today. If you'd told me when I was a kid that I would be working at an open source financial operations monitoring platform, I would just look at you cross-eyed. So it's a new thing, but they put on their first conference in 2019. They put out a framework that kind of explains how they consider you should think about these things: what you need to consider, what principles you need to adopt. Things like bringing observability into your entire stack, and bringing in all the stakeholders. It's not just engineers, it's not just finance; it's lots of different folks in between. How you track these things, how the business makes decisions: are you going to spend more because you're selling more, or do you need to cut costs because there's a belt-tightening phase? What do you need to do? FinOps gives you kind of a bunch of different maturity phases and principles around it.

It's a great framework. If you haven't heard about this, and if you haven't started and you're in the cloud, you should start, yeah.
A
Yeah, first of all, definitely. And it's very easy to find at finops.org, so very easy to find online, with lots of useful resources. It is under the Linux Foundation but, as you said, it's not about code, rather about principles and guidelines. And what I found, not specifically in the FinOps Foundation but in general in applying FinOps, for example at Logz.io where I work, we have a designated FinOps team. When I try to explain to other people or to newcomers what it's about, I always emphasize that it's maybe first and foremost about communication and about culture. That's for me the essence: breaking silos, obviously providing visibility and observability into the cost units, but also creating this ongoing conversation about the cloud cost, looping these costs into business decisions, and getting people to talk together. Bringing the business and finance side of the house to talk with engineering or with product. Under normal circumstances, these organizations don't communicate so smoothly together. They also think differently: engineering with its agile, move-fast, fail-fast mindset, while finance has its formal processes. Both sides of the house need to adjust to make this happen.

And maybe one more point that I found useful is the core principles, the six core principles, which I think are very good for those who are just starting. Collaboration is obviously one of them, and ownership is another very important thing: engineering can't just say, okay, we are about building the software, someone else will take care of the cost and the infrastructure elements and things like that. Now it's an integral part, accountability and ownership baked into the organization. And obviously the observability side of things that comes with it. You can't take ownership if you don't have very clear reporting and dashboarding and ways to see where things stand. So all of these principles are very, very useful, especially for those who are new to this. And even before getting into Kubernetes and things like that: whoever uses SaaS and uses cloud, definitely highly, highly recommended.
B
Yeah, and they've got a great O'Reilly book called Cloud FinOps, and the second edition just came out like a week or two ago. So if you haven't bought that book yet, definitely check it out. It's by a bunch of different authors, most of whom work for the FinOps org at this point. I highly recommend it to anybody who's getting into the space, because you're going to learn something new.
A
We mentioned several stakeholders involved; I mentioned business and finance and engineering and product. I'm curious, from your perspective, who are the core stakeholders that you see involved in these processes?
B
Well, so, to clarify: my day job is working for a Kubernetes cost management platform, so we are on the more technical end of the spectrum. Generally, a lot of my co-workers have come from some of the larger, more established cloud financial platforms. The space is not new. I mean, as soon as Amazon started letting people do S3 and EC2, bills started showing up and people started having conversations with finance. And the stakeholders over the years: it's finance, it's CTOs, CFOs, it's engineering; everyone's trying to work that out. Before I changed roles within Kubecost to the OpenCost side of the house, I was working with our larger customers, customers spending a million dollars a month on AWS, that sort of stuff. And mostly I was seeing folks from the engineering side. They had been told that they were spending too much and they needed to get a handle on what was going on, or they were a large enterprise where lots of different teams were consuming cloud resources and they needed to sort out chargeback or showback.

For those of you who are unfamiliar with large enterprises: a lot of times you get this bill, and somebody has to take responsibility internally, and you might have different budgets. The nice version of it is showback, where you show who's responsible: hey, team A is 60% of the usage and team B is 40%. And chargeback is when you actually have an established budget. It came from the days of enterprise software, where you had all this compute internally and people had to share; now that you have to share the bill for an external cost, it's chargeback. But with Kubernetes, a lot of it is such a black box. Even engineering generally doesn't pay close attention to their bills. They're not thinking, "oh, when I call this function, it's going to cost 30 cents more a day." Nobody thinks that way. And so initially, just like the FinOps model talks about crawl, walk, run, we like to just bring in some observability and get people looking at what they're doing. You don't have to have chargeback; it's showback, you're just showing people what they're using. In some cases we even call it shameback, because you're like, did you really need a triple-XL-large to run nginx? Probably not, but it is costing a dollar thirty an hour. It's just getting people comfortable with the idea that everything you're doing costs money somewhere, and so, yeah, we like to bring that cost monitoring into the conversation.
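The showback split described above is simple proportional allocation. A minimal sketch of that arithmetic, with the team names and figures invented for illustration:

```python
# Toy showback: split a shared cloud bill by each team's measured share of usage.
# Team names and usage numbers are illustrative, not from any real bill.

def showback(total_cost: float, usage_by_team: dict[str, float]) -> dict[str, float]:
    """Allocate total_cost proportionally to each team's usage."""
    total_usage = sum(usage_by_team.values())
    return {
        team: round(total_cost * usage / total_usage, 2)
        for team, usage in usage_by_team.items()
    }

# Team A drove 60% of usage, Team B 40%, matching the example in the conversation.
split = showback(100_000.0, {"team-a": 600.0, "team-b": 400.0})
print(split)  # {'team-a': 60000.0, 'team-b': 40000.0}
```

Chargeback uses the same split, but debits each team's actual budget instead of just reporting the shares.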
A
And beyond the tooling, I know that you're from the tooling side of the house, but looking at it from the end user perspective, or from the agent within the organization trying to drive this awareness: how do you create awareness among engineers to cost and to spend?
B
Usually someone outside of engineering has noticed the bill, right? Somebody has said, hey, have you noticed that last month we spent a hundred thousand, and this month we spent 150,000? Is this going to continue every month? What does this growth look like? And so somebody who's responsible for that bill goes over to engineering management. Usually it's not like they call up the DevOps team and say, hey guys, can you fix this? They start at the management layer and say, can you justify this? They're not saying stop it, because clearly the business is there to deliver some purpose. It's not just running servers because I like blinking lights; they're there to deliver value. And so that conversation gets held: can we bring in just a little bit of visibility? Can we see what's going on? And maybe you could say, well, you know what, last month was Christmas, or Lunar New Year on our side of the planet, and there was a big rush of need for compute, the big shopping season, and next month's going to be fine, and if it's not, we'll come back and revisit this. But it's usually coming from the business side; they've got concerns. And if you're in a small startup, everybody's on the same team at the beginning, right? You see that bill, you become aware of it. But really, as soon as you start to spend that money, somebody will probably wonder, are we spending too much? And what we want to do is start that conversation of, look, here's how you're spending your money, and maybe it's just fine. But I can tell you, when I look at customers' bills, it's not fine.

Usually there's a lot of waste; we call it idle. Because with the cloud, there are kind of two models of consumption. There's usage-based: if you're doing S3, well, actually S3 is not a great example; if you're doing Lambda, right, you pay by the usage. You pay every time you make a call, you pay some fraction of a cent. If you do a million calls in one day, it's this much, and if you do five calls the next day, it's less, and that's great. But the other, more common model is based on what you've allocated. I say I need to run 15 EC2 instances and they're going to run 24/7 for three weeks. Well, I don't get to say, "they weren't really busy for part of that time, so I'm not going to pay the full price." Amazon doesn't care. You have those machines allocated to you; you will pay the full price. So if you're using them 100%, that's great for you. If you're using them five percent, that's great for Amazon, because they're going to resell that capacity to someone else, but you're paying either way. And so what we would look for is that unused cost. We want to optimize for usage. You know, if your usage is spiky, well, sometimes you have to give yourself some headroom.
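The idle cost described here falls straight out of the allocation model: you pay for what is provisioned, not what is used. A minimal sketch of that arithmetic, with the hourly rate and utilization figure invented for illustration:

```python
# Allocation-based billing: cost is provisioned capacity x time x rate,
# regardless of utilization. Idle cost is the unused fraction of that spend.
# The rate and utilization below are illustrative, not real AWS prices.

HOURLY_RATE = 0.15           # assumed on-demand price per instance-hour
INSTANCES = 15               # the "15 EC2 instances" from the example above
HOURS = 3 * 7 * 24           # running 24/7 for three weeks

allocated_cost = INSTANCES * HOURS * HOURLY_RATE
utilization = 0.20           # suppose the machines averaged 20% busy

idle_cost = allocated_cost * (1 - utilization)
print(f"allocated: ${allocated_cost:,.2f}  idle: ${idle_cost:,.2f}")
```

Under a usage-based service like Lambda the idle term disappears, which is why the allocated model is where most of the optimization opportunity lives.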
A
Obviously. And you mentioned Amazon; in general, the cloud providers do provide, first of all, the bill with some breakdown, and they have their own cost tools, such as AWS Cost Explorer. So what can you do with these, and where do they fall short, in your perspective?
B
Right, so for most people, there are two Amazon bills. There's the one-page bill, which nobody really likes. I mean, if you show it to a CEO, they're like, what does a million dollars of EC2 mean? There's no breakdown. And then the other bill is called the Cost and Usage Report, and that is the very, very, very fine-grained JSON report of everything that you do that costs money. Amazon drops it in an S3 bucket for you, and you can consume it with the tool of your choice. All the different billing tools that read your bill are going to look in this, and this is where it has your discounts. You might get savings plans or reserved instances; you might have discounted pricing that you've negotiated with Amazon. Well, I keep saying Amazon, but Azure, GCP, all the cloud providers do this. You can continue to go down and see, per instance, how much you were spending, which days, what you were paying for. You've got some VPCs, some storage, all those things, and you can see all the details, but it's not machine-readable; it's not something that you can easily integrate on your side into your visualization tool of choice.

Both of them are endpoints that people care about, and they don't know anything about Kubernetes, right? And you probably don't want Amazon saying, hey, business team A did this and business team B did that. You're like, stay out of our business. So they don't know anything about what's happening inside the nodes. When you get your EKS bill, your Amazon Kubernetes bill, it's EC2 with a management fee. You don't know anything more than that: which namespace was doing what, which deployments were costing you what. That's completely opaque to you as an EKS or AWS customer from your bill aspect, and that's what we do.
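Billing tools that consume the Cost and Usage Report are doing, at their core, this kind of aggregation over its line items. The records below are a made-up stand-in, not the real CUR schema:

```python
# Aggregate cost line items by service, roughly what billing tools do with a
# Cost and Usage Report. Field names and records here are illustrative only.
from collections import defaultdict

line_items = [
    {"service": "AmazonEC2", "cost": 120.50},
    {"service": "AmazonS3",  "cost": 10.25},
    {"service": "AmazonEC2", "cost": 79.50},
]

totals: dict[str, float] = defaultdict(float)
for item in line_items:
    totals[item["service"]] += item["cost"]

print(dict(totals))  # {'AmazonEC2': 200.0, 'AmazonS3': 10.25}
```

The real report adds dimensions for account, region, tags, and applied discounts, but none of those dimensions know anything about what ran inside a Kubernetes node, which is the gap discussed here.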
A
So before getting into the tooling, I just want to clarify: FinOps started around cloud costs in general, and you gave some good examples. Some people don't even understand what they have, like reserved capacity and how well they utilize it, or when they do on-demand, how the on-demand performs, and the trade-offs, things such as that. Lots to do there as well.
B
Kubernetes spend is, for most customers, just part of their bill. Most shops are not 100% Kubernetes. They're going to have some traditional workloads that are running on cloud instances. You've got some Windows machines running in the cloud; that's generally not Kubernetes. You've got some S3, some databases, Beanstalk, whatever it might be; that's not the Kubernetes side of the house. And for us, we saw that most workloads were headed this direction, well, not most, but a significant portion of the market, and the tooling did not really serve them. The first edition of the Cloud FinOps book didn't really cover containers and Kubernetes at all. I mean, there's a brief chapter on it, but it's brief. So we kind of saw this opportunity, and that's where Kubecost, and later OpenCost, the open source component, came from. We wanted to make sure that everyone could get visibility into what's happening there. What's kind of different about it is it's just not a box that had been opened by most of the cloud tools at this point.
A
Yeah, but I do think, at least when I try to analyze the way I did it in this organization and in previous organizations I worked in, there are some different characteristics that I do see. Like the difficulty to track, when you look at the cloud costs, the shared resources: allocating the spend, maybe you alluded to that before, to cost per customer, per team, per different environments, things like that; or tracking the cost efficiency of your Kubernetes workloads, the allocations over time, across different aggregations. Can you say what you've been seeing with your customers?
B
Well, Kubernetes changes the game for a lot of shops. Moving to the cloud was lift and shift, right? They're like, hey, now we don't have to have a data center, and they just move their fairly static, not very dynamic workloads into the cloud. They clean things up, and some of them didn't get the savings they really expected in the cloud, because they didn't really change their operations; they're just now in somebody else's data center. But if you've made the transition to more of a cloud-native model, which is essentially what Kubernetes is, where resources are allocated on demand, you've got applications that are going to run for maybe a certain amount of time, or they're going to run wherever it's cheapest. They are potentially ephemeral, they don't have a lot of state, they can be killed and rerun and moved around. When you start to get into that use case, billing becomes more complicated, but you can also save a lot more money. Kubernetes allows you to potentially condense the amount of compute you're using. So now, instead of having one application per instance, you can say, well, I've got a cluster running there, let's deploy 100 applications to it, and the cluster can be resized up or down, and those instances will move to different compute nodes as necessary to run. Just like virtualization potentially saved a lot of money by reducing the bare-metal count, condensing it onto more powerful servers that cost less because you had fewer of them, Kubernetes allows us that option too.

And from the billing side of things, that can be a nightmare, because you've got your application: today it's running on node one, and Kubernetes decided that it wanted to move it over to node two, to node three, to node four. It killed some instances, redeployed it; you deployed a hot patch, you got put on node seven. That application is just running all over the cloud, all over what you're paying for. But when you get that bill, it just says 15 compute nodes, and you have no idea: what did team A do, what did team B do, how much of those 15 machines was any particular team? So those changing characteristics make things a little more exciting. But potentially there are savings, because you can look at it and say, well, we're paying for 15 nodes, but looking at the idle across the cluster, we've never passed 20% usage; maybe we don't need 15 nodes in our cluster, maybe we can get by with 10. Hey, we're going to go ahead and pay for 10 nodes a month at a 50% discount for the next year. You can get deals like that, because Amazon likes to know that you've paid up front, and you like to save money. And so your workloads might be really dynamic, but the base infrastructure they're running on is statically priced, and that's comforting on the financial side of things.
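The back-of-the-envelope rightsizing walked through above (15 nodes, peak 20% utilization, keep some headroom) can be sketched as:

```python
# Toy cluster rightsizing: smallest node count that keeps the observed peak
# load under a target utilization. Figures are illustrative, not a sizing tool.
import math

def nodes_needed(current_nodes: int, peak_utilization: float,
                 target_utilization: float) -> int:
    """Node count so that the observed peak stays below target_utilization."""
    # Peak load expressed in node-equivalents of capacity actually consumed.
    peak_load = current_nodes * peak_utilization
    # Small epsilon guards against float rounding pushing ceil() one node up.
    return math.ceil(peak_load / target_utilization - 1e-9)

# 15 nodes that never pass 20% usage, targeting 30% to keep headroom for spikes:
print(nodes_needed(15, 0.20, 0.30))  # 10
```

The headroom term matters: spiky workloads argue for a lower target utilization, which is the trade-off between savings and safety mentioned earlier.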
A
That's from the infrastructure side. But still, when trying to create this accountability that we talked about before, and to attribute, this attribution model per team or per customer or per environment, then there's the fact that the deployments and namespaces and so on are not really isolated, and they actually share the underlying resources. You mentioned nodes; we can also mention the persistent volumes, the load balancers, and so on. Then the ability to make this attribution, and then do the forecasting, the capacity planning, and also negotiating based on where you expect the business to grow in the different teams and the different product lines and so on, becomes maybe a bit more challenging.

Unless, of course, it is justified from a business perspective. Like if it's, I don't know, a spike in incoming requests for Rihanna's Super Bowl halftime performance this week, and you're in the media business, then it's expected. The spike is tightly related to what you deliver to your customers, the value, and hopefully you know how to monetize it, or this is the top-line KPIs, so it's fine. So it's different when the cost spikes because of a legitimate business need that arises, right?
B
Yeah. We've definitely helped customers find, you know, unsecured instances, or applications that have started mining Bitcoin, and you're like, why did this cost you a hundred dollars over the weekend when it was a dev instance that wasn't supposed to be doing anything? That happens too. It's weird to think of finance as intrusion detection, but there you go.
A
[Laughs] Yeah, that has happened more than once. That's definitely one of the common stories, that people discovered Bitcoin mining after employing simple FinOps monitoring.

So let's talk, I think we talked about all these vendors, and one of the challenges is also that they speak a slightly different language. So there is no way that you can compare your AWS bill to Azure, to GCP, or to other bills, because there is no sort of common way of communicating, which for me was the way to explain to others maybe the most basic value proposition of OpenCost. So let's move on to OpenCost. This is the new open source project, the new kid on the block in the FinOps and cloud-native space, for sure, which recently, by the way, was accepted to the CNCF sandbox. So a big congratulations to you and the team there. Can you tell us a bit about what OpenCost is about?
B
Sure. So Kubecost started, I guess, about four years ago now. The two founders had come from Google; they did some other work before starting Kubecost, but they were already well familiar with the open source space. They started the Kubecost cost model as an open source project, and then added additional value on top of it. The intention was, let's get this out there as the standard for Kubernetes cost monitoring. They started it, and then COVID happened and kind of disrupted a lot of plans, but in June of last year, 2022, we announced that the CNCF had elevated the Kubecost cost model to a sandbox project, and they renamed it OpenCost. One of the things you're not allowed to do in CNCF land is have the company with the same name as the project; you've got to give up your trademarks and stuff, which is good.

And so, in addition to the code base, they've been working with other vendors, other individuals, other end users on writing a specification for what it means to monitor Kubernetes for costs: how you identify different types of usage, whether it's idle or allocated. So there's both a specification, the OpenCost specification v1, which talks about allocation monitoring, and the first pass of OpenCost. What it does is go and look at the cloud API. So it says, AWS, you have four EC2 instances running; how much do they cost per hour? And the API just says, the list price is this. It takes that, and then it compares that to your Kubernetes usage. It says, well, we've got five namespaces, and we've got this many instances, and these are our workloads or pods or containers, and it lets you slice and dice that. So you break down those EC2 instances by all of those Kubernetes primitives, and that's essentially what you get with OpenCost today. There's a UI that lets you explore this. The data is stored in Prometheus, so if you have a Prometheus-compatible tool, you could put it in a different back end if you want. The folks over at Grafana are storing it in Mimir; people use Thanos, Cortex, Victoria Metrics, you name it, somebody's put it in a different Prometheus-compatible back end. OpenCost is a CNCF project, so it's Apache-licensed. The goal for us with OpenCost is just to make it the ubiquitous default monitoring stack for cost.
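The slicing described here is, at its core, pricing each workload's resource allocation at the node's list-price rates and grouping by a Kubernetes primitive. A toy version of that math, with rates and pods invented for illustration (OpenCost's real model is defined in the project's specification):

```python
# Toy cost allocation: price each pod's CPU/RAM allocation at hourly list
# rates, then group by namespace. Rates and pods are illustrative only.
from collections import defaultdict

CPU_RATE = 0.02   # assumed $ per core-hour
RAM_RATE = 0.005  # assumed $ per GiB-hour

pods = [
    {"namespace": "frontend", "cpu_cores": 2.0, "ram_gib": 4.0, "hours": 24},
    {"namespace": "backend",  "cpu_cores": 4.0, "ram_gib": 8.0, "hours": 24},
]

by_namespace: dict[str, float] = defaultdict(float)
for pod in pods:
    hourly = pod["cpu_cores"] * CPU_RATE + pod["ram_gib"] * RAM_RATE
    by_namespace[pod["namespace"]] += hourly * pod["hours"]

print(dict(by_namespace))
```

Swapping the grouping key for a label, deployment, or pod gives the other breakdowns mentioned above; the same arithmetic is just re-aggregated.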
A
And what does it support today? Essentially it supports both on-prem environments and also cloud-managed Kubernetes. Can you mention who is currently covered?
B
Yeah, so because this was the engine of Kubecost, it already came working out of the box with AWS, GCP, and Azure. There is support for on-prem pricing, so you can upload, or provide through a ConfigMap, a default pricing. You can say, hey, in my data center, we charge a dollar per hour per core, and we charge two dollars per hour for RAM, some nice, simple pricing. That's what I did on my home instances: I don't want fractional cents, I just want to see nice round numbers for my home usage. But it lets you set that pricing, and there's support for more fine-grained pricing. You can charge on GPUs, because you may be consuming GPUs for AI or whatever, and so that's in there too.
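The on-prem pricing described here is supplied to OpenCost as a small document of default rates, delivered through a ConfigMap. The key names below are illustrative; check the OpenCost documentation for the exact schema your version expects:

```python
# Build an illustrative on-prem pricing document of the kind OpenCost accepts
# through a ConfigMap. Key names are assumptions, not the verified schema.
import json

custom_pricing = {
    "CPU": "1.00",      # $ per core-hour: the "dollar per hour per core" example
    "RAM": "2.00",      # $ per GiB-hour: the "two dollars per hour for RAM" example
    "GPU": "5.00",      # GPUs can be priced separately, e.g. for AI workloads
    "storage": "0.10",  # $ per GiB-hour of persistent volume (invented rate)
}

print(json.dumps(custom_pricing, indent=2))
```

With round numbers like these, the allocation math from earlier produces the "nice round numbers" Matt describes for his home cluster.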
B
So
that's
what
it
shipped
with
back
in
in
June
and
then
the
open
source
Community
has
started
adding
other
other
sorts
of
platforms.
So
we've
got
some
patches
in
for
scaleway,
you
know
a
European
provider.
We've
got
some
alien.
You
know
alibaba's
public
Cloud,
that's
in
there
there.
You
know
there
have
been
conversations
with
some
other
providers.
We've
got
a
document
for
how
to
to
get
started,
got
very
friendly,
slack
channel
over
in
the
cncf
slack.
B
So come ask questions; I'm happy to help you add your public cloud, and it's not that hard, because we're not digesting the bill. I mentioned the cost and usage report, this multi-gigabyte JSON file; every vendor does it differently, and those files don't come out in real time. That's one of the weird things about cloud billing: you have your on-demand cost, and you look at it.
B
When you kick off your EC2s, it says, you know, 15 cents an hour, and you're like, okay. And then maybe 48 hours later Amazon says: well, you did have some discount savings, you had a couple of credits, you had the reserved instances; it was actually only seven cents an hour. And that might change how you look at things. For OpenCost, that is a lot of complexity: going and doing that reconciliation, parsing that bill on demand, finding the data in there.
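The reconciliation gap Matt describes, the list price you see at launch versus the effective rate once discounts and credits land, can be sketched roughly like this. The rates, discount and function names are hypothetical, not how any cloud's billing engine actually works:

```python
# Illustrative only: the "15 cents at launch, 7 cents after reconciliation"
# effect, modeled as a percentage discount plus flat credits.

def effective_hourly_rate(list_rate: float, discount_pct: float,
                          credits: float, hours: float) -> float:
    """Effective rate after applying a percentage discount and flat credits."""
    gross = list_rate * hours
    net = gross * (1 - discount_pct) - credits
    return max(net, 0.0) / hours

# Listed at $0.15/hr; a 50% savings discount plus $0.24 in credits over
# 48 hours brings the effective rate down to $0.07/hr:
rate = effective_hourly_rate(0.15, 0.50, 0.24, 48)
print(round(rate, 2))  # → 0.07
```

The point of the sketch is only that none of these adjustments are visible at launch time, which is why a real-time tool like OpenCost shows list price instead.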
B
OpenCost doesn't do that yet, and that's all I can tell you from the Kubecost engineering side of the house: that's a lot of work. Most people don't seem to miss it; we're really just looking at how much this is generally costing me, and the actual numbers are less important than the direction.
B
Right, it does not parse the final bill; it's going with the list price. And the list price is fine for people like me: I pay list price. I just run some instances on my own, and I don't have an Amazon salesperson. A lot of small and medium businesses are not in any sort of negotiation. But even on places like Google, with their sustained use discounts...
B
...those would show up later, days or even weeks after you've spent the money. So they're kind of retrofitting your bill afterwards. OpenCost doesn't do that. OpenCost is...
B
It's a large engineering effort, and at this point we're still looking at other cost sources. Right now OpenCost provides what's allocated for your Kubernetes: the instances, the storage and the networking. We're giving you that cost based off of your deployed Kubernetes cluster. Right now we're actually headed in the direction of out-of-cluster costs. So you've got a remote database.
B
You have a database-as-a-service that you're integrating with; Kubernetes doesn't actually know anything about that, but if you want to incorporate that bill, if you want to see that with OpenCost, that's what we're headed towards: bringing in external asset costs. So object storage, S3, RDS, monitoring. You want to see how much data costs you, how much Logz.io costs you, and bring it back to your workloads.
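The direction described here, joining in-cluster allocation with externally billed assets, amounts to a per-workload merge. The workload names and figures below are made up purely for illustration:

```python
# Illustrative only: folding external (out-of-cluster) asset costs into
# the per-workload view produced by kubernetes allocation.

in_cluster = {               # hypothetical cpu/ram/storage allocation totals
    "checkout": 42.10,
    "search":   17.55,
}
external = {                 # hypothetical externally billed assets
    "checkout": 12.00,       # e.g. a managed database tagged to checkout
    "search":    3.25,       # e.g. an object-storage bucket for the indexer
}

# Total per workload = in-cluster allocation + tagged external assets.
total = {k: round(in_cluster.get(k, 0) + external.get(k, 0), 2)
         for k in set(in_cluster) | set(external)}
print(total["checkout"])  # → 54.1
```

The hard part in practice is the tagging: attributing an external bill line to a workload, which is exactly what the external-asset work aims to standardize.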
B
Well, that's where we're headed, which is actually different from what Kubecost does, so we're kind of diverging, because open source is usually about people scratching their itches, and the itch that most of our users have is: I have other costs that I want to bring in. I mean, people are definitely concerned about their final bills, and, to send some business to Kubecost: Kubecost is free. It's just not...
B
Yeah, so Kubecost is free for single clusters and gives you 14 days of storage. You can deploy all the Kubecosts you want, and it's free until you want to do things like federated views and more storage and stuff like that. But maybe someday they will open source their bill-processing engine.
B
But that is an enterprise beast of its own. On a different podcast I heard an interview with the product manager for Amazon's billing engine, and he said he's pretty confident that the Amazon billing engine is the largest non-government software project in the world.
B
Think about that: think about how big AWS is. Everyone is running their workloads on AWS, and they're generating hundreds of thousands, if not millions, of metrics per second per customer, and they've got millions of customers, and all that data has to be stored, processed and sent to billing.
B
...in a timely manner. So a substantial piece of AWS's infrastructure is running their own internal billing engine for everybody who's on there, and so your bills are very, very complicated.
A
Yeah, I can tell you, my company is on the larger end of medium-sized, let's say, and from what I see, for the smaller enterprises, all this packaging and all this crediting math can definitely move the needle and entirely change the bottom line, and you have your dedicated person for it, whether you're an ISV, a medium-size company or an enterprise. So I'm just wondering: you're saying that the pain is less from the community side?
B
I mean, definitely, one of our goals with OpenCost is to move out of the CNCF sandbox. It's been open source for, I guess, about six months now, and we're trying to build up our community and get external contributors, more people active besides Kubecost employees.
B
I don't know if that was their number, but I've heard the number of thousands of Kubernetes clusters thrown around, some of them very dynamic, and so they just deploy OpenCost onto everything, just as a monitoring agent, to get visibility into all the costs, and they're pumping it into Mimir, which is their storage back end. Those are their use cases.
B
They want everything on dashboards, of course, and so they've shown up, started contributing, made OpenCost more efficient, and part of the roadmap is what sort of stuff is appealing to those sorts of shops; they've brought some things. We've had folks from other cloud vendors showing up, too; they want to have better support for their clouds, and that's always on the roadmap.
B
OpenCost is slowly diverging from Kubecost; they're different use cases. And so part of the roadmap will be things like taking these open source contributions, getting our own release cadence, and, as we get more external contributors, more and different documentation.
B
We can move up the CNCF ladder, graduating out of being a sandbox project to an incubating project, and part of that is just having more external folks involved. It's still early days, but we're having a good, healthy share of contributions from outside. OpenTelemetry is something that has popped up.
B
There are some open issues, features people would like to see OpenCost implement, and that's the sort of thing that OpenCost can do faster than Kubecost, because we're a much smaller, more efficient project. We just need more community folks to get involved.
A
That's amazing, and I'm glad to hear that you have some more people outside of Kubecost. So it's Kubecost people, but now also Grafana Labs people. Any other major figures, in terms of entities, that got involved in the project?
B
Yeah, so when we launched OpenCost, as I mentioned, it wasn't just Kubecost open sourcing something; we had the specification. We have folks from Adobe, Armory, AWS, D2iQ, Google.
B
New Relic, SUSE, Pixie, Red Hat; those folks were all involved with the specification. So OpenCost is both a specification and a project, and some of those shops will be taking the specification and releasing their own implementations. And so part of that is that eventually we'll need to form an acceptance-criteria framework that looks at your implementation and ensures that you are implementing the API. So it's got all of those pieces.
B
Some of it's in place, some of it's in progress, and now, today, as we're working on the external allocation costs, different contributors are getting involved, which is great to see.
B
Absolutely, yes, and others. There are folks showing up, lots of questions in the Slack channel. Grafana is one of the largest public deployments; it just fit their model very well, but I'm hoping to get more names to publish.
B
We've got a couple of partners who have added OpenCost APIs to their products; I think Vantage is one of them. I'm just...
B
...drawing blanks right now, but look to see more OpenCost-compatible API endpoints out there...
B
...in a lot of these total Kubernetes platforms, where they're like: hey, we provide a dashboard to give you everything. Well, they'll put the OpenCost APIs on there, so you can pull your financial data out of them. Whether it's actually provided by OpenCost the project, who knows, but that's the great thing about being a specification: you're driving a standard, exactly.
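Consuming an OpenCost-compatible endpoint might look roughly like the sketch below. The path, query parameter and response field names here are assumptions for illustration only; check the OpenCost API documentation for the actual contract:

```python
# Hedged sketch of a client for an OpenCost-style allocation endpoint.
import json
import urllib.request

def sum_total_costs(payload: dict) -> dict:
    """Sum an assumed totalCost field per allocation name across windows."""
    totals = {}
    for window_result in payload.get("data", []):
        for name, alloc in window_result.items():
            totals[name] = totals.get(name, 0.0) + alloc.get("totalCost", 0.0)
    return totals

def fetch_allocation(base_url: str, window: str = "1d") -> dict:
    """Query the (assumed) /allocation endpoint and aggregate the result."""
    with urllib.request.urlopen(f"{base_url}/allocation?window={window}") as r:
        return sum_total_costs(json.load(r))

# Parsing a fabricated sample response, without touching the network:
sample = {"data": [{"default/web": {"totalCost": 1.25},
                    "default/db":  {"totalCost": 4.75}}]}
print(sum_total_costs(sample)["default/db"])  # → 4.75
```

A conformance framework like the one Matt mentions would pin down exactly this kind of response shape, so any implementation of the spec could be queried the same way.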
A
And I think you mentioned OpenTelemetry; it's a great role model to follow, and if you can also converge and do something between the projects, it's definitely going to be a force multiplier. Before we wrap up the main part of the show, can you share, you mentioned some of them briefly before, how people can join the community conversation, learn more and get involved?
B
Right, so the CNCF runs a Slack; if you're a member of the CNCF Slack, join us over on the #opencost channel. github.com/opencost is where the project, the website and the Helm chart currently live.
B
We have a calendar where every two weeks, every fortnight, we have a working group, where we gather to talk. We have an agenda: what we're working on, what we need help with, what people would like to see.
B
Some people show up and just want to talk about their issue, and some people say: hey, let's start this external asset working group. So we're going to have a group kicking off a new specification and an example project; those are about to kick off. So if you'd like to see external asset costs get added to OpenCost, join up.
B
I think we're going to implement S3 as the example, but once we're done, we'll have a specification, documentation and a working example, so you can add whatever it might be that you'd like tracked in your cost monitoring and then tie it back to your Kubernetes usage. That's what we're doing over in OpenCost.
A
Amazing. And just a note for the listeners: even if you're not an official member of the CNCF, you don't need to pay anything; the CNCF Slack is open for everyone. You can just open your user there, and once there you have all the channels in the world for all the projects, one of which is OpenCost, just #opencost, and you'll be there in the conversation. But don't think that you need to be some sort of formal member or something.
B
No, no, open for everyone. And you should also join the FinOps Foundation, finops.org. They have their own Slack; there's not an OpenCost channel, but there are a lot of channels for different clouds and different tool sets, and there are working groups for things like open billing.
B
There's a Kubernetes and containers working group that we're obviously active in, and OpenCost is the only project that is both FinOps-certified and a CNCF project, so we are the intersection of those two worlds. So definitely join either or both Slacks, and I'll see you there. Yeah, sibling...
A
...organizations under the Linux Foundation, yes. Great, that was fascinating. I'd like to wrap up this part, and with the few minutes that we have left, cover some interesting bits and some breaking news. I'm very happy for you to stick around with me for these parts; you'll probably have some interesting insights, especially with your perspective and familiarity. The first one I wanted to share is actually something that I have been working on recently.
A
I call it the Metrics Essentials Trilogy. It's three articles that are meant to cover a lot of the common topics that I keep encountering with users, with community members...
A
...and others. I called them the Phantom Metrics, the Expensive Metrics and the Unreadable Metrics. Phantom Metrics is about why your monitoring dashboard may be lying to you: a bit about the basics of how it works, what to expect of it, where the monitoring will not show you exactly what happens in real time, and how to take it with the relevant perspective.
A
The Expensive Metrics is maybe somewhat tied to what we talked about today: why your monitoring data and bill may get out of hand, and the associated costs, like the cardinality problem and others. And the last one is a guide to effective dashboard design for DevOps-type monitoring. So you're more than welcome to check it out; it's on Medium. Some of it, by the way, ties back to topics that we've covered on this show, like the episode with Ben Sigelman about the cost of monitoring, and others.
A
So if you follow this show, it will definitely resonate, but I think it's a good summary. Do share some feedback; I'd be glad to make it a starting point for a broader discussion.
A
The next one is a CNCF blog post about what's new in the Prometheus ecosystem: things such as the agent mode, native histograms, newly added service discovery mechanisms, the PromLens project that's been contributed to Prometheus, and much more. It offers a good rundown, and I highly recommend you check it out. I also highly recommend checking out the episode we had here on the show recently with Julien Pivotto, which goes into much more depth, but this post on the CNCF blog is definitely worth checking out.
A
Another thing we mentioned here before is OpenTelemetry. I saw on the CNCF blog a very interesting post about migrating from OpenTracing to OpenTelemetry. For those who are not familiar, OpenTracing is a deprecated standard, the API specification that existed before and was then merged into OpenTelemetry together with OpenCensus and some other pieces. So many older implementations that used to run on OpenTracing now need to migrate to OpenTelemetry.
A
This was a very good walkthrough on what you need to do in order to do that in a pragmatic way. Matt, have you had a chance to do a migration such as that, or play around with that?
B
No.
A
Yeah, it's a good one. OpenTelemetry used to have a shim that made the migration very easy, but then again it shielded you from the real migration work that needs to be done, so I definitely advise not taking the shim path. I think even the shim is already sunsetted and not supported anymore. It's important to understand that OpenTelemetry diverged and expanded far beyond the OpenTracing API specifications, so it's definitely worthwhile doing a deep migration rather than the shallow one.
A
So it's good coverage on that. And also some CNCF project updates: in addition to the good news around the OpenCost project, we had some good end-of-year news about some projects. I think the most prominent ones were Argo and Flux, which have graduated from CNCF incubation, so they're now in the graduated state, along with some others. Next month on the episode here I will have Chris Aniszczyk, as most of you know him.
A
The CTO of the CNCF will be here with me on the show, and we'll definitely be talking about the changes, the project landscape and some predictions, so do join us on next month's episode. I promise it'll be interesting; he's an interesting guy. And Matt, anything else that you found interesting this week or in the past month?
B
Well, I just wanted to point out that OpenCost is going to be at the Southern California Linux Expo next month. So if you are in the Southern California area, definitely show up at the conference; it's North America's largest community open source conference.
B
Yes, SCALE 20x. I'll be giving a talk, and OpenCost will be sharing a booth with Google, so hopefully I'll have some stickers there if you show up. And of course, I look forward to seeing everyone at KubeCon EU, if you can make that.
A
And by the way, there are lots of co-located events on the first day. It's been reshuffled; the CNCF did a bit of reorganization there, because it got a bit out of hand with all the colos. So now it's consolidated: for example, the Prometheus Day and the OTel Day and others are now consolidated into one Open Observability Day. Nothing to do with Open Observability Talks here at the show, but it definitely touches upon the same topics.
A
So it makes it easier: just one day of all the colos together. So I highly recommend also checking out the relevant colos, and I look forward to seeing you, Matt, and all the others there. That's great! So thank you very much, Matt. How can people reach out to you after the show?
B
Yeah, so if you're not in Slack, if you want to get hold of me via email, I'm mattray@kubecost.com. I'm on LinkedIn and Mastodon; I've kind of stopped using Twitter, but I'm Matt Ray in those places, so I'm usually pretty easy to find, and mattray on GitHub.
A
So, thank you very much, Matt, for joining me at this early hour, Australia time. It was a fascinating talk, and thank you, of course, to all the listeners who joined us today on this episode.
A
All the episodes, as always, are made available on all your favorite podcast apps and on YouTube, so do check them out. And if you are listening to this episode on demand, then do know that we stream the episodes live on Twitch and YouTube, so just find all the details on openobservability.io, or follow us on Twitter at @OpenObserv for updates on the next live streams and to share your comments, suggestions, news bits or anything else. And if you have something specific that you want to talk about on the show, if you think that you're a subject matter expert on these relevant topics, do feel free to submit a talk proposal on openobservability.io.