Description
How to install and use the Cost Management Metrics Operator in an air-gapped environment to update Cost Management for Red Hat OpenShift.
Learn more at https://www.redhat.com/en/blog/introducing-openshift-cost-management-human-readable-view-cloud-native-application-costs
So what we're looking at here is the Operator Lifecycle Manager documentation on restricted networks. This gives a good overview of, essentially, how to install an operator on a disconnected cluster. The first section gives you an understanding of the different operator catalogs; the koku-metrics-operator is found within the community operators catalog.
So when working through this document, you'll want to use the community operator index. After disabling the default OperatorHub sources, your OperatorHub should look pretty much like this: there should be nothing here. Coming back to this page, we recommend pruning your index image, because there are quite a few operators in the community repo.
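Disabling the default OperatorHub sources is done by patching the cluster's OperatorHub resource. A minimal sketch, shown as a dry run that only prints the command so it is safe to execute anywhere; on a real cluster, run the printed command with cluster-admin access:

```shell
# Disable all default OperatorHub catalog sources on a disconnected cluster.
# This is a dry run: the command is only echoed, not executed.
PATCH='[{"op": "add", "path": "/spec/disableAllDefaultSources", "value": true}]'
echo "oc patch OperatorHub cluster --type json -p '${PATCH}'"
```

After this, OperatorHub in the console shows no catalogs until you add your own CatalogSource.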
You don't necessarily want to build your mirror catalog using all of those operators. So when you're pruning, you'll want to make sure you're pointed at the correct community operator index, and for the packages to prune you'll want to add koku-metrics-operator. The target will be your mirror registry. Once you've pruned your index image, you'll then want to mirror the operator catalog; the document lists all the different steps for doing that, and below that we'll get to creating a catalog from an index image.
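The prune-and-mirror steps above can be sketched as follows. The registry host and index tag are assumptions; match them to your environment and cluster version. Shown as a dry run (commands are echoed, not executed) so the sketch runs without registry access:

```shell
# Prune the community operator index down to just the koku-metrics-operator,
# push it to the mirror registry, then mirror the catalog images.
# REGISTRY and the :v4.9 tag are placeholders for your environment.
REGISTRY="mirror.example.com:5000"
INDEX="registry.redhat.io/redhat/community-operator-index:v4.9"
PRUNED="${REGISTRY}/community-operator-index:v4.9"

echo "opm index prune -f ${INDEX} -p koku-metrics-operator -t ${PRUNED}"
echo "podman push ${PRUNED}"
echo "oc adm catalog mirror ${PRUNED} ${REGISTRY}"
```

Pruning first keeps the mirror small; without `-p koku-metrics-operator`, every operator in the community repo would be mirrored.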
So here they give you a CatalogSource that needs to be created, and it needs to point to the mirror registry that contains the operator you want to install. I'm going to go ahead and create this catalog in my cluster and make sure it was created correctly.
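The CatalogSource described here looks roughly like this sketch; the name and image are placeholders, and the image must point at the pruned index you pushed to your mirror registry:

```yaml
apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  name: community-operator-index
  namespace: openshift-marketplace
spec:
  sourceType: grpc
  image: mirror.example.com:5000/community-operator-index:v4.9  # your mirrored index
  displayName: Community Operator Index
```

Once applied, the catalog appears in OperatorHub after a refresh.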
So if I refresh this page, I now see the koku-metrics-operator. If I click on it, there's some documentation listed here. A good section to read is the limitations and prerequisites, especially the storage configuration section. Further down, there is a section dedicated to using the operator on a restricted network. So I'm going to go ahead and install the operator, and I'm going to leave everything here the same.
The operator needs to be installed within the koku-metrics-operator namespace. I don't have it created, so I'm going to let OLM do that for us, and click Install. After a few moments it installs; then we'll click on Installed Operators, and we now have the koku-metrics-operator namespace, and you'll see the operator installed there.
The operator is capable of creating its own storage, but again, it's good to read the prerequisites section first before you let the operator do this. Essentially, what the operator is going to do is create a persistent volume claim similar to this one, except it'll just use the default storage class name, and the name will be different; it's listed above if you want to know which one it is. The next step is to specify the desired number of reports to keep; the default value is 30 reports.
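For reference, a minimal sketch of the kind of PersistentVolumeClaim the operator creates on its own; the name and size here are assumptions, and the real defaults are listed in the operator's documentation:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: koku-metrics-operator-data   # assumed name; see the operator docs
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi                  # assumed default size
  # no storageClassName: the cluster's default storage class is used
```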
This equates to approximately one week's worth of data if all the other settings remain the same. One other key thing to change here is the upload toggle. This needs to be set to false, or else the operator will try to upload reports to cloud.redhat.com, which of course will fail if you're in a restricted network.
Okay, so we're going to go ahead and create a KokuMetricsConfig, and I'm going to change this in the YAML view. I'm going to leave max_reports_to_store at 30. A good thing to do would be to remove the whole source section and just replace it with an empty bracket; the little warning here is fine to ignore. Then set the upload toggle to false.
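Put together, those YAML edits amount to something like this sketch of a KokuMetricsConfig; the apiVersion and exact field names may differ by operator version, so treat this as an illustration of the three changes (30 reports kept, empty source section, uploads disabled):

```yaml
apiVersion: koku-metrics-cfg.openshift.io/v1beta1   # may differ by version
kind: KokuMetricsConfig
metadata:
  name: kokumetricscfg-sample
spec:
  packaging:
    max_reports_to_store: 30   # ~1 week of data with default settings
  source: {}                   # replace the whole source section with an empty bracket
  upload:
    upload_toggle: false       # required on a restricted network
```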
After you do that, click Create. Now that it's been created, I'm going to come in here, click on the YAML view, and scroll down to the status section; it may take a minute, but the status will show up. Okay, so now reload, scroll down, and see the status section. This looks good, and if we look at the packaging section we'll see that the last successful packaging just occurred. There is currently one report in storage.
This is because, when the operator first spins up, it's going to collect all the metrics for the last hour and then create this package here. This tells you the full path to the package that's in storage, and as more reports are added you'll see all of them listed here. The Prometheus section is a good one to look at; it gives you a bit of information about when the last query was started and when it was successful.
Another thing to look at is the persistent volume claim section. This tells you exactly which persistent volume claim is in use by the operator. Okay, so everything here looks good. The next thing you'll want to do is retrieve from the PVC all the reports that you want to upload.
So one thing that we have listed here is a pod that you can spin up. It'll just run busybox, and this will give us shell access to the PVC itself. So we're going to go ahead and create this pod, and within the pods workloads we'll see that we now have this new pod spinning up, called volume-shell.
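The volume-shell pod described above can be sketched like this; the claimName and mountPath are assumptions based on the operator's default PVC, so match them to the PVC shown in the status section:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: volume-shell
spec:
  volumes:
    - name: koku-metrics-operator-reports
      persistentVolumeClaim:
        claimName: koku-metrics-operator-data   # the PVC from the status section
  containers:
    - name: volume-shell
      image: busybox
      command: ['sleep', 'infinity']            # keep the pod alive for shell access
      volumeMounts:
        - name: koku-metrics-operator-reports
          mountPath: /tmp/koku-metrics-operator-reports
```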
So now what we want to do is copy the reports from the PVC to a local directory, and we can do this using this oc rsync command. I'm going to copy that, and I'm just going to save the reports to a local reports directory.
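The copy step looks roughly like this. The source path assumes the mountPath used in the volume-shell pod spec; adjust both names to your cluster. The cluster command is echoed as a dry run:

```shell
# Copy packaged reports out of the PVC through the volume-shell pod.
SRC="volume-shell:/tmp/koku-metrics-operator-reports/upload"
LOCAL_DIR="./reports"
mkdir -p "${LOCAL_DIR}"
# Dry run: on a real cluster, run the printed command.
echo "oc rsync ${SRC} ${LOCAL_DIR}"
```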
What this is going to do is copy all the files that are within the upload directory. It will give you a warning that can be ignored, as long as you check the upload folder that you just downloaded to make sure the report is there. So, just to make sure, and sure enough: the report that was within the PVC is now local.
Next, remove the file that's within the upload directory. The command that's written here will remove everything in there, but it would be good to make sure that you're only removing the files that you have stored locally. Just to make sure, we're going to look in the temp directory to verify it was cleaned out, and sure enough, the report is now gone.
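The cleanup can be sketched like this; the path assumes the volume-shell pod's mountPath, and since this deletes every report in the PVC, verify your local copies first. Echoed as a dry run:

```shell
# Clear the upload directory inside the PVC after the reports are copied
# locally. The glob must be expanded inside the pod, hence the sh -c wrapper.
UPLOAD_DIR="/tmp/koku-metrics-operator-reports/upload"
# Dry run: on a real cluster, run the printed command.
echo "oc rsh volume-shell sh -c 'rm -f ${UPLOAD_DIR}/*'"
```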
Okay, so now that we have these reports locally, we can exit this shell. One thing you can do is leave this pod here; it's not really going to do anything. Or you can just remove it, but the next time you need to gather the reports, you'll need to spin it back up again. Okay, so now with that, what we need to do is create a source in cloud.redhat.com, so real quick, log in to your account. Okay, I'm going to click on Settings.
A
Next,
I'm
shipping
container
platform
cost
management
next
and
then
we'll
need
the
cluster
id
and
I
copy
that
from
the
cluster
and
get
this
from
the
overview
page.
But
we'll
need
this
cluster
id
paste
that
in
here
next
and
then
add
all
right.
So
now
that
we've
successfully
created
our
source
now
we
can
upload
our
documents
to
cloud.redhat.com.
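The upload itself can be sketched with curl against the cost-management ingress endpoint; the file name and credentials are placeholders, and the content type is the one the operator's restricted-network docs use for cost-management payloads. Echoed as a dry run:

```shell
# Upload one packaged report (copied from the PVC) to cloud.redhat.com.
# FILE, USERNAME, and PASSWORD are placeholders for your environment.
FILE="cost-mgmt.tar.gz"
# Dry run: run the printed command from a machine with internet access.
echo "curl -F 'file=@${FILE};type=application/vnd.redhat.hccm.tar+tgz'" \
     "https://cloud.redhat.com/api/ingress/v1/upload -u USERNAME:PASSWORD"
```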