From YouTube: Red Hat Advanced Cluster Management Presents: Cluster Pool Scaling and Automated Dev Clusters
Description
Red Hat Advanced Cluster Management is a central management tool that provides capabilities to manage clusters at scale. How does central IT enable developers to quickly access clusters and make use of them? How do developers want to use clusters? How can we initiate cost-saving strategies along the way?
Chris: Good morning, good afternoon, good evening, and welcome to another edition of Red Hat Advanced Cluster Management Presents. I am Chris Short, executive producer of OpenShift.tv, and I am joined by a bevy of RHACM team members and Scott. I will rely on you to do the round of intros this time, because...
Scott: ...to introduce various characters, and the plot has thickened. You know, I think we're getting some peaks and valleys as we take the experience through; there's no Breaker of Chains yet, and we haven't quite reached the dragon phase, but anyway, metaphors aside: I'm excited to bring more of our team to meet you, and to meet the world, and to talk about what we're doing with multi-cluster management.
Gurney: Yeah, can do, I'll hop in. Thanks, Scott. I'm Gurney Buchanan. I'm from the CI/CD slash DevOps slash tools slash whatever-gets-thrown-our-way squad, so we do a bit of it all, and whenever Scott needs to spin up some more AWS resources, he hollers our way.
Dale: All right, my name is Dale Haiducek. I'm a developer with the GRC squad; that's the little governance, risk, and compliance tab in Advanced Cluster Management.
Kevin: Hi everyone, I'm Kevin Cormier. I work on the Application Lifecycle squad, particularly on the UI side.
Scott: Hive is part of that message, so, the Hive API: I'll drop the link in the chat when I get back to that window. Thank you. And that's a place for you to hang out upstream and start learning about what we're doing with multi-cluster and how we're driving a single API experience there.
Scott: Then, when you do that, you find out that you've got cluster sprawl, right? All of a sudden you're spinning up clusters rapidly, and maybe you're not paying attention to the budget and the billing, and when you get to the end of the month you realize you've just spent ten thousand dollars on a cloud that you didn't expect to. And that was actually what Gurney... he doesn't actually have these wake-up moments, he has these shower moments. I'll...
Scott: ...let him describe how that works, but he had this shower moment where he's like: we're spending all this money, but we could use something called cluster pools, and we can really start to centralize our cost and start to bring things down with hibernation. So it all kind of started to come together. We pitched it early with you guys a couple of months ago, and we're coming back for another round of it that goes deeper and wider and farther in between. So, nice. Take that, cluster-cattle concept.
Gurney: Sounds good, thanks, Scott. Yeah, Scott was referencing the usual... I think the subreddit that I find a good home in is Shower Thoughts, because most of the time: oh, that is a...
Gurney: I sent one up. I sent my tech lead a message at, admittedly, 3 a.m. last Saturday morning and said: I've had an idea; also, ignore that it's 3 a.m. So it's one of those. So I'll go ahead and grab the screen share and just share my little... we're going to be pretty much 100% shell, 100% of the time, today.
Gurney: Well, at least for me, and I think for the other folks as well; they may show some UI. So I guess the quick intro on cluster pools, for those that didn't hear me blather about it last time and haven't heard me talk about it for the past six months to all of my coworkers (bless them), is that we have a... I guess at first we had a CI/CD scale problem.
Gurney: So we are, you know, a utility that lets you manage, or a piece of software that lets you manage, a bunch of different OpenShift clusters, and some *KS clusters too, on all of these different cloud platforms. And of course, when we're going to ship and say, hey customer, we support this, we're going to test it. So we need a lot of clusters, and we do CI every two hours or so, and we do a new pass for all of our maintenance releases. So we're looking at, like: hey, we need like 15, 20, 25 clusters.
Gurney: We need that to shrink in scale if we want it to be cost effective. So we started looking at: hey, wait, we ship functionality to help with this. What if we start using some of the Hive bits that we ship? The big thing that we settled on is cluster pools. What Hive lets you do is say: hey, I want to make a ClusterDeployment, and my cluster deployment is just me writing some YAML.
Gurney: I want you to make five clusters that look like this, and I want to be able to check one out and say, I want to use it, and then hand it back and say, I'm done with this, and you go clean it up, replace it, and have another one ready and waiting there for me. So that's what we arrived at as a really good solution to this problem. I'll go into a couple more of the details, and other folks will as well, on where this really gets powerful.
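[A minimal sketch of what that YAML pair looks like, assuming Hive's hive.openshift.io/v1 API; the namespace, secret names, and ClusterImageSet here are illustrative, not the exact values from the demo:]

    # A ClusterPool keeps N clusters provisioned and waiting for checkout.
    oc apply -f - <<'EOF'
    apiVersion: hive.openshift.io/v1
    kind: ClusterPool
    metadata:
      name: demo-pool
      namespace: twitch-demo
    spec:
      size: 5                         # "make five clusters that look like this"
      baseDomain: dev.example.com     # illustrative base domain
      imageSetRef:
        name: ocp-4.7.0               # a ClusterImageSet naming the OpenShift release
      platform:
        aws:
          credentialsSecretRef:
            name: aws-creds           # assumed cloud-credentials secret
          region: us-east-1
      pullSecretRef:
        name: pull-secret
    EOF

    # Checking a cluster out is a second, tiny object; deleting the claim
    # hands the cluster back so Hive can clean it up and refill the pool.
    oc apply -f - <<'EOF'
    apiVersion: hive.openshift.io/v1
    kind: ClusterClaim
    metadata:
      name: my-claim
      namespace: twitch-demo
    spec:
      clusterPoolName: demo-pool
    EOF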
Gurney: But to start off with, I will just go ahead and do a quick oc get clusterpool. We're going to be working in a twitch-demo namespace on this OpenShift cluster today. So hopefully this will... I haven't tried this... there we go. So we have one cluster pool out there. This is a great way to just kind of give the quick five-minute "here's where the cluster pool is." We have a cluster pool named twitchdemo470; it's running on this base domain, and it's running on this...
Gurney: ...oh, this OpenShift image. So this is just saying: hey, I want OpenShift 4.7, which just came out, all the shiny new bells and whistles, and I want there to be four of them at all times; make sure there's four. And right now it says: hey, you have four ready, already waiting out there. Now, we'll make something later called a cluster claim; that is you saying, hey, I'd like one of those four. So we can do oc get clusterclaim and see that I cheated and went ahead and made two.
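[The reads being run here, for reference; the namespace is the one used in the demo:]

    oc -n twitch-demo get clusterpool    # pool name, target size, how many are ready
    oc -n twitch-demo get clusterclaim   # outstanding claims, each bound to a cluster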
Gurney: So these are cluster claims that literally point directly to an OpenShift cluster that we can poke around at in a bit. Those are full OpenShift clusters. The moment we go and delete these Kubernetes objects, it'll go clean up those clusters, so everything's managed as kube objects. Even your Kubernetes clusters are kube objects, and the recursion is not lost on me, because we'll have some fun with that.
C
So
the
first
thing
that
I'll
show
off
here
is
just
this:
the
process
of
making
a
cluster
pool.
So
last
time
I
came
on
the
show
bored
everyone
with
a
ton
of
yaml.
I
just
pulled
up
the
ammo
files.
I
edited
them
and
I
oc
applied
them,
and
I
know
that's
like
watching
paint
dry
well.
This
time
we
have
a
project
called
lifeguard,
which
makes
life
a
lot
easier.
I
did
make
a
terrible
pun
in
the
readme
up
near
the
top.
I
should
have
had
dead
ahead
on
that,
but
yeah.
C
We
we
make
a
terrible
pun,
that
this
is
just
something
to
keep
you
from
drowning
in
the
club
in
the
cluster
pools
yeah.
Here
we
are
keeping
you
safe
in
the
cluster
pools
yeah.
We
make
far
too
many
puns
and
you'll
get
more
later
today.
C
So
it's
just
a
bundle
of
bash
scripts
that
I
wrote
in
an
afternoon
and
now
everyone
has
started
adopting
and
dale
keeps
making
prs
into,
because
I
have
typos
everywhere
and
this
and
these
bash
scripts,
and
it
just
exposes
some
real,
simple
capabilities
in
an
open
source
project.
I
don't
know
if
scott
wants
to
drop
the
link
in
the
chat,
but
we'll
go
ahead
and
make
ourselves
a
new
cluster
pool.
So
it's
as
simple
as
going
into
this
cluster
pool
directory
and
then
seeing
hey
yeah.
Oh,
oh
gurney!
C
I
see
that
you
already
have
your
your
one
set
of
animals
sitting
here,
we'll
ignore
that.
So
we
can
go
ahead
and
it's
just
a
bunch
of
scripts
that
let
you
run
through
and
apply,
and
it
says:
hey
yeah
you're
on
this
cluster,
here's
your
name
spaces!
You
can
put
a
cluster
pool
in
so
we're
going
to
go
into
twitch
demo
and
say:
hey
yeah.
We
want
to
put
it
there,
we're
gonna
go
ahead
and
toss
it
on
aws
and
it
says:
hey.
Gurney: We only want one cluster here; I don't think that we'll be using this, so you can put any number of clusters to keep, kind of Little Caesars Hot-N-Ready style, for checkout. And we'll just call this one twitch-doing-it-live, as I mistyped it live, yeah. And it just applies. It'll tell you: hey, here's the YAML I applied. If you want to be boring and snoop, you can look at the YAML.
Gurney: We'll just see that, hey, we have a new cluster pool, and we can go investigate if we really wanted to, and see that it's running a nice little pod off in the background, in a different namespace, to provision one cluster for us, and that little bit will flip to one whenever we have a cluster. So as that's off provisioning, we already have our other cluster pool ready to do some checking out.
C
So
we
can
go
over
here
and
just
go
into
cluster
claims,
and
this
is
just
once
again
a
little
utility
that
lets
you
create,
delete,
grab
your
creds
from
and
reconcile
your
your
cluster
claims,
so
we'll
go
ahead
and
just
apply
real,
quick
and
this
will
run
through
and
do
a
little
bit
more
fanciness
and
a
nice
little
script,
namely
we'll
want
to
grab
it
from
the
twitch
demo.
Namespace
it'll
say:
hey
you
have
these
only
one
of
them
has
it
ready,
we'll
go
ahead
and
grab
a
cluster
from
that?
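[For reference, the Lifeguard flow being demonstrated is roughly the following; the directory and script names are my reading of the open-cluster-management/lifeguard repo, so treat them as an assumption and check its README:]

    # inside a clone of open-cluster-management/lifeguard
    cd clusterpools && ./apply.sh        # interactively create a ClusterPool
    cd ../clusterclaims && ./apply.sh    # claim a cluster out of a pool
    ./get_credentials.sh                 # pull kubeconfig/credentials for a claim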
Gurney: So I'll set this number to say, you know: hey, I only want this cluster to live for eight hours, and after eight hours, no matter what, it will go in and reconcile and tear down these resources so we don't get charged. There's no risk that your CI is gonna leak a bunch of expensive resources, because you'll always have this to clean it up.
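[On the claim object itself, the lifetime is one field; a sketch with illustrative names, also showing the spec.subjects list that backs the RBAC group association described next:]

    oc apply -f - <<'EOF'
    apiVersion: hive.openshift.io/v1
    kind: ClusterClaim
    metadata:
      name: demo-claim
      namespace: twitch-demo
    spec:
      clusterPoolName: twitchdemo470
      lifetime: 8h                  # Hive tears the cluster down 8 hours after the claim
      subjects:                     # RBAC subjects granted access to the claimed cluster
      - apiGroup: rbac.authorization.k8s.io
        kind: Group
        name: twitch-demoers        # illustrative group name
    EOF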
Gurney: We then can associate it with an RBAC group. In this case, sure, let's associate it with our RBAC group, so that everyone knows that we Twitch demoers do indeed own it. And then what it's going to do, and this is the number two really cool thing that I absolutely love about cluster pools, and that saved me, saved us...
Gurney: ...so much money, is that when a cluster is sitting in that pool counting toward, you know, hey, we have four ready, it's actually hibernated. And it's Hive hibernation, so I call it "hive-bernation." It hasn't caught on.
Gurney: Yeah, still haven't gotten it to catch on. But basically it shuts down the VMs. It's your usual "have you tried unplugging it after the day is over to save on your electricity bill?" You know, it shuts down the VMs, and then, whenever you check out the cluster, what it's doing right now is it says: hey, I've claimed a cluster, it's still resuming. So all those VMs in AWS are literally changing their power state to running, and then it's going to make sure it can connect to the cluster.
Gurney: So in about five minutes here we're going to get a cluster that we can actually reach and log into and poke around with, and it's a full OpenShift cluster. And in the release we're about to punt out the door, we're actually going to have the ability to customize these clusters. You can say: hey, I want bigger workers or smaller workers, or more or fewer workers. All of that sort of configuration that you normally get in the OpenShift installer is going to be here too.
Gurney: All these functions will kind of work. So I've babbled on about this for a little while and said: hey, we have all these cool pools, and you can grab clusters at some crazy scale. Other cool things: you can scale the pools up and down in size. These are all just kind of oc operations. You can just apply some YAML, do an oc get, you know, poll for the status of these Kubernetes objects, and that's all well and good, and we have a little tool that makes it easier.
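[Scaling a pool really is just a field edit on the ClusterPool object; a sketch against the pool from earlier:]

    # Grow the pool to six hot-and-ready clusters...
    oc -n twitch-demo patch clusterpool twitchdemo470 \
      --type merge -p '{"spec":{"size":6}}'

    # ...and poll it like any other kube object.
    oc -n twitch-demo get clusterpool twitchdemo470 -o yaml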
Gurney: But then we kind of just threw these ideas out and said: hey, we're using them; devs, why don't you guys go have a heyday with this? We can save some money if you guys do it and hibernate. So that's where Dale and Kevin come in, so I'll go ahead and give an intro on these folks. Dale and Kevin are both awesome guys. I sat next to Dale in the before times, when we were still in the office in RTP, and he said: hey...
Gurney: ...these are really cool, and I think I can make my life a whole lot easier by not having to manage having clusters everywhere, and provisioning them and deprovisioning them and forgetting where they are, and, you know, "oops, I broke the cluster, I now need to spend a half hour fixing it." So they've built some really cool tools.
Gurney: Yep, it's really... it's automated lifecycle management. I'll admit that I've left expensive clusters up over the weekend, because five o'clock on Friday rolls around and it's like, it's board game night, I'm gone, you know; I have other obligations. So it's like, you know: I forgot to turn off that cluster, and that cost the company $500 this weekend.
Gurney: Yeah, all right, these are dev and test clusters; we're cost optimizing, stuff like that, and they're development environments. And the other problem that we encountered with our developers, and I don't know if anyone else developing on OpenShift has ever had this problem, I'm sure they totally haven't: where you, you know, accidentally run something or deploy something and you make a huge mess on your cluster. You have screwed up your dev/test environment.
Gurney: And we had, I recall, times where it's like: oh man, my dev cluster's screwed up, it's going to take me two hours to get this thing back. So that's kind of the other problem where we said: hey, maybe this will solve that, because you can kind of just throw the cluster in the garbage bin and ask for another one. That's the really cool one. So I think at this point my best next step is to hand it off to...
Dale: Right, so here we have an empty terminal. Let me just hop on over... all right. Yes, like Gurney said, he came to us and said: hey guys, you need to save a couple bucks. Because we were... it was messy. We were just... it was like kids in a candy store. They were like: here's AWS, guys, it's open season, do whatever you want with it. So we each had our own cluster.
Dale: It was great, but they came back and said: oh, you know, you guys have spent a lot of money, we need to rein it back. So then Gurney says, we have these cluster pools, and we go: well, that's great, but we're not saving any time, like...
Dale: If we have this cluster just ready to go, it's just sitting there, and we can get to it right away. But there was a concern with getting to RHACM itself as quickly as possible, and so that's where my solutions came in. So Gurney built up Lifeguard, and that was a great way to lower the bar to cluster pools.
Dale: And then I came up with a script, I called it startRHACM, that encapsulates Lifeguard and our deployment scripts, so that you can just one-stop shop: run a script and it would deploy. It would claim a cluster and then also deploy RHACM with the configurations we need for development. So that was my story.
Dale: Yeah, so we're gonna crank up Visual Web Terminal, we're gonna bring the terminal into RHACM and try to interact with it here as much as we can.
Dale: All right, so now we're where I want to be. So before I start talking about startRHACM, I wanted to crank up this job. What I did is I created an image that containerizes the startRHACM script, so that we could have that development cluster that we don't really care about and don't really have to maintain. If it gets broken, it's fine; we get a new one the next day, and it only deploys Monday through Friday.
Dale: It has a lifetime, as Gurney described, so it deletes itself at the end of the day. So we don't even have to think about cleaning it up, which is really nice.
Scott: ...burdens that you would normally run into and waste time on, because your squad of 10 developers no longer has to spin up their individual clusters and face those ad hoc headaches that they get with infrastructure or code changes, or some script that they're keeping on their desktop, you know, to make it all work, right? Yeah.
Dale: So here's the startRHACM script running inside the container. So it needs to know where your... so actually, if you run startRHACM locally, it needs to know where your Lifeguard repo is.
D
We
have
a
private
repo
currently
and
then
called
pipeline
where
we
store
all
of
our
tags
and
things
and
then
the
deploy
repo,
which
is
open
and
that's
the
scripts
to
install
rakom,
and
so
you
can
see
it
grabbing
those
and
then
starting
up,
and
then
here
is
where
it
enters
into
lifeguard
and
claims
a
cluster
you
can
see
it
has
a
12-hour
claim,
and
it
also
populates
with
our
our
back
group.
So
we
can
get
to
it
once
it
actually
deploys
we'll.
Dale: ...let that run, and we'll talk about startRHACM a little bit. So, like I said, we wanted a quick way to get to RHACM, but we also needed a lot of configuration. We needed to be able to get any version. We need to get the upstream; we need to get the downstream. Right now...
Dale: ...those processes are a little bit complicated, because we're working on being open. Like, it's an urgent thing, but we're not there yet; we're nearly there, but not quite, probably...
Dale: Totally, yeah, there's a huge push even in these next two weeks, so, awesome. But so it can claim any branch, version, or snapshot that you want. By default it'll just get the latest one, which is usually what we need if we're trying to verify bugs. That was like the biggest hang-up: we'd have to constantly update clusters. So now, every morning, we get the latest...
Dale: ...the latest upstream snapshot on a cluster. And then, if you give it a branch, you can give it 2.0 and it'll give you the latest z-stream, so it'll give you 2.0.4, 2.0.8, I'm not sure where we are right now. But if it doesn't have enough space, it'll resize the pool automatically. You can tell it to... you can also... so, we do development from localhost; we run our component locally, and then, so...
Dale: It pulls configurations from a script, so it does all the exports inside of this config script, and you can see all the Lifeguard exports and the RHACM exports, and the nice thing about that is that I'm able to...
Dale: You feed it secrets, and you also feed it a Slack URL or Slack token, and that lets you post the credentials when it deploys. And then, if you give it a Slack token and channel ID, Slack now allows you to create a scheduled message, so it schedules a message to pop up 20 minutes before the cluster is set to expire. So you can get in there and extend it if you need to, if you happen to be working with it.
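[Slack's chat.scheduleMessage API is what enables that expiry warning; a minimal sketch of the call, where the token, channel, and expiry epoch are placeholders supplied by the script:]

    # Schedule a warning 20 minutes before the claim expires.
    WARN_AT=$(( EXPIRY_EPOCH - 20 * 60 ))   # EXPIRY_EPOCH: claim expiry as a Unix timestamp
    curl -s -X POST https://slack.com/api/chat.scheduleMessage \
      -H "Authorization: Bearer $SLACK_TOKEN" \
      -H 'Content-Type: application/json' \
      -d "{\"channel\": \"$CHANNEL_ID\", \"post_at\": $WARN_AT,
           \"text\": \"Cluster claim expires in 20 minutes; extend it if you still need it.\"}"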
Gurney: Yeah, so, I guess to go to your question, Scott: actually, in the new version of ACM, which has the newer version of Hive, we started using cluster pools a lot and found a couple of RBAC scenarios that were like: hey, you know, this would be really nice. So now it's set up where, when you claim a cluster, you can associate it, and this has already been like this...
C
You
can
associate
that
claim
with
you
know
an
rbac
group
or
a
user
or
some
owner
or
a
list
of
owners
and
they'll
all
be
able
to
access
that
cluster.
So
if
you
and
your
squad
have
like
an
rbac
group
that
has
some
permission
on
your
hub
cluster,
you
can
check
out
a
cluster
and
say
all
of
these
folks
own
this
cluster.
So
everyone,
that's
in
that
group,
can
read
and
access
your
cluster
in
the
same
way.
C
Now
the
hive
is
letting
you
and
acm's
letting
you
associate
a
a
a
group
or
a
user
as
a
cluster
pool
owner,
so
they
also
get
to
see
all
the
resources
associated
with
the
cluster
pool.
So
they
can
see
like
the
inner
workings
of
it's
it's.
It's
provisioning,
three
new
clusters,
because
it's
out
of
clusters
that
that
sort
of
detail
that's
doing
they
can
kind
of
pull
back
the
covers
and
see,
and
that's
a
hive
cluster
pool
admin.
C
So
that's
going
to
be
ga
sometime
this
week,
hopefully,
where
that
that
capability
acm,
where
you
can
say,
hey
this
person's
the
cluster
pool
owner,
they
should
be
able
to
see
all
the
special
behind
the
scenes.
C
So
that's
going
to
be
a
really
cool
one
too,
for
that
our
backstory,
and
that
may
be
a
good
follow-up
as
well
scott,
because
we're
working
on
I
know
tim
one
of
my
tech
leads-
is
working
on
a
really
good
r
backstory
around
automatically
configuring.
Our
back
on
these
clusters
that
you
check
out.
Scott: Nice, yeah, that's awesome, and that solves a huge security gap that we don't want to put this out there without having thought through, and it's awesome to see the enterprise controls coming into Hive more and more. Yep, sweet. Dale, where are we at with that RHACM? Rack 'em and stack 'em.
Dale: Yeah, so in the background you can see it just finished. Right here is our deployment; it's deploying the latest snapshot that it could find, and you can see it deploying, and right now it looks like it wrapped up right here. So there's our URL that we'll meet later, and right now it's waiting: it looks for the ingress. It's waiting for the ingress to come up, because even after the deployment the pods are reconciling and still installing, and so stabilizing.
Gurney: Yeah, we have basically a unified deploy methodology for our dev stuff between our CI and our developers. What our developers are using is these dev tools now. So this is even something, especially with, like, the app model in RHACM, that you could do for your team, where you say: hey, here's a dev tool that we use for CI behind the, you know, on our meta level, and that we use to deploy our stuff, and that you as a developer can also run as an image and just use. Nice.
Dale: Yeah, so, let's see...
Dale: So, real quick, here's the message that our team gets from Slack from the bot. It just gives us the RBAC users (the password is generated randomly), gives us what snapshot we're looking at, its lifetime, and when it was created (the lifetime is from the creation point), and then the other credentials and the URL. And then, 20 minutes before, it'll tell you: I'm about to go away, you have 20 minutes.
Dale: So, since that's been claimed, we should be able to hop over to Lifeguard and look at the cluster claims: go to the clusterclaims directory and do a reconcile claims. What that'll do... so, full disclosure...
D
All
the
claims
that
are
remote,
it'll
wipe
away
all
the
local
ones
that
are
no
longer
relevant
and
it'll
pull
in
all
the
remote
ones
and
update
them.
So
I
can
go
into
this
one.
Dale: Then we have our credential files here, and I think I'll be able to show them, because we're going to delete this cluster after this. But we'll still...
Dale: All right, let's see how we're doing... oh, and this has stopped. So here we are. So it patched the ingress with all of our localhost connections, all the paths that I wanted, because our team is responsible for a couple of different things, so there's a couple of different paths that we want it to have. And then here it is creating our users; it just uses a quick htpasswd and instantiates the RBAC users, and it should be ready to go. So let's go back up and get to our...
Dale: I can't quite get to the RHACM console yet, but we do have a kubeconfig file in here, so this is a handy command I use all the time to get to it.
Dale: Yeah, so that's the other risk with getting the latest: sometimes it doesn't actually work, for whatever reason, because it is, like, bleeding edge. You never know what's in there.
Dale: Right, and, yeah, only Monday through Friday. Let's close this out and see what we got. So there's this cron job; yep, the one-through-five is Monday through Friday.
Dale: And then we also have a cluster pool expand job and a shrink job, and those are hosted in startRHACM also, if you want to go back and take a look under the extras folder. All those do is run a patch on every cluster pool in your namespace, so that at night our cluster pools are scaled down to zero, and every morning they're scaled back up to one. So right now we don't have any... we haven't migrated everyone over.
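[A sketch of what such a shrink job could look like, assuming a service account with patch rights on ClusterPools; the names, image, and schedule are illustrative, not the actual startRHACM manifests:]

    # Shrink every pool in the namespace to zero each weeknight at 11 p.m.
    oc apply -f - <<'EOF'
    apiVersion: batch/v1
    kind: CronJob
    metadata:
      name: clusterpool-shrink
      namespace: twitch-demo
    spec:
      schedule: "0 23 * * 1-5"        # Mon-Fri; a matching morning job patches size back up
      jobTemplate:
        spec:
          template:
            spec:
              serviceAccountName: pool-scaler    # assumed SA allowed to patch clusterpools
              restartPolicy: Never
              containers:
              - name: shrink
                image: quay.io/openshift/origin-cli:latest
                command: ["/bin/bash", "-c"]
                args:
                - |
                  for pool in $(oc get clusterpool -o name); do
                    oc patch "$pool" --type merge -p '{"spec":{"size":0}}'
                  done
    EOF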
D
So
we
literally
have
zero
cluster
pool
clusters
running
at
night
and
then,
with
with
star
rackham's
ability
to
scale
cluster
pools
dynamically.
We
don't
need
to
have
a
large
cluster
pool
running.
We
can
just
have
one
cluster
and
then,
if
someone
claims
one
half
an
hour
later,
another
one
pops
up
to
be
ready
and
then,
if
you
run
start
rackham
it'll
scale
the
pool
automatically
to
two
and
gla
and
grab
that
extra
cluster.
So
nice.
Gurney: Yeah, it makes a really nice AWS cost graph, as the person who's supposed to look at all the cost graphs. There's also another project Red Hat is working on called Cost Management; that's like a software-as-a-service thing, and it makes really nice graphs for each of the accounts, because you kind of get to see it do this, an every-12-hours sort of thing.
B
So
that's
that's
kind
of
what
we
get
to
look
forward
to
is
the
user
experience
of
bringing
pools,
hibernation,
cost
management,
bringing
that
under
rackham's
purview
so
that
at
the
fleet
level
you
now
have
controls
to
say:
hey
hibernate,
all
these
all
these
dev
clusters
on
fridays,
or
you
know,
delete
them.
If
they're
just
dev
clusters,
we
have
we're
going
to
be
baking,
those
controls
into
rackham
so
that
you
have
that
sort
of
level
of
capability
to
drive
it
out
there
to
the
fleet.
Scott: Right. Do you want to do a checkbox on all dev clusters and say, hibernate right now? Do you want to be able to scale those down to three-node, you know, shared master/worker, or single-node clusters? Like, let's start thinking through how we really achieve that fleet-level capability, you know, a thousand clusters that we're managing, that kind of stuff. Yeah, wow.
D
And
and
the
answer
bringing
an
ansible
which
should
be
coming
up
anytime
is
is
going
to
be
even
more
powerful
because,
like
really
my
script
should
be
ansible
like
it
should
be
like
waiting
and
checking
to
see
that
resources
are
exist
and
then
continuing
and
it
would,
it
might
even
make
it
run
faster
for
all.
I
know
so.
Nice.
C
And
once
you
have
all
this
kind
of
codified,
you
know
all
these
best
practices
codified.
For
you
know
dale.
He
can
just
kind
of
use
this
tool.
That's
been
provided
to
him
by
the
you
know,
whatever,
whatever
infrastructure
provider
you
have
at
said
company
for
us,
it's
the
ci
cd
team
kind
of
says,
here's
your
cloud
account.
C
We
have
one
shared
cluster
that
everyone
has
access
to
and
he
can
just
put
up,
pools
and
use
this
and
know
that
he's
gonna
just
conveniently
have
a
cluster
there
when
he
needs
it
and
whatever
he
does
it'll
be
about
as
cost
effective
as
it
can
be
within
reason.
So
if
you
can
kind
of
codify
these
best
practices
in
a
way
that
that
everyone
can
just
so
easily
do
the
the
most
cost
effective
and
easiest
thing
and
secure
thing,
that's
really
great,
because
it
takes
a
lot
of
weight
off
of.
Scott: ...anyone who has spent time in the ACM console, they understand what a buttery experience that is and how smooth it is, and a lot of that comes down to the eye and the articulation of Kevin. And I want to hear, you know, what are we looking forward to? Tell me a little bit more about what you've been working on, and this project called Cluster Keeper; I think... is that the right name?
Kevin: So we won't be looking at UI a whole lot; this is a CLI-based tool that I came up with. Much the same as with Dale, Gurney came knocking and said: hey, why don't you take a look and see if your squad can maybe spend less money on your cloud costs?
Kevin: So I thought, you know, cluster pools and hibernation are really interesting ideas. We wanted to start using them, but I did recognize that there was going to be some overhead in using these in our day-to-day activities.
Kevin: You'll see their names have these auto-generated bits at the end, right, five characters, and that's even in the URLs for these clusters. So these names are hard to recognize and to memorize. And then, as you get into using cluster pools, you're probably going to be recycling your clusters more often, so on top of that, these names are changing all the time. And anytime you need to manage the power state or the lifecycle of these clusters...
Kevin: ...you need to target your oc or your kubectl back to what I call the cluster pool host: the cluster where all those cluster pools and cluster claims and cluster deployments are defined. So I created this CLI called Cluster Keeper to try and deal with some of those issues.
Kevin: So that was the cluster deployments. If we look at... just get the cluster claims: by default we just get the names of the claims.
Kevin: If you actually wanted to see the actual cluster deployment, you would need to maybe look at the YAML output for that, for example. So the first Cluster Keeper command I'll show you, then, is list. You can say claims, clusterclaims... I usually use cc. It recognizes a bunch of aliases.
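[As heard here, the subcommand takes several aliases for the same resource; spellings below follow the session, so treat exact syntax as an assumption and check the cluster-keeper README:]

    ck list clusterclaims   # full form
    ck list claims          # alias
    ck list cc              # short alias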
Kevin: I can see the lifetime, so Gurney set a lifetime of eight hours on the one he created at the beginning, and Dale's lives for 12 hours. We have the age column here, so you can do the math and see how much time you have left.
Kevin: Exactly, and, yeah, none of these things are difficult to do; it's just time consuming if you're doing them day after day, right? You know, switching clusters. And my oc was already targeting this cluster pool host, but the nice thing about the ck CLI is that anytime it needs to access the cluster pool host, it has created a context in your kubeconfig.
Kevin: So, right, normally I could say oc config current-context, and that's "ck"; the ck context is the context it has created for when it talks to the cluster pool host. I also have a short form, because I don't like typing that long string, so ck current tells me I'm pointing to the cluster pool host. So this is probably my most used command, because I'm frequently checking, you know: what is the power state of the clusters I need to use?
Kevin: Are we using more clusters than we should be, etc.? So then, talking about when I want to actually use one of these clusters: if I didn't have Cluster Keeper, as I mentioned, I would have to target my oc to the cluster pool host.
Kevin: Then I could use Lifeguard; I think Dale pointed out the get_credentials script. I could run that to get the credentials; that's going to create a directory associated with that cluster that has a kubeconfig file and an oc login script, if I want to use password-based login. So I would have to use one of those methods to connect, and then, maybe... there's also a credentials file if I want the console URL. So I wanted to kind of get that down into one step for Cluster Keeper.
Kevin: So the idea is that everything is keyed by the cluster claim name. On my team we're using very simple names for our cluster claims, so rather than having, like, a numbered claim, I might just call it prototype or demo or dev, something like that. And so, I mentioned that ck list is probably my most used; the most useful would be ck use.
Kevin: I really should have copied and pasted that, but, you know, on OpenShift clusters we have this infrastructure resource called "cluster" that has various information. So here's...
Kevin: Oh, okay, well, I'll try that later. But as you can see here, the URL matches the cluster deployment name.
Kevin: So I want to show you this one I've used before... I want to show you, if I try to use claim number two, what actually happens behind the scenes. So ck use intentionally switches your kubeconfig context to use that cluster, but all other ck commands try not to mess with your current context at all. But what happens when you use something like this for the first time? You'll see it's creating a context.
F
It
fetches
those
credentials
automatically
prepares
the
coup.
Config
there's
a
little
bit
of
fiddling
that
it
does
with
that,
creates
a
service
account
so
that
you're
not
constantly
having
to
log
in
you
have
a
reliable
session,
and
then
it
actually
backs
up
your
personal
coop
config
file
before
updating
it.
To
add
the
new
context
for
this
cluster
claim
and
it
switches.
Kevin: So Cluster Keeper is all centered around, kind of, these contexts for cluster claims. So now that my current context is claim number two, I can use commands like ck console, and I don't have to give the name of the cluster here; it will infer that from my current context.
Kevin: So what that will do is open... it actually copies the kubeadmin password to the clipboard, and it opens this in the console. These aren't set up with a proper certificate, right, so I'm going to get this warning.
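[Pulling the day-to-day flow together; subcommand spellings follow the session, so treat the exact syntax as an assumption:]

    ck use demo     # switch your kubeconfig context to the claim named "demo"
    ck console      # open that cluster's OpenShift console, kubeadmin password on the clipboard
    ck acm          # likewise open the RHACM console route, as shown in a moment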
Kevin: So I want to show you, there's a similar one for accessing ACM, or RHACM. So this cluster that Dale created with startRHACM, I haven't touched it yet. There's one small caveat here, which is that Cluster Keeper uses service accounts, so I need to enable permission on that cluster for it to be accessed by service accounts. So I'm going to run that first, and then, if I do ck acm...
F
We
should
have
our
well,
we
have
to
go
through
the
same
steps,
obviously
creating
the
context,
fetching
those
credentials
and
eventually
we
should
see
the
browser
open,
a
new
tab
and
similarly
copy
the
password
for
us
I'll
have
to
go
through
the
same
cert
stuff.
F
Yeah
that
that's
just
by
by
convention
the
tool
does
look
up
the
route
for
acm
from
the
cluster.
So
if
it,
if
it
doesn't
have
acm,
this
will
fail.
Okay,
that's.
Kevin: So I mentioned that it usually tries not to change your context. So say you're working with one of these clusters and someone asks you: hey, can you run this other task on this other cluster? So... whoops, this used to be called cm, so my fingers still sometimes type cm instead of ck... there's a similar command called ck with, and that takes the name of it... whoops, that was the password there, sorry about that. We will be deleting these clusters after this anyway.
Kevin: I can use my Cluster Keeper with command, and that will extract the kubeconfig for this cluster here to a temporary file, set the KUBECONFIG environment variable, and then run this script, so that you can carry on with your regularly scheduled programming. That's handy. And what I usually use this for, actually, is working with the cluster pool host.
F
So
some
of
the
things
I
haven't
added
in
cluster
keeper
are
working
with
the
cluster
pools,
so
dale
has
the
crown
jobs
to
automatically
scale
the
cluster
pool
size
up
and
down.
Kevin: So remember, ck is that special context for the cluster pool host. So then I can run a regular oc command to edit, say, the cluster pool, which was, what, twitchdemo470?
F
Yes
yeah,
so
this
is
just
if
you
didn't
want
to
change
your
current
context,
and
you
could,
you
know,
come
down
here,
edit,
your
size,
for
example.
I
like
it
now,
you
can
also
say
use
ck,
and
now
your
context
is
ck,
and
then
you
can
run
your
oc
commands
against
the
cluster
pool
host.
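[In other words, working against the pool host from anywhere looks something like this; exact flags are an assumption:]

    ck with ck oc edit clusterpool twitchdemo470   # one command against the pool host, context untouched
    ck use ck                                      # or switch context to the pool host outright
    oc patch clusterpool twitchdemo470 --type merge -p '{"spec":{"size":2}}'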
Kevin: Yep, nice. So, the other bits that I've added... I think it's okay if I hibernate any of these clusters, right? Especially...
Kevin: Wow. So there's a hibernate, if you want to manually hibernate a cluster.
Kevin: If I look at the list of clusters now, we'll see that that cluster is stopping. You have, of course, the...
Kevin: Switch to use... oh well, that's probably not... yeah, it's gonna... so Cluster Keeper tries to help you out and do a lot of things automatically, right? So if you try ck use, ck with, ck console, or ck acm, it's going to wake up that cluster. So that cluster was already stopping.
Kevin: The other thing... so this hibernating and running is done by editing the power state, spec.powerState, on the ClusterDeployment. I have gotten into some problems with Hive sometimes, if one of these operations was in progress and I was too eager and changed the state again. So another thing Cluster Keeper does is it checks for that.
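[Underneath, that is one field on the Hive ClusterDeployment; a sketch of doing it by hand, with placeholder names:]

    # Hibernate (stop the cloud VMs); set powerState to "Running" to resume.
    oc -n <cluster-namespace> patch clusterdeployment <cluster-name> \
      --type merge -p '{"spec":{"powerState":"Hibernating"}}'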
F
So
if
I
try
to
run
this
this
cluster,
while
it's
in
the
process
of
stopping
and
it
might
already
be
stopped
but
yeah,
so
it's
waiting
up
to
15
minutes
for
it
to
actually
be
in
hibernating
state,
and
then
it
will
restart
it.
F
So
that
was
manual
running
and
hibernating,
so
I
did
want
to
mention
that
on
my
team
we
are
using
another
project
called
hibernate,
cron
job
and
cluster
keeper
is
designed
to
work
well
with
hibernity
crown
jobs.
So
I
think
this
might
have
been
presented
before
on
this
show,
but
basically
it
just
helps
you
set
up
some
kubernetes
crown
jobs
that
say
at
like
6
p.m.
Every
day,
we'll
hibernate
all
your
clusters.
Kevin: So that's what this schedule column is all about in this list display. So if I wanted to...
Kevin: ...and then we will see schedule is true. The hibernate cron job looks at a hibernate label on the ClusterDeployment to decide whether it should operate on that cluster or not. So this one that has the schedule enabled has it set to true. If I then disabled scheduled hibernation, this would be set to skip, to tell that cron job not to operate on this cluster.
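[Per that description, opting a cluster in or out of scheduled hibernation is a label flip on the ClusterDeployment, along these lines (names are placeholders):]

    oc -n <cluster-namespace> label clusterdeployment <cluster-name> hibernate=true --overwrite   # include in the schedule
    oc -n <cluster-namespace> label clusterdeployment <cluster-name> hibernate=skip --overwrite   # cron job leaves it alone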
Kevin: Yeah, and then the other aspect that I wanted to mention is locks, so, this locks column. On my team we share a lot of our clusters, so we do have them scheduled to go to hibernate every day at 6 p.m., but obviously sometimes that's not realistic; somebody needs to keep working later than 6 p.m. So there's a lock feature, so I can say...
Kevin: What this also does is that other Cluster Keeper commands will warn users that this cluster is locked. So it's hibernating now; if I try to run it...
Kevin: It says, you know: can't operate, it's locked by me; use -f to force if you really need to.
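[In other words, something like the following; the lock and run subcommand spellings are assumptions based on the session:]

    ck lock demo      # mark the claim "demo" as in use
    ck run demo       # another user: refused, the cluster is locked
    ck run demo -f    # forces the power-state change anyway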
Kevin: And then we see that lock is in the list. When it's finished with that cluster, what it does is, if it's during normal off hours, so after 6 p.m. or on a weekend, it will run the ck hibernate command, but it doesn't add the -f. So that way, if any other build jobs have the cluster locked, then it doesn't actually get hibernated. Awesome.
Scott: These are exactly the types of scenarios you'd run into, where someone's trying to claim but it's already in use, or someone's trying to run but it's in hibernate mode, or going into hibernate mode. And so, yeah, my mind is just getting excited. I hope that the community is ready to tap in and play; open-cluster-management, of course, is where you want to hang out, or the new hibernated.com.
Kevin: Yeah, no worries. Well, there's a whole list of subcommands here, so I didn't go through them all. There is a shortcut for creating a new cluster quickly; it just runs through Lifeguard, it uses Lifeguard, it just speeds it up a little bit, so you can type one line and then go.
Chris: No, thank you for coming back. Like, you all do such an amazing job of putting the right people on the call, doing the right thing. So I appreciate all your behind-the-scenes work, and definitely the fact that all this is being done out in the open. This is going to help not just one company; this is going to help lots of people.
Scott: Yeah, we're having fun too, and these are projects that come out of the demand that we find in-house, and we think other teams are probably suffering these same challenges, you know, across the globe. So come play with us in open-cluster-management. We do meet on Thursdays for the community call, and you'll be able to find that at the open-cluster-management site.