Description
Multi-cluster management is hard. Technology, teams and culture clash in a race to deliver clusters and applications in a secure and compliant way. Red Hat Advanced Cluster Management for Kubernetes (RHACM) provides the capabilities to address common challenges that administrators and site reliability engineers face as they work across a range of public and private cloud environments. Clusters and applications are all visible and managed from a single console—with security policy built in.
C
Hello everyone, we're back. Sorry, Zoom has a new feature: apparently it talks to other computers on your network to control Zoom meeting functions. That's weird. Anyways! So here we are. I don't know what intro, if any, we got, so let's do it again: I'm Chris Short, executive producer here on OpenShift.tv. Scott, I'm handing it off to you, and you can then introduce everyone else.
D
Fantastic. Thank you, Chris, thanks for having us. We're on our third session here of "Red Hat Advanced Cluster Management Presents." It is a mouthful, so we can call it RHACM today. I'm a product manager with RHACM, and we're focused on the multi-cluster problem: solving that challenge with management around OpenShift clusters. I'm going to turn it over to Josh Packer for his introduction.
B
Yeah, I'm old enough not to know what Twitch is. My name is Tim Poyer and I'm the lead architect of the CI/CD process for ACM. We figure out how to take all the lovely code that these beautiful, smart developers put together, throw it into a bucket, and figure out how to make it run. Gurney is part of my team, and I'm excited to be here and talking about some of the stuff we're doing today.
H
Hey, I'm Devon Goodwin, a principal software engineer at Red Hat. I lead the OpenShift Hive team, which is a small component of RHACM, and I'm just here today to give a brief intro to Hive, because I think it's going to come up a few times.
D
I think it was around October or November of last year when we got together in a room in Raleigh, on an IBM site, with Devon and 15 of his wizards. Josh was there, Tim was floating around; it was literally a hackathon, and that was where we started to put together the bits and pieces of ACM with Hive: providing, you know, clusters as a service, the whole concept of multi-cluster management on top of OpenShift, and really bringing the best of both to the next level.
D
That's where I got to know Devon and got to learn more about what that team is doing. I was blown away from that point, and when I heard that they were using Hive in production on OpenShift Dedicated, it was like, okay, this is a win-win. This is a perfect marriage. So Devon, you've got the history there, and I thought maybe you'd want to share some insights around what you've been doing with Hive over the past year and a half.
H
Thanks, yeah. Hive is a Kubernetes operator for provisioning and deprovisioning OpenShift 4 clusters, plus some day-2 management. Not a lot, but certain things that are needed when you're dealing with a lot of OpenShift clusters across a broad fleet, or just provisioning and deprovisioning a lot at scale. We built Hive to give you an API to do that: declare the clusters you want and how you want them configured, be able to mutate certain things, and have that constantly reconciled.
H
That's all done via Kubernetes custom resources, as it would be with most operators. OpenShift 4 itself, we hope, we aim to have be self-managing and self-healing as much as possible, and Hive sits at sort of a layer above that, for the things that don't make sense to do from within a cluster. We try to help you get bootstrapped to the point where you can work within the cluster, or with a more feature-rich solution like ACM.
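As a rough illustration of that declarative API, a Hive ClusterDeployment custom resource might look something like the following. This is a minimal sketch, not a manifest from the demo: the names, base domain, region, and secret references are all placeholder assumptions.

```yaml
# Illustrative minimal Hive ClusterDeployment: declares an OpenShift
# cluster that Hive should provision and keep reconciled.
apiVersion: hive.openshift.io/v1
kind: ClusterDeployment
metadata:
  name: example-cluster          # placeholder cluster name
  namespace: example-cluster     # Hive convention: one namespace per cluster
spec:
  baseDomain: example.com        # placeholder base domain
  clusterName: example-cluster
  platform:
    aws:
      credentialsSecretRef:
        name: aws-creds          # assumed pre-created cloud credentials secret
      region: us-east-1
  provisioning:
    imageSetRef:
      name: openshift-v4.6.1     # a ClusterImageSet naming the release image
    installConfigSecretRef:
      name: example-install-config  # assumed secret holding install-config.yaml
  pullSecretRef:
    name: pull-secret            # assumed OpenShift pull secret
```

Applying a resource like this with `oc apply -f` kicks off a provision backed by the standard openshift-install flow; deleting it deprovisions the cluster.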
H
We wanted to help reduce the human intervention and toil required to configure and manage them, and that was the driving factor, our first use case that we tried to satisfy with Hive. That has been successful: Hive powers almost all OpenShift 4 Dedicated clusters today, and we work closely with those SRE teams to make sure it exposes the features they need to be able to deal with so many clusters. And of course, since then we've integrated with RHACM, and we've even seen some adoption, not in terms of an official product, but customers who found the open source project, were interested, and started using it for their CI workflows for developers and that sort of thing.
H
Basically, we offer APIs such that you can provision your clusters. That's all backed by the same installer you would use if you installed OpenShift 4 on the command line, openshift-install, and that works across AWS, Azure, GCP, VMware, OpenStack, and RHV. Hive will help you automate DNS in terms of setting up the delegated zones for a new cluster.
H
We offer MachinePool APIs so that you can manage the machine sets in your cluster on an ongoing basis: bring in new pools of machines and nodes that you want to run specific workloads on, spread across availability zones automatically, like you get when you run the OpenShift installer, but not something you can do once the cluster is up today.
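A MachinePool for such a workload-specific node pool could be sketched as follows. This is illustrative only; the pool name, instance type, and label are assumptions rather than values from the demo.

```yaml
# Illustrative Hive MachinePool: adds a dedicated worker pool to an
# existing ClusterDeployment and keeps its MachineSets reconciled.
apiVersion: hive.openshift.io/v1
kind: MachinePool
metadata:
  name: example-cluster-infra      # placeholder
  namespace: example-cluster       # same namespace as the ClusterDeployment
spec:
  clusterDeploymentRef:
    name: example-cluster          # the cluster this pool belongs to
  name: infra                      # pool name; becomes part of the MachineSet names
  replicas: 3                      # spread across availability zones automatically
  platform:
    aws:
      type: m5.xlarge              # assumed instance type
  labels:
    node-role.kubernetes.io/infra: ""  # schedule infra workloads onto these nodes
```

Editing `replicas` or the labels in Git and syncing the change is enough for Hive to reconcile the cluster's machine sets to match.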
H
Unless you do it yourself. We do have a rudimentary mechanism for delivering arbitrary config to clusters, but there are much better options available in RHACM. And there are a couple of things we've done more recently that will come up.
H
I think a few times today in the demos. We make it such that you can hibernate and resume your clusters to save on costs when they're not in use. The product gained support for that somewhere around 4.5.8, and we just integrated with it. We also handle the edge case where you might bring your cluster back to life after its certs have expired and CSRs need to be approved.
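Hibernation is driven declaratively as well. As a sketch (the field comes from Hive's ClusterDeployment API; the cluster name below is a placeholder), toggling a cluster's power state is just a spec change:

```yaml
# Fragment of a ClusterDeployment spec: setting powerState to
# Hibernating stops the cloud instances, and setting it back to
# Running resumes them (Hive approves pending CSRs on resume).
spec:
  powerState: Hibernating   # or "Running" to resume
```

For example, something like `oc patch clusterdeployment example-cluster -n example-cluster --type merge -p '{"spec":{"powerState":"Hibernating"}}'` would put the placeholder cluster to sleep.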
H
So we can handle that for you. We've also introduced the concept of a cluster pool, where you can keep a fixed number of clusters up and running and ready to go when needed. They are hibernated once they're ready, and then they can be claimed by any user with permission to do so, signed out, and immediately replaced in that pool.
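A ClusterPool resource tying these pieces together might look like this. It's a sketch assuming AWS and placeholder names; the size-one pool and 4.6.1 image set mirror what the demo shows later.

```yaml
# Illustrative Hive ClusterPool: keeps `size` clusters provisioned,
# hibernated, and ready to be claimed at all times.
apiVersion: hive.openshift.io/v1
kind: ClusterPool
metadata:
  name: aws-4-6-1-pool             # placeholder pool name
  namespace: cicd-cluster-pools    # assumed pool namespace
spec:
  size: 1                          # clusters kept ready at all times
  baseDomain: example.com          # placeholder
  imageSetRef:
    name: openshift-v4.6.1         # ClusterImageSet with the release image
  platform:
    aws:
      credentialsSecretRef:
        name: aws-creds            # assumed cloud credentials secret
      region: us-east-1
  pullSecretRef:
    name: pull-secret              # assumed OpenShift pull secret
```

When a claim pops a cluster out of the pool, Hive provisions a replacement so the pool stays at its declared size.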
B
Yeah, and specifically that cluster pool capability is something we're going to talk about today. From a CI/CD perspective it has been very valuable for us, not only in cost savings but also in time to provision: we don't have to wait around for 45 minutes for an OCP cluster to be provisioned.
B
We just get one right away, which is fantastic. And one way I started to think about Hive, and I don't know if Devon would agree with me on this or not, but it seems easier for me to think of it this way, is that Hive is an operator for OpenShift clusters.
B
I can't control every aspect of an OpenShift cluster from Hive, but I can control many of the most valuable functions of operating an OpenShift cluster with it. Bringing that operator-type control into this paradigm of infrastructure as code is a very, very powerful concept, and one that I think many of the viewers will probably gravitate towards.
D
Well, that's that whole concept, Tim, where we don't treat the clusters like pets, right? We want to be able to spin up a cluster in the shortest amount of time possible, make use of it, provide it to the developer, provide that stack to anybody who needs it, without a bunch of, you know...
B
Think about two or three years ago, when people were really starting to evaluate Kubernetes and understand it. Once you understand what Kubernetes does, you're talking to developers and you're like, well, this pod's not working, we'll just delete it, and they look at you with this stunned, almost deer-in-the-headlights look. What do you mean you're going to delete it? Well, the pod doesn't mean anything; it's the deployment that means something, right?
B
Kubernetes is just going to recreate the pod. And to bring that kind of concept to the cluster level, where the cluster stops meaning everything to you, and it's more the configuration of the cluster that has the meaning: the value is in the configuration, and in the applications that are scheduled to deploy and run on that cluster.
B
So if I can destroy a cluster and bring up another cluster somewhere else, and all my applications are automatically synced to it and deployed and running there, wow. Imagine the weekends we can give back to people.
D
And that last part, Tim, the applications, the workloads: it's one thing to have the cluster as a pet, but to know that the applications are going to sync back there, that the workloads will come back based on your desired-state model, that's what ACM brings to the table. We not only give you that cluster lifecycle capability through Hive to provide clusters as a service, but we're also able to provision the workloads on them as they were before.
B
I can define my clusters using YAML in a Git repo, and I can sync those into my cluster, and all of a sudden it starts going out and provisioning clusters for me. And then, if I want to change them, I just go into the Git repo and make a change to the machine pool, and that gets synced back to the cluster. And now I've got infrastructure nodes set up on my cluster, or I've got logging nodes set up on my cluster, or I've created...
B
...you know, some sort of horizontal scalability by defining a new set of worker nodes with a new label, so I can schedule just the applications I want to run on those specific nodes. And you can do all of that with the combination of Hive, which is an open source project that, like Devon said, you can go out and download yourself and play around with, and ACM; and I think ACM brings a higher level of management to that.
B
With some of the capabilities we have around being able to do this sort of GitOps model, especially with applications, we can take an application that's defined in YAML in a Git repo, sync it to a cluster, and then sync it out to one of the managed clusters that we've provisioned. And it's all done with labels, just like you would deploy an application to a particular node that has a particular label.
B
We can deploy applications to particular clusters that have those labels. It's a very powerful concept and something that I think developers will get very excited about. And I hope we get to the point where we can stop treating our clusters as these precious little things. I always think of The Lord of the Rings and Gollum: "my precious."
B
You know, that's really how people treat these clusters, and quite honestly, we need to get to a point where we just throw clusters away when we don't need them anymore. That's something Gurney here is going to be talking about with cluster pools, because it really takes that concept of throwaway clusters to a whole new level, and it has really increased the speed, productivity, and just the velocity of our CI/CD team since we've started using it.
A
I was about to say, yeah, that was a wonderful segue, Tim. I guess I'll go ahead and steal the screen share to hop in. Basically, when we started working on RHACM (I arrived a little while ago), it's like, hey, we're doing this multi-cluster management thing; that means we need lots of clusters to do CI against. We need to test everything, and we need to test it against a bunch of clusters, and we need to represent our support matrix, all that sort of stuff.
A
So you immediately run into the problem of: I need four different clusters on four different cloud platforms every CI run, and if we reach an error case I'll need new clusters to replace those, fresh clusters to really prove out that this application deploys on a fresh platform. So fortunately we were able, alongside Hive, to leverage Hive to provide those resources for us.
A
So we do a CI run every two hours: every two hours, on the hour, we'll attempt a full CI run, and that involves checking out numerous clusters for our hub and for our imports, representing as much of our support matrix as we can. So we need a lot of clusters to really achieve that, and through that we really end up with this problem of, well...
A
...we need RHACM to be able to do CI for RHACM. Conveniently, we've released it now, and we have the Hive folks to help us out, so we're using RHACM to do CI for RHACM, and I'm going to show off a little bit of what we've started leveraging, namely cluster pools. Cluster pools are this technique, like Devon talked about before, where you can have a pool of programmatically defined clusters.
A
In our case, we typically have a good group of the supported OpenShift versions represented, and we'll go ahead and select clusters and check them out from the pool, rather than taking 40 or 50 minutes to provision a cluster for each one that we need on our CI run, and then merging in and running.
A
So if you claim a cluster and your CI system just kind of blips out, you've somehow happened upon an outage, you're not going to leak anything, because Hive will clean up that claim after your defined number of hours. So we'll say, you know, maybe a CI run takes two hours, so we'll have it automatically clean up after four.
B
Yeah, I think there's an important distinction I just wanted to call out. The cluster pool is a stack of clusters, right, and when we create a cluster claim, it pops a cluster off the stack and assigns it to that claim; then it goes through and, using its own logic, provisions another cluster to make sure the number of clusters in the stack always stays consistent. So the cluster you've claimed will never go back into the stack. It's yours, you own it, and you control its lifecycle.
A
Yeah, Tim, you make a very good point there. And hopefully, here we are. I'm showing Kui right now; it's just a Kubernetes web terminal that's part of RHACM, and I will blip over to my command line in a second, but this achieves the same goal.
A
This is actually one of our cluster pools. It currently says: hey, we asked for a cluster pool of size one, we defined it on this base domain, and in this case this is an AWS cluster pool on OpenShift 4.6.1, and we've pointed it to an image (we'll get into that in a second). But what it says is we have one cluster ready.
A
That means there's a cluster provisioned, waiting, and ready for us to check out whenever we need it. So let's hop over and say we're going to start our CI run, or we're going to start a deploy, or we just need a cluster to do a dev test run on. All of these are use cases we're kind of working on inside the organization for ACM development already. So let's pop over: we have `oc get clusterpools`.
A
We have some cluster pools here. Hopefully we're still logged in; there we go. And we'll go ahead and start a checkout from this cluster pool, and then we'll kind of pull back the curtain and look at how these cluster pools are created and defined, and you'll see how you can change them up. So in here we have a couple of YAMLs. The nice thing is that cluster claims and cluster pools are all defined in YAML, and you can use ACM to sync...
A
...some of these, so Josh will probably talk about some cool stuff with that in a bit. But if we look at our cluster claim here, we're going to go ahead and apply this YAML. All it says is: hey, we want it named this (we're going to do it live), and we're going to do it against this namespace.
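The claim being applied here is a small resource. A sketch of what such a ClusterClaim looks like (the names are placeholders, and the `lifetime` field reflects the auto-cleanup behavior described earlier):

```yaml
# Illustrative Hive ClusterClaim: checks a cluster out of a pool.
apiVersion: hive.openshift.io/v1
kind: ClusterClaim
metadata:
  name: live-demo-claim            # placeholder claim name
  namespace: cicd-cluster-pools    # must be the pool's namespace
spec:
  clusterPoolName: aws-4-6-1-pool  # placeholder pool name
  lifetime: 4h                     # auto-delete the claim (and cluster) after 4 hours
  subjects:                        # RBAC subjects granted access to the claimed cluster
  - kind: User
    apiGroup: rbac.authorization.k8s.io
    name: developer@example.com    # placeholder user
```

Once the claim is fulfilled, its status records which cluster namespace was assigned, and deleting the claim deprovisions the cluster.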
B
So that's a good, important point. In order to preserve security, Hive will create a namespace for each cluster deployment it creates. So for each cluster that Hive creates for you and controls the lifecycle of, you will have a namespace with the same name as your cluster name, and that's done to preserve RBAC rules, so that people who don't have access to that namespace would never know that that cluster even exists.
A
Yep, thanks, Tim. So while Tim was talking there, I went ahead and logged in using a nice little script here, with my GitHub credentials that only give me access to, specifically, the CI/CD cluster pools namespace. So I'll have access to the resources in that namespace, and then once I set the subject on the claim, all resources related to the claim, in this case the cluster deployment that Hive has waiting for me...
A
...I'll have access to read. So we'll go ahead and apply this YAML right here; it'll take a second. That cluster claim has been created, so if we do an `oc get clusterclaims` here... typoed "live", as we always do.
A
It wouldn't be a live stream without a typo, exactly. So we have two cluster claims here. We'll take a peek at the one that I just created, and then we'll dive over to the one that I created yesterday, because I know how live demos go. So let's go ahead and describe this cluster claim right here.
A
So, as we can see, we have all of these details on our cluster claim, and down here at the bottom we have a message, "cluster claimed", with status false and type Pending. So what it's saying is: is the claim pending? False, the claim's not pending; that tells us the claim has been fulfilled. We have indeed claimed the cluster. So we can go up here, take a look, and see: hey, we have a cluster pool namespace listed now. That's good news; it's given us a cluster. So we can go ahead and do an `oc get clusterdeployment`.
A
For that cluster deployment, it's provisioned in that namespace: the cluster deployment and the namespace have the same name. So you can go take a peek at that cluster deployment and we'll see that, oh, it's there, it's installed, it is an OpenShift 4.6.1 cluster, and it's resuming. That means, hey, this was hibernated, which is good news, because by our back-of-the-napkin math it saves about 68% of the cost (or rather, costs about 32% of the cost) of a running cluster.
B
You know, like Gurney said, you save what, 67 percent of the cost of running that cluster by leaving it in the cluster pool until it's ready to be claimed.
A
Yeah, and that percentage, I should say, is on AWS; we haven't done any more analytics, and I should also say that's mostly back-of-the-napkin math, just us looking into it ourselves. So we see it's resuming. We'll go ahead and just hop out of this and take a look at the one that we already have up and going.
A
So let's do an `oc get clusterclaims` again, and we're going to go ahead and dig in on this cluster claim here that I created yesterday, that's been fulfilled and is on a cluster that's already there. So if we look at the cluster claim here once again...
A
We have this cluster claim; take a peek, and we see that it has indeed been fulfilled. Then we'll take a peek at the cluster deployment in that same namespace, and then we'll log into it and take a peek at the cluster that we now have all access and rights to. So there we are: it's now running. This cluster has been up for about a day, and we checked it out yesterday.
A
It's fully resumed, so we have a running cluster, and we can go ahead and get this deployment as YAML, and that tells us some important facts. It says: hey, it's no longer hibernating, and the cluster is now reachable (it does a liveness probe). Then we take a peek here as well, and we get references to two secrets, stored in the same namespace with the same RBAC constraints, that allow us to access the admin password and the kubeconfig for this cluster.
A
So I've gone ahead and taken the liberty of already grabbing the admin username and password, and I have this script right here that will use the username and password I pulled from there to log into that cluster, namely a cluster named with all of this mess here in the name. And there we are: we are indeed logged into the cluster we checked out. Now, the biggest feature is, well, this is our simulated CI run.
A
So this is where I, as a developer, might either run CI in a CI system or check this out and, you know, verify a bug, test my code, whatever I want, with basically no wait time: five to ten minutes for it to unhibernate, at most. And then, as easy as could be, I can just go ahead and say, okay, what did I call that cluster claim? So we'll do `oc get clusterclaim` and it'll pop up; I need to re-log in.
A
And we'll log in with my GitHub credentials, you know, the ones that are very limited, just enough to access this, because I'm a lowly developer here. And we'll say: hey, I'm done with this cluster, I no longer need it, so I'll go ahead and just delete that cluster claim. And if I did this correctly, that will delete, and if we take a peek back over here in this namespace, we'll hop down to, hopefully I guessed the right one...
A
Under the covers, Hive is now actually uninstalling and deprovisioning that cluster. I as a developer don't need to care about this anymore, and neither does a CI system, because Hive's doing all the work for me: I said I'm done with the cluster, so the cluster's going away. So if we hop in here to the terminal it'll show... let's see, oh sorry, I'm in the wrong one. Either way, this uninstall pod will actually run through the full uninstall on this cluster, and the cluster will disappear from my AWS account.
B
Right. So, I don't know how many of the viewers may be working with or on a CI/CD team, but you can imagine the usefulness of this: having clusters on demand, ready to go, at all the different versions, across all the different cloud providers that OpenShift supports IPI installs on. You can pick and choose whichever version you want on whichever cloud provider you want, and you automatically have a cluster ready to go.
B
You execute your tests, and when you're done with your tests you just delete the claim, and the cluster goes away. That's a very, very powerful way of running CI/CD for applications that are going to be running on and targeted at OpenShift clusters. Not everybody is going to have our exact scenario: we have to test our builds across multiple versions of OpenShift, across multiple different cloud providers that OpenShift runs on, so our test matrix ends up being quite complex; but this has allowed us to simplify that test matrix quite a bit.
D
Yep, I was going to ask the question there, Tim and Gurney: what was the moment of clarity, when you realized you've got this in your back pocket and you're ready to rock? Was it time? Was it cost? What pain point? Because I think that's what resonates with users, helping them understand. They may not be here today, and they may not be here in two weeks, but they're going to be here eventually. What brought that out for us?
A
So I have a great story about this. I arrived out of college at IBM in June of last year, June 2019. So I came in to: welcome, here's Kubernetes. So I got to learn what Kubernetes was; also, you're on the CI/CD team; also, you're working on supporting OpenShift 4, and they have this really cool installer that provisions the infrastructure for you. So immediately we're like, okay...
A
...what you're testing is not even being exercised. But during that time, occasionally you'd hit a quota, and you'd have to go in and manually restart the CI run and make sure the resources you'd already utilized were cleaned up. You had to make sure you were robust enough to store and manage that state, and you had to make sure that if your CI test failed, you'd still clean up the cluster at the end.
A
So all of those concerns came in. When we moved over from IBM to Red Hat, we said, okay, we're going to solve one of these problems real quick: we're no longer going to provision the clusters for our CI runs on demand; we're going to have some pool of fixed clusters that we're just going to pull from, clean up, and...
A
...run our own cleanup on, and then deploy to. But that involved us having something like 12 static clusters running at any time, and we had to check them out, and we had to make sure that we could uninstall cleanly even in a failure case, and it really just caused a bunch of headaches there. But we knew that was a transitionary step, one that saved us that 50 to 60 minutes each CI run, and some complication to do with state management, by just grabbing a cluster from some fixed set.
A
So you can intervene if you ever need to, and we're even working on getting that a little bit more automated. So actually...
B
So to answer your question, Scott, from my perspective it was time. Time, in that we had to develop and continue to maintain our own Ansible scripts to do the install across all these different cloud providers, which we no longer have to do (sorry, my screen blanked for a second), which we no longer have to do by utilizing Hive. Hive does an IPI install, it works across all the different cloud providers, and it takes care of everything for us.
B
So we don't have to maintain our own Ansible scripts. Not that there's anything wrong with that; it's just extra work that we had to maintain, and we don't want to have to maintain it, and we had to maintain it for every cloud provider and every version of OpenShift. Which is not terribly difficult; we were doing it.
B
Right, and we were having to help developers any time they'd have a problem with those scripts; we had to spend the time to help developers with it, and now we don't have to. Now we've got Hive to do all that for us, which is great. So that's one aspect of time. The other aspect of time is the 45 minutes it takes to deploy on demand. Now we don't have to wait for that.
B
We wait five, ten minutes max for it to unhibernate, and it's ready to go. That is extremely valuable when you're looking at the velocity of trying to get as many builds done in that eight hours of productive time per day. I mean, especially with COVID: everybody is working from home, everybody's on these meetings, everybody gets a little tired and aggravated eventually, so the more we ran builds, the more aggravated we got with this.
B
You know, the concept we were using before we started using cluster pools was that we kept the same clusters around; we reused clusters. And the point of that was to keep the distance between our build times around two hours. We didn't want to exceed two hours: we wanted to capture every single change that was coming in as quickly as possible, and identify those changes, so that it would be easier to fix when we found problems.
B
The amount of change that goes into a build in two hours is less than the amount of change that goes into a build over eight hours, so it's easier for us to debug the problems and identify the issues if we have less time between our CI builds. So utilizing cluster pools allowed us to move away from reusing clusters, because reusing clusters is a horrible idea, the reason being that you leave artifacts around. You leave artifacts around, and there are so many objects in Kubernetes.
B
Right, and the added benefit, because to me time was the number one issue, is the ability to spread this out across all of our supported test matrix. We have an enormous test matrix: we've got to support four different y-versions of OpenShift, plus we have this kind of, let's say not well articulated, z-stream support. Is it z minus two that we support? Maybe we support z minus ten. We don't really know, and that's one of those things that, at least from a CI/CD perspective, is very hard to test, yeah.
B
Yep. So we can stand up these cluster pools with every supported version of OpenShift, all the way down to the z releases, on every cloud, and we can randomly select clusters from any of these cluster pools to run our CI tests on. And we run, you know, eight CI tests a day.
B
So, statistically speaking, over the course of a release we're hitting every single version of OCP that we need to test before we ever go out the door and release it. So that is a time constraint. Now, to Josh's point...
D
I'm not sure; that's a different podcast.
B
Right, we won't go there. But yes, ultimately it saves time, it saves money, and it reduces complexity, and all of these things add up to increased velocity. And we have seen a huge increase, not only in my CI/CD team's productivity, but in our velocity and our ability to contain all these different CI processes.
B
That, frankly, everybody is relying on us to contain, and not just the developers. Ultimately we're responsible for making sure that everything has been tested and thoroughly evaluated before it ever releases to the general public, and it's something we owe our customers.
A
One addendum, Tim: we run 24 CI builds a day, since we're cross-geo, so it never sleeps. It runs on weekends as well.
A
That also gives us another point: the scalability of this system is just impressive. Before, when we needed to add a new version to our cluster pools... and I realize now I completely forgot to, so I'll click over here real quick and do a quick overview. If we need to add a new OpenShift version to our represented cluster pools, it is quite simply as easy as adding a cluster image set.
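A ClusterImageSet is a small, cluster-scoped resource that simply names an OpenShift release image. As a sketch (the release image below follows the public OpenShift release-image format; the exact tag is an illustrative assumption):

```yaml
# Illustrative ClusterImageSet: registers an OpenShift release image
# that ClusterPools and ClusterDeployments can reference by name.
apiVersion: hive.openshift.io/v1
kind: ClusterImageSet
metadata:
  name: openshift-v4.6.1        # referenced via imageSetRef in a pool
spec:
  releaseImage: quay.io/openshift-release-dev/ocp-release:4.6.1-x86_64
```

Because it is a global (cluster-scoped) resource, applying it requires admin rights, which is why the demo switches to the kubeadmin user before running `oc apply`.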
A
One thing of note: these are global resources, so the admin on the cluster will have to provide them, which is really a good thing, because these are what is used to provision these resources, so you kind of don't want that going awry. So we're going to go ahead and log in as the kubeadmin user, and let's just apply this; we'll see that, and it'll make us a new cluster pool. And this is how we're able to scale; this is what we do.
A
Every release OpenShift puts out, like 4.6.1: I went through this process to add 4.6.1 for GCP, AWS, and Azure to our CI matrix. So it's as easy as just doing an `oc apply` for, let's see here, our cluster image set, and now we can see that we have a cluster image set right here. Go ahead.
B
Yep, and so it's super easy to set this up and use it. I think the documentation on the Hive site is not quite there yet, but I think there's a PR coming for that real soon. But I was able to go in and figure this stuff out without any documentation, just by looking at the CRs, which you can find in that Git repo.
F
No, actually, I think I was only promised 10 minutes, so I was looking at the clock. It's all right, we're all good. And so, yeah, I kept mentioning cost, and...
G
Cost comes up on day two as well, so I'm going to talk a little bit about how we're leveraging, again, the hibernation function that Hive makes very easy to access and that takes care of all the heavy lifting for us. Really, you or I, or we as ACM specifically, are just left with creating the business logic around how and when we want to use it. And so, as Gurney was saying, our developers come in.
G
They get these clusters, and for us, since we are a cluster provisioning, automation, and day-two management product, everybody deploys the ACM hub onto these clusters that came out of the pool. The next thing they do is spin up three or four or six (hopefully not too many) additional child clusters out of that ACM. So even though ACM was asleep and we had been saving money, we power it up, bring it in, and they get it quickly.
G
They've suddenly created a bunch of clusters out of ACM that are now running, and running 24 by 7 starts to add up across our three cloud providers. So for the problem that gave us, we came up with two different solutions. One is an opt-in model for hibernating, where I can curate a list, as an example, of clusters that I would want to hibernate.
G
The other is an opt-out model, where we try to hibernate everything in ACM by default, because developers may not remember to curate their labels and so on, so we will put everything to sleep. Without further ado, let's take a quick look at some of these and how they work. First we'll do the opt-in model: we use a GitHub scenario and actually leverage the application model that both Tim and Gurney touched on at the beginning.
G
Let me just share the screen here. This is our cluster list page. We see I have an ACM hub that was provisioned out, and then I've gone and built a bunch of these clusters with Hive. I can build across clouds; we support five platforms out of the box in the UI (sorry, I'm going to do a little plug here): Amazon, GCP, Azure, VMware, and bare metal.
G
As well, you have OpenStack, which you can still use through the Hive APIs and CLIs. What we wanted to do was come up with a way to do a curated hibernation of these systems, so we leveraged our applications. We say "applications" and you think about running a web server, or, as in the demo we did last time, the Pac-Man application, but really, subscriptions and our application model will let you work with any Kubernetes resources.
G
So this is the open-sourced one; it's available today. Actually, I'm in the fork, but it came from the open-cluster-management cluster-hibernate repo, so you can fork your own copy. You can put it into private mode if you want to secure it, but it allows you to...
G
That's only about 30% of the time you would otherwise be running a cluster if you left it on 24x7, so you can see it's a pretty big cut: a 70% savings. And again, if you need to bring it back, say I come in on a Saturday and I want to bring the system back online, it's just a matter of flipping that state. We'll use the visual web terminal here just to show what happens behind the scenes when the subscription applies the hibernate.
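The 30/70 split is easy to sanity-check. Assuming, purely for illustration, a cluster that only needs to run during a ten-hour weekday window, the uptime works out to roughly 30% of a 168-hour week:

```python
# Back-of-the-envelope check of the hibernation savings claim.
# The 10-hour, 5-day window is an assumed schedule for illustration.
HOURS_PER_WEEK = 24 * 7              # 168 hours if left running 24x7
running_hours = 10 * 5               # assumed business-hours usage
uptime = running_hours / HOURS_PER_WEEK
savings = 1 - uptime
print(f"uptime ~{uptime:.0%}, savings ~{savings:.0%}")  # ~30% up, ~70% saved
```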
G
I'll pull the Git up here, and we can see we have a couple of these for the different clusters. Then it's just a matter of modifying a single parameter, the power state, which we'll find down here. It's Running, so you can make the change; even here I can make the change directly, moving from the Running state over to a Hibernating state, and the system will make the switch.
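Concretely, that single parameter lives on the Hive ClusterDeployment. A minimal excerpt (the cluster name and namespace are placeholders) showing the field being flipped:

```yaml
apiVersion: hive.openshift.io/v1
kind: ClusterDeployment
metadata:
  name: demo-cluster        # placeholder
  namespace: demo-cluster   # placeholder
spec:
  powerState: Hibernating   # set back to Running to wake the cluster
```

A one-line oc patch of spec.powerState achieves the same switch from the CLI.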
G
We'll click on the hibernating one and click edit. The idea here is that the subscription is leveraging what we have, the time windows. I can pick the time of day when I want to execute the change on the cluster. In Git I defined all of the clusters that I wanted to hibernate, and at that time it's going to apply those to the system and put them all to sleep. In this case, at 7 p.m. it's going to change all of those power states to Hibernating.
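A hedged sketch of what such a subscription could look like, using the application model's time-window support; the names, channel, time zone, and hours are placeholders rather than the demo's actual values:

```yaml
apiVersion: apps.open-cluster-management.io/v1
kind: Subscription
metadata:
  name: hibernate-clusters      # placeholder
  namespace: cluster-hibernate  # placeholder
spec:
  channel: cluster-hibernate/git-channel  # Git channel holding the powerState changes
  timewindow:
    windowtype: active          # only apply resources inside this window
    location: America/New_York
    daysofweek: ["Monday", "Tuesday", "Wednesday", "Thursday", "Friday"]
    hours:
      - start: "7:00PM"
        end: "7:30PM"
```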
G
So that was the opt-in, where you're curating a set of clusters that you want to turn off.
G
The other one we created because Gurney asked me: would this work if the developers just created a whole whack-load of clusters and didn't take any action? The answer is no, because this was an opt-in model. So we built a very quick opt-out model, in which case we hibernate everything unless you opt out by putting a label on the cluster that says to skip it. Again, this one is available in the open source as well. And I will admit, I had never had an excuse to use a Kubernetes CronJob, and this seemed like the perfect opportunity, since all Linux developers will be familiar with cron jobs.
G
We created a very simple little app that runs in our image, I should say, that's able to reach in with a very limited service account, read the ClusterDeployment objects, and make patches to them. It doesn't even need permission to create ClusterDeployment objects; it's just a very limited set of list, patch, and get on the ClusterDeployment objects that come from Hive.
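A sketch of the pieces just described: a role limited to get, list, and patch on ClusterDeployments, and a CronJob running under a service account bound to it. All names, the schedule, and the image are hypothetical placeholders:

```yaml
# Least-privilege role: the job can read and patch ClusterDeployments,
# but cannot create (i.e. provision) them.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: cluster-hibernator      # placeholder
rules:
  - apiGroups: ["hive.openshift.io"]
    resources: ["clusterdeployments"]
    verbs: ["get", "list", "patch"]
---
# Nightly CronJob that runs the hibernation app.
apiVersion: batch/v1
kind: CronJob
metadata:
  name: hibernate-all           # placeholder
spec:
  schedule: "0 23 * * *"        # an evening hour, expressed in UTC
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: cluster-hibernator
          restartPolicy: OnFailure
          containers:
            - name: hibernator
              image: example.com/cluster-hibernate:latest  # placeholder image
```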
G
In the evening, Eastern time, it goes through and shuts off all the clusters in that ACM hub. The idea is that we're going to start preloading this into the clusters we've pre-provisioned with the cluster pool, so that each evening, when a developer goes home for the day and forgets to turn these off, and they had three of them, I'll throw a number out there:
G
It reduces the cost to under 250 dollars, so it's a pretty significant savings. And the flip side, to make this easy: if I do need the system back (and again, if you've checked out the Git, we have a couple of different make commands), we created some manual jobs that will let you bring them online or offline as well. You can do something as simple as run a make running, and it will go off and launch the job.
G
With the opt-out model, you just have to create a label on the cluster in ACM and the job will skip it. Once in a while we do have a cluster that someone creates that they need to leave long-running, and they have that opportunity. We made both of these open source so you can play with them: one is more a demonstration of how to use our subscription application model, while the other one is just a pod.
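The opt-out decision itself is tiny: visit every cluster and hibernate it unless a skip label is present. A minimal sketch in Python; the label key and value are hypothetical placeholders, not necessarily what the open-sourced job uses:

```python
SKIP_LABEL = "hibernate"   # hypothetical label key
SKIP_VALUE = "skip"        # hypothetical label value

def clusters_to_hibernate(cluster_deployments):
    """Return the names of clusters the opt-out job should put to sleep.

    Each entry mimics a ClusterDeployment: a dict carrying metadata.name,
    metadata.labels, and spec.powerState.
    """
    targets = []
    for cd in cluster_deployments:
        labels = cd["metadata"].get("labels", {})
        if labels.get(SKIP_LABEL) == SKIP_VALUE:
            continue  # explicitly opted out: leave it running
        if cd["spec"].get("powerState") == "Hibernating":
            continue  # already asleep, nothing to do
        targets.append(cd["metadata"]["name"])
    return targets
```

The real job would then patch spec.powerState to Hibernating for each name returned.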
D
I know Gurney's been peppering your team, Devon, with a handful of other enhancements to bring it all together. Looking ahead, ACM delivers Hive within our product, and we've actually had a couple of customers who had been leveraging Hive in the open come forward and say: oh, now I can get RHACM, have full support, and have all the goodness (the baked-in application, policy, and day-two config, everything we handle in the full breadbasket of the feature set) with the Hive functionality under the hood, knowing they can drive these cluster pools.
D
They can drive hibernation, and they can drive OpenStack, which is already in Hive at the API layer. So all that bucket of goodness is there, ready to rock, ready to go.
D
We look to your team, the OpenShift installer, and what's going on at the ground level of infrastructure management as kind of giving us signals. But now it's coming full circle: since we are consumers of it and using it in-house, we're able to drive enhancements back to improve things for the CI/CD flow. Anyway, it's been cool to work with your team, to see this develop out in the open, and to get value from it right away.
D
I know we've got to be cognizant of the time here, because the Zoom call is going to self-destruct in four minutes, but we've covered a lot of ground. Chris, I want to get back to you for a minute: did we miss anything in the chat, or is there anything that's sparked in your mind that we should cover?
C
If there are any questions that come in, I can get them answered after the fact, folks, so feel free to email me, cshort at Red Hat, and I can get the various people plugged in and get you the right answer if you need it. For now, it looks good; I think we're all good.
D
Yeah, on the management tool here: you know, Tim, your points were spot-on. I want to give you a minute just to recap where we're going with CI/CD and how you see this use case evolving over time. Give us some insight into how we're building this for our squads to consume internally, and into broader adoption inside of Red Hat.
B
Yeah, that's a good point. We've been using it with CI/CD, specifically around our CI canary-testing processes, for about two months now, with great success.
B
There have been a few hiccups here and there, but we've been submitting issues to the Hive team and they've been very quick about getting those fixed and resolved for us, and at the moment we're pretty happy with where we're at. We have seen a significant cost savings. Like I said, what we were doing before was leaving these clusters up and running and reusing them; now that we've switched over to Hive, they're not up and running all the time, they're hibernating, so we've seen a significant drop in our cost.
B
And that's just from the CI/CD processes. We are rolling this out to the entire development org for ACM (not all of Red Hat, just ACM), which is about 60 developers spread across about 10 different squads. Each squad has its own AWS account, its own GCP account, and its own Azure account for cost control, and those folks will all be using a single cluster, logging in with their GitHub credentials.
B
It's all authenticated against our GitHub org, and each squad shares its own namespace, with a namespace per cloud, and they can create their own cluster pools. How they manage those clusters is completely up to them. We're giving them the control they need to manage their own costs for their own development needs; we're basically empowering them, and I think this is very much a use case that most customers will have.
B
You may run production on-prem in your own data center, but you're likely using the cloud to do development, and you certainly want an easy way to stand up the clusters you need and an easy way to control your costs.
C
Yeah, very well said. Okay, folks, we're going to be stream-jumping here; the next stream will be doing some chaos testing. If you want to take what you've learned here and apply some chaos testing, which I feel works very well with ACM because you can spin things up and down very quickly, join us here in 30 seconds. Thank you all for joining today, and thank you, everyone in chat.
C
And, oh, my Twitch started. Yeah, you are a Twitch star; welcome to it.