From YouTube: Kubicorn + Cluster API Demo
Description
Quick demo of a Kubicorn cluster on AWS with a cluster API controller courtesy of Kris Nova
Hey, what's up everybody. I'm going to be doing a quick demo of Kubicorn with the cluster API for SIG Cluster Lifecycle, the cluster API folks. I gave a demo from Denver last week or the week before, and we lost the recording due to some video technical difficulties doing it live. So I'm redoing this video to give a quick and dirty demo of the user experience of deploying a cluster API controller with Kubicorn from the ground up. So, to get started.
There are a few of these in open source already; anyway, here's one for Amazon. So the first thing we want to do is create the concept of a cluster. We want a clean state store, because we have nothing running yet, so we're going to go ahead and delete the _state directory. The next thing we're going to do is compile a fresh copy of the kubicorn binary from master, so we can do a git status — and we're not even in the right repo.
Let's cd into go/src/github.com/kubicorn/kubicorn and do a git status. Okay, I've tweaked a few things, and we can just reset hard — we want to reset hard to origin/master. Now let's do a git status, remove this machine file (we're going to recreate that file later), and one more git status for good measure. Good, we're up to date with origin/master. So now we can make all and drop a new kubicorn binary into our path — kubicorn — and all right, freshly compiled binary. Anyway.
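For folks following along, the build step boils down to something like the sketch below (repo path and make target are as spoken in the video; your GOPATH layout may differ):

```bash
# Sketch of the build-from-master step, assuming the repo lives under GOPATH
cd $GOPATH/src/github.com/kubicorn/kubicorn
git status
git reset --hard origin/master   # throw away local tweaks, as in the demo
make all                         # compiles kubicorn and drops the binary into your PATH
```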
The first thing you want to do is kubicorn create. We'll give it the name of the cluster, which we're going to call rad-demo — I use that name a lot — and we're going to say the profile is equal to caws. Before we run this command, I want to show folks that you can actually go and see the different profiles you can pick. Maybe you do a kubicorn create -h for help, and here on the left you can see we have this list of profiles.
A
These
are
all
the
different
available
profiles
and
valid
streams
you
could
pass
in,
so
you
could
do
PA,
WS
PC
do
P
do
and
these
kind
of
explain
the
shorthand.
So
these
are
just
easy
convenience
flags
for
you
to
easily
define
whichever
profile
you
want
and,
of
course,
if
you
want
to
write
your
own
profile,
you
certainly
can
as
well.
Profiles are stored in the profiles directory, anyway. In this case we did caws, which is this one here. We also could have done controlleraws, which would have been a few more keystrokes, so we just went ahead and did the shorthand: caws — controller, that's what the c stands for, on Amazon. Okay.
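As a rough sketch of the commands discussed here (the flag spellings are as I heard them in the demo — check `kubicorn create --help` on your version before trusting them):

```bash
kubicorn create --help            # lists the available profile shorthands
kubicorn create rad-demo -p aws   # plain AWS profile, no controller (alternative)
kubicorn create rad-demo -p caws  # "c" = controller profile, on AWS (what this demo uses)
```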
So let's run our command: kubicorn create rad-demo -p caws. Rad-demo exists — we nuke the state store and try this again. I just did this on stage in Vancouver yesterday, and now I'm back in Seattle re-recording it for folks at home; that's why rad-demo is already there. Anyway, let's try to create this again, and poof: _state/rad-demo/cluster.yaml has been created.
We can cat out this YAML file and actually see what's going on here, and you'll notice that it's a very poor implementation of the cluster API, because this is our first implementation in Kubicorn. All we've done is completely abuse the providerConfig field declaration here: we've crammed the existing kubicorn API into providerConfig and effectively defined nothing else in our cluster API spec whatsoever.
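The generated state file looks roughly like the hypothetical sketch below — the field names come from the cluster API types of the time and from kubicorn's own API, so treat this as illustrative only:

```bash
cat _state/rad-demo/cluster.yaml
# kind: Cluster                 # cluster API object (illustrative shape, not verbatim)
# metadata:
#   name: rad-demo
# spec:
#   providerConfig: |           # the entire legacy kubicorn cluster API, serialized
#     ...                       # and stuffed into this one field
```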
The goal of the project is to get this working — which it now is — and then we can start to cherry-pick directives up to the higher-level master struct called Cluster further downstream. This is all hinging on what open source decides to do, and from the looks of things it seems like we might be coming together to write a tool to do all this stuff anyway, so it's exciting to see where all this goes. But here you can actually see it and how it's supposed to work, for now. Okay.
So now we do a kubicorn apply rad-demo. Before we run that command, I have gone ahead and exported a couple of AWS environment variables that you want to declare before you do all this, so you can authenticate with the AWS API. I'm not going to share mine on screen since we're recording right now, but there is documentation in the kubicorn GitHub repo that explains which ones you need to export in order for this command to work. So we're going to go ahead and run our apply command.
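For folks at home, the variables in question are the standard AWS SDK credentials; a minimal sketch (values redacted, obviously):

```bash
export AWS_ACCESS_KEY_ID="AKIA..."   # redacted
export AWS_SECRET_ACCESS_KEY="..."   # redacted
kubicorn apply rad-demo              # now the apply can authenticate against the AWS API
```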
kubicorn apply rad-demo — and that's going to start to take action in Amazon. So right away we can already see that we're doing stuff in Amazon, and... we ran into an error. Right away we started to take action in Amazon, and this is actually a really great example of Kubicorn behaving atomically: we started to create and mutate infrastructure, and something went wrong. In this case it looks like the old rad-demo internet gateway was already created and not deleted, which is probably a reflection of — I'm going to guess — a leftover VPC that we can go and manually delete. Anyway, Kubicorn said "oh no, I can't create this," and it's going to undo everything that we've done. So here you can see we created a security group called abb3-0 and some other characters, and one of the first things it did, sure enough, was undo that creation and delete it afterwards.
So before the program exited, it actually started to go back through the map, realized something went wrong, and then walked itself back the other way to undo whatever mutation it had done. This is going to be handy because we're going to be running the Kubicorn source code as a controller moving forward, so having this sort of atomic guarantee allows us to not really care about stuff like this at the higher-level construct where we're plugging it into the rest of the controller code. So anyway, let's go delete this VPC.
Then we'll try this command again. So, go to Google Chrome — well, go to our instances. We don't want instances, we want to go to VPC, and I bet we're going to see — yeah. We have three VPCs here, two of them called rad-demo, and aha, there is our problem. Naming clusters the same thing can sometimes get you into trouble, because we actually use the rad-demo string to identify a lot of the resources.
This is something you have to concern yourself with whenever you're doing any type of infrastructure mutation: how do we look this stuff up later? How do we track it? What becomes our source of truth — is it going to get stored in the Amazon API? Do we track it externally in a database like etcd? What does this whole process look like?
In the case of Kubicorn, and kops for the most part, we actually just give all the resources convenient names, and we have a naming convention so that we can look them up at runtime as needed. This comes back to what just bit us: we named resources the same name and the program can't figure out which one it's supposed to use. So here we're deleting both of our rad-demo VPCs, and we'll run our kubicorn command again as soon as this is done.
It's always weird when you're talking to yourself, because it's just like, I hope I'm making somewhat logical sense right now. Anyway, clear our screen, go back — looks like our VPCs are deleted — so we can pull up our terminal here and we should be able to run our apply command again, and this time we shouldn't get any errors; it should actually go through. Yeah, as expected, it started to create infrastructure for us. So again, the convention here is that any time we mutate infrastructure, which is what we're doing right now, we want to signal that with a check mark. That lets us know that something behind the scenes has changed, and if we actually go look in the Amazon console, we should be able to see stuff happening behind the scenes. So let's go check out EC2 and see if any instances are coming up. It doesn't look like we have any instances — oh, we have one that's pending, so yeah, we're starting to create instances.
So the first thing we do is create what we call a master node. If you were paying attention on the cluster API call today, we learned that masters and nodes are kind of this weird set of labels that mean different things to different people, so I think moving away from calling a particular type of infrastructure, or server, or virtual machine a "master" is something we're going to see more and more of in the upcoming days in Kubernetes. I think, instead of calling it a master, maybe we call it, you know, the API server node, and we just know that this is a piece of infrastructure that's going to run an API server component. Maybe we also have an etcd node, and we know that's a very special kind of virtual machine that runs etcd. Maybe we don't care, maybe they're all just nodes. In this case, following the older primitives, we actually have a master, and what a master means to Kubicorn is that it's going to run the Kubernetes control plane.
So, I mean, this is Kubernetes 101 stuff: a controller manager, scheduler, API server, etcd — all the things that sort of bring Kubernetes to life and make it work. That's going to run on this master node. In addition to the master node, we have one worker node, and this one worker node is a little bit special, because that's going to be the node that allows us to schedule the first workload, which is then going to create other nodes.
So after this new node comes up, we're going to create a Kubernetes deployment and deploy our controller, which will then get scheduled on this new node that has yet to come into existence. That controller will then create an additional subset of nodes, and then we can scale up and down just by updating a few CRDs from the command line using kubectl, which we'll do as soon as the rest of the cluster comes up.
So let's refresh and see if our node is coming up yet. Aha, they're both initializing, and I bet if we go back here — you always know that Kubicorn worked well when you see this friendly little green line at the bottom of your output that says yes, everything worked as expected, and you can now actually use the kubectl command line tool to interact with your Kubernetes cluster. So, kubectl get nodes, and we should see —
Aha, we have a master, which is defined by a role, and we have one worker node that we can schedule workloads to. Also, if we look here in the output, we see this snippet of code. Now, we're only going to see this snippet of code if we run a profile that has a concept of a controller; older Kubicorn installations and deployments did not deploy a controller, so we would have had no reason to define these CRDs or to even create a deployment to run our controller in the first place.
So the first thing we did is deploy a cluster controller, and the controller itself takes about 30 to 45 seconds for the kubelet to actually pull down the image and get the thing up and running. It's a fairly big container image, in part because it has pretty much every library of the Kubernetes source code inside, so even pushing this thing up to a Docker registry or container registry takes a little bit of time.
So as that's pulling down in the background, we also went ahead and declared the name of the cluster — this is a cluster CRD, so we can get clusters, and it's called rad-demo. We also declared our four machines, zero-indexed of course, so 0, 1, 2, and 3, totaling four machines, which should look familiar to anybody who's ever looked at a StatefulSet. And then we have our two sort of special machines here on the side, the ones I talked about originally, which serve as the preliminary cluster.
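Assuming `k` is aliased to kubectl as it is in the demo, the objects described above can be listed like this (the resource names depend on how the CRDs were registered, so take the plurals as assumptions):

```bash
kubectl get clusters    # the cluster CRD: rad-demo
kubectl get machines    # rad-demo-node-0 through rad-demo-node-3, plus the two special machines
```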
So if we do a kubectl get nodes again, we can see that we're already starting to bring up our first piece of infrastructure here, and I'm going to try to jump onto this piece of infrastructure — a virtual machine, an EC2 instance — and show you what's going on behind the scenes with kubeadm. kubeadm has a lot to do with everything that's happening here.
So we were able to establish a connection with the API server here on port 443, and we were able to use a kubeadm join token to actually join the rest of the cluster. How that whole process worked is that Kubicorn randomly created a kubeadm join token at runtime when we did our kubicorn apply, plugged it into the Kubernetes master, and then all the nodes had a concept of this token — defined not as a secret, as it probably should be, but actually written to disk.
If we actually want to go look, we can change directory to — let's see, kubicorn, I think, is the directory. In here we have this cluster.json file, and we can cat this cluster.json and pipe it into jq for folks at home, and we can see that this is a JSON representation of that YAML file we looked at earlier. The reason we're doing JSON here is so that we can easily query directives out of it.
In this case we're going to be finding JSON within JSON, and we're going to be pulling out our kubeadm token, which is defined somewhere in this big blob below. So if we wanted to, we could actually — you know what, let's see if we can do this — pipe to jq and go into spec.providerConfig, and see if we can get that out of there.
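In shell terms, the attempt looks something like this (the jq path is my guess on screen, not necessarily the right one):

```bash
cat cluster.json | jq .                          # JSON view of the whole state file
cat cluster.json | jq '.spec.providerConfig'     # first guess at where the token lives
```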
Ah, no wonder — the spec lives under the cluster API object, not at the top level. Anyway, I don't even want to hack on jq right now; if you want to go figure it out, you can, but it's JSON within JSON, everything's in the providerConfig, and you could actually go and query your token out of that. We can see an example of this in this directory — let's see if I get this right on the first try: /var, I think /var/lib/cloud/instance/scripts, yeah.
So if we cat out this part-001 file here, this is the bootstrap script. It's got the /etc/kubicorn/cluster.json — it actually cats it here, cat << EOF, and writes it out to that file we just looked at. Then you can actually go and see all of the apt-get installs, and then here you can see — this is the token command I was just talking about.
This is the actual command that pulls the token out of that JSON blob we just looked at. You can see that it restarts the kubelet service and it does a kubeadm join — so pretty straightforward stuff. The control plane has already been configured on the master, and this just does a kubeadm join and becomes a node in the cluster.
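Paraphrasing the bootstrap script rather than quoting it, the node-join step boils down to something like the sketch below — the jq path is a placeholder, and the bare `--token host:port` form is the older kubeadm syntax used around this time:

```bash
# Hypothetical condensation of the part-001 bootstrap script
TOKEN=$(jq -r '<path to the kubeadm token inside providerConfig>' /etc/kubicorn/cluster.json)
systemctl restart kubelet.service
kubeadm join --token "${TOKEN}" <master-ip>:443   # join the control plane kubicorn already configured
```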
So there are a lot of different ways we could look, as a cluster API working group, at managing these secrets. Clearly, passing around a kubeadm token in a blob of JSON within JSON is not our most viable alternative for solving this problem. But the point is, we can now start to have these conversations and we can actually see how this stuff is working, so we can start to figure out best practices and figure out how, as a community, we want to solve these problems.
Good thing Kubernetes is in place and we can start to borrow these existing primitives, like secrets and config maps, to make this whole process a little bit easier.
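For example, one obvious borrowable primitive — purely illustrative, not what kubicorn actually did here — would be to park the join token in a Secret instead of a JSON blob on disk:

```bash
# Illustrative only: store the kubeadm join token as a Kubernetes Secret
kubectl create secret generic kubeadm-join-token \
  --namespace kubicorn \
  --from-literal=token="abcdef.0123456789abcdef"   # placeholder token value
```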
Okay, so enough of me ranting about what we should do, or me ranting about what I did the wrong way. Okay.
So let's go back to the AWS console here, and you can see that we have our original two nodes, our master and our node, and 0, 1, 2, and 3 all up and running. So we jump into our terminal.
We should be able to do a kubectl get nodes, and poof, you can see we have nodes coming up over the past couple of minutes — one that's eight minutes old, then one that's seven, one that's five, and a couple of others at four. They've all come up at different times, but they are nodes in our cluster. So now let's actually look at our pods. We can do a k get po in the kubicorn namespace and see that we have a controller running as a pod.
So let's get our logs: paste in the name of our pod, tell it to look in the kubicorn namespace, and we'll go ahead and follow those logs so we can see what's going on here.
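Concretely, that's just the following (the pod name will differ on every run):

```bash
kubectl get pods -n kubicorn
kubectl logs -f <controller-pod-name> -n kubicorn   # follow the reconciliation loop output
```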
So this is just a really simple controller. Every time you see it sort of pause, it's sleeping for a second, then it iterates through the loop again, sleeps for a second, and iterates through the loop again. The one-second sleep is basically there for me to demonstrate that it is a loop.
We don't really need it for anything. In fact, if we really wanted to crank this thing up and see how fast it could go, we could just let it loop indefinitely, but for the demo we have it set to a one-second sleep per iteration. Right now everything is reconciled as we told it, so it doesn't really do anything; it's basically a no-op every iteration, and we see it query the AWS API, see what instances are there, and say:
"Okay, everything is as expected, I don't need to take any action." So here is where we really see the value of the controller. Let's go in and delete one of these virtual machines — cool, shutting down. So if I refresh the console here, you can see the controller has already rescheduled, for lack of a better term, an EC2 instance behind the scenes, and I bet if we go back and scroll up in our logs, you'll be able to find it somewhere in here.
It was probably a while ago, because this thing's going every second, but somewhere in here is going to be the controller saying: aha, we no longer have an EC2 instance in a ready state, we should go ahead and create one. I'm not going to try to dig through it now, but you can see that it did in fact work. So this is pretty cool, and we can actually go through and take this one, our second one, our first one, and our zero, and terminate all of these.
What's interesting is that the scheduler has probably already scheduled the controller pod on one of the new nodes, meaning that, without draining the node, we just completely nuked the underlying infrastructure that was running the controller that was managing other infrastructure. So the beauty of having this one special un-indexed node is that it ensures we always have a place to recover from a disaster like this, and somewhere to begin scheduling the controller to scale our cluster moving forward. So if we look right here — okay, so the scheduler did not reschedule it. Or maybe it did.
Let's see — it doesn't — okay, yeah, so the logs are still streaming, so we still have a socket open there. The scheduler had not rescheduled it onto a different node; I think I might have forgotten to kill it. Usually I would kill the pod and let the scheduler pick it up somewhere else. Let's go back, refresh, filter on instance state, and you can see that rad-demo-0 is now running and initializing.
Rad-demo-1 is now running and initializing too. The way this controller works is again sort of modeled on StatefulSets: it says, let's create our first node and not move on until that node is either ready, happy, and registered with the API server, or something goes wrong and we undo it. That's the atomic guarantee we talked about earlier, which I think is handy, because you're not going to be able to scale up or down unless you can get to the next step in line.
So in this case it looks like 0 went off without a hitch, and 1 is coming up and running. So that's one good sign; we'll soon see another two, and then ultimately three. So now let's look at some CRDs. We'll kill our logs, we can k get machines, and you'll see that we have this rad-demo-node-3. So what I can do here is k get machines rad-demo-node-3 -o yaml, pipe it to our friend pbcopy, echo, and we'll paste that out.
Okay, so rad-demo-node-3 — let's call this one rad-demo-node-4. Cool, and we can k apply -f our new machine file, then k get machines and see that we have this new machine resource declared for the past couple of seconds. If we go back to our EC2 console here, we can probably filter on instance state equals running.
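The scale-up trick here, sketched out (the file name is just whatever you paste the edited manifest into; pbcopy is the macOS clipboard helper used in the demo):

```bash
kubectl get machine rad-demo-node-3 -o yaml | pbcopy   # copy an existing Machine manifest
# paste, rename metadata.name to rad-demo-node-4, save as newmachine.yaml, then:
kubectl apply -f newmachine.yaml
kubectl get machines                                    # the new machine shows up within seconds
```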
Imagine what would happen if we didn't ensure that — imagine if, say, the kubernetes cluster tag wasn't defined. Well, for one, the cloud provider would break, so we would have some issues with the AWS cloud provider, and for two, we wouldn't be able to query for it; we would effectively have abandoned resources. So atomic infrastructure mutations are really, really important when we write one of these infrastructure controllers. So we have rad-demo-4 up and running, and we should be able to run a kubectl get nodes.
And see that, yeah, we've had this one up for the past 40 seconds. So we can actually edit this — where's my Emacs command — the machine YAML again; this one will be node five, we'll create it, and we'll do our apply command again, k apply. Now if we get machines, there should be zero, one, two, three, four, five — six nodes defined. Go to our Amazon console, refresh, and we should see five coming up any minute here.
It looks like four is still initializing, so our controller still hasn't moved on to restart the reconciliation loop yet, because it's still in an initializing and not a ready state. In this case we're using the Amazon API to figure out if an instance is ready or not. And then we can do a really awesome disaster recovery scenario here: if we get the Kubicorn pod in the kubicorn namespace, we can go ahead and delete this pod.
Kubernetes will reschedule this pod on a different node. So if we do a get po -n kubicorn -o wide, yeah, we can actually see that it's now coming up on this node here, the one ending in 157. So if we go and look here — I'm looking over on the right, at the private IP space, scanning down through the addresses — there it is. Okay, so our controller pod was scheduled onto rad-demo-node-4.
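The disaster-recovery commands from the last minute or so, for reference:

```bash
kubectl delete pod <controller-pod-name> -n kubicorn   # kill the controller pod
kubectl get pods -n kubicorn -o wide                    # watch it come back up on a different node
```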
So let's go ahead and — you know what, we'll delete rad-demo-4, and hold on, we'll delete three and two as well. Let's say something horrible happened: there's bad weather in Amazon, an unruly software engineer came in and started trying to delete EC2 instances, there was some bad software running somewhere — who knows what happened, but we're going to lose the node that our controller is running on, so we'll terminate them. Already the machines are shutting down. If we get our pod here — we want to do this command — we can see that it's still trying to run it on the 157 node, so the Kubernetes API server hasn't figured it out yet; it actually hasn't dropped the container yet. Some sort of eventual consistency is happening, but it's going to reschedule the pod as soon as it figures out that that node has actually gone offline. So let's keep getting these pods here; we're looking for a different pod name as we do this.
We want everything to work, like, super fast. Okay, so if we look, the last couple of characters here end in 9m-something, whereas before we were 6cmj, so we actually have a new pod. It's in ContainerCreating, no restarts yet; the scheduler has recreated the pod, it's going to come up, it's going to see what's going on, it's going to fix the infrastructure, and kablam — the cluster is fixing itself both on the container level as well as on the infrastructure level.
Kubernetes to the rescue again. Deleting the cluster is actually kind of tricky, because the controller is running and will try to un-delete the cluster as you go through it. So the best way I have found to delete one of these clusters, until we can orchestrate this better in Kubicorn, is to delete manually, and there's kind of a timing element to this, so you've got to get the timing just right. You delete the node — the special node, the one that is not managed by the controller, that the controller ignores — and then once that's deleted, you delete all of the other nodes. Hopefully, if all goes well, the autoscale group won't redeploy the other nodes and you'll be able to bring down all the nodes in the Kubernetes cluster, which will take the controller offline, and then you can shut down the rest of the cluster in general.
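Summarized, the tear-down order being described looks like this (manual steps in the AWS console first, then the CLI):

```bash
# 1. In the console: terminate the special, un-indexed node (the controller ignores it)
# 2. Terminate the remaining nodes so the controller pod goes offline and can't "heal" them
# 3. Then tear down everything else kubicorn created:
kubicorn delete rad-demo
```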
So as we're doing all of that, we're going to run our kubicorn delete command: kubicorn delete rad-demo. And like I said, we've got to kind of time this thing just right. I'll select all instances in the running state and terminate them — can I terminate this one? those are already terminated — let's see what we've got here. If we refresh — okay, so we've got zero and one and five; the controller is probably in the process of coming back to life.
Oh yeah, it looks like it just created five here, so the controller is already kind of catching up to us, so we really need to be quick. So: select the instances, terminate, terminate all those, jump back into the command line as quickly as possible, and run our delete command — which is now deleting autoscaling groups, launch configurations, and route tables.
So clearly, a more viable solution would be to first delete the deployment that the controller is running in, and then have Kubicorn go in and delete all nodes matching certain criteria — probably triggering on that same key-value pair, the kubernetes cluster name of our cluster, as we do for everything else. Anyway, this is how the controller works in Amazon. The source code is horrible, don't look at it; we're going to rewrite everything in open source, but you actually get to see how it works, which is exciting, and I...