Description
Come learn how DRP (Digital Rebar Provision) can make provisioning your Kubernetes clusters easy. We can go from zero infrastructure to fully deployed Kubernetes with minimal effort. Additionally, we are focusing on strong "Immutable Infrastructure" capabilities, with integrations for node admission.
We discuss the architecture of DRP, immutable provisioning, Kubernetes deployment integration, and advanced concepts around node admission. Capping off the presentation is a demo of deploying multiple hosts and bringing up multiple Kubernetes clusters on those hosts.
Meetup group, I have a lot of experience in infrastructure technology dating back to, gosh, the late 70s. My equipment experience started when I was in the United States Marine Corps, running and operating mainframe computer equipment that was 15 to 20 years old at the time, and continued on up through UNIX systems, network engineering, network architecture, and systems architecture.
My experience spans a large gamut of the technology stack, which I think has really helped me with foundational provisioning technology solutions. RackN and Digital Rebar have been around in one form or another for quite a while. Basically, there is an eight-year history going back to Dell, where a project was originally founded to support very large customer deployments, and when I say very large customers, I mean typically on the order of 10,000 physical machines or more.
Through that process, they found that by applying strong automation patterns, a lot of those problems go away, and a lot of the provisioning systems in use today don't really operate around automation patterns in a modern, cloud-native way, which we'll talk about a little bit. We'll also talk a little bit about immutable infrastructure, which is one of the patterns that was a strong learning experience: the apply, rinse, repeat, or create, destroy, recreate sort of pattern.
But first, let's talk a little bit about immutable infrastructure. A lot of people have different ideas about what immutable infrastructure is and what it means to them, and for the most part, all of those different views are probably right. So what we like to do is level-set a little bit on what we refer to as immutable infrastructure and what it means to us. For us, immutable infrastructure is essentially replicating the cloud and container pattern, which is that create, destroy, recreate pattern.
So typically, in cloud or containers, you create an AMI, a qcow, a VMDK, or a container of some sort, you bake a bunch of stuff into those things, and you deploy it. How you deploy it varies a little, but the basic operation is: give me this thing, instantiate it, and make it whole. It's that create/destroy pattern: when I'm done with it, destroy it, and spin up a new version or a new revision with updated pieces and parts.
It should be noted that simply because we call it immutable, it doesn't mean that the image or the thing being deployed doesn't need to be configured. In an immutable provisioning environment, you still need to be able to do one-time configuration of basic runtime information. That might be things like your IP address, your DNS servers and NTP servers; it might be the cluster you should join; it might be some secrets that are injected into the application.
It might be a join command of some sort, but typically that's a one-and-done pattern at instantiation of the image or the service being created. It's not a continuation of patch, update, patch, update.
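That one-and-done idea can be sketched as a first-boot guard. This is an illustration only, not DRP's mechanism; the marker path and the configuration steps in the comments are hypothetical placeholders:

```shell
# First-boot, one-and-done configuration guard (illustrative sketch; the
# marker path and config steps are hypothetical, not DRP specifics).
first_boot_config() {
  marker="$1"              # e.g. /var/lib/firstboot.done
  if [ -f "$marker" ]; then
    echo "already configured"
    return 0
  fi
  # One-time runtime configuration would happen here:
  # set IP address, DNS/NTP servers, inject join secrets, etc.
  echo "configuring"
  touch "$marker"
}
```

Subsequent boots find the marker and skip configuration, so the instance never drifts into the patch-update treadmill.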
So what this basically means is that it shifts work to the left. Look at the dev, CI/CD, pre-production, prod pipeline, which is a standard pipeline that you see in most production environments; there are variations on this.
Instead of creating a production system, configuring it, and then update, patch, update, patch, update, patch (we'll talk a little bit more about that), there are a couple of different patterns that immutable provisioning supports. There's the traditional, package-based installation methodology, which a lot of people are familiar with when they talk about automating provisioning systems: something like using repos, kickstarts, and preseeds, the traditional define-what-I-want-configured methodology.
The more complex your environment gets, the more benefit you get from the immutable pattern. As an environment scales, and the more things can be different, the more variability you get in your environment, the more important it is to start shifting toward immutable infrastructure. If you have an application cluster of a thousand nodes and a hundred of them have drifted, because through your update, patch, update, patch, twiddle, tweak, change, fix-as-things-break process, you know, some of them inevitably fall out of sync.
You start having inconsistent behaviors: you have applications that stop responding the same way, and you have variability that's very hard to track down and figure out. Even though we try to apply best practices with configuration management tooling, it's inevitable that this drift happens, particularly the more you scale.
So with that, what is Digital Rebar Provision? Digital Rebar Provision is essentially a single Go binary. It's super lightweight. We are a hundred percent API-first: we do every enhancement or feature in the API, in Go, first, and we dynamically generate a CLI from the API. So we don't try to reverse-engineer the API by hand-building a CLI that consumes it; the CLI is dynamically, programmatically generated from the API. That's a very important distinction, because it also allows us to have an API resource path.
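Because the CLI is generated this way, every API object surfaces through the same command shapes. A quick sketch of what that feels like in practice (the endpoint host and credentials below are illustrative placeholders, not a real deployment):

```shell
# Point drpcli at a DRP endpoint (example values only).
export RS_ENDPOINT=https://drp.example.local:8092
export RS_KEY=rocketskates:r0cketsk8ts

drpcli machines list      # every object type gets the same verbs,
drpcli profiles list      # because the CLI mirrors the API resources
drpcli stages list
```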
The API gives you the ability to integrate with external systems, so you can use other tools to drive Digital Rebar Provision, and to drive provisioning workflows through what we call our composable workflow, or stages. Stages are what allow us to do a lot of the magic within our composable workflow. At the end of the day, we're a traditional PXE/DHCP provisioning service underneath the hood. We can't get away from that, because pretty much all infrastructure is baked around PXE and DHCP for automation and provisioning in one form or another.
We also provide a webhook and event system for consuming events coming off Digital Rebar Provision, so it's very easy to integrate into other solutions. It's also pluggable; I didn't note that here on the slide, but we can add plug-in functionality to extend the capabilities of Digital Rebar Provision in a number of different ways and directions, which enables it to integrate into environments much more successfully.
Like I said, it's a traditional DHCP service, where we have TFTP and DHCP to manage our PXE process. We have an HTTP server that sits over the top of the TFTP environment, and then we have our API service, which runs by default on port 8092 and handles all of the service's API interaction. Obviously, all of these ports are interchangeable, with the exception of DHCP and TFTP; you typically don't change those, because a lot of firmware is baked with assumptions about those ports. However, you can change them if you want.
There are three things that we currently rely on, which are 7-zip, BSD tar, and unzip. We're working on getting those pulled out of the dependency chain so that, ultimately, it is truly a single Go binary with no external dependencies. Those dependencies don't yet have fully realized Go libraries that we've found to support the capabilities we need.
As I mentioned, we implement composable workflow through stages. This is an example of a simple stage setup that might include a mix of the open source (blue) and RackN (green) components, and then, say, a customer-specific component. You can interchangeably mix and match pieces that you obtain from the open source Digital Rebar community content.
A
If
you
have
a
commercial
engagement
with
rack
and
you
could
in
mix-and-match
with
our
commercial
components,
you
can
create
your
own
components
and
you
can
put
these
together
in
a
workflow
that
allows
you
to
define
how
you
want
a
machine.
A
single
machine
to
drive
through
the
provisioning
process.
So in this example, read left to right, we have a discover stage, where a new machine comes up with DHCP, we inventory it, we register it, and we might send out some logging or a notification message.
We move forward to a RAID/BIOS configuration stage, where we'll take that inventory information: we might apply updates and ensure that BIOS versions are up to date, we'll set up the RAID configuration for a given machine based on its role, and we might configure the baseboard management controller for the IPMI function.
Then we chain next to the burn-in stage: a customer might have a burn-in workload they want to run to verify that CPU, memory, and disks are good.
Maybe a given application workload must return a specific performance level that we expect from a given SKU or configuration of a server; there are a number of variations that can happen there. Then, well, not last, we'll follow up with the installation of the target operating system, either through an image deployment or through a traditional package deployment, and that can be anything from Linux to Windows to anything you can imagine that can be automated for provisioning.
We might grant SSH access as part of the install path, then pass it to a post-provisioning stage, where we might note information back to a configuration management database or DCIM, or we might note information back to IPAM. That's where you can integrate and do interesting things, and at that point you would hand it off for production.
So that's an example workflow. Our workflows are very flexible: you can compose and dynamically choose various stages, put them together in different ways to meet your requirements, and mix and match them to fit your hardware and your goals for a given provisioning workflow process. And, like I said, API-first is very, very important. We very much believe that the API must be the first-class citizen, and because we generate the CLI from the API, that makes our CLI a first-class citizen too.
So there are no corner cases where the CLI doesn't cover what the API can do. Our UI, our UX, consumes the API. As a consequence, our UX does tend to fall a little bit behind the capabilities of the API or the CLI, but it's still a fairly useful and fairly broad, encompassing tool. And again, as I mentioned, we have standard webhooks and web events that we integrate with.
All of this means that it's very easy to do inbound or outbound integration. Inbound would be something like a configuration management service or a configuration management database that drives provisioning activities based on hardware inventory or hardware information, defines roles, and then drives provisioning workflows. Outbound would be, say, taking inventory and pushing it out to a DCIM environment, so you have inventory management with very fine-grained, detailed information about all of your physical server infrastructure.
Going back to that cloud create/destroy pattern: typically, in cloud and container provisioning solutions, you have a request for a given state, which is "give me a VM" or "instantiate this container," and then the return state of whether it succeeded or not. That sort of simplicity is what's been lacking, for the most part, in bare-metal provisioning, and that's what Digital Rebar does through our workflows and our stages: we create sort of a black box that the workflow sits in, where externally you can just say:
Just give me this thing and tell me when it's done. That's one of the primary goals of Digital Rebar Provision: to make provisioning fast, simple, and capable, and to hide the reset, install, config, test, join, all of those stages that might exist inside that black box. One of the things Digital Rebar Provision also does is coordination, and coordination is not orchestration; we call it coordination because we typically focus specifically on per-machine workflow.
Basically, the cluster pattern is this: there are a lot of services out there that have a master and minions, or a primary and slaves, or a leader and followers, whatever you want to call them, and you have somebody that's responsible for coordinating the activity of the cluster. When you bring those clusters up, they typically follow the same pattern, which is that there's some sort of secret the master holds, and the nodes joining the cluster must have it in order to join.
In this case, we do the coordination atomically. We know that we're going to have a master, or a process that must occur first, and until that process completes, we lock the rest of the cluster so that it can't do anything; it must wait on the rest of its provisioning activities until the master process is done. So in this example, we have a master which produces a secret. The secret gets recorded on the Digital Rebar Provision endpoint, and then the minions pull the secret from the DRP endpoint.
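That race-then-wait coordination can be sketched with an atomic filesystem lock standing in for the DRP endpoint. This illustrates the pattern only, not DRP's implementation; the state directory and the token value are made up:

```shell
# Atomic cluster coordination sketch: `mkdir` succeeds for exactly one
# caller, so the winner acts as master and publishes the join secret;
# everyone else waits for the secret to appear and joins with it.
claim_or_join() {
  state_dir="$1"   # stands in for the DRP endpoint's shared state
  node="$2"
  if mkdir "$state_dir/master.lock" 2>/dev/null; then
    echo "fake-join-token" > "$state_dir/secret"   # made-up secret
    echo "$node is master"
  else
    while [ ! -s "$state_dir/secret" ]; do sleep 1; done
    echo "$node joins with $(cat "$state_dir/secret")"
  fi
}
```

In DRP the "lock" is the atomic recording of the secret on the endpoint, but the effect is the same: exactly one master, with the minions blocked until the secret exists.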
They present that secret to the master, typically to do something like a join configuration. So this allows us to provide atomic coordination, not full-blown orchestration, of basic cluster patterns. If we take that and apply it to Kubernetes, we get this basic bootstrapping illustration that centers around kubeadm as the solution. We've got four nodes, and we start out with a couple of stages: we install the OS, and we install Docker.
These are all stages that we walk the machine through. There's a selected master: you can define which machine becomes master in advance, or you can just let the cluster elect a master, which we call a "race to master"; whoever becomes master first is the master. The master runs its kubeadm init, which creates a cluster token. The cluster token is recorded back in Digital Rebar Provision, and then the minions do a kubeadm join with that token, and they dynamically bring up a cluster.
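The kubeadm half of that flow looks roughly like the commands below. This is an outline, not the exact scripts DRP runs; the address, token, and hash are placeholders, and flags vary by kubeadm version:

```shell
# On whichever node wins the race to master:
kubeadm init                      # bootstraps the control plane, emits a token

# The token/join command is the secret that gets recorded on the DRP endpoint:
kubeadm token create --print-join-command

# On each minion, consuming the recorded secret:
kubeadm join <master-ip>:6443 --token <token> \
  --discovery-token-ca-cert-hash sha256:<hash>
```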
Later on, you can tie in an existing set of machines or dynamically expand the cluster using that kubeadm join process. So it allows us to create an instantly hydrated Kubernetes cluster with almost zero configuration aside from the basic setup on the Digital Rebar Provision side. Beyond that, we can automatically deploy Kubernetes from the master to all of the minions. Now, it's important to note that kubeadm doesn't really support high-availability patterns yet, so we don't support that yet. We get that question an awful lot, because people go, "Oh, this is awesome."
"I want this in production now; I want HA on my cluster." Well, until kubeadm solves that problem simply for us, we're not going to be doing that. We do have full-blown Kubespray Ansible playbook implementations that allow you to create full-blown HA clusters through the Kubespray pattern, if you want to do it that way, but that's outside the scope of this discussion.
Essentially, this gives us our token, and our cluster, very quickly through that cluster pattern. So, going back to that shift-left discussion: what does that really mean? Essentially you have the problem, which is an environment. You create and package a server image, you provision the server, you set up the initial configuration, you patch it, you patch it again, and you end up with a snowflake. The madness doesn't really stop at patch two.
You continue this process until eventually you have to replace the server, and by then you're hundreds, if not thousands, of patch iterations in. That's bad, because it creates snowflakes: eventually you get drift between large clusters, large fleets of servers. It just happens that some machines don't get updated in line with the patch process, or something goes wrong with the patch process, and you end up with these snowflakes in large estates. So let's take that same pattern and apply it to bare metal in an immutable way, using the cloud and container lessons.
So now we have our package-server-image, provision-server, initial-config flow, and a running server. Now we realize we need to apply some patches. Let's take those patches and apply them back to our original packaged server image; do our CI/CD, our dev test or pre-prod, our acceptance testing, whatever that process and pipeline looks like in your environment; and create version two of that packaged image. Again, when I say packaged image, that could be using traditional repos and kickstarts or preseeds, or it could be an actual image.
It could be a raw image or a tarball of the filesystem, whatever. Then let's apply that cloud pattern: destroy it and provision a new server. Now, technically, we might provision the new server, ensure that it's valid, up, and running, and then turn off the old server in the load balancer and turn on the new server in the load balancer.
Then we apply our initial configuration. Those steps vary from shop to shop, but essentially this gets us back to that cloud and container pattern, where we are applying patches to the left side of our workflow and not onto the right side, and we continue that process on and on, instead of patch, update, patch, update. A couple of pieces of information about immutable as we see it: there are a couple of different patterns. There's live boot, or in-RAM operation.
This is very similar to when you get a DVD or CD Linux distribution, put it in your CD or DVD drive, and boot the machine up into a live mode: that OS is running in memory. We also see this pattern with things like RancherOS or CoreOS; it's an emerging pattern with some distros in general. We can operate in this live boot pattern through a PXE live boot into an in-RAM, in-memory image, and you then just reboot to apply updates. Very simple.
You just reboot, and the next time you come up, you bring up the new image; job done. It's very, very fast; there's no write-to-disk overhead, and if you're using an image-based deployment (an image that you actually load, as opposed to packaging configuration for one-time configuration), it's super fast. The downside is that it consumes additional memory to run the operating system in, and it does make your provisioner a much more critical-path component of your production operations environment.
Additionally, as I mentioned, this can operate with packages, using repos, kickstarts, preseeds, whatever they happen to be. The problem with this model is that it's very, very hard to control your dependency chain: which packages and libraries get pulled in. Unless you own your repos, and are very, very careful about controlling updates into the personal repos that you do your provisioning off of, you can't be sure. That is a pattern that can still support immutable principles, in that you ensure you deploy what you intended to deploy.
What you tested in your CI/CD environment actually gets deployed to production, and you don't find that six weeks later you've got 52 new package and library versions and, all of a sudden, some weird implementation issues. It is a very easy pattern to implement: a lot of systems administrators, DevOps, and SRE types are very familiar with this pattern, and there are a lot of solutions and examples out there to follow. The image-based model has been around for a long time.
I personally started with image-based deployment as early as, I believe, 1999, with a product called SystemImager. So it's not a new idea; it's just an idea that's coming back around again, coming back into fashion. The image can be a raw image, a tarball, or a Windows image, and it's very, very fast to install if you're doing install-to-disk.
We can do very large Windows images in two to three minutes, with only one reboot of the system, as opposed to a traditional Windows install, which might take you forever, with lots of hair-pulling and many reboots.
This pattern of image-based deploy is sort of a far-left shift: it's really pushing things way far to the left in that shift-left pattern, and you get very strong guarantees that your production deployment matches your CI/CD and dev/test environment almost precisely. So you get very little variability in production from what you test and expect, which makes it a very strong pattern for large shops looking for that guarantee. That's it for the slide deck portion; I have a demo that I'll show you.
If you have any questions, you can reach out to me at shane@rackn.com. I will post the slide deck to the meetup groups, so if anyone's interested in seeing the additional slides, you can take a look at them. There are also some resources at the back of the slide deck that might be interesting if you want to take a look at Digital Rebar.
We have a QuickStart document which lays the foundation for what we're doing in our Kubernetes Rebar Immutable Bootstrap, or what we call KRIB, which is what I'm going to demo here in just a moment. We've got a bunch of slide decks that are basic training decks, which give you a quick bootstrap on Digital Rebar, in addition to our documentation, and then a whole host of resources. If you have questions or comments, or you want to get in touch with us, you can take a look at these in the slide deck.
There's chat; they hide it. Okay, let me put my chat up. If anyone has any questions, please drop them in chat as I go along. What we're going to show here today: we have six systems based in Packet.net, which is a bare-metal service provider. Some problems I had with hardware notwithstanding, they've been an exceptional solution; we really like them at Digital Rebar because they follow cloud-like patterns applied to a bare-metal service provider.
The first three machines I've already set up in advance into a Kubernetes cluster, and then what I'm going to do is kick off the configuration of the other three into a second cluster. Because of the hardware problems, I wanted to make sure I had a cluster that's up and working for you. What happens is that the disks disappear on a reboot on some of these machines, sometimes, so the install-to-disk for the Docker data directory fails, and that causes the cluster configuration to fail.
So let's take a look at how we do this. The way we define this in Digital Rebar Provision: we have our set of machines, the six machines I was referring to, and we see that the first three are in a stage I've called krib-stage-done. This is the demo that I've pre-staged for everybody, and it's in this final stage. And if I come over here and... oh, they make this really hard: toggle broadcast input. Let me turn off broadcast input.
Actually, let's take a look at how we can interact with this cluster. Part of the solution is using what we call a profile to hold cluster state information, going back to the cluster pattern. In this case, a profile that has not been used yet, our live one, looks like this: we just have a parameter called krib/cluster-profile that defines that we're going to be one of the KRIB clusters.
Our change-stage map is our workflow that defines how we step through the processes, and then we have access keys for access to the environment. So this is essentially a naked profile. And this is what a profile looks like when a cluster is done: we see that we actually get a krib cluster join command, which comes out of our kubeadm init; we get a specification of who the master is; and then we have our base cluster profile.
So this tells me that this cluster has been configured, but it also gives me the kubeadm join secret, which I can provide to bring other machines into the cluster. That is sort of the basic backbone of the configuration that's necessary; not a whole lot is necessary. If we run a command on the configuration (and let me pipe it through jq so it's a little prettier), this is essentially the Kubernetes information necessary to define the cluster and interact with it, plus the cluster secrets.
So now, if we do an export of the kubeconfig to our conf file, we can do a kubectl get nodes, and we see the three machines: one, two, and three in this case. Machine two won the race-to-master election, and the other two machines are just nodes enrolled in the cluster.
So that is essentially what we're going to demonstrate here, by going to our bulk actions. (And I see your question; I'll get to it in a moment, as soon as I get this kicked off.) What I'm going to do is apply the profile, my live demo profile, to these three machines.
On the question of which OS is supported or preferred for the host system: we really don't care. Our demo here is CentOS 7, and it works equally well on Ubuntu 16.04. Those are the only two that we've tested with any regularity in a production environment, but you can certainly create boot environments for other operating systems; we support other operating systems and distributions as well. We have demonstrated using CoreOS in the past, though we don't currently have CoreOS working in Digital Rebar version 3, which is the modern product version.
It's on the roadmap to get working; it's not very hard to do, it's just something we haven't gotten around to. This pattern actually replicates fairly closely what CoreOS does, because our actual distribution is CentOS 7 that we're live-booting, from which we do a Docker install and then a KRIB install, which sets up the Kubernetes cluster with the kubeadm pieces.
So, let's see: we take this machine back here, and we take the krib-ready profile off the machines. The reason I'm taking this off is that they have competing workflows; for the purposes of staging the demo, I have to do it in two steps. Then we apply the actual krib-live-demo profile to the machines.
In that case, what we might want to do is go to workflow and go to krib-live-demo. Our workflow starts with start-krib-live, which is the stage I need to put the machines in (which is what I did wrong previously), then mount-local-disks, where we mount a local disk, do our Docker install, do our KRIB install, and then we're done with the KRIB demo. So let's go back to the machines and select our machines; we have our profile, krib-live-demo, applied.
Start krib live. Thank you, Shane; somebody cue me next time I forget. Start-krib-live: there we go. Now apply that start-krib-live stage, and we should kick off... and we're not kicking off. So what I'm going to do is take a look at what I did wrong in my demo prep late last night. What we can also do as a debugging and troubleshooting step is take a look at our jobs log, and we see that our jobs log finished successfully.
So I've made an error in that stage somewhere, and I'll debug that. The takeaway, though, is that from our bulk actions we can see that these first three machines were put into the krib staged profile and were driven through to success, completing the KRIB cluster. If anyone's curious what's actually happening behind the scenes: the actual work that does the Kubernetes piece is the krib-install stage.
The krib-install stage itself has a task, very uniquely named krib-install. The krib-install task itself defines a template, and a template lets us create tasks that can reuse the same configuration in multiple ways through Go templating within the template. If we take a look at the template, this is what actually happens: we get our drpcli binary onto the machine, which is our agent that helps with the workflow process, and we determine what OS we are currently on.
We do some package installs and setup for Kubernetes, and then we do some Go templating to get our information from the master, from the cluster. We get our actual KRIB cluster credentials and information, and we drop those into the template dynamically, on the fly, when it's instantiated. We determine who the master is; if we don't have a master yet, we need to create one, so we create a master through the kubeadm init process.
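DRP's templates are Go text/template; purely as an analogy (this is not DRP's engine, nor the real krib-install template), the render-time substitution of cluster parameters can be pictured like this:

```shell
# Toy stand-in for template expansion: drop a cluster parameter into a
# script at render time. Placeholder syntax and values are made up.
render_join_script() {
  token="$1"
  sed "s/{{TOKEN}}/$token/" <<'EOF'
kubeadm join 10.0.0.1:6443 --token {{TOKEN}}
EOF
}
```

The real templates pull the recorded cluster secret and master address out of the DRP parameters and expand them into the install script at instantiation time.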
The last part: Stefan had a question about how much the Kubernetes installation can be customized. Essentially, it's what I just walked you through: you can take a look at the kubeadm template that we use for doing the actual KRIB install, and you can clone that template and make changes that are appropriate for your environment, which allows you to do things the way you want them done. You can use it as a base starting point.
The KRIB install stuff is not a very good example of that, because we put it together very quickly for the demo, but hopefully that answers your question. And then the last question was about Project Calico. In fact, I believe we do have Calico configured in the kubeadm setup; yes, we have Calico actually running in the kubeadm demo here as the software-defined network component.
Any other questions from anyone else online with us right now? If not, I will drop the presentation link into the meetup groups, so you guys can take a look at it if you're interested in running through it. I'll sort out what I did wrong at 2:00 a.m. this morning, feverishly trying to clean things up so I'd have pretty names on my stages, get that fixed and working, and make sure our documentation reflects those changes. Like I mentioned, if you want to get started very quickly, rebar.digital is the website.
That's the starting point for getting going with Digital Rebar. If there are no further questions: everybody, I really appreciate your participation, and it's been a pleasure presenting to you. Kristen, thank you very much for the invitation. Hopefully somewhere in the future you'll be interested in a follow-up, additional more detailed presentations, or a rerun of this presentation if there's additional interest. Again, my name is Shane Gibson, with the Digital Rebar project and RackN (rackn.com). Thank you very much.
Yeah, I just wanted to say thank you, Shane, for sharing your insights with us and for giving us this nice demo, and thank you to all the others for coming. As I said, we will share the video with the crew in the coming days via YouTube, and I wish you all a very nice day or a very nice evening, depending on where you are. Thank you.