Description
OKD Home Lab Deployments - 3 approaches
Guest Speakers:
Sri Ramanujam (Datto)
Craig Robinson (Red Hat)
Vadim Rutkovsky (Red Hat)
OKD4 using ER-X, NAS w/ Ceph using 1 master and 1 worker
Productionesque OKD4 Homelab - 3 masters, 9 workers, with Rook+Ceph
Vadim's Home Lab
from OKD Working Group Testing and Deployment Workshop
2021-03-20
https://okd.io
A: Alright, so I thought we could do this in a format where — look over here — I've pulled up the bare metal cluster installation instructions. This is a full document that goes over basically everything you could ever possibly think of to get an OKD cluster going from scratch on bare metal — bare metal UPI, I should specify — and over here is my explanation of my setup and the supporting infrastructure that I have.
A: This is in the repo that Mike McKeon is maintaining, so it'll be available on his repo, and then eventually, I think, in the main OpenShift OKD repo once everything's straightened out. I thought we could just go through everything here and I'll show you what I did in my home lab to fulfill the requirements. Network connectivity, for example, is a really big one.
A: A bunch of us were talking about that during the set-the-stage session earlier — stuff about CSRs and creating infrastructure — and I'll show you the Terraform scripts and the bash scripts and the stuff that I have set up to get all that going as we go along.
A
You
know
if
you
have
any
questions
pop
them
in
the
chat
and
craig
feel
free
to
interrupt
me
literally
at
any
time,
and
just
like
ask
me
the
questions
and
I'll
answer
them
and
then,
whenever
I
get
through
that,
you
know
we
can
talk
about.
You
know
other
home
lab
setups
or
you
know
if
the
dean
wants
to
come
on.
I
think
he's
in
here,
maybe
he'll
pop
away
later.
If
he
wants
to
come
back
and
sort
of
talk
about
his
home
lab
setup.
A: So without further ado — I can go first or you can go first; I think so long as I'm talking I'll go, and then you can talk about your setup afterwards. Yep, mine's not much better, it's just more completely documented. So without any further ado, let's get going. Over here: these are the resources I'm using, basically, to get the setup going.
A: They each have a Ryzen 5 3600, so that's 12 virtual cores right there, and 64 gigs of RAM. Each one comes with three 4-terabyte hard drives and two 500-gig SSDs that I RAID-1 together for redundancy, and the boot disk for each hypervisor is just some random little budget NVMe M.2 drive that I stuck in there, because that's not the important part. All of my supporting infrastructure mostly runs off of this one NUC that I had laying around gathering dust — a small little Intel Core.
A: But you know what, that's what makes it fun. My hypervisors are each hosting an identical workload. This is the way I planned it out: originally I was going to have the size of the cluster be three control plane nodes, one on each hypervisor, and then nine worker nodes, split three-three-three across the hypervisors. My control plane nodes I gave four vCPUs, 10 gigs of RAM, and a very small 50-gig root disk.
A
They
don't
really
need
much
more
than
that
for
what
I
use
them
for,
and
the
worker
nodes
get
eight
cpus
and
16
gigs
50
gigs
of
root
disk,
and
then
I
also
pass
in
one
of
the
four
terabyte
hard
drives
to
each
one.
This
will
get
used
later
to
set
up
the
the
rook
plus
ceph
cluster,
for
the
distributed
storage
for
all
of
the
container
workloads,
and
then
the
bootstrap
node,
which
is
very
temporary,
is
just
another
vm
that
gets
spun
up
four
vcpus,
eight
gigs
120
gigs
of
root
disk.
A: So that kind of takes care of the required-machines requirement over here. I read this and I was like, okay, let's see how far I can push it. Oh wait, let me zoom in on this so people can see it. So for the control plane here, you'll note that I'm really not going as hard on storage as the recommendations say, but that's okay, because log rotation is a thing, and so far I haven't run out of disk space.
A: Yet. The compute nodes I'm over-provisioning and the control planes I'm kind of under-provisioning. That's also okay, mainly because — as you'll see, I have all my nodes up over here; where are they, right here, is that showing up at all, hopefully — they don't run out of memory very much as it is, so it all works out in the end for something like this, where it's totally overkill anyway. The main important thing that took me ages to get going was the networking stuff.
A: It goes into a lot of detail on what ports need to be reachable from what subnets, and I suspect that's so that people who have actual real network topologies can set up their routing rules correctly, whereas I'm just on a flat home network: everything can talk to everything else.
A: So a lot of this you can actually just straight up ignore, which is really great if you're in a home lab setup that isn't too complicated or doesn't have too many weird VLAN things going on. And then the really important thing that these docs don't actually mention, for whatever reason — and hopefully after this somebody, maybe it'll be me, will remember to make a PR or something up to the docs repo — is that the nodes, during their initial bootstrap, need PTR records set up for them to figure out their hostname from DHCP and DNS.
A
If
you
don't
have
that,
then
they
all
come
up
with
the
same
hostname
and
then
the
cluster
doesn't
come
up
at
all
so
yeah
and
as
vadim
says,
the
docs
do
take
like
a
whole
bunch
of
proxies
and
meters
and
all
this
sort
of
stuff
into
into
account.
So
they
look
really
complicated,
even
though
I
think
for
most
deployments
at
least
in
a
home
lab
skill,
probably
more
than
that,
you
don't
really
need
to
worry
about
most
of
it
and
then
the
important
thing
is
just
to
have
a
load
balancer.
A: So, like this thing — you just point it at a VM and it will set itself up to run all of it. Really helpful, but for a lot of people's home labs, I don't know that it is. If you can figure out a way to get it all into your environment, it'll work really, really well, but if you already run your own DHCP or your own DNS, then parts of it become less helpful.
A: I don't know where in the docs it is — user-provisioned DNS requirements. All these DNS records you can actually set up once, before you even — well, you have to set them up before you even try to start deploying a cluster. The good thing is that after you set them up once, you don't have to touch them ever again. So that's what I did.
A: All of my records — the api and api-int ones and all the various things: these two are pointing at my load balancer, and all the DHCP static reservations get put here. So I just set this up once, maybe ages and ages ago, and it all just works. And then especially the etcd records — the A records and also, crucially, the SRV records.
A
This
is
too
big
there.
Crucially,
the
srv
records
are
the
actual
some
of
the
more
important
parts
for
this,
which
is,
I
think,
they're
in
here.
Yes,
the
srv
records
for
the
ncb
server
ssl
stuff-
I
don't
know
why
they're
necessary,
but
the
documentation
assures
me
that
they
are-
or
at
least
they
were
when
I
first
set
all
this
up
back
in
the
okd
four
three
four
four
days
also
very
helpfully
this
didn't.
This
actually
didn't
used
to
be
there
when
I
set
up
these
records,
but
they
had
an
example.
A
They
have
an
example
zone
database
now,
so
that
helps
to
serve
as
a
instructive
example
and
see
here
the
ptr
records
that
did
not
used
to
be
called
out
by
name,
but
now
they
are
it's
very
helpful.
Oh
yeah
and
dns
ptr
records.
Look
at
that.
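To make the record requirements concrete, here is a sketch of what the forward, SRV, and PTR records discussed above might look like in BIND zone syntax. Everything here — cluster name, domain, and all addresses — is a made-up placeholder, not the speaker's actual configuration, and the etcd A/SRV records reflect the OKD 4.3/4.4-era requirements he mentions.

```shell
#!/usr/bin/env bash
# Sketch of the UPI DNS records discussed above, emitted in BIND zone syntax.
# Cluster name, domain and all IPs are placeholders -- substitute your own.
cluster=okd
domain=example.lab
lb=192.168.1.5   # the load balancer VM

gen_zone() {
  # api, api-int and the *.apps wildcard all point at the load balancer
  printf '%s\n' \
    "api.${cluster}.${domain}.     IN A ${lb}" \
    "api-int.${cluster}.${domain}. IN A ${lb}" \
    "*.apps.${cluster}.${domain}.  IN A ${lb}"
  local i=0 ip
  for ip in 192.168.1.10 192.168.1.11 192.168.1.12; do
    # per-master A record, etcd alias and SRV record (required in the
    # OKD 4.3/4.4 era), plus the PTR record each node uses to discover
    # its own hostname during the initial bootstrap
    printf '%s\n' \
      "master-${i}.${cluster}.${domain}. IN A ${ip}" \
      "etcd-${i}.${cluster}.${domain}.   IN A ${ip}" \
      "_etcd-server-ssl._tcp.${cluster}.${domain}. IN SRV 0 10 2380 etcd-${i}.${cluster}.${domain}." \
      "${ip##*.}.1.168.192.in-addr.arpa. IN PTR master-${i}.${cluster}.${domain}."
    i=$((i+1))
  done
}

zone=$(gen_zone)
printf '%s\n' "$zone"
```

As he notes, without the PTR records every node comes up with the same hostname and the cluster never forms.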
A
So
once
you
have
all
of
the
records
and
your
load
balancer
and
all
that
stuff
in
place,
you
can
actually
get
around
to
deploying
them.
So
I
I
believe
that
the
actual
open
shift,
install
program
bundles
in
the
terraform
terraform
and
the
terraform
lever
provider
to
actually
do
this
for
the
ipi
based
deploys.
So
I
just
broke
that
out
and
I
use
that
I
I
have
a
whole
bunch
of
terraform.
A
I
don't
know
what
they
call
it
modules
I
think
playbooks
somethings
for
each
of
my
hypervisors
and
the
bootstrap,
and
they
all
take
care,
and
so
I
have
a
module
in
here
that
sets
up
the
bootstrap,
the
master
and
the
worker
node,
and
I
have
a
module
just
for
making
sure
that
I
can
download
and
push
the
fcos
base
image
to
to
the
vms
to
boot
off
of
and
then
each
one
of
these
you
know
will
take
care
of
setting
up
the
appropriate
number
of
masters
and
the
appropriate
number
of
workers
based
on
variables
that
I
pass
in.
A: I did seriously consider running OpenStack, but that would have been too much even for me — this was already overkill enough as it was, I thought. The bootstrap, of course, gets its own separate module, so I can put it up and tear it down separately from the rest of the infrastructure. I'm sort of running through it; after I go and Vadim goes—
A
You
know
there
will
be
time
for
for
people
watching
to
ask
more
details
on
all
of
this,
but
that
takes
care
of
basically
getting
everything
into
place,
especially
this
section
about
creating
f
cos
machines.
The
documentation
itself,
talks
about
you
know
you
want
to
do
a
pixie
install
or
an
iso
install
if
you're,
deploying
to
vms
a
pixie
install
is
probably
a
little
bit
too
much.
If
you
don't
already
have
a
pixie
environment
set
up
that
you
can
just
use
for
this.
A: What else needs to happen? And then, so: the bulk of my orchestration is actually done with a script here called do-the-thing.sh. It's a great — it's a fantastic script. It's very specialized for just my environment, but it gives you an example of everything that needs to happen.
A
So
I
download
the
latest
okd
release.
I
download
the
latest
core
os
release
and
then
I
create
the
manifest
right
here.
So
I
have
an
install
config.yml.
Let
me
pull
that
up
very
quickly.
A: Here's my install-config.yaml — really, really simple. My base domain; for UPI you always set the worker replicas to zero; masters I set to three, because that's how many I have; I give it a name; set cluster network, service network, network type; my pub key, so I can SSH onto the nodes if I need to; and a fake pull secret, which I don't think is necessary anymore, but it used to be, and I've just been too lazy to get rid of it. And so from here I create my initial configs.
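Pulling those fields together, a bare-metal UPI install-config.yaml along the lines he describes might look like the sketch below — every value is a placeholder reconstructed from the description (the base domain, SSH key, network type, and pull secret are all stand-ins, not his actual config).

```shell
# Sketch of an install-config.yaml like the one described above.
# All values are placeholders, not the speaker's actual configuration.
dir=$(mktemp -d)
cat > "${dir}/install-config.yaml" <<'EOF'
apiVersion: v1
baseDomain: example.lab
metadata:
  name: okd
compute:
- name: worker
  replicas: 0          # always 0 for bare-metal UPI; workers join via CSRs
controlPlane:
  name: master
  replicas: 3          # one control-plane node per hypervisor
networking:
  networkType: OpenShiftSDN    # whichever network type you use
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  serviceNetwork:
  - 172.30.0.0/16
platform:
  none: {}             # "none" platform == UPI
sshKey: 'ssh-ed25519 AAAA... you@example'
pullSecret: '{"auths":{"fake":{"auth":"ZmFrZQ=="}}}'   # fake pull secret, as mentioned
EOF
cfg=$(cat "${dir}/install-config.yaml")
printf '%s\n' "$cfg"
```

From a file like this, `openshift-install create manifests` / `create ignition-configs` produce the Ignition files the rest of the tooling consumes.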
A
I
use
terraform
terraform
has
been
configured
to
point
at
the
ignition
configs
that
are
generated
by
the
install
I
terraform
apply,
and
then
I
wait
for
the
bootstrap
to
complete,
and
so
that's.
A
This
is
what
you
know
we
were
talking
about
earlier,
like
in
stark
contrast
of
3.11
sort
of
you
know,
giant
pile
of
ansible
playbooks
that
I
don't
know
if
any
of
you
guys
are
familiar
with
that,
but
it
was
it
took
a
long
time
and
could
fail
it
any
part
of
it,
and
you
always
had
no
idea
why
it
failed
or
what
you
could
do
about
it.
A
This
is
way
easier
because
it's
kind
of
a
binary
it
either
worked
or
it
didn't
like.
If
this
doesn't
work,
you
there's
really
don't
worry
about
it.
Take
the
vms
down,
try
again
and
if
it
doesn't
work
three
times
in
a
row,
ask
for
help.
It's
great
you
don't
have
to,
as
as
somebody
just
trying
to
use
it
and
get
it
going,
there's
so
much
less.
That
is
environment
specific.
That
could
go
wrong
with
this
setup,
and
that's
very,
I
think
I.
B: [inaudible]

A: No, for sure, for sure. And then after that I take down the bootstrap — after the bootstrap's up, woohoo — I sleep for 20 seconds, which is actually too much, just to give HAProxy time to realize that my bootstrap is out of the rotation, because I am incredibly lazy. And nope—
A
My
port
9000,
please
so
here's
my
h,
a
proxy
and
all
of
its
glory
is
detail
so,
like
I
just
leave
the
bootstrap
in
the
aha
proxy
and
I
use
the
tcp
check
and
it
just
doesn't
route
anything
to
it.
It's
great,
I
don't
have
to
think
about
it.
This
is
again
more
like
static
configuration
that
I
get
to
just
set
up
once
and
leave
forever.
It
also
takes
care
of
figuring
out
where
my
ingress
replicas
are,
which
is
great.
The
machine,
config
server,
stuff
and
all
just
it's
all.
A
Wonderful,
aha
proxy
is
truly
a
beautiful
piece
of
software,
so
I
sleep
for
20
seconds
to
get
a
load
balancer
out
of
the
rotation
and
then
I
sleep
for
another
10
minutes,
because
something
is
happening
here
and
I
don't
know
what
it
is,
and
so
this
is
kind
of
the
downside
of
having
sort
of
very
opaque
it
either
works
or
doesn't
set
up.
A
I
really
have
no
clue
why
I
have
to
wait
10
minutes
here,
but
I
know
if
I
don't,
the
api
server
will
sometimes
refuse
to
work
like
I
will
make
an
oc
call
and
it
will
come
back
and
the
api
server
will
just
say.
No,
I'm
sorry,
I
don't
know
who
you
are
go
away.
I
don't
know
why.
But
if
I
wait
10
minutes
it
doesn't
happen.
A
So
in
the
interest
of
just
having
a
run,
it
walk
away,
come
back
an
hour
later
and
your
clusters
up
kind
of
script.
I
just
sleep
10
minutes
whatever,
and
then
I
do
this
specifically
to
annoy
the
team.
I
sit
in
a
loop
and
just
approve
all
the
initial
worker
certs,
because
I
trust
the
vms
that
I
spun
up.
10
minutes
ago
and
well,
the
dean
has
repeatedly
told
me,
and
it
is
good
advice-
do
not
trust
infrastructure
that
you
just
spin
up
in
the
cloud
to
be
from
yourself
he's
right.
A
I
don't
know
how
that
could
happen
really,
but
he's
right.
That
is
a
possibility.
It's
not
a
possibility
in
my
setup
behind
my
tv
here,
so
I
I
just
spin
in
a
loop
until
I
get
all
of
my
workers
approved
and
then
once
that's
done,
I
label
some
stuff
for
the
rook
step
deployment.
That's
the
other
big
thing
that
I
do.
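The sit-in-a-loop-and-approve step could be sketched as below. A real run would talk to the cluster; here `oc` is an explicitly labeled stub (clearly not the real binary) so the loop logic itself can run standalone, and the real commands are shown in comments.

```shell
# Sketch of the approve-all-the-worker-CSRs loop described above.
# `oc` here is a tiny STUB standing in for the real binary, so the loop
# can run without a cluster; the real commands are in the comments.
state=$(mktemp -d)

oc() {  # stub: pretends two worker CSRs are Pending, exactly once
  case "$1 $2" in
    "get csr")
      if [ ! -e "${state}/served" ]; then
        printf 'csr-aaaa Pending\ncsr-bbbb Pending\n'
        : > "${state}/served"
      fi ;;
    "adm certificate")
      echo "certificatesigningrequest $4 approved" ;;
  esac
}

approved=0
expected=2   # number of worker nodes awaiting their first CSR
while [ "$approved" -lt "$expected" ]; do
  # real cluster equivalent:
  #   oc get csr | awk '$2=="Pending"{print $1}' | xargs oc adm certificate approve
  for csr in $(oc get csr | awk '$2=="Pending" {print $1}'); do
    oc adm certificate approve "$csr"
    approved=$((approved+1))
  done
done
echo "all ${approved} worker CSRs approved"
```

In practice two rounds of CSRs show up per node (the client CSR, then the serving CSR), which is why the loop keeps polling rather than approving once.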
A: The 4-terabyte disks that I use — I just pass the drives into each worker VM, and then each worker is actually running a Ceph OSD on it. So I have basically one worker per Ceph OSD; I have nine disks in there, so a nine-OSD Ceph cluster.
A: So this is all labeling some chassis for Ceph's topology stuff, so that it spreads the placement properly. And actually, right about here, I did want to point out: after this step — after the workers are approved, the CSRs are approved, and they all report Ready into the cluster — as of right here, technically, the OKD setup part is done.
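The chassis-labeling step he mentions might look like the sketch below: one Rook/Ceph "chassis" per physical hypervisor, so the CRUSH map spreads replicas across machines rather than just across VMs. Node names and the three-per-hypervisor grouping are placeholders matching the setup described; this sketch only prints the commands a real run would execute.

```shell
# Sketch of the Rook/Ceph topology labeling described above: tag each
# worker with the physical hypervisor it lives on, using Rook's
# topology.rook.io/chassis label, so Ceph spreads data across machines.
# Node and hypervisor names are placeholders; this only PRINTS commands.
chassis=1
for workers in "worker-0 worker-1 worker-2" \
               "worker-3 worker-4 worker-5" \
               "worker-6 worker-7 worker-8"; do
  for node in $workers; do
    cmd="oc label node ${node} topology.rook.io/chassis=hypervisor-${chassis}"
    echo "$cmd"
  done
  chassis=$((chassis+1))
done
```

With these labels in place, a replicated pool with failure domain "chassis" survives the loss of an entire hypervisor.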
A
That's
the
fun
part
everything
after
this
point
so
about
halfway
through
my
scripture.
Everything
after
this
is
just
like
post
deployment
configuration
or
day
two
setup,
I
think,
is
the
as
the
docs
call
it
somewhere
near
post
installation
configuration
so
like
act
right
after
that.
Everything
like
the
cluster
is
up
and
it
is
technically
usable.
It's
not
very
helpful
to
use
it
at
this
point
because
there's
no
like
container
storage.
The
registry
is
not
deployed.
A
Nothing
like
that,
but
technically
it
is
up
and
it
could
run
workloads
at
this
point
and
that's
very
cool
to
think
about.
So
after
that
I
I
do
some
housekeeping
kind
of
things.
I
patch
the
ingress
controller
for
my
wildcard
cert
that
I
use
for
internal
stuff,
and
so
then
I
have
to
wait
for
the
ingress
to
restart
itself
and
then
the
mcd
reboots
all
the
nodes.
For
some
reason,
I
don't
know
why.
I
think
it's
probably
to
get
the
ca
certificate
stuff
in
there.
A: This is such an amazing thing — that I can just literally oc apply a couple of YAMLs straight from GitHub and it'll come up; a cluster will come up. It's incredible. It's amazing. I cannot recommend it enough: if you have gotten to the point where you can stand up an OKD cluster, or even really a Kubernetes cluster in general, and you just have some spare disks, give them to Kubernetes and put Rook on them. Life is so much better. It all just works. It's incredible; it's amazing.
A
A
I
tell
the
I
tell
the
registry
to
go,
use
it,
and
so
it
goes
and
just
makes
its
a
pvc
for
itself.
I
wait
for
that
to
bind
I
patch
the
registry,
so
it
does
the
external
route
I
send.
I
configure
metal
lb,
which
is
the
other
half
of
the
magic
here
that
allows
home
labs
to
just
be
super
super
super
overkill
and
cool,
because
load
balancers
are
basically
the
only
way
to.
A
As
far
as
I
understand
with
my,
I
will
admit,
incomplete
knowledge
of
the
kubernetes
ecosystem
in
general.
If
you
have
something
that
can't
be
routed
through
your
inquest
controller,
non-http
traffic
or
something
like
that,
then
basically,
the
only
way
you
you
can
get
it
out
is
by
a
load,
balancer
or
a
node
port.
A
I
didn't
really
want
to
do
node
ports,
but
they
were.
I
was
using
them
as
my
only
option
for
a
while,
until
I
discovered
metal
lb,
which
is
basically
it
makes
you
feel
like
you're
running
in
a
real
data
center,
because
it
will
just
use
like
arp
to
broadcast
around
like
advertise
for
a
random
ip
and
it'll
just
redirect
traffic
to
it,
and
it
works
really
really
well.
I
would
recommend
everybody
to
just
deploy
metal
lb
anyway.
A: —just so you have access to load-balancer-type services.
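A layer-2 MetalLB setup of that era was configured through a ConfigMap (this was before MetalLB moved to CRDs). A sketch, with a placeholder address range, might look like:

```shell
# Sketch of an era-appropriate layer-2 MetalLB config (the ConfigMap
# format, before MetalLB moved to CRDs). Address range is a placeholder
# from the flat home-network subnet described earlier.
metallb_cfg=$(cat <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2            # ARP-based announcement on a flat network
      addresses:
      - 192.168.1.200-192.168.1.220
EOF
)
printf '%s\n' "$metallb_cfg"
```

With this applied, any Service of `type: LoadBalancer` gets an IP from the pool and MetalLB answers ARP for it on the home LAN, which is exactly the "feels like a real data center" behavior described.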
Then I configure my monitoring. OpenShift — OKD — comes with all this monitoring, so I just have a little helper program that I'll pull up. It's just a tiny little thing written in Rust; I actually have it linked from my OKD deployment configuration guide here. It is just a small program.
A
I
wrote
that
it's
just
a
little
web
server
that
waits
for
alerts
from
the
alert
manager
and
we'll
just
post
them
to
discord.
So
I
have
basically
my
own
ad
hoc
single
person,
monitoring
and
alerting
setup.
All
thanks
to
okd.
I
shudder
to
think
how
much
work
it
would
be
to
set
up
the
kubernetes
mix-ins
and
do
all
of
the
prometheus
stuff
manually
from
a
vanilla,
kubernetes
cluster.
So
this
is
honestly
a
huge
value.
Add
for
okd
in
my
book
that
I
get
such
comprehensive,
alerting
for
free.
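Wiring Alertmanager into a little forwarder like the one described generally comes down to a webhook receiver in the Alertmanager config. A sketch — the service name, port, and path are placeholders, not his actual forwarder:

```shell
# Sketch of pointing Alertmanager at a small webhook forwarder like the
# Rust one described above. Service name, port and path are placeholders.
am_cfg=$(cat <<'EOF'
route:
  receiver: discord-forwarder
receivers:
- name: discord-forwarder
  webhook_configs:
  # the forwarder receives Alertmanager's JSON payload on this endpoint
  # and reposts each alert to a Discord webhook
  - url: http://alert-forwarder.monitoring.svc:8080/alert
EOF
)
printf '%s\n' "$am_cfg"
```

The forwarder itself only has to accept Alertmanager's webhook JSON and re-POST a formatted message to a Discord webhook URL, which is why such a small program suffices.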
A
I've
got
an
alert
for
everything
from
hey
the
ntp
service
isn't
running
in
here,
and
your
clocks
are
out
of
sync
too.
Your
xcds
are
slow
to
you
know:
hey
you've
got
a
pdb
up.
You
know,
during
updates,
like
rook,
sets
up
pdb's
for
the
set
clusters
to
make
sure
that
the
club
that
the
rebalancing
is
settled
like
it's
really
really
comprehensive
and
it
all
just
works,
and
it's
amazing.
It's
amazing
and
that's
the
final
thing.
A: I could take the cluster down right now and spin it up again, and two hours later we'd be right back where we started. And so, at the end of it, I have a 12-node, completely overkill home lab cluster in which I run basically everything. A whole bunch of stuff that I used to run in bespoke random VMs I now just run here, and I have cron jobs for backups; I'm running all kinds of weird things.
A
I,
as
an
experiment,
set
up
an
authenticated
smb
share,
so
I'm
running
a
domain
join
samba
as
a
pod
staple
set
inside
of
this
cluster.
That's
really
fun
like
it's
been,
it's
been
all
it
took
a
while
to
get
here.
You
know,
because
I
I
had
a
single,
I
had
a
single
machine
sort
of.
A
I
think
it
was
three
note:
okay,
openshift
origin,
311
cluster
that
I
started
with
and
then,
when
okd4
came
along,
I
sort
of
was
very
eager
to
hop
on
the
train
and
sort
of
make
the
home
lab
as
big
as
I
wanted
it
to
be,
and
but
now
after
a
lot
of
overkill,
I'm
in
a
really
cool
place,
and
so
that
that's
kind
of
a
quick
overview
of
my
totally
totally
unnecessarily
overkill
home
lab
setup.
Here's
some
just
software,
I
run
in
it
completely
not
worth
the
amount
of
resources
I've
thrown
at
this.
B: [inaudible]

A: Yeah, it is private, because there are secrets all over it, right? This is half of it; the other half is services, which is where I have all of my — I deploy all my workloads with Ansible playbooks, so I have a role for each namespace that I run stuff in. So these are all the services I'm running, and there are secrets all over here, so I unfortunately can't make it public. I can pull this—
A
I
can
pull
the
the
scripts
out
that
I'm
using
there's
nothing
too
weird
in
there.
I
will
definitely
think
about
pulling
the
scripts
out
and
adding
them
to
the
deployment
configuration
guides.
As
an
example,
that's
a
good
thought,
but
I
can't
show
you
the
scripts
as
they
are
now
because
secret
secrets
all
over
here,
it's
a
private
repo.
I
don't
know
just
so
that
I
didn't
have
to
worry
about
it.
Yeah
link
to
the
deployment
configuration.
B: [inaudible]

A: I don't know, honestly — whichever way you want to do it. Maybe we can go through both of our setups and then we'll take questions. Yeah.
C: Second of all, it's very, very slim — I've gone really cheap on the resources. I have a machine with 20 gigs of RAM, and the other one is a default laptop with eight gigs of RAM. You won't be able to run much there, but as an example of how low OKD can go, it kind of works. The core part of my stuff is my router, which is a standard EdgeRouter from Ubiquiti. Here's a picture — you can enjoy my insane cabling skills.
C: This is why I've set up an AdGuard Home here: I can set up DNS over TLS, which stops pushing my ISP to its limits, because for some reason it hates UDP, and I can also define my own custom hosts here. All of them are pointing to my load balancer machine, effectively. And so that's the router. Next comes my storage box, which is here: it has NFS, and it has a single-node Ceph cluster.
C: This host also runs a typical HAProxy, also copied from the helper node, which has a very, very standard HAProxy template. Next come the actual hosts. I have just two of them, and I initially provisioned a laptop, which is now my compute node; I started with it as the bootstrap node. It has eight gigs of RAM, which is barely close to what the bootstrap needs — if you have a chance to get 16 gigs of RAM, that will save you a lot of time. And I also upgraded the default SSD disk to an M.2 something—
C
Because
the
yet
cd
was
happy
but
was
showing
quite
a
huge
latency
during
upgrades,
because
this
is
the
time
where
you
pull
a
lot
of
images,
start
new
containers,
and
I
said
he
was
very
unhappy
about
that.
Yeah
pictures,
it's
just
the
box,
that's
my
self
dashboard.
Also
don't
do
that,
but
since
I
have
like
I'm
using
20
gigs
of
it
for
the
storage,
I
don't
experiment
on
this
cluster.
It's
just
actual
production.
C
So
that's
more
than
enough
for
me
and
that's
the
view
of
my
oh
giddy
stuff.
I
run
quite
a
few
projects
there,
most
notably
the
most
helpful
operators,
probably
the
pipelines,
because
what
I
can
do
is
that
I
can
change
things
on
the
console.
Unlike
the
git
ops
approach,
I
change
things
in
the
console
and
periodically.
I
think,
every
couple
of
hours
the
script
runs
and
saves
all
the
manifests
using
oc
item
inspect.
C: I don't care about particular parts — I don't care about events — and I strip the boring stuff from the YAMLs: generations, links, and so on. And finally it gets committed, saved, and pushed into my internal Gitea instance. So every couple of hours a new commit is created, and I can see what has changed in that cluster — and "I should strip that out, it's boring," and "I should rip that out too," and so on and so forth.
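The export-strip-commit loop described above can be sketched as below. In a real run the manifest would come from `oc adm inspect`; here a hypothetical sample manifest stands in so the filtering step itself can run, and the exact list of stripped fields is an assumption based on the "generations, links and so on" remark.

```shell
# Sketch of the periodic export-and-strip step: dump manifests, drop the
# noisy server-managed fields, then commit. A sample manifest stands in
# for real `oc adm inspect` output; the stripped-field list is a guess.
manifest='apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo
  generation: 42
  resourceVersion: "12345"
  creationTimestamp: "2021-03-20T00:00:00Z"
spec:
  replicas: 1'

# drop the fields that churn on every write and make diffs noisy
stripped=$(printf '%s\n' "$manifest" \
  | grep -vE '^[[:space:]]*(generation|resourceVersion|creationTimestamp):')
printf '%s\n' "$stripped"

# a real run would then follow with something like:
#   git add -A && git commit -m "cluster state $(date -I)" && git push
```

The payoff is that each commit in the internal Git server is a meaningful diff of cluster state, rather than a wall of timestamp and resourceVersion churn.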
C: Volumes, then — nope, no snapshots; maybe this user doesn't have any. No — and every couple of hours or days it creates a new snapshot, so if something breaks in the application itself, we can easily roll it back as a PVC and replace it. Next comes a wonderful piece of software called Loki, with Grafana, which stores all the container logs almost effortlessly.
I think it uses 150 megs, and each Promtail agent uses 70 megs, which is nothing, but I can do searching by logs, for instance.
C
Okay,
the
biggest
downside
of
this
setup
is
that
since
it's
a
single
node,
it's
incredibly
hard
to
update
all
the
operators
would
work
fine
until
it
stumbles
on
machine
config,
because
machine
config
has
a
setting
that
no
more
than
one
master
node
can
be
down
at
all
times,
and
I
only
have
one
so.
What
I
have
to
do
is
to
make
it
reprovision
it
back
to
original.
C
I'm
fetching
the
masters
machine,
config
annotate
it
with
the
desired
config
notion
that
note
as
if
it
has
already
upgraded
and
tell
machine
config
daemon
to
upgrade
it
to
reprovision
it
the
whole
stuff.
It
doesn't
work
out
of
the
box
because
it
also
tries
to
install
necessary
os
extensions
like
qml
agent
and
and
most
importantly,
network
manager
obs.
So
I
have
to
cancel
it
in
the
middle
and
if
the
node
doesn't
come
back,
I
have
a
small
heart
attack
because
I
would
have
to
fix
it.
C
This
is
very
dangerous,
but
the
the
whole
issue
is
supposed
to
be
fixed
in
4-h.
So
I'm
really
waiting
for
this
to
land
and
stable,
useful
software
yep.
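The annotation juggling described above might look something like the sketch below. The annotation keys are the machine-config operator's node annotations; the node and rendered-config names are placeholders, and this sketch only prints the commands — actually running them against a live single-node cluster is, as he says, the dangerous part.

```shell
# Sketch of the single-master upgrade workaround described above: mark the
# node as already at the desired rendered MachineConfig so the MCD will
# proceed. Node and config names are placeholders; this only PRINTS the
# commands -- running them for real is the "small heart attack" part.
node=master-0
desired=rendered-master-0123456789abcdef   # e.g. from: oc get mcp master

cmds=$(cat <<EOF
oc annotate node ${node} --overwrite \
  machineconfiguration.openshift.io/currentConfig=${desired}
oc annotate node ${node} --overwrite \
  machineconfiguration.openshift.io/desiredConfig=${desired}
EOF
)
printf '%s\n' "$cmds"
```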
C: This Grafana is provided by the Grafana operator, which takes care of all the data sources and the dashboards, and can upgrade Grafana from one version to the other by just changing one setting in the operator. Snapscheduler — covered; Gitea — around, also covered. Useful software: your own Git server; Home Assistant, great stuff to control smart home appliances, or it just collects all the information in one single place.
C
I
don't
think
I
mounted
it
anywhere
in
my
apps,
but
certainly
possible
with
different
csi
stuff
next
cloud,
a
terrible
php
application,
but
it
does
the
job
syncs.
The
files
across
multiple
devices
navi
drone
is
a
great
music
server
which
follows
the
air
sonic
protocol,
miniflux
hair
accessory
there
and
a
lot
of
iterated
stuff
like
method,
synopse
and
pleroma,
and
the
wallaboc
is
a
great
application
to
keep
the
pages
and
read
them
later,
and
I
think
that's
probably
all
I've
got.
B: [inaudible]

C: [inaudible]

A: That will definitely save me some time, yeah. I didn't realize that — to be fair, I have not really looked at all of the YAMLs in there and tried to figure out what they're doing — so everything, all of the cluster configs, all of that stuff gets laid out in that folder first? Cool.
C: [inaudible]

A: [inaudible]

C: I put it right — they all get merged into one in the end. So there are two ways to approach this: either generate and edit that, or create your own — wait, no, maybe it won't work with a scheduler config, there has to be only one. Or maybe it would. So the other option is you lay out just the master scheduler change into its own YAML file.
C: [inaudible]

A: [inaudible]

C: [inaudible]

A: The proxy stuff certainly should save you a lot of time. Yeah, most of the time here is just — and this will also answer the question Dan just asked about what provider I'm using: the libvirt Terraform provider, I think that's what it's called. This guy.
A
This,
I
also
believe,
is
what
gets
built
in
is
one
of
the
providers
that
gets
built
into
the
openshift
install
itself
for
use
for
the
libert
ipi
deploys.
So
this
this
provider
is
extremely
handy.
It
handles
doing
the
ignition
it
handles.
Disconfiguration
network
configuration
the
whole
smash.
It's
it
basically
does
almost
everything
and
they
also
provide
an
xslt
escape
hatch
to
configure
bits
of
the
liver
xml
that
it
doesn't
quite
know
how
to
do
yet
so
extremely
flexible
tool.
A: That's what I run against. So I just have it set up — where is it, inside terraform here — so libvirt, libvirt-three, and libvirt-four: those are the names of the three hypervisors (libvirt-two is the NUC), which gives you an idea of the ordering in which all this stuff got set up.
A
So,
library,
library,
three
and
library
four
are
each
of
the
hypervisors,
and
in
here
I
have
a
I
have
a
main
and
so
like
host
equals
var.hose,
so
they
all
just
set
up
and
it
will
deploy
a
master
and
three
workers
with
all
of
the
config
file
and
then
each
of
the
modules
in
here
I'll
just
look
at
the
master
one
as
an
example
right.
So
this
is.
This
is
all
using
the
lib,
the
resources
provided
by
that
provider.
A
Libvard
ignition
look
for
volume,
look
for
domain,
and
so
like
this
all
just
gets
these
files
I
get
passed
in
by
up
top
it's
it's
kind
of
confusing
to
sort
of
see
it
all
in
one
place.
Let
me
go
up
here
and
I'll
show
you
so
like,
for
example,
this
this
con,
this
content
thing.
So
this
is
where
the
ignition
file
comes
in
and
that's
var.ign
file.
C: [inaudible]

A: Yeah, I get that, but it's just an example of what you can do with Terraform and patience. The longest part of this, unfortunately, is waiting for the Fedora CoreOS download, and then I actually have LVM volumes set up to carve it out, so I have to turn the qcow2 that comes down from the FCOS release into a raw image, and then take that raw image and dd it four times — for one master and three workers — onto separate LVM volumes.
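That image-preparation step might look like the sketch below. The image filename, volume-group, and LV names are placeholders, and the sketch only prints the commands rather than writing to real block devices.

```shell
# Sketch of the base-image preparation described above: convert the FCOS
# qcow2 to raw, then dd it onto one LVM logical volume per VM. Paths and
# LV names are placeholders; this only PRINTS the commands.
img=fedora-coreos.x86_64.qcow2

cmds=$(cat <<EOF
qemu-img convert -f qcow2 -O raw ${img} fcos.raw
dd if=fcos.raw of=/dev/vg0/master-root  bs=4M conv=fsync
dd if=fcos.raw of=/dev/vg0/worker0-root bs=4M conv=fsync
dd if=fcos.raw of=/dev/vg0/worker1-root bs=4M conv=fsync
dd if=fcos.raw of=/dev/vg0/worker2-root bs=4M conv=fsync
EOF
)
printf '%s\n' "$cmds"
```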
A: No — no machine sets or anything, because I would need the libvirt machine-API thing. That would be another approach, but I don't know how flexible the libvirt machine operator is going to be; that's something to look at. It's interesting — I've seen the project and I'm keeping an eye on it — but for now, deploying it statically is the way to go, especially because it's a very static configuration: it's only ever going to be these 12 nodes.
B: [inaudible]

C: [inaudible]

A: Yep, the main part is the nodes. It's funny — all of my RAM, legitimately all of it, goes straight to — how do I do it — yeah, nodes, or I guess I can go to projects and sort: legitimately, all of my RAM goes to Ceph. I have no idea what it does with it, to be honest. But yeah, 45 gigs — basically 44 gigs — straight to Ceph. It's half the reason—
C: [inaudible]

A: Everything else is kind of tolerable — the mons and the MDSes for the CephFS — but in general, because Rook is upstream of OCS these days, I think, they have special OpenShift support in the upstream Rook project. So I get the PDB support, which is very helpful, especially during rolling cluster upgrades, which are possible and work pretty much flawlessly with this multi-master setup.
A
I
have
so
kudos
to
everybody
on
the
okd
team
for
that
the
openshift
team,
because
that
can't
have
been
easy,
but
the
pdbs
are
really
really
cool
because
the
rook
operator
will
set
a
pdb
as
it
reboots
worker
node
so
or
and
it
from
its
view
nodes
with
osds
on
them.
So
it'll
reboot,
one
of
them,
wait
for
it
to
come
back
up
and
then
wait
for
the
the
ceph
cluster
itself
to
settle
and
stop
rebalancing
before
it
reboots
the
next
one.
C: Yeah, consistency and disruption are the principles of the upgrades; the time is not, really. We care about control plane upgrades — this is super important — and workload disruptions. If it takes you years to upgrade, sad, but you can move on; you can still keep on upgrading and the worker nodes will catch up. So that's the trade-off we have to make.
A: No, I totally get it, and I just wanted to point out that it works extraordinarily well — the PDBs fit in with it nicely. It's little things like that that have me thinking: how would I set this up with vanilla Kubernetes? Especially these views here with the monitoring. Where — where is the world's best dashboard?
A: I don't think they'd be helpful for anybody who isn't me, because they are very, very, very specific to my hardware topology, as it were, but I can definitely show them as an example, or annotate them or something, in the configuration guide that I have here. Have I experienced etcd slowness, Dan's asking—
A
Do
you
believe
that's
a
config
problem
or
inherent
to
the
hardware,
I'm
using
I'm
pretty
sure
it's
my
hardware.
I
honestly
this
is
like
the
weirdest
issue.
A
I
think
I've
ever
ever
seen,
because
it's
weird
because
two
of
my
nodes,
every
now
and
again,
they'll
be
like
hey
lcd,
is
running
slow
and
then
it
just
resolves
itself
a
few
seconds
later
and
then
the
third
one
is
just
rock
solid,
and
so
that's
what
the
world's
worst
dashboard
likes
to
show
me
a
lot,
because
the
distinct
durations
here
like
you'd,
see
four
of
them
are
up
in,
like
the
fifth
like
the
the
re
too
high,
and
then
one
of
them
is
in
like
the
six
millisecond
four
millisecond
and
the
other
one's
like
a
100
milliseconds
at
90
milliseconds,
it's
identical
hardware.
A
C
B
A
It is weird. I haven't figured out any rhyme or reason for it. Back before I sort of backed off of there, there was a time when oc get clusterversion was consistently like, hello, it takes 60 seconds for oc get clusterversion, and I was like, that's weird. So there are some oddities about etcd and what it's running on, and it doesn't seem inherently related to disk usage, because they're nice SSDs; if I go look at them with iostat or something, they're fine. But that's troubleshooting for another time. Neil.
A
Could I split out the secrets into a bash file to source, or something? Perhaps. I am kind of thinking about figuring out some way to pull in the secrets from a separate private repo and then have all this stuff up publicly so that people can reference it. I might put some more effort into doing that; it's mostly just not been on my radar, but if people are interested, I can definitely make the effort over a couple of weekends to do it.
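One possible shape for that split, sketched here with made-up paths and variable names: keep the secret values in a vars file in a private checkout and have the public playbook load it.

```yaml
# Sketch: public playbook, private secrets. The vars file path, the
# variable name, and the Secret/namespace names are assumptions.
- name: Configure cluster apps
  hosts: localhost
  vars_files:
    - ../private-secrets/vars.yml      # lives in a separate private repo
  tasks:
    - name: Create a Secret from privately stored values
      kubernetes.core.k8s:
        state: present
        definition:
          apiVersion: v1
          kind: Secret
          metadata:
            name: app-credentials
            namespace: media
          stringData:
            token: "{{ private_app_token }}"
```

The playbooks stay publishable; only the private vars file holds anything sensitive.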
A
Yes, definitely something very interesting to play with, but one of my primary goals here was to just see how highly available I could get something that wasn't running in somebody's cloud. The reason I'm using Active Directory, which is a complete tangent, is because Active Directory is basically the only thing I could find that does transparent DHCP failover back and forth.
A
So after the first time my DHCP server decided to crap out for some reason, and I woke up one morning and nothing could get a DHCP address, I was like, this sucks. AD was the only thing I found that can transparently fail over DHCP management to another computer and then fail it back to the first one when it comes back. So that's actually a lot of the reason why I'm using AD, even though it is in itself incredibly overkill.
C
Okay, folks, I have to drop. I think I'll rejoin in a couple of hours to see if some questions need answering, but if something's left unanswered, let's keep it in Slack and we'll get back to that. Okay, have a great evening.
A
...of time. It's too much time, I think, because Neil and I actually first started with, not OKD, a single-machine three-node cluster on OpenShift Origin 3.11. So I first started down this path in, when was it, Neil, 2018, 2019? Somewhere around there. Yeah, 2018. And so this is...
A
But yeah, these scripts: Neil has been badgering me for ages to polish them up and bring them out publicly, or to turn this repo public. But especially in here, I think it's worth looking into just the flexibility of the Ansible playbooks. I know the guys who are maintaining the Ansible collection, the k8s module and then some community.okd operators on top; they dropped by.
A
They dropped by the working group a few meetings ago to talk about their work there, and I've actually been using the Ansible stuff really heavily since way before they showed up, simply because I'm more familiar with Ansible than I am with Helm templating. All of the config maps and stuff, you'll note, just look like a config map or a normal Kubernetes YAML file, but I think the Jinja templating is more powerful and flexible than Helm templates, which are just text interpolation.
A
I just prefer Ansible because I know how to use it and because I think the templating is more powerful. It also makes things really easy because of Ansible's focus on being able to reapply the same thing and have it incrementally make progress. If you mess something up, it'll stop right there, you just fix it, and it'll keep going, whereas with Helm...
A
I know Kubernetes itself has the capability to do that, but I can also mix other types of configuration into one of these playbooks if I need to, so that's also super powerful. I can just use Ansible to do everything; I don't have to worry about a bash script that also has to run Helm, and then maybe a little playbook for something else that I need to set up.
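A minimal sketch of that pattern, assuming the kubernetes.core collection is installed; the template and ConfigMap names here are invented for illustration:

```yaml
# Sketch: render a Jinja2 template into a manifest and apply it.
# Re-running the task is idempotent; if it fails, the play stops
# right there and can simply be re-run after the fix.
- name: Apply a templated ConfigMap
  kubernetes.core.k8s:
    state: present
    definition: "{{ lookup('template', 'app-configmap.yaml.j2') | from_yaml }}"
```

The same task shape works for any Kubernetes kind, which is what lets one playbook mix manifests with other configuration steps.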
A
It's really nice to be able to standardize on one thing, and Ansible has the most flexibility, so I deploy everything through Ansible. And one of the neat tricks I actually found, which I don't recommend anybody ever do for a real production workload unless they really know what they're doing, but it's super helpful in a home lab environment where you don't have to care, is that... hold on, let me go find a good example.
A
I think, yeah, the media stuff is full of it right here. So what I do is: OKD has, the image streams have, the capability to monitor an upstream tag for changes, and when the tag changes, it'll pull it in, and that's super helpful. So what I do is just set up a build config that triggers on type ImageChange from...
A
So I set up a tag here that's just "upstream", and that looks at wherever the upstream Docker image is, and whenever that changes, it triggers the build config to do a new build, and then the deployment config. And that build config pushes it to a tag that's only used internally. Oh shoot, you're right, Dan, I'm sorry.
A
No, no, please interrupt me. So, as I was saying, this is a very neat trick for home lab deploys to keep the stuff running inside up to date without any manual intervention whatsoever, and the key is that OpenShift image streams and image stream tags can automatically poll an upstream Docker image repository for changes to a tag on an upstream image. So here I have this image stream tag called, internally, jackett:upstream. This is a tag in the internal registry.
A
jackett:upstream is monitoring the GitHub Container Registry tag jackett:latest, up on GitHub's infrastructure, and it'll poll that for changes. Any time it changes, it'll pull that down locally and trigger an image change update, which gets picked up by the build config, which can then rebuild the image stream tag.
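Pieced together, the chain described here looks roughly like this; the image names and upstream registry location are illustrative assumptions, not the exact objects from this cluster:

```yaml
# Sketch: scheduled import polls the upstream registry tag, and a
# BuildConfig rebuilds whenever the imported tag changes.
apiVersion: image.openshift.io/v1
kind: ImageStream
metadata:
  name: jackett
spec:
  tags:
    - name: upstream
      from:
        kind: DockerImage
        name: ghcr.io/example/jackett:latest   # upstream image (assumed)
      importPolicy:
        scheduled: true                        # poll upstream for tag changes
---
apiVersion: build.openshift.io/v1
kind: BuildConfig
metadata:
  name: jackett
spec:
  output:
    to:
      kind: ImageStreamTag
      name: jackett:internal                   # tag used only internally
  strategy:
    type: Docker
    dockerStrategy:
      from:
        kind: ImageStreamTag
        name: jackett:upstream
  triggers:
    - type: ImageChange
      imageChange: {}                          # fires on the strategy's from image
```

A deployment trigger watching jackett:internal then rolls the new build out automatically.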
A
Every time a build succeeds. So once you set all that up, as long as there is an upstream tag that sees changes, whether it's a latest tag or a version tag or something like that, every time it changes, without any thinking or manual management on my part, it will literally just push and redeploy, and it's wonderful. So if I go to builds here for media, you can see a whole bunch of stuff.
A
So, you know, you'd see this one's up to 33, and it just does it randomly whenever it sees an update, whatever time of day. I believe by default they poll every 15 seconds, which is aggressive, but I can't change it, so we're stuck with that. And the deployment configs, or rather the pods down here: you can see all of the various deploy pods from all the various versions.
A
It
just
happens.
I
don't
have
to
think
about
it
and,
like
you'll
note
like
this
one
here,
these
two
didn't
work
for
whatever
reason,
so
it
just
stayed
on
five
and
I
have
to
go
figure
out.
Why
that
didn't
work,
but
whatever,
but
honestly,
that's
really
really
powerful
and
to
get
that
in
a
vanilla,
kubernetes
distribution
would
be
I'm.
I
can't
even
really
think
of
a
good
way
to
do
it.
Out
of
the
box,
like
you'd,
have
to
cobble
together
two
or
three
different
things,
like
maybe
an
argo
cd
or
there's
a
thing.
A
...that I've heard people use in smaller, like k3s, setups, called Watchtower. It can do something similar, but maybe that one only works by monitoring the local Docker daemon; it would be a pain to get that working. But with OpenShift and OKD, it's just basically done for you. Really, really handy, and it's one of those things that just make my life easier.
A
A lot of the toil around maintaining a big distributed system like this, whether it's for production or for personal use, is just automating the little toil tasks: getting monitoring, getting alerting, figuring out what to monitor, figuring out what to alert on, setting up automated image deploys for the things you can set it up for, things like that. All that day-to-day blah stuff that's not really interesting or fun, but has to get done anyway.
A
OKD just does it for you. I have really comprehensive monitoring, and I get that for free. They're using, as far as I can tell, the upstream kubernetes-mixin monitoring stuff, and that all just gets deployed and it works, and then it monitors the worker nodes themselves for disk space, NTP, anything I could think of. I've seen alerts for all of it, just while playing around. Super helpful. It makes me feel like I'm a badass, and it frees up my time to worry about actually doing more complicated things with my workloads.
A
That's just, you know, my two neat tricks. Run a Rook, run a Ceph: it works. It is by far, I think, the easiest way to get a Ceph cluster deployed, of really anything. Yes, it assumes you have a Kubernetes cluster somewhere, but once you do, Ceph is laughably easy. I don't know how they did it. I suspect black magic.
A
When people ask you why in the world you would run Kubernetes in your home lab, what's your favorite example of a workload that Kubernetes makes easier for you? I think, honestly, the monitoring, because it's really difficult to set up proper monitoring and metrics management for internal stuff, to the point where a lot of people don't do it.

A
Those people, I think, would be well served to move to Kubernetes, simply because it gives you a lot of stuff for free that you either didn't know you were missing, or knew you were missing but didn't really feel like setting up: things like log rotation, things like readiness checks, things like failover. If one of your random little NUC boxes fails, will your workloads survive, or did you just lose everything running on that box?
A
It's a platform as much as it is a tool; more than it is a tool. Kubernetes makes my workloads easier to run, but the nice thing about it is that any workload you were already sort of managing yourself with Docker, you now don't really have to manage: you just put it in there, and it'll run, and it'll be there.
A
You can get to it, and that's really powerful, because it makes it super easy to just throw new things in there. All of these home lab type things are giving people options to run them as a Docker container these days, so it just makes it really easy to take what they give you, chuck it into the cluster, and it's just there.
A
At this point I'm able to deploy new stuff in maybe minutes to hours, which is no better or worse than it would have been with Compose, except I get so much more for free: stuff that would ordinarily have taken me more hours or maybe even days to set up for each individual application. And Neil's following up, yeah.
A
If you have OKD on top of Kubernetes, that's the best, because if I were to make a crude comparison to Linux distributions: vanilla Kubernetes is like Arch or Gentoo. It's a base. It moves quickly, and you are meant to bring your own opinions and workflows and put them on top of it. Right?
A
Arch boots you up to a console login screen; you're supposed to install your programs, your workloads, whether you want GNOME or KDE or something. OKD is trying to be more like Ubuntu: they already bring their opinions about stuff, but they make it fit together very well. So it's the Ubuntu or Fedora of the Kubernetes world, for people who just want everything out of the box to get stuff done.
A
I think OKD is a fantastic option if you're already committed to looking into Kubernetes, especially once the single node cluster stuff comes out, so people won't have to set up multiple machines to make it all go nicely, but they'll still get the advantage of updates. And that's the other thing: I don't have to update any part of this setup manually. It'll just say, hey, your cluster is going to be updated, and it'll take care of updating the containers it's running and the underlying base OS.

A
I know, right? Because I figure Red Hat CoreOS probably has something similar to it for OCP; we've got Fedora CoreOS here. I don't have enough money to pay Red Hat, so, yeah.
A
You only need to go into it if you notice something is broken; otherwise it'll just keep chugging away. You don't have to do any toil. I use that word so much; I don't know if anyone else does. Toil is just the sort of maintenance stuff, sort of a chore. It's like, I don't want to do it, it's boring, but I gotta.

A
I got this basically locked in January, February of last year, somewhere around there, and that was back in the OKD 4.4, 4.5-ish days, and ever since then I've just been able to upgrade and redeploy clusters, yeah, 4.3. So I've been able to upgrade and redeploy this cluster over and over and over, and I've been able to bring it back exactly as it is every time, and that's amazing: just being able to work with infrastructure I don't have to worry about tearing down. Like, what happens...
A
If I tear it down, what am I going to lose? Do I have all my configuration? This kind of forces you to do everything sort of properly, which is helpful in and of itself. This is all I need: it will set up the exact same workload that I had, it'll bring everything in, it'll restore from backups. It all just works, and once you have your pattern figured out one way, you can just do it forever.

A
There's a lot you need to worry about to make it actually usable, and there are a lot of hidden things that you don't really think about, because Kubernetes is big and complex, and there are a lot of little things that could break that you'll never know about, because Kubernetes is really good at keeping going even when half of it is actually broken. That's the one thing I have noticed; it's very good at that.

A
In some ways vanilla k3s was a good option for me: it was quicker to stand up and used fewer resources, but I never had the confidence in it that I could put stuff on it and it would stay running two weeks, three weeks, four weeks later. A lot of times it never did. This OKD cluster has been up for, I think, a couple of months now, through various version upgrades, and works fine. I'm pretty sure I'd be able to keep it up forever.
A
At this point, yeah. Neil mentions that his vanilla k8s cluster keeps falling over, and I think that's just because it doesn't come with anything other than the bare minimum. You need to put your own monitoring stack into it; you need to put the Kubernetes monitoring mixins in and collect them, and then that'll definitely tell you what you're doing wrong, Neil. But you have to know to do that. You have to put them in there. You have to set up something to collect them.

A
I don't know. That's the one thing I kind of want to know: how they're going to do rolling updates on the single nodes. Because, as Vadim was mentioning earlier, it'll update a master, but the etcd cluster needs quorum, so it knocks one out, updates it, knocks the next one out, updates it, knocks the third one out, updates it. But with a single node cluster, I don't know how they're gonna do it. I'm sure it'll be some measure of hackery.

A
Then it tells it to pivot and reboot, because normally, my understanding of it anyway, the machine config daemon is the thing that gets its orders from the machine config operator, and the operator is keeping track of who's rebooted and when, and what state everybody's in, and then, as that works, it'll instruct individual machine config daemons to do the rpm-ostree pivot and reboot process.

D
Yep, awesome, yes. Did you get any feedback on what we should be updating in the documentation? And, yeah, so, Sri, I heard...

A
Yeah, no, definitely. I think it's possible for me to split all of that out, to sort of pull out the stuff that's specific to my setup, or at least have that script in. I think I want to have it eventually in the OKD repo, just as, here's an example of what you can do, and I'll flesh out the stuff I'm writing up for Mike McKeon into that. Oh...
D
And it's probably killing Neil right now that he's not on stage, because I'm not sure if we can add him in to do that. I'm not quite sure if we can invite him, but yeah. So I would love to see, maybe today... Neil's in three different sessions; that's what I like about it, see.

D
So what I would love to see you do today is at least make a pull request for a stub for your documentation into elmiko's repo, and, you know, just say this is a holding pattern, with a link to any of the... maybe just put in the additional resources that you've shared with people.
D
I think I saw a few things get shared in there; get those in that stub, and get that in there as a holding pattern for this approach. And, yeah, Craig, if that background behind your chair is real, I love it.

D
You look just like rocking hipster ops, awesome. And I'm in my partner's basement with the art supplies because, as we all know, I have no internet at my house. That's why I paid for fiber optic.

D
Like this, and yeah, do you have soundproofing with those panels?

D
Oh, helpful. So I know you did a blog post, Craig, and I haven't looked at elmiko's repo lately, but I'm wondering if you could make a stub there for the stuff that you've done, to link out so that we have access.

B
To that, the most recent one I've got is for 4.5, and I'm in the process of writing a 4.7 one. But also, well, while I've got you here: I've been using Medium to post my stuff, just because it's a little bit easier, but I was curious if you had any other suggestions of a better way to do it that's maybe as easy.
D
Well, I...

D
Yeah, we do, and that's pretty good if you're fine with Markdown, which I think you are; that's an easy way to put it there. I'm not sure you're gonna get the same traffic as Medium, and I kind of like Medium too. What I'm not a proponent of is documentation by blogging, though I think some of this stuff is a one-off kind of thing. So if you did it in Medium, we could put a stub post on the okd.io blog that links back out to your Medium.
D
That's what I would suggest, so that you drive some traffic back and forth. But also think, in the way that you're documenting it, you're writing this blog: what about it can we put into the deployment guide for home labs? And I don't have a real opinion about it; it's just that I would like those deployment guides to be updatable and maintainable, so that when we go to 4.8 and that sort of stuff, someone could pick up an issue and do it.

B
I think a lot of it is that the other guys you're talking about show the steps. They say you need to do this, this, and this, but they don't actually take the time to show you how to do that, because you're really kind of supposed to have already known how. The only difference in my guides is that I go through and show that stuff, which is kind of a pain, honestly.

D
That would be great. And, you know, even today, what I'm trying to do is just get people to take that first step and make a pull request to put the stub in for it; you know, a sort of halfway commitment to getting that there. So if you can take the time and do that today... I'm looking to see who's here: there's six people, and three of them are us.

D
So I'm wondering about the other folks who are here, Kareem and Daniel, and Neil, who's floating back and forth. So maybe just Daniel and Kareem: what's different?
D
What is the major difference between your home lab, or your fantasy-island home lab, and Craig's and Sri's? You know, how different are you guys from that, and does that deserve yet another home lab blog post with instructions?

B
One thing that came out today was watching Vadim show his home lab. It was kind of interesting to see, because Vadim is on this whole other level, and it was interesting to see his home lab, and it was interesting to see Sri's home lab, and just to go through and look at how everybody has something different set up. It's like, okay, well, I like that part.
B
Oh, maybe I can use that. It would be kind of a cool demo just to have everyone show off their home lab at one time or another. I don't know. What do you think, Sri? Is that, yeah.

D
So, Daniel, when you get yours going, I expect at least a blog post out of you, and coming to the OKD working group, and we'll do that. And then, if yours is significantly different than Craig's or Sri's and Vadim's, I'm really interested in having all four, or multiple, home lab configurations linked into the deployment guide, because I think one of the many value propositions for OKD is helping people get their home lab set up, learning and getting some experience with this. And I think the issue for me has always been that DIY Kubernetes was great...
D
I installed it, it was running, yippee, but I really couldn't get an app running on it, and I wasn't even trying on a Raspberry Pi, so that was, yeah. And then I ran out of time, because that's kind of the whole...

A
Thing. We were actually just talking about that before you came in. Running Kubernetes by yourself: you get it up, and it's almost nothing. You have to put everything into it, still nobody will tell you what that is, and then your cluster will fall over because something went wrong, or you didn't configure something quite right, or some piece of the hardware fell over, and you have to throw it away and do it again. It's a pain.

D
It's a pain, and I don't want Kubernetes to be painful for anybody, even if they're not a Red Hat customer. I want everybody to have a good experience, because when people have bad experiences with stuff, they just say, okay, well, I'm gonna go over and do something totally different, and then we don't get their feedback; we lose them from the communities that we're supporting. So that's kind of it. And I'm old school; I've been on the OpenShift team for a very long time.

D
I just hit my eight years working on OpenShift, and I came on board because I loved the promise of platform as a service. I was a Heroku addict; I even worked on Cloud Foundry at ActiveState for a little while, and then came over to OpenShift land. For me, the nirvana is that wonderful dev experience that we were promised, and then going to Kubernetes was kind of, yeah, poor Cloud Foundry.

D
I know, and I appreciate what Cloud Foundry has done and where they're going and all of that, so this is not really me slamming them. It's more that what I'm interested in is getting back to that early-days promise of platform as a service, and I think what we are doing with OKD is getting us close to that. Still, what you guys had to do today is much more complicated than I wanted.

A
But, you know, the nice thing about it is, when I was going over my script originally, I was able to point at a spot maybe about halfway through and be like: at this point, the cluster is technically set up. Everything on top of here is customization for my particular environment.

A
So I think it's a lot better, a lot better than my old Kubernetes scripts, which were based on, I think, k3s, and this was maybe two or three years ago. Those were just... it was not pretty, what I had to do to get k3s going. I think k3s is better now, of course, but still, OKD is by far a nicer experience.

D
So I know we're all here chatting, but I don't want to keep you guys away from your weekend longer than you need to be, so I'm thinking we have come to an imminent closure of this conversation. So if you, the speakers, and anyone else, Daniel or whoever, want to go over to another session, maybe go hang out with the single node cluster people, who are still probably bootstrapping something, and just slowly merge into whatever the final session is that people are still yabbering on in.

D
Well, we'll do that now, and I will say thank you, and you should find an invite to join us at KubeCon EU in your inbox sometime next week, if you haven't already gotten one.