From YouTube: OKD4 on Digital Ocean + Intro to Fedora CoreOS (Dusty Mabe, Neal Gompa, OKD4 Deployment Marathon)
Description
Deploying OKD4 on DigitalOcean
Intro to Fedora CoreOS
Dusty Mabe (Red Hat) and Neal Gompa (Datto)
OKD4 Deployment Marathon
August 17, 2020
Day Zero, KubeCon EU
A: Cool. I'm wearing a coral shirt, so if it's off screen at any point, just don't forget it. Okay, cool! Let me share my screen and see if I can hop into this real quick.
A: So let me go ahead and kick that off and just see if it actually starts running without issues, and then I'll kick over to a presentation and start. Okay, looks like it's started doing its job, so I'm going to go over to a presentation. Let me know when you guys can see the presentation and then I'll start off.
A: Perfect, okay. So this is a presentation about OKD on Fedora CoreOS on DigitalOcean. My name is Dusty Mabe; I'm a software engineer at Red Hat. I also have Neal Gompa, who's going to talk about his interest in DigitalOcean as well, and help me fill some of the dead air in the room when the install is taking forever. So Neal, would you like to introduce yourself?
C: So my name is Neal Gompa. I'm a DevOps engineer at Datto and I'm a member of the OKD working group. Professionally I'm sort of interested in running OKD on OpenStack, but personally I don't have money for that, and I don't have money for the cloud, so OKD on DigitalOcean is a rather affordable way to kind of try it out and use it in a semi-production-ish kind of way. So that's what I'm here for: to kind of pretend to be the idiot, to help explain everything that's going on and, just as Dusty said, fill in the dead air during the very slow and oddly not very fulfilling screencast of the installation process.
A: Don't let Neal fool you: he can pretend to be the idiot, but he's definitely not. Okay, cool, I'll hop in and do a little short presentation. So this OKD running on DigitalOcean is kind of a blog post series that I kicked off a little while ago; I'm slowly releasing new content for it.
A: Just in case, you know, we'll need those for different things, and it's nice to have them versioned together. The OpenShift installer we definitely need, because we need that to generate the information that we'll use to kick off the cluster.
A: This seems a little weird, but this is used for access to DigitalOcean's object storage. They reuse the S3 APIs for that, and so we actually use a different tool that has a different set of credentials, and I'll explain why we do that here in just a minute.
A: But more or less, OKD needs entries in DNS, and what you'll want to do is go to your registrar and set that up: point the entries for that particular domain over to DigitalOcean's domain servers, so that DigitalOcean can essentially manage that subdomain for you. Okay, the next thing is actually doing the deployment.
A: This is the second blog post in the series, and this one actually talks about the automation script that I created and also what each piece of it does.
A: One of them is a resources directory that contains a few files; I'll go over those in a little more detail in a second. Then there's also a config file which has user customizations in it, and then there's the script itself, which I'll go over a little bit more in just a second. But first of all, let me dig into a few of those things. So we have the resources directory, which has a few files in it.
A: One of them is an install config for OpenShift, and this install config is just about as bare bones as you can get. It also contains a few things that get substituted into it, so this is actually a template; I just called it install-config.yaml.in.
A: So this is a template where a few different tokens get substituted out: the base domain, the cluster name, the number of OKD workers, and the number of OKD control plane nodes. So that is just a basic install config that we feed in and substitute a few values into. And then we also have a few FCCT config files that are used to either...
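For illustration, that token substitution might look something like the following in bash; the token names, paths, and values here are assumptions for the sketch, not necessarily what the actual script uses:

```bash
#!/bin/bash
# Hypothetical user-set values; in the real script these come from the config file.
BASE_DOMAIN='example.com'
CLUSTER_NAME='okdtest'
NUM_OKD_WORKERS=2
NUM_OKD_CONTROL_PLANE=3

# Render the install-config template into the directory the installer
# consumes, replacing each placeholder token with its configured value.
mkdir -p generated-files
sed -e "s/BASE_DOMAIN/${BASE_DOMAIN}/" \
    -e "s/CLUSTER_NAME/${CLUSTER_NAME}/" \
    -e "s/NUM_OKD_WORKERS/${NUM_OKD_WORKERS}/" \
    -e "s/NUM_OKD_CONTROL_PLANE/${NUM_OKD_CONTROL_PLANE}/" \
    resources/install-config.yaml.in > generated-files/install-config.yaml
```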
A: ...you know, the name of the node that you gave it in DigitalOcean. So this was an example of running hostnamectl, via a set-hostname.service, that you could use to work around that particular bug. So this is an illustration of something you could do to work around an issue in Fedora CoreOS, if it happened to exist at the point that you were running this, and there are similar ones for worker and control plane as well.
A: These files themselves incorporate the output of the OpenShift installer, so it will merge in the config that is in the generated-files directory and the worker.ign file. That basically pulls in what the installer spit out and then merges it in with this other information that's in here, so it's kind of like an additive thing. Real quick, I'll go over the config file.
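As a rough sketch of that additive merging (Ignition's config.merge and FCCT's files-dir/local references are real mechanisms, but the exact file contents and paths here are assumptions, not the script's actual files):

```bash
# Hypothetical worker FCCT config that merges in the ignition file
# openshift-install generated, plus extra host-level config.
cat <<'EOF' > resources/fcct-worker.yaml
variant: fcos
version: 1.1.0
ignition:
  config:
    merge:
      # Pull in everything the installer spit out for workers.
      - local: worker.ign
storage:
  files:
    - path: /etc/sometool.conf
      mode: 0644
      contents:
        inline: |
          # extra per-node configuration goes here
EOF

# --files-dir tells fcct where to resolve "local:" references.
fcct --files-dir generated-files \
     --output generated-files/okd-worker.ign < resources/fcct-worker.yaml
```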
A: So if we look at this file, it basically is a bunch of key-value pairs that allows somebody who is not me to come in and edit this to be what they want it to be. So, for example, when you choose your host name, or sorry, your domain name, you can go in here and set this; mine was okdtest.dustymabe.com.
A: If you wanted to change the number of workers or control plane nodes that get started initially by the script, you could change these two variables to different numbers. I think the minimum number of control plane nodes recommended is three, so I wouldn't go below that. I know Christian, I think, showed an example of just doing an all-in-one earlier, maybe. You can change the region that you want to use.
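A hypothetical flavor of what such a key-value config file can look like when it's a bash file sourced by the script (all names and values here are illustrative placeholders):

```bash
# config: user customizations sourced by the install script.
DOMAIN='okdtest.dustymabe.com'        # DNS domain managed by DigitalOcean
NUM_OKD_WORKERS=2                     # 0 is allowed; masters become schedulable
NUM_OKD_CONTROL_PLANE=3               # 3 is the recommended minimum
REGION='nyc3'                         # any DigitalOcean region slug
DROPLET_SIZE='s-8vcpu-16gb'           # uniform size used for all nodes
SSH_KEY_FINGERPRINT='aa:bb:cc:...'    # an SSH key already in your DO account
FCOS_IMAGE_URL='https://.../fedora-coreos-digitalocean.x86_64.qcow2.gz'
REGISTRY_VOLUME_SIZE='100Gi'          # image registry volume size
```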
A: So if you don't want to use nyc3, you want to use something maybe closer to your locale, you can do that. The one thing you will need to update is your key pair, so whatever SSH key you want to use when you create an instance in DigitalOcean; I believe it's required that you actually specify an SSH key, so you have to specify one. You have to update this information right here. The size of the droplets that you want to use: you can run doctl compute size list to get more options there. The Fedora CoreOS image URL: this is basically a path to the DigitalOcean artifact, and that will be used to create a new image in DigitalOcean for you, so you don't have to do that yourself. It does that via the custom images workflow.
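Both of those are plain doctl operations; a hedged sketch, with the image name and URL as placeholders:

```bash
# List available droplet size slugs (s-4vcpu-8gb, s-8vcpu-16gb, ...).
doctl compute size list

# Upload the linked Fedora CoreOS artifact as a custom image.
doctl compute image create fedora-coreos-32 \
    --region nyc3 \
    --image-url "${FCOS_IMAGE_URL}" \
    --image-distribution Fedora
```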
A: It also will basically derive the name of the image from this, and it will name it that, and if you run this script multiple times, it won't attempt to recreate one.
A: So if you go and look at some of the first output from the script, it says: image with this name already exists, skipping image creation. So that saved us a little bit of time.
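A sketch of how that idempotent check might be written (the doctl commands are real; the surrounding logic is an illustrative assumption):

```bash
IMAGE_NAME='fedora-coreos-32'
# Only upload if no custom image with that name exists yet.
if doctl compute image list-user --format Name --no-header | grep -qx "${IMAGE_NAME}"; then
    echo "Image with name ${IMAGE_NAME} already exists. Skipping image creation."
else
    doctl compute image create "${IMAGE_NAME}" \
        --region "${REGION}" --image-url "${FCOS_IMAGE_URL}"
fi
```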
A: Let's see... and then, you know, a few other things that you can change, like what you want your registry volume size to be.
A: Yeah, the only thing that's missing right now is, with DigitalOcean, with the images that they provide, they don't do DHCP by default, I don't think, so there's some networking work that needs to be done on the Fedora CoreOS side to support their static networking config. But with custom images, they do DHCP by default, which is something we already support. So Fedora CoreOS images work with the custom image workflow, but not with a, you know, provided image from DigitalOcean right now.
A: So that's why this script will automatically create it for you if it doesn't exist. Okay, and let me hop back over to the presentation. So the DigitalOcean OKD install script itself does quite a few different things, and this, at a high level, is what it does. The first thing it does is create a Spaces (also known as S3) bucket to hold the bootstrap ignition config. The reason we do that is, you know, first of all, we already have access to DigitalOcean.
A: So, you know, assuming that you can use another DigitalOcean service seems safe. The other thing is, you don't want to just put that particular ignition config anywhere, because it has secrets and stuff like that in it. And the other thing is, you can't actually provide that ignition config directly to the instance on boot, because it's so large; it's above 64 kilobytes, which means DigitalOcean's user data service will basically say: sorry, your user data is too large.
A: We can't... you know, when you try to do the API call to create the instance with that user data, it won't allow you. So what we needed was a reference to this ignition config, and the safest place was to put it in S3 and then use a pre-signed URL to grab it on boot. The pre-signed URL that I'm using is only valid for five minutes, so it's just a way to kind of lock it down as much as possible, so nobody else can grab your configs and somehow take over your cluster.
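A hedged sketch of those two pieces: a short-lived pre-signed URL generated against the Spaces S3-compatible API (here via the aws CLI pointed at a Spaces endpoint), and a tiny pointer Ignition config passed as user data, which tells Ignition to replace itself with the real bootstrap config at boot. Bucket, region, and paths are placeholders:

```bash
# Pre-sign the uploaded bootstrap config; valid for 5 minutes (300s).
IGNITION_URL=$(aws s3 presign s3://okd-bootstrap-bucket/bootstrap.ign \
    --expires-in 300 \
    --endpoint-url https://nyc3.digitaloceanspaces.com)

# Pointer config small enough for DigitalOcean's user-data limit;
# Ignition fetches and uses the large config from the pre-signed URL.
cat <<EOF > generated-files/bootstrap-pointer.ign
{
  "ignition": {
    "version": "3.0.0",
    "config": {
      "replace": { "source": "${IGNITION_URL}" }
    }
  }
}
EOF
```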
A: The second thing it does is create a custom image in DigitalOcean for the linked Fedora CoreOS image; we talked about that briefly just a minute ago.
A: The third thing: it creates a VPC for private network traffic, so that way, you know, anything within the VPC is good; you don't have to worry about it affecting your other nodes that exist in your account. It creates a load balancer to balance the traffic. It creates a firewall to block unwanted traffic. It generates manifests and ignition configs via the openshift-install binary. It uploads the bootstrap config to Spaces to be retrieved by the bootstrap instance. It creates the bootstrap, control plane, and worker droplets. It creates a DigitalOcean domain and the required DNS records. It provisions the DigitalOcean block storage CSI driver to the cluster, and then, once everything is up, it removes the bootstrap droplet and the Spaces bucket, since, you know, those are no longer needed.
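Condensed into a pseudocode-style skeleton, the flow just described might look like this; the function names are invented for illustration, and only openshift-install's wait-for subcommand is a real external command here:

```bash
main() {
    create_spaces_bucket            # holds the bootstrap ignition config
    create_custom_image             # Fedora CoreOS via the custom-images workflow
    create_vpc                      # private network for cluster traffic
    create_load_balancer            # single LB fronting the control plane
    create_firewall                 # tag-based rules for cluster nodes
    generate_manifests              # openshift-install create manifests / ignition-configs
    upload_bootstrap_ignition       # pre-signed, short-lived URL in Spaces
    create_droplets                 # bootstrap, control plane, workers
    create_domain_and_dns_records
    openshift-install wait-for bootstrap-complete --dir=generated-files
    provision_csi_driver            # DigitalOcean block storage CSI
    destroy_bootstrap_resources     # bootstrap droplet + Spaces bucket
}
main "$@"
```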
A: One thing that I will mention, at least for right now: I took a shortcut with the automation script. I only create one load balancer. Normally what you would do is create a load balancer for, like, the control plane nodes and then a separate one for the worker nodes where the routers are running. But to simplify things, I just created one load balancer that has the control plane nodes in it, and I modified the ingress routers to run on the control plane nodes instead. That's not best practice.
A: It just happened to be something that simplified the script quite a bit, and, yeah, I might change that in the future. Okay, and there are more posts to come in this series. I want to start talking about, like, configuration: for example, your certificates, so you don't have self-signed certificate warnings every time, you know, if you wanted to share this cluster with somebody else; and then also, you know, setting up identity access. Like for me, for my personal blog, I use GitLab identity...
A: ...access to log in; you can use many different ones. But, okay, and this is my shameless plug for Fedora CoreOS, but I know we're talking about OKD today. So let me hop back over to the install and see kind of where we are; I'll talk briefly about where we are, and then I'll see if anybody has questions, or if Neal wants to jump in and tell me what I should be doing differently. Okay, so where we are in the install process...
A: ...right now: first off, there's a check to see if the image that we were going to upload already exists, and it appears that it does. It basically looks for an image with this particular name, and it assumes that, if an image with that name exists, then it is what we want to use. You know, it's theoretically possible somebody could specify an image with that name that wasn't that content, but, you know, we're going to make that assumption here, and we should probably be okay with that.
A: We created a firewall, we generated the manifests for the install from the openshift-install binary, we created the droplets, and then there's an informational command that runs that basically prints out all the droplet names, the IDs, and the IPs. This is useful if you kind of want to hop in and debug something on boot, maybe. It created the domain and the DNS records, and then it also...
A: ...ran openshift-install to say: hey, let's wait for the bootstrap to come up. So the bootstrap came up, and, let's see, it took 11 minutes for the bootstrap to come up and finish, and then it removed the bootstrap resources, so the bootstrap node is now gone.
A: That S3 bucket is now gone too, so you don't have anybody able to pick up that config and use it or whatever; it's gone out of S3. And now what it's doing, or what it was doing, was waiting for the workers to come up and make certificate signing requests, and those have all been approved now, and it's moving the routers to the control plane nodes. I mentioned this earlier: we only have one load balancer.
A: So what we needed to do was move the routers over to the control plane nodes. So what it's doing right now: before it can move the routers to the control plane nodes, the routers need to actually be up and created, so it's waiting on the cluster to create those so that it can then move them.
A: So right now in the cluster, you can see we have three control plane nodes, okd-control 0, 1, and 2, and two workers, and we have quite a few pods that are kind of up and running. You can use Ctrl-Z in here to see which ones are in a not-running state at this point, if there's anything you might need to investigate as the cluster is coming up. The cluster operators themselves are in the process of coming up.
A: This is the domain that was created just now, with all of the different DNS records that are needed for the cluster. And, let's see, what other thing... networking: this is the load balancer that was created. The way these things are set up, the firewall and the load balancer, they're based on a tag that's given to nodes. So for the control plane nodes, I gave it a tag of okdtest-control, and all nodes that match that tag will be a part of this load balancer.
A: That means that if I add another control plane node in the future, as long as I create it with the appropriate tags, it'll get automatically added into this load balancer, which is kind of nice. And the same goes for the firewall: the firewall itself matches on a tag of okdtest, and so it matches all of the nodes in the cluster right now, which is kind of nice.
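A sketch of that tag-based wiring in doctl; the tag and rule values mirror what's described above but are assumptions about the script's exact invocations:

```bash
# Load balancer whose members are whatever droplets carry the tag.
doctl compute load-balancer create \
    --name okdtest-lb \
    --region nyc3 \
    --tag-name okdtest-control \
    --forwarding-rules entry_protocol:tcp,entry_port:6443,target_protocol:tcp,target_port:6443

# Firewall applied to every droplet tagged okdtest.
doctl compute firewall create \
    --name okdtest \
    --tag-names okdtest \
    --inbound-rules "protocol:tcp,ports:6443,address:0.0.0.0/0" \
    --outbound-rules "protocol:tcp,ports:all,address:0.0.0.0/0"
```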
A: Okay. So while we wait for this, do we have any questions? Or, Neal, do you have anything to add right now?
B: Neal is frantically multitasking with two different calls at the same time, so I think this is the first time I've ever seen Neal quiet; he was hand-waving and everything on video a minute ago. I'm not seeing any questions right now, which is great, actually. But are you back?
C: I am, I am kind of back now, yes. So, I mean, you did a good job, Dusty; you actually explained basically everything. So the one thing I wanted to ask was: how flexible is this deployment strategy? Like, is it really easy, are there knobs to turn to make it bigger or smaller? What kind of knobs do you have with this deployment strategy?
A: Yeah, so what I have right now is this config file, which essentially has a bunch of key-value pairs in it. I tried to make it somewhat flexible without making it too complicated, so, like, the easiest way to be flexible is to adjust the number of workers and the number of control plane nodes, just with this. So, like, if you just wanted, you know, a bare minimum cluster with just workers in it, or, sorry, with just control plane nodes in it...
A: ...the control plane nodes would be both, you know, master and worker role. But in this case, I decided to bring up three control plane nodes and two worker nodes. You can easily change that if you want to, just by changing, you know, these variables right here. You know, a different region is something that you can do. One thing that's kind of inflexible right now...
A: ...is you can't vary the droplet size for your workers versus your control plane nodes; it just uses a uniform instance type for all of them. And the other thing I think I kind of mentioned or touched on was the load balancer, the not-ideal configuration that I have there, which, yeah, that one's not great; it might be something I need to switch up in the future. But, I mean...
A: I don't know, I don't think it's, like, super flexible, but I don't know how flexible you need something like this to be. And the other thing is, it's also written in bash, and I don't want to go too far there with making things super flexible. So it's not ugly bash, but it's still...
C: So how much does this whole thing... like, how much is this going to cost you right now, with essentially six nodes?
A: Yeah, so the cost is not quite what I would like it to be. I've got some things that you can do to bring the instance size down that I haven't quite published; I was going to make that a follow-up blog post. But let's see, what do we have?
A: We've got the 16 gigabyte size instance... and if we look at the one that I'm using... it's probably not... So for this one, I'm using the 16 gigabyte instance, because that's what the OpenShift documentation recommends. For my personal cluster, I'm using the eight gigabyte instances, which are half the price. But this particular cluster, with, you know, five nodes at 80 bucks a piece, is, yeah, going to cost you some money every month, and it's probably not going to be something you're just doing for fun.
A: At least it's better than if you were doing this on AWS, a little bit. Yeah, and, like I said, I've got some things that I've done in CVO overrides, which basically cut some of the more fancy features out that aren't really... you know, if I'm just running my blog and a couple other things, I don't need that, right? And honestly, if I'm just running my blog and a couple of other small things, I don't really need OKD, but I use it as a... yeah, I use it as a tool to learn. I want to learn this stuff, and I want to, you know, be up to date on what the state of the art in the Kubernetes landscape is, and I think this is a really good way to do it. The nodes that I'm using for my personal cluster are, like, these eight gigabyte nodes, which are half of that, and I don't run any worker nodes; I just run three control plane nodes.
A: Yeah, it's better, but yeah. So, let's see, it looks like... let's see, let's see, all right. So what is the magic button? There's also... commands just directly in your terminal, so I find it a lot easier to use, especially when you're just learning Kubernetes or OpenShift. Let's see, ingress... so the ingress is not up yet. Let's see if...
A: And no, no, we shouldn't be worried, although, you know, I actually don't know how long it took earlier when I brought up the node; it just takes a while. Yeah, I mean, it's just part of the cluster, right, and it takes a while for an OpenShift install to complete.
A: Unfortunately. Let's see, what do I want to do right now? Let's go look at the pods and kind of see where we are. So it says all of them are in a good state right now, and let's see which ones were brought up recently. So, yeah, I think it's just progressing through bringing up the cluster.
B: Aaron is asking: could the cluster just be built as three nodes with schedulable masters?
A: Yes. So if you just want three nodes, you know, just three control plane nodes, basically all you do is come into your config file and replace the number of workers; just make it zero. In the documentation, which is not really documentation, it's just a comment, I mention the minimum number of workers is zero, and, you know, according to the documentation, the minimum number of control plane nodes is three, although I think Christian showed earlier that you can run, like, an all-in-one node; that's just not something that this script necessarily handles. And basically, if you have no workers, then the control plane nodes will be marked as schedulable; I think that's just something that the OpenShift installer does by default.
A: I know Charro mentioned this earlier, and I was going to ask him when he mentioned it. But if I look in the resources directory at the install config, I substitute in whatever values are set in that config file, and so if that value right here for the number of workers is zero, the OpenShift installer will automatically say...
A: ...okay, I have zero workers being configured, so I'm going to mark the control plane nodes as schedulable. And, you know, in this output, when we're looking at the nodes...
A: ...these three nodes up here would have two roles: they'd be master and worker. So, yeah, if you want to change that to just run three, all you have to do is update the config file to have zero worker nodes, and that should do it; that's pretty much all you have to do.
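For reference, the relevant fragment of an install-config looks roughly like this; with compute replicas at 0, the installer marks the control plane schedulable (this is an illustrative fragment, not the exact template from the repo):

```bash
cat <<'EOF'
apiVersion: v1
baseDomain: dustymabe.com
metadata:
  name: okdtest
compute:
- name: worker
  replicas: 0        # zero workers => schedulable control plane
controlPlane:
  name: master
  replicas: 3        # documented minimum
EOF
```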
A: And that's actually how I started with this script: I didn't start with worker nodes, I just started with control plane nodes, and that's part of the reason why there's still just one load balancer; it's just much easier to have one, with how I set everything up with tags and whatnot. Okay, so the ingress controller was created, so we can go look at that now.
A: So that exists, and we actually updated it so that it would run on the master... or, on the control plane nodes themselves.
A: You don't typically schedule workloads on them, so we had to modify the ingress controller to ignore that, and then also to make it match on the control plane nodes so that it would get scheduled there. So we patched that. And then the other thing we did was make the router run on every control plane node, because the way the load balancer is set up, it routes traffic to all of them.
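A sketch of what such a patch can look like with oc; the IngressController nodePlacement API is real, but the exact patch the script applies is an assumption:

```bash
# Run the default router on the control plane: select the master role,
# tolerate the master taint, and run one replica per control plane node.
oc patch ingresscontroller/default -n openshift-ingress-operator \
  --type=merge -p '{
    "spec": {
      "replicas": 3,
      "nodePlacement": {
        "nodeSelector": {
          "matchLabels": {"node-role.kubernetes.io/master": ""}
        },
        "tolerations": [{
          "key": "node-role.kubernetes.io/master",
          "operator": "Exists",
          "effect": "NoSchedule"
        }]
      }
    }
  }'
```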
A: We have three pods in the openshift-ingress namespace that are the routers, and they're running on control plane zero, one, and two, so that's all of them. And let's see where we are now. So now it's basically waiting for the cluster to come up and then finish the install, and it shouldn't be that much longer, just because it's...
A: There... yeah, yeah, I actually... I've been wanting to do it for a while, I just... yeah. Last time, I was in that setup and I was doing some Rust packaging; I was planning on just running it against k9s, and then I think my box got shut down. It's kind of funny: I did a lot of research recently into why my box shut down, and in the logs it says "power button key was pressed", and I was searching Google, like, why would my box shut down and tell me the power button key was pressed when I know I didn't press it? And then somebody on Stack Exchange was like: are you sure you didn't press the button? And it turns out my wife had let my kid, my one-and-a-half-year-old, come up here and get around my computer and stuff, and she was like, oh yeah, he pressed the button. So, well, there you go.
A: ...computer is not a great thing to be working on in 2020, I know. Yeah, I've been meaning to get a new desktop, but it's something I've put off for a little bit. But, I mean, honestly, the performance of this thing is pretty good. Well, I mean, you know, Linux does a good job of harnessing those resources anyway. That's true!
A: So, do we have any questions in general about kind of how all this is set up? Does anybody want me to poke around in here and show, you know, what resources are created or whatnot?
B: So I'm wondering if you... you haven't finished deploying yet, correct? Oh...
B: That's what I thought. So, no, no, there aren't any big questions, and I think someone put the link to your k9s blog post in the chat; I think a lot of people are very interested in that. So that'll be cool, if you can get more folks using that.
A: Yep, yeah, so that's in the chat, and if you don't like typing links, you can just type dustymabe.com and it's like the third one down, so it happens to be there.
B: Well played. So the...
B: So maybe, while we're waiting, talk a little bit about Fedora CoreOS and where that's going and what that community is. I know you did a shameless plug for it earlier, but maybe we've got a few minutes here while we wait for this to initialize.
A: Yeah, okay, give me one second; I'll see if I can find a good presentation that I might be able to sponge off of here.
B: And while we're doing that, I just wanted to say that, yeah, we on the OKD side of the house are incredibly grateful to the Fedora and the Fedora CoreOS community, as well as the Atomic folks that preceded it, for all the efforts that went into making Fedora CoreOS what it is today. We really have been doing some real tight coordination in our releases and their releases, and we look forward to continuing to collaborate with the Fedora community.
B: I think this is an exciting new chapter for cloud and Fedora and OpenShift, so thanks for everything you all are doing.
A: Yeah, absolutely. So, I found... I'm going to reuse a presentation that I gave maybe a week ago or so; I've been making the rounds with this presentation, so if you've seen it before, I apologize, but I'll just briefly go into what Fedora CoreOS is, and I might blow over a few of these slides, but that's okay. So Fedora CoreOS itself is an emerging Fedora edition, and it kind of came from two different communities that we put together.
A: One of them is CoreOS Inc.'s Container Linux community, and the other is Project Atomic's Atomic Host; Project Atomic was primarily backed by a lot of, you know, different people in the Red Hat ecosystem. Fedora CoreOS itself incorporates the Container Linux philosophy, provisioning stack, and cloud-native expertise, and it also incorporates Atomic Host's Fedora foundation, the update stack from Atomic Host, and the SELinux-enhanced security from Atomic Host. Some of the features of Fedora CoreOS are automatic updates...
A: So, in order to not break people's systems, we try to catch issues in several ways. One of them is having extensive tests in our automated CI pipelines, and then also having several update streams to preview what's coming, so users run various streams to help find issues before they land in stable. And then we also have managed upgrade rollouts over several days, and what that allows us to do is find issues with updates and stop an upgrade rollout.
A: So, you know, if the first 10% of users that got an upgrade had issues with it, we will stop the rollout, so the other 90% basically don't ever get affected by it. And when things go wrong, people can always roll back to the previous version; hopefully, you know, that still works for them. But, you know, for OKD itself...
A: ...the updates are kind of managed by the cluster, so some of this doesn't quite apply as much there. I mean, it does, but in a different way. So, as far as different update streams, OKD doesn't necessarily take advantage of that, although in the testing infrastructure we do, so we try to catch things in OKD before they hit users as well. And in Fedora CoreOS, the update streams that are offered are next, testing, and stable.
A: Next is kind of experimental features and Fedora major rebases, so our next stream should be moving over to Fedora 33, you know, sometime soonish. Testing is a preview of what's coming to stable; usually that's just a point-in-time snapshot, and that will end up in the stable stream in a few weeks' time.
A: Assuming that people don't find issues, stable is just the most reliable stream that we offer, which is the promotion of the testing stream after some bake time. The goals of having these multiple streams are to publish new releases into the update streams every two weeks, and then to find issues in the next/testing streams before they hit stable, so our users happily leave automatic updates on and don't disable them. For Fedora CoreOS, we have automated provisioning: Fedora CoreOS uses Ignition, which is something that Container Linux also used, to automate provisioning.
A: All of the information for bootstrap is in the config, and then the node comes up, joins the cluster, and then the cluster itself kind of manages it, so it's kind of a hybrid approach there. And for these nodes, whether you're starting in the cloud or on bare metal, you use the same starting point: you use an Ignition config either way, which is kind of nice.
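To give a flavor of that single starting point, here's a minimal FCCT config and its transpilation to Ignition (the SSH key is obviously a placeholder):

```bash
# Minimal FCCT config: authorize an SSH key for the default "core" user.
cat <<'EOF' > minimal.fcc
variant: fcos
version: 1.1.0
passwd:
  users:
    - name: core
      ssh_authorized_keys:
        - ssh-ed25519 AAAA...placeholder... user@example.com
EOF

# Transpile to the Ignition JSON the machine consumes on first boot.
fcct --output minimal.ign < minimal.fcc
```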
A: You know, having the host basically be host software plus a container runtime, and doing that well, with any applications running in containers, makes it easier to upgrade more reliably. But in general, Fedora CoreOS is ready for cluster deployment, so you can spin up 100 nodes, have them join a cluster, and then spin down the nodes when they're no longer needed.
A: If we have enough time, I'll actually demonstrate adding a new worker node to this cluster here in just a minute. And then Fedora CoreOS itself is offered on a lot of different cloud/virtualization platforms, so we have Alibaba, AWS, Azure, DigitalOcean, Exoscale, GCP, OpenStack, Vultr, VMware, QEMU/KVM, and we're trying to add more all the time.
A: OS versioning and security is another feature that we like to mention. So, for example, if you run rpm-ostree status on a machine, you will get a specific identifier, so that you can take this particular version or hash and share it with somebody and say: hey, I'm running Fedora CoreOS...
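The commands being referenced (both real rpm-ostree subcommands):

```bash
# Show the exact deployed OS version/commit, plus the previous
# deployment kept on disk.
rpm-ostree status

# Roll back to that previous deployment if an update misbehaves.
sudo rpm-ostree rollback --reboot
```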
A: ...you know, this version, and I'm seeing this problem. And that single statement tells me, as a developer on the platform, almost everything I need to know: it tells me exactly what version of systemd you're running, exactly which version of the kernel you're running, and all that, so that's very valuable, in my opinion. It also uses read-only filesystem mounts, so theoretically anything that you have delivered with the OS...
A: ...the software hasn't been changed. So, for example, if somebody accidentally does an rm -rf while they happen to be on the system playing around, it'll prevent issues there, and then also unsophisticated attacks: if somebody happened to get onto the system and try to run a script that, you know, was kind of a dumb script or whatever, it wouldn't allow them to modify the existing software on the system.
A: But, you know, for more sophisticated attacks, we also have SELinux enforcing by default, so hopefully, if one of your containers does get compromised, it doesn't gain any further access to the system; it can just modify that single application. And as far as what's next: we want to add more cloud platforms.
A: We want to do multi-arch support for Fedora CoreOS, so aarch64 is the first one we're kind of chewing off, and then hopefully we can add support for PowerPC and s390x.
A: We want to make FCC configs, the configs that are used to generate Ignition, a little more human-friendly, making some common things that people do easier; and then also host extensions, for software that is just extremely hard to containerize, or maybe, like, a small system utility. We want to enable people to be able to package-layer that stuff and not have issues with their upgrades. Right now...
A: ...if you package-layer stuff, your upgrades can pretty easily get into a situation where versions of things don't work well together and the upgrade won't actually go through. Which, you know, is a good thing, because the upgrade catches it and stops, but it's also a bad thing, because it means that you're no longer automatically upgrading and you have a system that might be in a state that you don't understand. Improved documentation, tighter integration with OKD, blah blah blah. So, yeah, that's kind of a short spiel on Fedora CoreOS.
A: And let's see... oh, oh, I see, all right. So let me explain this. The install actually finished, so that's good. This error down here is because, for some reason, I guess I pressed the up arrow key and Enter at some point.
C: With the output, it says that the time elapsed for actually doing the installation was eight minutes and 25 seconds. I feel like that's not even close to how long that was.
A: No. So that is... I think that was from the time from this point, right.
A: Right, yeah. So, like, yeah, we waited a long time for the ingress controller to be created, and then it popped back into openshift-install waiting for it to initialize, and then the next state was waiting for 10 minutes for the route to be created, and then the install was complete, and yeah, it took eight minutes for that. So, yeah, these times are a little misleading. Maybe I should wrap the entire script in some sort of time call that basically says how long the entire thing took.
A: Right, okay. So real quick, I'll copy that, and I know I'm running out of time a little bit, so I just don't want to...
C: It's fine, don't worry about it; like, we started what, 15 minutes late anyway, so who cares.
A: So here we are in the cluster, and, you know, obviously I think Charro gave quite an overview of everything that you can do in here. Real quick, I want to...
A: ...but this script itself will more or less run a doctl compute droplet create, give it a name and the appropriate tags, and it will pretty much just join itself to the cluster. So let me run okd-worker...
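A hedged sketch of what that single join command could look like; size, region, tags, and file names are placeholders consistent with the earlier config sketch:

```bash
# Boot a new worker from the Fedora CoreOS custom image with the
# cluster's tags; Ignition configures it on first boot and the kubelet
# then asks to join the cluster.
doctl compute droplet create okd-worker-2 \
    --region nyc3 \
    --size s-8vcpu-16gb \
    --image "$(doctl compute image list-user --format ID,Name --no-header \
                | awk '/fedora-coreos/ {print $1}')" \
    --tag-names okdtest,okdtest-worker \
    --ssh-keys "${SSH_KEY_FINGERPRINT}" \
    --user-data-file generated-files/okd-worker.ign
```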
A: Did I use... what... okay, dash two. So basically what's happening now is a new droplet will get created, this one, and it will join itself to the cluster, which is kind of nice. The way that I set things up with tags means that, you know, if an instance has a certain tag... which, I guess that's a bad example, because that's just the control plane nodes...
A: If the instances have a certain tag, then they'll automatically get added to the two things that apply to them. So okd-worker-2 is automatically a part of this firewall now, because it has that tag in it. So that's kind of nice; it makes joining nodes to the cluster a lot easier. You know, you really only have to run...
A: Oh, oh yeah, sorry, I ran that with bash set -x; that's why it's so confusing. But, yeah, you really only have to run the doctl compute droplet create command in order to join a node, so that's kind of nice. And then, oh, actually, before that node can join... well, once it actually comes up, we'll have a certificate request that comes in that we'll need to approve. I plan to automate that as well; I just haven't done that just yet, so...
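The manual approval step he's describing uses the standard oc pattern for pending node CSRs (this filters to requests with no status yet, i.e. still pending; each joining node typically submits a client CSR and then a serving CSR):

```bash
# Approve every certificate signing request that is still pending.
oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' \
    | xargs --no-run-if-empty oc adm certificate approve
```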
C: Dusty, a question showed up in the chat that I think is important enough for you to answer in the recording: is it possible to autoscale OKD in DigitalOcean, and if not, when will it be?
A: So I don't think it's possible. It's definitely not possible as part of what I'm doing here, but part of the reason that I went through and automated this is because it's just not part of, like, openshift-install, right? openshift-install itself knows how to talk to various clouds, so it knows how to talk to AWS...
E: Mike McCune actually did a demo of the autoscaler for us a few weeks ago in one of our working group meetings. I don't know if it's fully functional in 4.5; he might have been doing it off of 4.6, but I have actually seen it live.
A: Nice, yeah. So, for autoscaling itself, I haven't personally used that, but I've definitely done the thing where I've got a cluster up in GCP and I just go and edit the number of replicas in a MachineSet, and it spins me up a node, right; it just does that for you, which is really darn cool. But, yeah, I think the moral of the story, or the answer to the question, is no.
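What he's describing on a machine-api-enabled cloud is just scaling a MachineSet (the MachineSet name below is a placeholder):

```bash
# See the MachineSets the machine-api operator manages.
oc get machinesets -n openshift-machine-api

# Scale one; the operator creates or deletes cloud instances to match.
oc scale machineset my-cluster-worker-a --replicas=3 -n openshift-machine-api
```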
A: But if DigitalOcean was a supported platform in OpenShift... you know, maybe if it was just a supported platform in OKD, not necessarily OCP the product, that would pretty much obsolete everything I've done with this script, more or less. I was just trying to scratch my own itch with this, and it started as a short bash script and ended up as a bigger bash script.
A: If that didn't exist, this is the bare metal workflow, automated, because DigitalOcean has an API, right. So that's it. Okay, so I've got a pending CSR right now, so I'm going to go approve that... wait... yeah.
A: Yeah, these haven't come up yet, but eventually it'll have some pods on it and it'll be able to schedule stuff. And let's see where... yeah, okay. And the other thing, too, is I installed an older version, but, you know, as soon as I log into the console, it tells me a new cluster version is available; if I get the cluster version, it tells me what I'm at, and via oc adm upgrade...
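The two commands being referenced:

```bash
# Show the version the cluster is currently running.
oc get clusterversion

# Show available update targets (and optionally kick one off).
oc adm upgrade
```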
A: ...it tells me what is available that I can upgrade to. But I won't do that, because that takes a long time, and I don't have time. But yeah, so that was the demo. Do we have any other questions?
B: I think that kind of covered all the questions that I can see coming in from other places too. So I think you've done an awesome job covering off everything, and we look forward to continuing to collaborate with you guys, and to seeing more DigitalOcean folks coming on board and testing this out and giving us feedback on your scripts and everything else. So I'm going to queue up our next speaker: Ivanko is going to demo deploying and configuring NVIDIA GPUs for OKD4 on AWS.