From YouTube: OKD WG Single Node Cluster Unedited
Description
OKD WG Single Node Cluster Unedited
B: Okay, amazing. And I just got the backstage thing, so that must have been a limitation as well, I think.
C: Yeah, all right. So there are ten people watching you now, you are recording, and Mike McKeown is in the chat room. I'm just going to bounce over to the Jamie session; all the other ones are going. As I said, it says the time left at the top is six hours. You don't have to go six hours, but you are recording, and they say we get the recordings back within 24 hours.
C: So if you want to let people know (and I will bounce in and out, so see you in a bit), it's all yours; start when you're ready. You've got 34 people in your session. No, not 34.
A: All right, well, we'll go ahead and kick this off. What you're actually seeing on your screen right now is my much larger OpenShift cluster running, so I'm going to get things organized here to start.
A: I have several tutorials that I've worked up and a few more that are coming. This one right here, that I have pinned to the top right, is an installation for a single node cluster. Now, what I'm going to show you here is going to be running on a NUC, an Intel NUC8i3BEK. It's got 32 gigabytes of RAM in it.
A: It's got a one-terabyte M.2 SSD drive in it, and it's a dual core, which means four vCPUs.
A: So this little guy right here is not a high-powered piece of equipment by any means, and they're actually pretty affordable. This one, as computing goes, is getting a little long in the tooth. The newer i3 FNK model will actually let you put 64 gigabytes of RAM in it and has faster CPUs, and if you really want to spend a little bit more, there's the i7. I should have brought one of them down here so I could hold it up to the camera.
A: So when you get to this point, you can either navigate into the documentation through the README, or I do have it up on GitHub Pages.
A: So this right here, and the link is hanging off of Michael Human's site where he's hosting the documentation for us, but I will also post it in our chat right here, right now. Oh, there we go.
A: Yeah, there you go, Bruce is holding one up for you. If you can see it on the image there, they are tiny little things. I'm ashamed to admit it, and it's probably embarrassing to my wife, but I actually travel with four of those when we go on vacation, because I like to take my OpenShift cluster with me; it keeps me happy. So without further ado, let's deploy a single node OpenShift cluster.
A: I'm going to move this link back over here out of the way, and I'm going to minimize this browser because we won't need it for a while; we are going to be working from the command line for a bit. Okay. So if you look at the GitHub Pages, the first section is installing and setting up your host. Now, in my lab I'm actually running a PXE environment with iPXE that I boot and install my machines from, so I'm skipping that part for you.
A: I have a brand new installation; this particular one is CentOS 8, but this tutorial should work almost out of the box, without modification, on CentOS Stream or if you're running Fedora.
A: It does need to be RPM-based if you're following this tutorial, because I make some assumptions about the commands that are available for managing the machine.
So the first thing I'm going to do is install my virt environment, and you'll see this is not going to do anything, because my PXE boot already knew that this was going to be a KVM host, so it did that for me. The next thing I'm going to do is install all of the tooling that we're going to need for this lab.
A: I'm installing rsync and the guestfs tools, because we are running libvirt and KVM here. Okay.
B: Because Charlo can probably see the... you can't see the chat, right?
A: No, I can't. Not while I'm in this form here.
B: Okay, that's a... er, Eric says "bigger."
A: So hopefully you guys can see it now; give me a thumbs up if this is good, folks.
A
Okay,
so
we're
involving
installing
the
epel
release,
because
we're
going
to
need
some
of
the
libraries
from
there
libvert
development,
the
httpd
tools
that
are
going
to
give
us
some
of
the
things
that
might
some
of
the
tooling
that
my
scripts
use
and
finally
nginx
and
all
nginx
is
going
to
do
here-
is
host
the
ignition
configs
for
a
newly
booted
machine
to
grab
its
ignition
from.
So
I'm
going
to
go
ahead
and
let
those.
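For reference, the package install he runs here looks roughly like this; treat it as a sketch, since the tutorial has the authoritative package list:

    sudo dnf -y install epel-release
    sudo dnf -y install rsync libguestfs-tools libvirt libvirt-devel \
        virt-install qemu-kvm httpd-tools nginx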
A: And now, because I just like to control things, I'm going to tell libvirt where to store its virtual machines when I create them. So I'm deleting the default pool if it exists; you can see in this case there was no default pool. Then I'm going to create the default pool and point it at /VirtualMachines, so all of the VMs that I build with libvirt go into that location.
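A minimal sketch of that pool swap with virsh; the /VirtualMachines path is the one used here, so adjust it for your own host, and the first two commands will simply error harmlessly if no default pool exists yet:

    sudo virsh pool-destroy default
    sudo virsh pool-undefine default
    sudo mkdir -p /VirtualMachines
    sudo virsh pool-define-as default dir --target /VirtualMachines
    sudo virsh pool-autostart default
    sudo virsh pool-start default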
A: And finally, I'm going to configure the firewall. All right, so I'm enabling HTTP and HTTPS through the firewall, I'm enabling DNS because we are running bind, and the rest of the firewall I will leave in place.
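With firewalld, those rules look roughly like this (a sketch; the tutorial's commands are equivalent):

    sudo firewall-cmd --permanent --add-service=http
    sudo firewall-cmd --permanent --add-service=https
    sudo firewall-cmd --permanent --add-service=dns
    sudo firewall-cmd --reload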
The next step: we're going to need an SSH key so we can get to our OpenShift cluster, if we need to directly connect to any of the nodes.
A: So I'm going to create myself an ed25519 key pair.
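Generating that key pair is a one-liner; the file name below is just an example:

    ssh-keygen -t ed25519 -N "" -f ~/.ssh/id_ed25519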
The next part of the tutorial here I'm not going to actually execute; I'll put it on the screen for just a minute so that you guys can see it. This is creating the network bridge, and what I've included for you here is the command-line way, using nmcli, to create your network bridge. Now, there are graphical ways to do this as well; you can even do it during the install.
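If you do need to build the bridge by hand, the nmcli steps look roughly like this; the interface name (eno1) and the addresses are placeholders for your own host, and the tutorial has the authoritative commands:

    nmcli connection add type bridge ifname br0 con-name br0
    nmcli connection add type ethernet ifname eno1 master br0 con-name br0-slave
    nmcli connection modify br0 ipv4.method manual \
        ipv4.addresses 10.11.11.206/24 ipv4.gateway 10.11.11.1 ipv4.dns 10.11.11.1
    nmcli connection up br0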
A: If you want. The network bridge was actually already created for this environment.
A: You can see here, this is my bridge, br0. It was created by the PXE boot, so the networking was already configured for me when this host was installed just a little bit ago, but I did include in the tutorial everything you need to set up bridge networking on your own.
A: So this will take just a little bit, and while this is doing its thing, getting the operating system all fresh and ready to go, I'll bring this back over here for you guys.
A: What we're going to do is clone the git repository where this tutorial resides. I've already provided several utility scripts that are going to make the work you're going to do here much smoother. I'll take you through what the scripts do, in case there's anything you want to customize, or you want to take apart and reuse for something else.
A: There are several environment variables that you're going to set, which I've provided out of the box in the set-snc-env script, but you will probably want to modify them for your own environment. The 10.11.11.0 network is what I'm currently using in my home lab, and we'll set a domain that will be our DNS domain for this lab.
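The variable names below are illustrative only (the real ones live in the repo's set-env script), but this is the shape of what gets exported:

    # hypothetical names; use the ones defined in the tutorial's script
    export SNC_DOMAIN=snc.test          # DNS domain for the lab
    export SNC_HOST=okd4-snc            # host running libvirt and the single node cluster
    export SNC_NAMESERVER=10.11.11.206  # the bind server we're about to configure
    export LAB_NETWORK=10.11.11.0       # home lab network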
A: This SNC host is going to be this server that is rebooting right now. The name server is actually going to be the same one, but by using variables I've allowed you to use different hosts, so you can create your own name server host if you want to, you can have your own bastion server, and you can run the single node cluster on its own machine.
A: There's a script for setting up the environment that will just set all of the environment variables for us.
A
That's
what
I
was
just
showing
you
over
on
the
other
screen
and,
as
you
can
see,
it
does
have
entries
that
are
biased
toward
this
particular
setup
and
in
fact
I
believe
their
bias
toward
this
particular
machine.
In
my
lab.
A: I'm on 10.11.11.206, so this script conveniently was already set up to run on this particular machine. The last couple of things: there is a script to set up the DNS for you, and there is a script to destroy your single node cluster. The undeploy-snc script will completely remove your single node cluster, and when you rerun the setup-dns script, that will put your environment back to a point where you can redeploy the cluster. So you can also clean this whole thing up and then run it again.
A
To
modify
this
script
here,
to
ensure
that
everything
is
set
up
for
your
particular
network,
I'm
not
going
to
change
anything
here.
I
believe,
because
I'm
going
to
use
the
domain
snc
test
206
is
this
host
that
we're
currently
on.
A: I'll put this in the chat for you guys; there you go. So if you go there, you will see the current state of the given OKD releases. The very top of the list is the stable channel, and those are in quay.io under openshift/okd. So if I'm going to build one of these, which is what we're going to do today, I would build from this channel, but this will also build nightlies.
A
But
if
I
want
to
build
from
a
4.7
nightly,
which
I
have
done
actually
recently
as
I
was
looking
to
see
tracking
particular
bugs
as
we
get
them
wiped
out,
then
you
can
come
down
here
to
the
nightlys
or
you
can
even
under
the
stable,
build
older
versions
of
4.6
or
4.5.
Okay.
So
today
we're
going
to
build
a
stable
release
from
from
4.7
and
it
will
come
from
quay.io.
A
If
I
did
want
to
build
a
nightly,
then
I
would
change.
Excuse
me,
this
okd
registry
environment
variable
to
registry.ci.openshift.org
origin,
slash,
release,
okay,
so
that's
really
the
only
difference
there
and-
and
I'm
realizing-
that's
probably
really
small
font
for
you
guys,
so
there
I'll
crank
that
up
a
little
bit
all
right.
So
that's
where
you
go
to
find
the
the
current
releases
and
what
their
disposition
is.
A: If I log out and log back in, you can see that now, because it's in my .bashrc and bash is the shell I'm running, my environment is set up and ready to go. All right, the DNS configuration: I'm going to run it real quick and then I'm going to show you what it did.
A: The first thing I'm going to do is enable named so that it can start. Actually, I'm going to run it first and then I'll show you the magic behind the covers. So I'm going to run my setup-dns script, and here's what it did, leveraging those environment variables.
A
Give
you
a
quick
view
of
the
named.com,
it's
pretty
much
a
generic
named.conf
that
it
created,
but
you
can
see
it.
It's
populated
it
with
the
the
snc.test
zone,
it
it
knows
the
the
reverse
lookup
for
the
pointers
for
this
environment
and
there's
a
file
that
contains
my
pointer
records.
A
I
did
create
an
acl
that
includes
my
anything.
That's
on
this
24
network
right
under
my
10.11.11.0
network
and
my
local
host.
Now,
if
you're
also
running
a
podman
or
docker
on
this,
you
will
need
to
add
the
ip
address
for
that
network
as
well,
so
that
any
of
the
containers
that
you
run
can
do
name
resolution
that
has
bit
me
before
in
the
past,
when
I
forgot
to
do
that.
A
Listening
on
port
53
on
both
loopback
and
my
primary
nick
and
the
rest
of
this
is
is
pretty
much
plain:
vanilla,
setting
up
name
d.
A
The
pointer
records
for
my
snc
host,
which
is
this
guy,
we're
sitting
on
right
now
and
then
the
bootstrap
and
the
master
node.
A
And
it
created
the
a
records
for
all
of
the
hosts
as
well.
Now
I
am
going
to
spend
just
a
couple
of
minutes
talking
about
a
couple
of
things
here:
one
I'm
not
setting
up
a
load
balancer,
because
this
is
a
single
node
cluster
right.
So
I
don't
need
to
balance
across
different
ingress
nodes.
I
don't
have
infrastructure
nodes
in
my
other
tutorial,
you
would
set
up
a
load
balancer
to
manage
your
traffic,
but
here
it
was
complexity.
We
didn't
need.
A
However,
during
the
bootstrap
you
have
to
be
able
to
talk
to
either
the
bootstrap
node
or
then
the
master
node,
and
they
have
to
both
be
able
to
resolve.
So
what
I'm
doing
here
is
a
little
dns
trick
by
having
the
same
address
with
the
two
different
ip
addresses
in
there.
The
dns
round
robin
will
give
me
a
poor
man's
load
balancer.
A
These
two
records
that
have
the
remove
after
bootstrap
annotation
on
them
will
be
pulled
out
of
this
when
we
destroy
the
bootstrap
node,
leaving
just
the
master
node
in
place
and
the
the
master
node
has
multiple
records.
We
have
the
actual
host
for
the
master
node.
We
have
the
etcd
node
and
then
we
have
a
wild
card
for
our
cluster
and
the
api
endpoint
for
our
cluster
and
the
api
internal
for
the
cluster.
So
all
of
those
you
see
they
all
resolve
to
the
same
address
they're
all
the
master
node.
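In zone-file terms, the trick he's describing looks something like the records below; the host names and IPs are illustrative, and the "remove after bootstrap" records are the first two:

    ; illustrative only - the setup script writes the real zone file
    api.okd4-snc.snc.test.      IN  A  10.11.11.200   ; bootstrap - remove after bootstrap
    api-int.okd4-snc.snc.test.  IN  A  10.11.11.200   ; bootstrap - remove after bootstrap
    api.okd4-snc.snc.test.      IN  A  10.11.11.201   ; master
    api-int.okd4-snc.snc.test.  IN  A  10.11.11.201   ; master
    *.apps.okd4-snc.snc.test.   IN  A  10.11.11.201   ; wildcard for cluster routes
    etcd-0.okd4-snc.snc.test.   IN  A  10.11.11.201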
A: All right, so it's pretty simple; it's sed magic. In the repository I've got stub files for the named.conf, the zone files, and the named.conf.local, and in those stub files there are placeholder records that get substituted with some sed magic when the files get put in place. The last thing it does is refresh named so that you get a clean start.
A
One
word
of
caution:
don't
do
this
on
a
machine
that
you've
already
configured
name
d,
because
this
will
destroy
your
name
deconfiguration,
okay,
so
this
is.
This
is
set
up
to
to
be
a
clean,
install
you're,
not
using
this
particular
machine
for
anything
else
except
the
single
node
cluster.
Then
this
is
safe.
A
Let's
make
we'll
do
a
reverse
dns
lookup
of
this
guy
here
and
oh
it
didn't
oh,
yes,
I've
got
to
do
one
other
thing
that
is
unique
to
my
environment.
A: All right, I've got this conveniently set aside. What I'm doing is running an nmcli command against my bridge connection and setting its name server to the environment variable that is set. This is why I source that set-env script in my .bashrc: then I don't have to remember numbers and things, and I know the variables are set from a consistent place. So it's going to put the name server and the domain in there.
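That step amounts to something like this, with the bridge name and the variables as placeholders:

    nmcli connection modify br0 ipv4.dns "${SNC_NAMESERVER}" ipv4.dns-search "${SNC_DOMAIN}"
    nmcli connection up br0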
A
Now
one
other
thing,
and-
and
this
is
for
macbook
users,
if
you're
doing
this
and
there's
probably
a
similar
way,
you
guys
can
do
this
on
on
a
windows
system
on
a
linux
system.
You,
you
can
point
your
box
to
this
dns
that
you
just
set
up.
A
There's
a
directory
called
etsy
resolver
that
you
can
put
other
domains
in
you
can
see.
I've
got
an
entry
for
my
home
lab
right
now
that
any
any
name
address
that
I
try
to
look
up
that
ends
in
clg.lab.
A
It's
going
to
go
to
this
name
server,
10.11.11.10,
to
find
anything
that
is
snc.test.
It's
going
to
go
to
this
host
that
I'm
showing
you
in
the
other
tab.
So
this
way
I
from
my
my
workstation
I
can
resolve
the
the
cluster
that
I'm
getting
ready
to
set
up,
but
I
don't
have
to
do
any
other
fancy
tricks
with
my
local
dns.
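On macOS that per-domain resolver is just a file named after the domain; the nameserver value is whatever your SNC host's address is:

    # /etc/resolver/snc.test
    nameserver 10.11.11.206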
A
That
also
works,
if
you're
running
code
ready
containers
on
your
local
machine
as
well
so
helpful
tip
there
all
right.
The
next
thing
is
to
prepare
to
install
our
cluster
and
the
utility
scripts
that
I've
put
in
place
are
hopefully
going
to
make
this
pretty
straightforward
for
you,
alright,
so
first
thing
we're
going
to
do
is
get
this
to
go
to
the
top.
There
we
go,
so
you
guys
can
see
it.
I'm
gonna
set
that
okd
release
variable
that
I
was
showing
you
guys
previously.
A: And then we have to prepare our installation file, the install-config manifest, and I've got a stub of that for you here as well, so you can see what's going to happen. Your base domain is going to get populated, again with some sed magic. For the cluster network, these defaults should work for you out of the box.
A
If,
on
your
home
network,
you
have
a
10.100
or
a
172.30,
you
might
need
to
change
either
the
cider
for
the
cluster
network
or
the
service
network,
but
since
most
people
are
sitting
on
a
10.10
or
a
192.168,
these
should
work
for.
For
most
of
you,
the
compute
section,
you
want
zero
replicas
under
the
worker
section,
and
this
is
what
makes
it
a
single
node
cluster.
A
I'm
telling
it
to
do
one
master
because
we're
doing
a
bare
metal
install
you,
don't
you
put
none
under
the
platform
since
we're
not
doing
aws
or
vsphere
or
azure
right.
If
you're
doing
this
on
one
of
those
platforms,
then
you
can
take
advantage
of
their
capabilities
and
you
would
put
your
settings
here.
This
pull
secret
is
just
a
fake
pull
secret.
That
is
base64
encoded,
fubar
right
there,
and
this
is
going
to
get
replaced
with
the
ssh
key
that
we
created
earlier.
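Put together, the stub he's walking through looks roughly like this install-config.yaml; the cluster name, pull secret, and SSH key are placeholders:

    apiVersion: v1
    baseDomain: snc.test
    metadata:
      name: okd4-snc                  # cluster name - placeholder
    networking:
      clusterNetwork:
      - cidr: 10.128.0.0/14
        hostPrefix: 23
      serviceNetwork:
      - 172.30.0.0/16
    compute:
    - name: worker
      replicas: 0                     # zero workers makes this a single node cluster
    controlPlane:
      name: master
      replicas: 1                     # one master
    platform:
      none: {}                        # bare metal, no cloud provider
    pullSecret: '{"auths":{"fake":{"auth":"ZnViYXI="}}}'   # base64 of "fubar"
    sshKey: ssh-ed25519 AAAA... user@host                  # replaced with your public key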
A: So now you can see I've got that install-config in my working directory, and I'm going to run this sed command against it to put the domain in place, like that. Then I'm going to populate an environment variable with the SSH key that we created earlier, with the public key. Don't put your private key in.
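A sketch of what that substitution step does; the placeholder tokens and the variable name are illustrative, not the stub's exact contents:

    export OKD4_SNC_SSH_KEY=$(cat ~/.ssh/id_ed25519.pub)    # public key only
    sed -i "s|%%BASE_DOMAIN%%|${SNC_DOMAIN}|g"   install-config.yaml
    sed -i "s|%%SSH_KEY%%|${OKD4_SNC_SSH_KEY}|g" install-config.yaml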
A: And now I'm going to install the single node cluster. I'm going to go ahead and kick it off, and then, while the install is going, I'll pull up the code and show you guys what is actually going on under the covers, because I've abstracted away building the virtual machines, the configuration, and a whole bunch of stuff. So I'm going to show you the code that I wrote.
A: Oh, except I think I skipped an important step. Did I skip an important step? I don't have oc installed. All right, so we'll pretend that didn't just happen, and I'm going to come over here and grab the oc command. I'll show you where we get oc from.
A
Quickest
way
to
get
it
when
you've,
you
don't
have
anything
installed
on
this
machine,
yet
is
to
go
to.
A: Under the releases folder you'll see all of the recent releases, and the latest one will always be on top. Under that, you'll find downloads for the binaries.
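Grabbing a recent client from the OKD releases page looks something like this; the release tag below is only an example, so use whatever is current on the page:

    OKD_RELEASE=4.7.0-0.okd-2021-02-25-144700   # example tag; pick one from the releases page
    curl -L -o oc.tar.gz \
      https://github.com/openshift/okd/releases/download/${OKD_RELEASE}/openshift-client-linux-${OKD_RELEASE}.tar.gz
    mkdir -p ~/bin && tar -xzf oc.tar.gz -C ~/bin oc kubectl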
A
Now
it
is
at
this
stage-
and
I
say
this
in
the
in
the
tutorial
documentation-
it
is
not
critical
that
you
have
the
latest
or
the
the
version
of
oc
that
you're
going
to
be
installing
from
it
just
needs
to
be
a
recent
4.x
version
of
oc,
because
the
the
script
that
we
run
to
build
this
is
actually
going
to
retrieve
the
the
real
version
of
oc
all
right.
So
I
just
pulled
that
from
the
page.
A: Once the bootstrapping begins, it's actually going to replace the operating system with the version of Fedora CoreOS that is bundled with this version of OpenShift, so that everything is controlled and managed by the OpenShift cluster itself. That's actually one of the things I didn't mention this morning that is somewhat unique about OpenShift versus other Kubernetes distributions.
A: I'll show you here in a minute; it's actually embedded in... not the binary, but the script that I ran. So when I pop open the script I'll show you, because you can override that. Again, it doesn't have to be the specific version of Fedora CoreOS that is needed for that install of OpenShift; it just needs to be a fairly recent one.
A: All right. Now, because my script disconnected from those VMs when it kicked off the initial install, they did not actually reboot themselves; they just shut themselves down. So the next thing I need to do is boot these up, but when I do, they are going to begin the single node cluster install.
A: So I'm going to kick that off. Let's see, what time is it? We're 45 minutes in, okay. So I'm going to go ahead and kick that off and we'll let it scroll, and while it's doing its thing I'll take you guys on a tour of the script code.
A: And I'll start the master node, and I'm actually going to switch and let you see the master, because I want to point out something it's going to be doing. All right, you see it actually didn't really do anything; you see it's pending on this start job here. It is waiting to get its ignition config from the bootstrap node.
A
All
right,
you
see
the
rpm
os
tree
stuff,
that's
going
on
what
it's
doing
here
is
it's
downloading
the
correct
version
of
fedora
core
os
that
this
cluster
needs
to
run
and
it's
going
to
apply
it
and
there
it
just
finished.
So
you
saw
this
guy
kicked
me
out,
because
the
bootstrap
is
now
rebooting,
so
I'll
go
ahead
and
pin
that
there.
Let
me
separate
this
one
out
shrink
it
down
a
bit.
A
Everything
in
here
is
containers
we're
rendering
the
manifests
and
in
just
a
little
bit
this
api
is
going
to
come
up,
and
here
you
will
see
the
masternode
finally
get
a
hold
of
its
ignition
file
and
start
its
install,
see
there
class
starting
cluster
bootstrap
and
there
you
go
so.
The
masternode
now
has
its
ignition,
and
now
it
is
booting
itself
up.
It's
going
to
do
a
very
similar
thing.
It's
going
to
pull
its
core
coreos
image.
A: All right, now, if you watch logs like this, don't panic when you see errors and things flying by, like all the "failed to fetch, failed to fetch, failed to fetch" I just saw.
A: So I'm going to export my KUBECONFIG variable to point at my install directory, so that I can issue oc commands, and I'm going to see if the etcd object exists yet. Since I was talking to you guys... yeah, it does. If we had done this 40 seconds ago, you would have seen an error. I call that out in my tutorial, because you do need to wait a little bit for the etcd object to show up. Now that it's there, we're going to patch its specification.
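The two commands he's describing are roughly the following; the install directory path is a placeholder, and the override name is the one used in pre-4.8 single node setups:

    export KUBECONFIG=~/okd4-snc/okd4-install/auth/kubeconfig   # wherever your install dir is
    oc get etcd cluster    # errors until the etcd operator has created the object
    oc patch etcd cluster --type=merge -p \
      '{"spec":{"unsupportedConfigOverrides":{"useUnsupportedUnsafeNonHANonProductionUnstableEtcd": true}}}'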
A: You can see now, under the spec, there's this unsupportedConfigOverrides where the override is set to true. That is one thing that we won't have to do anymore in 4.8, because single node will be an officially supported configuration in 4.8. Right now we have to do it because it's still not a fully supported configuration, but it does work. So now we're waiting for the bootstrapping to complete.
A
While
that
is
running,
I'm
going
to.
A: All right, so what this is going to do is set up the infrastructure for you. You can see here I'm setting some variables that you can override: I'm setting the CPUs for the cluster node to four, I'm setting the memory to 16 gigabytes, and I'm setting the disk size that it creates to 200 gigabytes.
A
My
fedora
core
os
version
that
I'm
going
to
do
my
initial
boot
from
is
this
one
right
here.
That
was
that
was
your
question
previously
bruce
and
I'm
pulling
from
the
stable
stream
of
fedora
core
os.
So
you
can
you
can
experiment
with
with
different
versions
of
fedora
core
os
and
and
using
the
testing
stream
or
the
you
know
the
what's
next
stream,
if,
if
you
want
to,
this
will
also
take
some
flags
for
cpu
memory
and
disk
as
well.
A
The
actual
start
of
the
script,
so
one
thing
I'm
doing
I'm.
This
is
a
this-
is
a
cheesy
little
random
generator
here.
So
I
apologize
to
everybody
up
front,
but
I'm
generating
mac
addresses
for
my
bootstrap
and
my
master
node,
so
I'm
generating
those
mac
addresses
and
and
the
reason
I'm
doing
that
it
really
has
nothing
to
do
with
openshift.
A
So
so
then
I
use
a.
I
use
a
dig
to
get
the
ip
addresses
that
are
already
in
dns
for
the
bootstrap
and
the
master
node,
and
I
populate
variables
with
those
here's
where
I'm
pulling
the
okd
tooling,
for
the
open
shift
installer
and
for
the
oc
command
and
I'm
putting
those
into
my
home
bin
directory
and
then
getting
rid
of
that
that
temporary
folder
that
I
created
here,
I'm
grabbing
a
tool
called
fcct.
A
This
is
a
fedora
core
os
tool
that
allows
you
to
manipulate
ignition
files
for
machines
and
I'll
show
you
the
configuration
further
up
in
this
script,
for
what
I'm,
when
I'm
using
fcct,
for
that
enables
you,
especially
if
you're,
doing
bare
metal
cluster
work
that
enables
you
to
configure
what
the
operating
system
is
going
to
look
like.
Let's
say,
you're
deploying
some
nodes
and
you
want
you
know
ssd
devices
that
are
in
them
to
be
preserved
for
rook
ceph.
A
This
is
a
mechanism
that
you
could
do
that,
and
you
want
to
specify
what
your
file
system
configurations
or
other
resource
utilizations
or
gpus,
or
things
like
that.
This
is
a
good
way
that
you
can
you.
You
can
inject
things
into
the
ignition,
so
then
this
is
just
good
old-fashioned
shift
doing
its
thing,
creating
the
manifests,
I'm
creating
the
ignition
configs.
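Those two installer steps are the standard ones; the --dir value is whatever working directory the script uses:

    openshift-install create manifests --dir=okd4-install
    openshift-install create ignition-configs --dir=okd4-install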
A
Here
I'm
calling
one
of
my
functions:
config
okd
node
that
I'll
pop
up
to
the
top
of
the
script
and
show
you
guys
and
what
it's
doing
is
using
that
fcct
command
to
then
create
the
custom
ignition
files
for
these
machines
and
then
I'm
putting
those
in
the
directory
of
my
nginx
server.
A
The
next
things
here
from
this
syslinux
on
down,
I'm
actually
preparing
an
iso
for
this
machine
to
boot.
From
again
because
I
didn't
want
to
create
a
pixie
environment,
I
I'm
not
using
vsphere.
This
is
all
bare
metal,
so
I'm
creating
an
iso
for
the
the
bootstrap
node
and
for
the
master
node
that
have
the
specific
configuration
in
them
that
they
need
just
to
come
up
and
then
start
being
part
of
the
cluster.
A: And the last thing here, after creating those ISOs, is to create the virtual machines themselves, and that is done with virt-install, passing the appropriate parameters, including the particular ISO file that I just created.
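A hedged sketch of what that virt-install call might look like for the master VM; the name, sizes, and MAC are placeholders, and the script passes the real values:

    virt-install --name okd4-snc-master \
      --memory 16384 --vcpus 4 \
      --disk size=200,pool=default \
      --cdrom ./okd4-snc-master.iso \
      --network bridge=br0,mac=52:54:00:a1:b2:c3 \
      --os-variant generic --noautoconsole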
So the last thing I'm going to show you in here... let's check on our bootstrap first. Bootstrap is still running; make sure nothing's blown up over here; everything appears to be healthy. Okay, so this function, config okd node.
A: What the fcct tool works with is YAML files and an ignition config, and I'm setting this up as a merge type, so it's going to merge the following configuration into the ignition config, and that then becomes the initial ignition config for this machine to boot off of. This is why I had to know the MAC address: because I am explicitly hard-coding the name of that NIC to be nic0, so that it's always predictable, and then I'm configuring NetworkManager with the network information about that machine.
A: Its IP address, the netmask for the network that it's on, its gateway, the name server, and the domain, and finally populating its hostname.
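A minimal sketch of the kind of Fedora CoreOS Config that fcct compiles into the node's Ignition; the merge source, NIC name, addresses, and hostname here are illustrative, not the tutorial's exact values:

    # compile with: fcct --pretty --strict --files-dir . okd4-snc-master.fcc > okd4-snc-master.ign
    variant: fcos
    version: 1.1.0
    ignition:
      config:
        merge:
          - local: master.ign            # installer-generated ignition gets merged in
    storage:
      files:
        - path: /etc/NetworkManager/system-connections/nic0.nmconnection
          mode: 0600
          contents:
            inline: |
              [connection]
              id=nic0
              type=ethernet
              interface-name=nic0
              [ipv4]
              method=manual
              addresses=10.11.11.201/24
              gateway=10.11.11.1
              dns=10.11.11.206
              dns-search=snc.test
        - path: /etc/hostname
          mode: 0644
          contents:
            inline: okd4-snc-master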
A
So
it's
fairly
short,
but
it
gives
you
a
clue
into
the
power
of
ignition
configs
and
what
you
can
do
with
them.
So
I
am
prescriptively
setting
up
the
network
configuration
for
this
guy
before
he
boots
up
and
that's
the
deploy
okd
snc
script.
It
does
all
of
that
for
you
and
looks
like
the
bootstrap
is
now
done.
A: It uses virsh commands to remove the node: shut it down hard, undefine it, remove its pool, and then destroy the files under /VirtualMachines.
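In virsh terms, the teardown amounts to something like this; the VM name and paths are placeholders:

    virsh destroy okd4-snc-bootstrap          # hard power-off
    virsh undefine okd4-snc-bootstrap         # remove the VM definition
    virsh pool-destroy okd4-snc-bootstrap     # if the VM was given its own storage pool
    virsh pool-undefine okd4-snc-bootstrap
    rm -rf /VirtualMachines/okd4-snc-bootstrap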
A: "Back-off" is not good, but I don't see any of those, so everything appears to be fat and happy right now. So, a couple of things that we need to do: the ingress operator is going to be unhappy, because it expects to have two replicas, and the authentication operator is also going to be unhappy, because it is expecting three etcd nodes. So we have to tell those two that we're running in a single node configuration.
A: Oh, I probably did that while the API server was in the middle of doing its thing. That will happen occasionally, because while it's working toward completion it will stop and start a couple of things, so don't be alarmed if you temporarily lose access to your cluster.
A
If
it
doesn't
come
back,
then
we'll
be
worried,
but
it
should
come
back
here.
A: All right, let's see if this guy is... he's rebooting.
A: Nothing is standing out; when we grab the logs we'll probably be able to find something.
A
All
right,
okay!
So
what
I'm
gonna
do
is
tear
this
sucker
down
and
show
you
what
it
looks
like.
A: And then I run my deploy again. Oh, I need to make sure I'm in the screen that had the OpenShift version set, so I'll just go ahead and re-export that and get some of these extra screens out of the way here.
A: And I will say, in my larger lab environment I actually have an nginx server set up with a mirror of these image repositories, and that allows me to have much, much faster installs, because it's not having to reach all the way across to quay.io to get the container images; it already has them locally. That helps if you're experimenting, or building and tearing down a lot of clusters, or something goes wrong like it did today.
B: Michael was asking about any thoughts regarding the network bridge part, if you're doing a remote machine where you don't have keyboard access. What you've generally done is start out with a keyboard, correct?
A
Correct
well
in
in
the
tutorial,
that's
what
I
suggest
in
my
home
lab.
I
actually
have
a
pixie
set
up
and
while
here
let
me
let
me
kick
this
off
again
and
then
we'll
we'll
wander
over
and
I'll.
Show
you
how
I
do
that
in
my
home,
lab.
A
A
To
monitor
and
stall,
because
I
do
need
to
I'm
gonna,
I'm
gonna
move
these
out
of
the
way
so
that
I
can
set
that
etsy
deep
patch
quickly.
A: And you know, I will say that that weird "the cluster couldn't finish" thing...
A: Correct, you're right, I'm not using the Kubernetes OVN.
A
All
right,
let
me
show
you
what
were
we
talking
about?
Oh
yeah,
pixie.
A: All right, so in my environment... I should probably be using Ansible for some of this, but I know shell, so I write shell. In my environment, when I get a new NUC and I'm adding it to my lab, I have a utility script here, deploy kvm host, and all I have to give it is the host name that that new NUC is going to have and the MAC address off the back of that NUC.
A: Let me come down past the function. It creates a little working directory for some iPXE work, and it does a dig, just like you saw in the single node cluster one, to get the IP address that I've already pre-populated in DNS.
A: All right, okay, so bootstrapping is still running. Okay, and then I create a kickstart file for that machine, and the kickstart file has all of the configuration that I want it to have. Here's where I feed in the disk information: it's going to create some of the file systems with prescriptive sizes and then just blow the rest of them out to whatever the size of the disk is.
A: In fact, most of the NUCs up in my lab (I'm embarrassed to say I've got 16 of them) have never had a monitor plugged into them. The ones that have had a monitor plugged into them...
A
It
was
probably
so
that
I
could
update
the
firmware,
because
every
so
often
you
need
to
update
the
firmware
hey
neil
I
just
saw
so
there
is
a
hackified
way
to
do
upgrades
of
a
single
node
cluster,
but
it
doesn't
always
work
now
with
with
with
4.8,
you
will
be
able
to
up
upgrade
a
single
node
cluster
and
it
will
be
more
of
a
supported
configuration
for
updating
single
node
clusters,
but
it
still
obviously
requires
that
the
whole
cluster
goes
down
so
like
we're
with
a
you
know,
with
a
full
three
node
plus
workers
cluster,
you
can
do
upgrades
without
any
downtime
with
a
single
node
cluster.
A: And I think I've got a section in my tutorial... yeah, for setting up the PXE boot. So there's a section in here; it also talks about the little router that I'm using.
B: Yeah, Patrick also had a question about trying this on a Mac, and I told him that I wasn't courageous enough myself, but you're braver than I am.
A
Am
I
have
not
tried?
I
have
not
tried
this
on
on
my
mac.
Actually
code
ready
containers
will
will
run
on
your
mac
and
once
we
once
we're
waiting
on
a
a
bug
fix
for
the
bare
metal
installer
that
actually
the
dean
will
probably
drop
the
new
release
today
and
if
that's
working,
then
I'm
going
to
build
the
4.7
version
of
code
ready
containers
for
okd
the
version
that's
currently
out.
A
There
is
still
4.6,
but
actually
coming
up
soon
because
we're
having
to
work
on
support
for
apple
silicon,
you
will
be
able
to
build
code,
ready
containers
on
your
macbook
and
once
we
do
that,
you
can
actually
use
that
to
build
a
single
node
cluster
on
your
macbook.
A
If
you
were
so
inclined,
you'd
probably
need
one
of
the
macbooks
with
32
gig
of
ram.
I
don't
know
that
it
would
work
on
16.
once
we're
at
4.8
and
you
don't
need
the
bootstrap
node
anymore.
Then
it's
probably
more
likely
to
work.
A
All
right
things
are
creating
it's.
Okay,
there
will
be
errors.
If
you
look
at
this
before
bootstrapping
is
complete.
You
will
see
containers
in
error
states
again
just
like
when
we
were
watching
the
log
scroll
most
of
the
time
that
is
okay.
It
just
means
that
there's
some
collaborator
resource
that
this
particular
operator
is
waiting
on
that
isn't
there
yet,
and
so
it
continues
to
error
until
its
collaborator
is
in
place
and
then
it
will,
it
will
finish
its
installation.
A: And one thing, once you get this up: some of the operators from the marketplace might be assuming a fuller cluster. I know, for example, that with Rook Ceph out of the box, if you want to run it on a single node you'll have to adjust the replica count for some of the components, because it will expect at least three nodes for particular things. CodeReady Containers? Yes and no, Patrick.
A: CodeReady Containers doesn't configure the nodes in exactly the same way that this does, but while we're waiting for the bootstrapping to complete, which might actually be getting ready, let's watch.
A: I'll come to that in just a minute, and I'll show you guys how CodeReady Containers is built.
A
B
A
A
A
A
Okay,
single
node
cluster
from
code
ready
containers
it
it's
actually
doing.
It
is
useful
to
give
you
some
insights
into
something.
It's
it's
actually
doing.
An
ipi
libert
install
of
your
openshift
cluster,
which
is,
is
pretty
cool
in
and
of
itself
in
a
similar
fashion.
To
what
I'm
doing
here,
in
fact,
let
me
see
I
was
hacking
on
code
ready
containers.
A: Yeah, there we go. So there's a project: snc and crc are the two projects that it's built from.
A: All right, I'll crank the font up for you guys; apologies that that makes the screen really crowded. So, the punch line in this: it is very prescriptive toward building CodeReady Containers, so it strips a bunch of things out of the OpenShift cluster, like monitoring and things like that, to make it fit in a really compact size, or at least compact for OpenShift. Let me find the installer... so it's writing into the manifests.
A: It's using the bare metal platform to then... let's write in here. So here we go, creating the cluster. This is using an IPI install, where the installer actually does all of the provisioning and such for you.
A
So
not
you
know
not
giving
you
control
over
the
network
configuration
or
things
like
that.
This
is
more
like
the
I'm
deploying
in
the
cloud
on
aws
or
on
gcp
or
or
on
on
vmware,
or
you
know
something
along
those
lines.
So
that's
what
that's
what
the
code
ready
containers
is
built
from.
So
this
big
long,
snc.sa
script
here,
what
it's
actually
doing
is
standing
up
a
single
node
cluster
on
libert,
and
then
you
run
another
script.
A
That's
part
of
this
bundle
called
where's
that
create
disk
which
create
disk
right.
There
called
create
disk
that
then
creates
a
qcal's
image
of
the
the
cluster
it
strips
a
whole
bunch
of
operators.
Out
to
you
know,
like
I
said,
to
try
to
get
the
size
down
as
much
as
possible
and
once
it's
once
it's
done.
A
All
of
that,
it
does
a
couple
of
little
tweaks
to
the
running
virtual
machine
so
that
it's
compatible
with
hyper-v
for
running
on
windows
or
some
other
things,
and
then
at
the
very
end
here
it
creates
where
it's
doing
this
create
tarball
for
hyper-v,
for
hyperkit
and
for
libvert.
A
So
so,
for
each
of
these
there's
functions
further
up
here
you
see
it
it's
creating
the
q
qmu
image.
It
creates
then,
from
this
images,
for
windows,
for
linux
and
for
mac,
os
right
and
that's
base,
and
and
when
you're
done
with
that,
then
you
have
a
a
file,
a
virtual
machine
file,
that
is
a
single
node
cluster
that
would
run
on
one
of
those
platforms
code.
A: CodeReady Containers is then a wrapper around that, which bundles the image so that you can start and stop it with the crc command, and that's why the crc binary is so fat. If you go download it, you'll see that the macOS crc binary is something like two and a quarter gigabytes. I think most of that is the virtual machine image, because when it uncompresses, it uncompresses to about a 9 or 10 gigabyte image.
A: Because, like I was telling you before our previous one went south: authentication is going to be upset, and the ingress controller is going to be upset, because it's not going to be able to start all of the replicas that it wants to start.
A: Scrolling, scrolling, scrolling... you see it wants to have two replicas. Well, we don't have two nodes for it to have replicas on, as we'd see if we were able to dig further into its configuration, which is not shown to us here.
A: I could do an oc edit, go down here, change that replicas to one, and save it; but if you really want to impress your friends, the oc patch command looks like magic, because it works from the command line without doing anything else, and this is also how you can do this kind of thing with infrastructure as code. So what I'm going to do is a JSON patch, and what I'm saying is: under the spec...
A: ...I want replicas to be one, and I'm doing a patch type of merge: rather than replace, I'm going to merge this into the existing config. And I'm going to patch the ingresscontroller named "default" in the openshift-ingress-operator namespace with that patch. And there it is, now it's patched, and if I do this -o yaml again and scroll up, you can see that replicas is now one. Okay!
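Spelled out, that patch is:

    oc patch ingresscontroller default -n openshift-ingress-operator \
      --type merge -p '{"spec":{"replicas": 1}}'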
A: Well, there's one other thing that we need to do this with, and that is the authentication operator; it needs a patch similar to what we did to etcd.
A
So
I
need
to
tell
it
that
it's
going
to
have
an
unsupported
configuration
override
of
use-
unsupported
unsafe,
non-ha,
non-production,
unstable
oauth
server
to
true-
and
you
can
tell
that
that
the
engineers
that
put
this
in
here
wanted
you
to
know
that
you
were
intentionally
using
unsupported,
unsafe,
non-ha,
non-production,
unstable
oauth
server,
just
in
case
you
weren't
quite
sure,
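Spelled out, that override patch is roughly:

    oc patch authentications.operator.openshift.io cluster --type merge -p \
      '{"spec":{"unsupportedConfigOverrides":{"useUnsupportedUnsafeNonHANonProductionUnstableOAuthServer": true}}}'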
And there we are; so now we've said we want to use the unsupported, unsafe, non-HA, non-production, unstable OAuth server, and hopefully, now that I've done that, this message down here... you see we're at 98 percent, we're almost there.
A
Hopefully
now
we'll
get
past
that,
because
now
the
authentication
operator
should
be
able
to
complete
its
install
and
update,
and
we
actually
don't
have
to
wait
for
that.
The
secret
here
is,
we
already
have
a
use.
We
already
have
a
usable
cluster
at
this
point,
so
let's
go
ahead
and
go
get
the.
A
Let's
go
ahead
and
go
get
the
authentication
okay,
you
see
this
ok
d4
installed
there
that
was
created
by
my
script
and
that's
where
it
put
the
manifests
and
things
and
then
told
the
installer
where
to
go
to
get
its
manifest.
So
that's
all
of
this
stuff
under
here
was
created
by
the
installer.
So
so
there's
the
there's,
the
ignitions
that
were
created.
You
see
this
guy
right
here,
the
worker
ignition.
A
Oh
yeah,
look
at
that
there
it
is.
We
have
a
working
cluster,
so
we'll
prove
it
now
by
logging
into
it
from
the
console.
A
Before
I
step
away
from
this,
though
you
see
this
worker
ignition,
if
you
were
in
the
main
stage,
when
we
were
first
starting-
and
we
were
talking
about-
you
know
the
fact
that
you
can't
go
from
a
single
node
cluster
to
a
full
cluster.
That's
really
just
talking
about
the
control
plane.
We
could
use
this
ignition
file,
this
worker
ignition
file.
I
could
build
a
virtual
machine
boot
that
virtual
machine
off
of
this
worker
ignition
file
and
it
would
join
the
cluster
as
a
worker.
A
I
would
not
have
an
h
a
c
d
control
plane
because
I
would
be
using
an
unsupported,
unsafe,
non-ha
non-production,
unstable
etcd,
but
I
can
create
lots
of
worker
nodes
now
and
and
have
a
single
master
cluster
with
with
worker
nodes.
So,
since
our
cluster
is
up,
let's
go
ahead
and
log
into
it.
You'll
notice
down
here
it.
It
gave
me
a
url
and
it
gave
me
a
username
and
a
password.
A
If
you
lose
those
you
can
get
them
back.
What
I
was
going
to
show
you
is
under
this
auth
directory
here.
This
cube
config
is
what
I've
been
using
when
you've
seen
me.
Do
this
that
cube,
config
is
kind
of
like
the
you
know,
passwordless
ssh,
so
it
gives
me
immediate
access
to
it.
That's
like
my
back
door.
Key!
A
Don't
share
that
unless
you
want
to
give
people
cluster
admin
access
to
your
cluster,
and
if
you
delete
it,
you
can't
get
it
back,
but
it's
probably
a
good
idea
to
either
keep
it
in
a
super
secret,
safe
place
or
delete
it
and
use
the
the
identity
management.
That's
there.
The
other
thing
that
it
gives
you
is
this
cube.
Admin
password,
which
is
a
file
that
contains
this
randomly
generated,
string
right
there,
that
is
the
password.
A: And the URL to our new cluster. Okay, it's using self-signed certs, right, because I had a very, very simple install-config and I didn't give it any certs to use, so it's using self-signed certs. I do need to accept the certs, and it's going to make me do this twice.
A: Now, one thing to point out here: you see this "2" right here. In a fully functional cluster, what that number off to the side of the pods shows you is pods that are in some sort of a not-ready state.
A
Let
me
show
you
in
one
of
my
larger
lab
clusters:
you
see
that's
not
there,
because
all
of
the
pods
are
are
in
a
good
state.
You
will
always
see
this
in
the
4.7
openshift
cluster,
because
there
are
two
etsy
decorum
guard
pods
that
don't
have
anywhere
to
run
so
nothing
to
worry
about
here.
It's
not
broken.
It's
just.
You
will
see
that
because
these
two
etcd
quorum
guard
pods,
don't
have
a
node
to
run
on
when
we
get
the
full
bootstrap
list.
4.8.
A
You
shouldn't
see
that
anymore.
But
at
this
point
now
the
cluster
is
ready
to
run.
There's
actually
a
couple
of
other
things
that
I
included
in
the
tutorial.
That
will
be
helpful
for
you.
One
of
them
is,
let's
go
ahead
and
set
up
a
real
user
account
and
a
developer
user
account
all
right
and-
and
I
did
include
in
the
in
the
tutorial-
and
let
me
get
back
to
get
back
to
that.
A
So
in
this
in
this
space
you
see
this
hd
password
cr.yaml.
This
is
a
custom
resource
that
I
created
for
you.
A
Within
the
cluster
itself,
and
what
we're
going
to
do
right
now
is:
let's
create
I'm
just
going
to
pull
these
straight
out
of
the
out
of
the
tutorial,
so
I'm
going
to
create
a
working
directory
for
the
credentials
and
locally
on
my
box.
I'm
going
to
this
was
actually
when
you
saw
initially,
we
were
installing
the
http
tools
that
was
to
get
to
the
hd
password
command
so
that
I
can
create
hd
password
files.
A
So
I'm
going
to
create
an
hd
password
file
called
ht
password
in
that
directory
I
just
created
and
for
a
user
admin
and
I'm
going
to
make
its
password
the
the
password
that
was
auto-gen
right.
If
you
don't
want
to
do
that,
you
just
put
whatever
you
want
your
password
to
be
right
here.
A
All
right
and
while
I'm
at
it,
let's
go
ahead
and
let's
create
just
a
regular
developer
user
too
and
we'll
add
it
to
the
same
password
and
his
password
will
be
dev
password
and
now
from
that
ht
password
file,
I'm
going
to
create
a
secret
in
the
openshift
config
namespace,
it's
a
generic
secret,
I'm
going
to
name
it
ht
password
secret
and
I'm
going
to
create
the
secret
from
the
file
and
it's
going
to
name
the
file
ht
password
and
I'm
giving
it
the
path
to
that
file.
So.
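The commands behind those steps, with the path, user names, and passwords as placeholders:

    mkdir -p ~/okd-creds
    htpasswd -B -c -b ~/okd-creds/htpasswd admin "<your-admin-password>"
    htpasswd -B -b    ~/okd-creds/htpasswd devuser devpassword
    oc create secret generic htpasswd-secret \
      --from-file=htpasswd=$HOME/okd-creds/htpasswd -n openshift-config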
A: Right there it is; there's the secret we just created in the openshift-config namespace, and it has the content of that htpasswd file we just created. All right, so that's step one: now it's in openshift-config. Step two is to apply that custom resource right there, the htpasswd custom resource, and it will do this... I'm using an oc apply and not an oc create; I'm just in the habit of always using oc apply.
A: You see, there it is: there's my identity provider, it's of type HTPasswd, and it's using the htpasswd secret to get its data.
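The custom resource he applies is the standard HTPasswd OAuth config; the provider name below is illustrative:

    apiVersion: config.openshift.io/v1
    kind: OAuth
    metadata:
      name: cluster
    spec:
      identityProviders:
      - name: htpasswd_provider        # display name - placeholder
        mappingMethod: claim
        type: HTPasswd
        htpasswd:
          fileData:
            name: htpasswd-secret       # the secret created in openshift-config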
A: And there, I just logged in. I need to do one other thing, though, because I'm not admin; I'm just a developer with self-provisioner rights. Okay, so I need to give my admin user authority.
A
So
I'm
going
to
give
my
admin
user
cluster
admin
right
by
adding
the
cluster
role
of
cluster
admin
policy
to
that
user,
and
there
you
go.
You
saw
in
the
background
boom
now
it
knows
who
I
am
now.
It
knows
that
I'm
the
administrator
of
this
system.
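That grant is a single command:

    oc adm policy add-cluster-role-to-user cluster-admin admin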
A
A
A
Don't
do
this
until
you
have
another
way
to
administer
your
system,
especially
if
you've
also
deleted
the
that.
A
This
right
here,
if
you've,
deleted
that
coupe
config,
because
at
that
point
you've
locked
yourself
out
of
your
cluster
and
your
only
option
is
your
only
recourse
is
reinstall.
But
now,
if
I
refresh
this
there,
I
don't
have
to
select
my
identity
provider.
Now
I
can
just
log
in
as
admin
and
secure.
A
Password,
oh
oops,
I
didn't.
I
didn't,
go
back
to
the
console
and
get
back
to
the
console.
A: So we now have two users, a dev user and a cluster admin. A couple of other things, just to keep it from complaining at you: if you're going to use the internal image registry, you either need to give it a host volume or an ephemeral volume. In the tutorial I've got the command for you to give it an ephemeral volume.
A
So
I'm
going
to
switch
the
image
registry
to
a
managed
state
and
I'm
going
to
give
it
an
empty
dir,
which
is
an
ephemeral
on
volume.
But
since
you
only
have
one
node
that
ephemeral
volume
is
really
sitting
on
the
file
system
of
that
one
node.
So
the
images
that
you
put
there
but
they're
living
on
that
one
node.
So
you
can
use
the
internal
image
registry
at
this
point
now
and
you
saw
in
the
background
there,
it's
actually
doing
some
things
because
of
that
you
see
the
image
registry
kicked
up.
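The registry change he describes is the usual patch; emptyDir means the images live on that one node's filesystem and don't survive a re-provision:

    oc patch configs.imageregistry.operator.openshift.io cluster --type merge \
      --patch '{"spec":{"managementState":"Managed","storage":{"emptyDir":{}}}}'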
A
And
that
one
is
shutting
down
and
replaced
itself.
So
now
we've
got
a
configured
image
registry
and
the
last
thing
is
image
pruners.
It
will
complain
if
you
don't
have
an
image.
Printer
scheduled
you'll,
get
warnings
in
your
console,
so
let's
just
go
ahead
and
set
up
a
image
pruner
that
will
prune
out
images
based
on
this
configuration
at
midnight.
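And the pruner schedule is set with a similar patch; the retention values here are illustrative:

    oc patch imagepruners.imageregistry.operator.openshift.io cluster --type merge \
      --patch '{"spec":{"schedule":"0 0 * * *","suspend":false,"keepTagRevisions":3}}'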
A: And there we go. What time is it? We've got an hour left in our official time. If I haven't run everybody off, are there any other questions or anything else that you guys want to talk about? Anything else, Bruce, in the chat?
A: Yeah, I know... I need to run enough installs. I might kick off a loop that just spins them up and shuts them down, to see if I can catch what it is that causes etcd to fall over, because, like I said, I've seen that just a few times, and I literally ran this at least three times this morning and four or five times yesterday.
A
But,
of
course,
the
one
that
we're
recording
is
the
one
that
the
good
and
that's
that's
probably
when
it
happens.
I
might
experiment
with
that
and
see
if
there's
a
race
condition
in
yanking
the
bootstrap
node
after
it
tells
you
it's
okay.
Maybe
you
need
to
hold
your
breath
and
count
to
ten
and
then
yank
the
bootstrap.
B: Yeah, I had a question for you too, because I was in a situation once where the authentication was no longer working because of a failed upgrade, and the only way I could get into the system at all to debug it was with the kubeadmin.
A: That's right. So here's what I've generally done. I have found that htpasswd is pretty stable, even across upgrades. At a previous place, where I was doing this for real in a data center, they had Azure AD as their authentication mechanism, but if something went wrong either with that or with the cluster's integration to Azure AD, they would lose access to their cluster.
A
So
we
set
up
ht
password
for
just
a
few
of
the
cluster
admins
and
that
was
kind
of
the
always
their
back
door.
And
then
you
just
set
up
a
you
know
a
password
rotation
and
complexity
policy.
That
periodically
replaces
that
secret
and
that
way
you've
at
least
got
a
way
to
get
into
your
cluster.
If
you,
you
lose
your
primary
oauth
provider,
I've
also
set
it
up.
A
Charo,
yes,
yes,
and
and
mike
you're,
absolutely
right.
In
fact,
you
can
do
that
with
code
ready
containers
too.
That's
the
way
to
make
code
ready
containers
available
off
the
workstation
that
you're
running
it
on
is
to
slap
h.a
proxy
or
your
favorite
proxy,
in
front
of
it
and
and
use
that
to
route
the
traffic
in
this.
In
this
case,
I'm
using
the
etsy
the
etsy
resolver
on
my
macbook
to
get
to
it.
In
my.
A
Bastion
host,
that
is,
balancing
traffic
across
the
three
control
and
the
three
control
plane
nodes
are
also
my
infrastructure
nodes,
so
they're
also
running
ingress
and
the
image
registry
and
and
other
things.
A: Yeah, which I did; I bought a domain name, because I plan on actually, one of these days (my wife is encouraging me to do this, and I just never get around to it), starting to keep an official blog of this stuff. So I did buy a domain, "upstream without a paddle." There's nothing there right now.
A
If
you
go
to
it,
but
that's
me
upstream
without
a
pass,
so
I
actually
do
to
to
create
some
real
certs
and
do
this
with
with
real
certs,
so
that
I
can.
I
can
show
folks
how
that
works
as
well
I'll
get
around
to
it.
One
of
these
days,
one
of
the
other
things
that
I
I'm
not
sure.
If
I
I
would
have
to
make
any
endpoints
available
on
the
internet
to
make
that
work.
A
But
but
that's
another
thing
I
don't
do
is
make
any
of
the
endpoints
available
on
the
internet.
In
fact,
I
actually
run
my
lab
as
though
it
was
a
disconnected
data
center,
so
I've
got
firewalls
and
routers.
A
I
actually
in
the
big
tutorial,
there's
a
section
where
I'll
lead
you
through
doing
a
disconnected
install
of
openshift
using
nexus
and,
and
you
actually
put
quay.io
and
the
red
hat
domains
in
a
dns
sinkhole
to
simulate
being
firewalled
off
things
that
that
I
also
would
like
to
do
is
to
set
up
a
certificate
signing
service
in
the
lab
so
that
you're
you've
got
your
own
certificate
authority
that
you
use
to
sign
the
certs
and
it's
kind
of
the
in
between
where
you
don't
have
fully
signed.
A
Search
from
you
know
like
a
verisign
or
or
what's
the
what's,
the
free
service
that
I
always
forget,
the
name
of
but
you're
doing
your
own
certificate
management.
A
And
there's
there
actually
is
a
cert
manager
operator.
I
actually
have
it
installed
in
here,
because
it
was
needed
for
either
key
cloak
or
maybe
it
was
skyla
db.
I
forget
why
I
installed
it,
but
I've
got
the
certificate
manager
operator
installed
in
my
cluster
to
do
internal
cert
management.
A: Hey, Neil, dude. Oh, cert-manager is not in the Operator Hub; I installed it from the GitHub repo.
C: I know, I'm fine with that, I'm totally fine with it. As you can see, the clock at the top says you have four hours and 51 minutes, because I knew some of you would run over, or your deployment wouldn't work, or whatever. But just so you know, all of this stuff is recorded and we'll be posting it. I just got a call from my internet provider, so Monday...
C: I've got an appointment and the cable guy is coming, so we'll be able to upload it all, hopefully on Monday, and get it available to everyone.
C: And then we'll do it all again when 4.8 comes out and you don't have to bootstrap.
C: Craft brewing... So, I'm looking to see if you've got other questions from folks here. I'll let Mike McCune know that Andrew Sullivan is still pontificating in the bare metal one, and he's on the hook for making a pull request and getting his documentation in, so he's been nudged.
C: Just looking at the chat, and if you are at a good...
C: Yeah, if you're at a good place to close, you can always go join the other session, the bare metal one, and cheer them on, as they'll probably be the slowest deployment so far today.
A
All
right,
we
might
do
that
any
any
other
questions.
Folks,
if
not
we'll
call
this
one
down.
C: Well done, guys, thank you so much! Hopefully you can join us at KubeCon EU, and on May 4th we're going to do some more fun stuff there with OKD and OpenShift and Kubernetes and all kinds of stuff. So please do join us, and come to the next OKD working group meeting; there's always something going on on Tuesdays, so watch the mailing list and join us.