Deploying OKD4 on Libvirt on Bare Metal UPI
Charro Gruver (Red Hat)
OKD Live Deployment Marathon
August 17, 2020
Day Zero, KubeCon EU Virtual
A: Charro, I know you're up next with the bare metal, which always sounds to me like a heavy metal band kind of deployment, and I saw the guitars behind you, so it might be appropriate if we pause now, let the AWS live thing go, and let Charro queue up for his deployment and share his screen.
A: Thanks very much there, Christian, for hanging out with us, and I hope you can spend some more time today, because I'm sure we'll be repeating some of these questions.
D: Okay, I'm Charro Gruver. I am a new architect for Red Hat Services here in the Southeast. You've reached...

E: It's like it's in your lip. Yes.
D: Well, like Diane has said a couple of times, these are live demos, so we're fully expecting a Bill Gates moment. It might not be a blue screen, but we might see a stack trace of death and all kinds of other interruptions. But I'm Charro Gruver. Like I said, I've been with Red Hat for one week, but I've been a consumer of Red Hat products, both upstream and subscription-based, for most of my 20-year career in IT.
D: This is a user-provisioned infrastructure deployment, so the installer is not going to be provisioning the machines for us; these machines are already provisioned. If you look in this terminal right here, I've given you sort of a virsh list view of the machines that are currently provisioned. You can see we've got a bootstrap node that is not running.
D: Now, I'm using VirtualBMC, which is a tool that comes out of the OpenStack world, to simulate the IPMI management of these virtual bare metal machines. These machines are going to boot into iPXE, and using the MAC address of the machine as it boots, each one is going to pull the appropriate iPXE boot configuration file that sets its kernel parameters, sets the Fedora CoreOS install URL, and the Ignition file that it's going to use to start from. I'm using fixed IPs for this particular lab setup.
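For anyone reproducing the setup, the VirtualBMC side is roughly this; the domain name, port, and credentials here are placeholders, not the values from the demo:

    # Register a libvirt domain with VirtualBMC and start an IPMI endpoint for it
    vbmc add okd4-bootstrap --port 6230 --username admin --password password
    vbmc start okd4-bootstrap

    # The "bare metal" node can then be power-managed like a real server
    ipmitool -I lanplus -H 127.0.0.1 -p 6230 -U admin -P password chassis power on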
D: I've got all of this written up in a little tutorial on my GitHub page, which we can provide a link to, but without further ado, we'll go ahead and fire this thing up.
D: Now I'm going to attach to its console, and what we're going to watch here is an iPXE boot. It's a chained boot, so it first pulls just a boot.ipxe file, which is what's being served up by the DHCP server for it to pull from TFTP; that then chains it to look for a file that is named after its MAC address.
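A minimal sketch of what that first-stage chain script can look like; the web server address and TFTP path are placeholders, and ${mac:hexhyp} is iPXE's hyphen-separated MAC variable:

    cat <<'EOF' > /var/lib/tftpboot/boot.ipxe
    #!ipxe
    echo Chaining to the per-host config for ${mac:hexhyp}
    chain --autofree http://192.168.1.1/ipxe/${mac:hexhyp}.ipxe
    EOF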
D: This will take a little bit, with the scrolling logs. Like I said, it's pulling down the image. One other thing I'll point out while we're waiting for the bootstrap node to complete its install is that we're also doing a mirrored install today, which hopefully makes this go a little bit faster than pulling all of the images across the wire.
D: And so the install is actually going to pull its images from a Sonatype Nexus. Right now I've got quay.io in a DNS sinkhole so that it can't resolve, and because it can't resolve, it's going to assume it's an air-gapped installation and it will pull from the configured mirror image. All right, Fedora CoreOS is booting. Now it's going to overlay the rpm-ostree, and when it finishes it will boot one more time and it will start the...
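Two pieces make that work. The sinkhole is a single entry in the lab's DNS (shown here as dnsmasq syntax as one possibility; the demo's own DNS server isn't named), and the mirror is declared in install-config.yaml. The mirror registry host below is a placeholder:

    # Blackhole quay.io so the release images can't be pulled directly
    echo 'address=/quay.io/0.0.0.0' >> /etc/dnsmasq.d/okd4-sinkhole.conf

    # install-config.yaml fragment pointing the OKD release images at the local Nexus
    cat <<'EOF' >> install-config.yaml
    imageContentSources:
    - mirrors:
      - nexus.example.com:5001/okd
      source: quay.io/openshift/okd
    - mirrors:
      - nexus.example.com:5001/okd-content
      source: quay.io/openshift/okd-content
    EOF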
D: Over in this right corner here, I'm going to run the openshift-install command and direct it to monitor the bootstrap process. And if you do this at home and you monitor the logs like this, don't be alarmed by these "failed, failed, failed" entries that you see coming out in the logs. This is the bootstrap process waiting for its resources to go live, and so it will continue to loop until the various resources come up. And you can see the API just came up, so our API is now live and we're waiting for the bootstrap process to complete.
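The monitoring command being described is the installer's own wait target; a sketch, assuming the install assets live in a directory called okd4-install:

    openshift-install --dir=okd4-install wait-for bootstrap-complete --log-level=debug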
D: Down here in the bottom right-hand corner, we're just tailing the journalctl logs of the bootstrap process itself.
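A sketch of that tail; the hostname is a placeholder, and on the bootstrap node the interesting unit is bootkube:

    ssh core@okd4-bootstrap.example.com "journalctl -b -f -u bootkube.service"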
D: All-in, this takes about 10 minutes from the bootstrap node firing up to the bootstrap process completing. The installation itself will complete after about another 25 minutes, so we've got some time now to take some questions, if folks want.
A: Yeah, James Cassell is asking from Twitch: is the sinkhole necessary to use the mirror?
D: I think it still is. I know it has been for a while that if you don't create the sinkhole and it can resolve the external host, it will pull the images from quay.io.
D: And that's why I created the sinkhole: to simulate a disconnected install where I'm behind a bunch of firewalls and proxies that prevent my nodes from having direct internet access.
A: A couple of questions, just to double-check the link to the documentation on this: is this the same as the stuff that you did in the OKD4 UPI lab setup?
D: Yes, yes. There's a new branch called iPXE that, when we're done today (I've got a little more cleanup on the documentation to do), I'm going to merge into master. The old tutorial was the CentOS 7 based one; I've branched master to a CentOS 7 branch, so anybody that's still running CentOS 7 would want to use the CentOS 7 branch.
A: The other feeds are a nanosecond behind us here in Blue Jeans, so that's what I was trying to do there. And Brian Jacob Hepworth is saying that he really likes the Fedora CoreOS news and seeing that.
A: I'm going to do another pitch for people to join the OKD working group while we are waiting here, because that's what I'm charged with: getting more folks in. So if you're liking what you're seeing here, or if there are features missing, or other platforms that we should be demoing or testing on, or that you're using OKD on or wishing to do so...
A: Please join the OKD working group. The mailing list is here, I just put it in the chat, and it is an open Google Group. We have a lot of meetings; we meet bi-weekly, and we have a meeting tomorrow, and I'll throw the Fedora CoreOS one in as well. And, achef, thanks for joining us; we will do the Azure one that you requested earlier. That is our second-to-last demo today, I think: the Azure deploy. The Fedora calendar link is here.
D: Okay, the bootstrap has succeeded, and it's going to wait just a little bit longer to send the event, and then you'll see... okay, there it went. So the bootstrap is now done. You can see in the middle terminal that we do have three master nodes that are live.
D: Now, this is something odd about this install monitor here: it will say 42% complete here in a minute, it may barf a couple of errors as some of the resources restart, and it will also reset the clock. So it plays with you a little bit: you'll get up to 74% complete, and then all of a sudden you'll see 12% complete, and then it will quickly wind its way back up.
D: So if you see that running this at home, don't be alarmed; it is actually working towards completion, and you need to be patient, because from this point it does take about another 23 minutes.
A: 23 minutes! Well, do you want to talk a little bit, while you're doing this, about the work you're doing around Che?
D: Sure. Well, actually, it turned out not to be much work at all, and in fact, if we end up with enough time, I can deploy a hyper-converged Ceph instance into this cluster to give us a storage provisioner. Because I think that's really where the folks that might have struggled with getting Eclipse Che up and running hit trouble: it does need persistent volumes, both for Postgres...
D: Now, something else I'll mention here; I'll run this again. The installer by default makes the masters schedulable when the installation is complete. That's something that we're going to change: we will add the three worker nodes and then we will make the masters unschedulable.
D: So here I'll walk you through a few of the things that were prepared ahead of time. I said a lot of words to describe it, especially the way I'm doing this with fixed IP addresses. One of the things that you have to provision are DNS records; a few key DNS records.
D: You can see I've got in here the provisioning for several different clusters that I run, but this is the one that we're presently looking at right here. So each of the master nodes, worker nodes, and etcd nodes requires an A record; the master and the etcd are obviously sharing the same node.
D: So they're going to have A records with the same IP address. You also need three SRV records for the etcd, and then you need a pointer (PTR) record for reverse lookup for each of the physical nodes. So for your masters and your worker nodes, you'll need pointer records for those. But as you can see, the DNS setup is not onerous, but it is necessary.
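For reference, a sketch of the kind of zone entries being described, for a hypothetical cluster named okd4 under example.com; all names, addresses, and the zone file path are placeholders, and the api, api-int, and *.apps entries pointing at the load balancer are the other records a UPI install normally needs:

    cat <<'EOF' >> /var/named/zones/db.example.com
    ; A records: one per control-plane node, worker node, and etcd alias
    okd4-master-0.okd4          IN A    10.11.11.60
    etcd-0.okd4                 IN A    10.11.11.60
    okd4-worker-0.okd4          IN A    10.11.11.70
    ; SRV records: three, one per etcd member
    _etcd-server-ssl._tcp.okd4  IN SRV  0 10 2380 etcd-0.okd4.example.com.
    ; API and wildcard apps entries point at the load balancer
    api.okd4                    IN A    10.11.11.50
    api-int.okd4                IN A    10.11.11.50
    *.apps.okd4                 IN A    10.11.11.50
    EOF
    # ...plus matching PTR records in the reverse zone for each master and worker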
D: As you can see, it's very simple: I'm echoing some information just to make sure the right host booted, and then chaining in an iPXE file that is literally named after the MAC address, with hyphens replacing the colons.
D: And here's one of them right here; I believe this will be one of the worker nodes. This right here gives it the kernel parameters necessary to boot, tells it yes, we want to install CoreOS, tells it where to install CoreOS, tells it where to get CoreOS, and tells it which Ignition file to use. And that's really the secret sauce.
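A sketch of what such a per-MAC file can look like with the Fedora CoreOS 32-era installer arguments; the URLs, install device, web root, and MAC-derived filename are all placeholders:

    cat <<'EOF' > /usr/share/nginx/html/ipxe/1a-2b-3c-4d-5e-6f.ipxe
    #!ipxe
    kernel http://192.168.1.1/fcos/vmlinuz initrd=initramfs.img console=tty0 rd.neednet=1 coreos.inst=yes coreos.inst.install_dev=/dev/sda coreos.inst.image_url=http://192.168.1.1/fcos/fedora-coreos-metal.x86_64.raw.xz coreos.inst.ignition_url=http://192.168.1.1/okd4/okd4-worker-0.ign
    initrd http://192.168.1.1/fcos/initramfs.img
    boot
    EOF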
D: All right, we are, in theory, at 84% complete. I expected it to reset the clock at least once while it's doing this.

A: But how do you determine these percentages? Because I don't see anything on screen that would tell you percentages.

D: Oh, right here. Can you see the...
A: Okay, there it is. Okay, it helped when you highlighted it; there's a lot of word soup on screen.

D: Yes, there is. This is how I keep the install from being boring: give you lots of journalctl logs to look at, because otherwise there's not a lot to look at.

A: No, no.
A: So how did you come up with this setup? I mean, you're doing the bare metal, right? So how did you come up with it?

D: Oh gosh...

A: Because I remember that bare metal is like the least fleshed-out deployment method of them all, so the fact that you came up with something is impressive all on its own. So that's worth the story, I'm sure.

D: Yeah.
D: You know, back at the end of 2017, I got addicted to the Intel NUC machines. You know, those little form-factor boxes are not cheap, comparatively, but considering the amount of compute that you can pack into one of them for a home lab setup, they are pretty affordable, and if you buy the right chipset you can put 64 gigabytes of RAM in one of those little suckers.
D: So, you know, you get one with a Core i7, the newest ones, the 10th generation; they've got six CPUs, so you've got 12 vCPUs available and 64 gig of RAM. You can run quite a bit on them, and my idea was actually to get an OpenShift cluster running on the NUCs.
D: And then I stumbled across this thing called nested virtualization with libvirt, and well, I don't do OpenStack, but I had a curiosity about it, and that's how I came across VirtualBMC. And so I decided to basically bump it up a level and use libvirt virtual machines with VirtualBMC to simulate bare metal, and then it was just sort of...
D: I wanted to make this work, so I powered through making it work to get a bare metal install of OKD up and running. I submitted a few tickets to the Fedora CoreOS team, and they were very, very gracious to help out somebody that didn't know what they were doing. I had never touched CoreOS before, so that was quite a bit of a learning experience. And thanks.
D: From that point, the latest iteration of it now uses the fcct tool to inject some customization into the machines. Actually, while we're still waiting for that... oh, here, quick: here's the reset I was talking about. See how we went back to zero percent complete? Don't panic!
D: I don't know why it resets the clock like this. Maybe somebody in engineering could tell us, but it is still progressing, I assure you.

A: That is very confusing and kind of frightening.
A: Actually, it looks like it resets after it downloads an update, so it probably loses all of its state when it does that.

D: Yeah, that's my suspicion, because it does go through several iterations of updating some operators.

A: So it's just probably losing its state every time that happens, which is unfortunate, and I'm not sure that makes sense, but that's the best I've got.

D: It still works.
D: There we go, now it's readable. Yeah, this is a shell script that I wrote that actually does the provisioning of the quote-unquote bare metal for me. And right here...
D: This is a YAML file that gets created where it's injecting the customizations that I want each of the machines to have. So in this case, what I'm doing is creating basically a rename of the primary NIC to nic0, so that it doesn't come up as some funky enp-blah-blah-blah name.
D: I want it to be more than predictable; I want it to be predictable and known. And so I'm using the MAC address of the machine to explicitly name that network interface device as nic0, and that way I always know what it's going to be and where it's going to be, and then I inject into that its specific configuration. So I'm setting its name server, its domain, its IP address with the netmask and a gateway, and then I'm also injecting its hostname so that it persists its hostname.
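A minimal sketch of the kind of fcct input being described, for one hypothetical worker; the hostname, MAC address, IP addresses, and file layout here are placeholders, not the demo's actual script output, and this shows only the per-node customization piece:

    cat <<'EOF' > okd4-worker-0.fcc
    variant: fcos
    version: 1.0.0
    storage:
      files:
        # Persist the hostname
        - path: /etc/hostname
          mode: 0644
          overwrite: true
          contents:
            inline: okd4-worker-0.example.com
        # Pin the primary interface name to nic0 by MAC address
        - path: /etc/systemd/network/25-nic0.link
          mode: 0644
          contents:
            inline: |
              [Match]
              MACAddress=52:54:00:a1:b2:c3
              [Link]
              Name=nic0
        # Static addressing for nic0: IP/netmask plus gateway, DNS, and search domain
        - path: /etc/NetworkManager/system-connections/nic0.nmconnection
          mode: 0600
          contents:
            inline: |
              [connection]
              id=nic0
              type=ethernet
              interface-name=nic0
              [ipv4]
              method=manual
              address1=10.11.11.70/24,10.11.11.1
              dns=10.11.11.10;
              dns-search=example.com;
    EOF
    # Transpile the config to Ignition
    fcct --pretty --strict < okd4-worker-0.fcc > okd4-worker-0-config.ign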
D: There's a bunch of other stuff that the script does, which is one thing I am going to do: I'm going to add better comments into this, so that if any of you are looking at how this thing works, you'll understand what each of these sections is doing.
D: All right, we're back up to 84% complete. At this point I'm going to go ahead and fire up the worker nodes. It is safe to do so now; actually, I could have done it a while back, but I'm going to go ahead and do it now.
D: I'm sending each of them an IPMI command, with a 10-second pause in between each one, just so they don't slam my poor little travel router with DHCP and file pull requests at the same time.
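That staggered power-on is roughly the following; the VirtualBMC ports and credentials are placeholders:

    for port in 6231 6232 6233; do    # one VirtualBMC port per worker
      ipmitool -I lanplus -H 127.0.0.1 -p ${port} -U admin -P password chassis power on
      sleep 10    # stagger the boots so DHCP/TFTP isn't hit all at once
    done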
D: And we'll go ahead and watch one of those guys boot up. There's one of the workers; it's going to do the same thing that you saw the bootstrap node doing. It's pulling the CoreOS image right now.
D: And then it's going to go through the same process, except that it will retrieve its Ignition file. Once it processes the initial Ignition, overlays the ostree, and starts its process to join the cluster, it's going to get its Ignition file from the cluster, and that will give it the personality of a worker node.
D: And if you watch the left-hand side of the screen closely, you should see it hit a point where it's waiting, and then you'll see it very quickly pull that Ignition config, and at that point it will start to join the...
C: I do have to leave now for like 15 or 20 minutes. I'll be back after that, and I hope my cluster will be up by then, and I'll...
D: All right, and as before, self-signed certs, so in whatever OS and browser you're using, you are going to have to accept those certs.
D: It's okay, self-signed certs are fine. All right, now, it creates a temporary cluster administrator for you, and it dumps that password at the end of the install process so you can use it to gain access to your cluster. And there we are. Now, there will still be some operator updating going on, and your control plane will still be settling out.
D: If you will indulge me for a few minutes, we'll go ahead and finish adding the worker nodes, and then we'll do a couple of housekeeping things on our cluster. So you see, we've got some pending certificate signing requests. That is also an artifact of the way we're doing this user-provisioned infrastructure install: it's not automatically going to approve those certs, because it doesn't necessarily trust anybody that wants to join the cluster.
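Approving those is a couple of oc commands; note that a second batch of CSRs usually shows up shortly after the first is approved, so this may need to be repeated:

    oc get csr
    oc get csr -o name | xargs oc adm certificate approve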
D: And I'm going to do a couple of housecleaning things here. One is I'm going to remove the samples operator, because, unless something has changed recently (unfortunately Christian isn't here; we can ask him later), the samples operator, since you don't have an official Red Hat pull secret at this point, won't be fully functional and can in fact impede updates to your cluster.
D: So I'm patching the image registry's configuration with an emptyDir specification in place of a persistent volume, and I'm going to create an image pruner to run at midnight every night, because the console will gripe at you if you don't have an image pruner configured, until you do. So anything older than 60 days (rather than 60 minutes, which would be aggressive) it's going to prune at midnight every night.
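Roughly the housekeeping being described, assuming OKD 4.5-era resource names; the schedule and retention values are illustrative:

    # Remove the samples operator (no Red Hat pull secret in this lab)
    oc patch configs.samples.operator.openshift.io cluster --type merge \
      -p '{"spec":{"managementState":"Removed"}}'

    # Back the internal image registry with emptyDir so it starts without a PV
    oc patch configs.imageregistry.operator.openshift.io cluster --type merge \
      -p '{"spec":{"managementState":"Managed","storage":{"emptyDir":{}}}}'

    # Run the image pruner at midnight every night
    oc patch imagepruners.imageregistry.operator.openshift.io cluster --type merge \
      -p '{"spec":{"schedule":"0 0 * * *","suspend":false,"keepTagRevisions":3}}'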
A: Our workers are schedulable, but that's not bad.

D: Well, it's not, but there is a gotcha in here, which of course I never tripped over. Your ingress pods will deploy on any schedulable node.
D: Well, if your load balancer is only configured to look at certain nodes... here you see I've got port 80, port 443, and port 6443, and they're all directed to the master nodes.
D: So the key here is either to span your load balancer, which I don't really want to do, because that's a lot of extra cruft in the load balancer configuration, or to designate some infrastructure nodes, and that's the path that I chose to take. So what I'm going to do real quick is designate my master nodes to also be infrastructure nodes.
A: Why doesn't it do that by default?

D: Well, because the best practice is to create a couple of worker nodes that you set aside as infrastructure nodes.
A: I don't know... good, okay, just making sure, because I've seen these recommendations listed in the documentation, but there doesn't seem to be any particular reasoning to back them up. Historically speaking, I've seen clusters typically do the masters as infra nodes, because that way they handle essentially the stuff that keeps the cluster itself running, and the worker nodes are free to work on developer and user workloads.

D: Yeah, I think one of the things you need to consider is how beefy you make your master nodes. You know, if you've got heavy, heavy ingress operations...
D: You know, given everything else that the master nodes are doing, that might be a little overwhelming for them. In my particular lab environment, the master nodes are heavyweight enough: each of them has 30 gig of RAM and six vCPUs, so I feel pretty confident designating them as infra nodes. So what you do, once you run this label on them, is you then need to patch the scheduler so that the master nodes are no longer schedulable. You'll see right now they are infra, master, and worker nodes.
D: When I run this, now they're just infra and master nodes. Now, at this point, nothing got evicted off of them. So if you want to boot things off of them that you don't want running on there anymore, you need to either go through and evict all the pods that are running on each of those nodes manually, or reboot your master nodes, which is a bit more of an aggressive way of doing it.
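The two steps just described look roughly like this; the node names are placeholders:

    # Add the infra role to each control-plane node
    for node in okd4-master-0 okd4-master-1 okd4-master-2; do
      oc label node ${node} node-role.kubernetes.io/infra=""
    done

    # Then make the masters unschedulable for ordinary workloads
    oc patch schedulers.config.openshift.io cluster --type merge \
      -p '{"spec":{"mastersSchedulable":false}}'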
D: Now I'm going to patch the ingress operator to tell it that it's okay for it to run on those master nodes, and if you can read the eye chart here, I'll explain what it's doing. It's setting a node placement policy, giving it a match label of infra node. That's not enough, though; you also have to set some tolerations, because the master node is now tainted.
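A sketch of that patch against the default IngressController; the label and toleration shown follow the standard infra-node pattern, and the demo's exact YAML may differ:

    oc patch ingresscontroller default -n openshift-ingress-operator --type merge \
      -p '{"spec":{"nodePlacement":{"nodeSelector":{"matchLabels":{"node-role.kubernetes.io/infra":""}},"tolerations":[{"key":"node-role.kubernetes.io/master","operator":"Exists","effect":"NoSchedule"}]}}}'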
D: Not in a ready state yet. As soon as this one is in a running state, the second one will begin terminating. Don't panic if your other one sits in a pending state for a while, because it has an anti-affinity rule that it won't run on a node that already has an ingress pod running on it, so it has to wait for one of those terminating pods to finish terminating before it will schedule on the master.
E: Do you have to make sure you save that output text, or will it actually be somewhere where you can get to it?

D: Yeah, if you look at the directory that you used for the installation, there are the Ignition files that it created and the metadata, and it creates an auth directory.
D: Right there.

E: But if you get rid of the kubeadmin user, doesn't everything that links to the kubeadmin user break?

D: It's a temporary account. So here's what we're going to do: I created an htpasswd file ahead of time; my tutorial has instructions for how to do that.
D: So I've got an admin user and a dev user with passwords already in there. You saw me just create a secret.
D: So this is the custom resource that we're going to apply: it's setting up an htpasswd identity provider, and it's going to link it to that secret that we just created, the htpasswd secret.
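Put together, that flow is roughly the following; the usernames, file names, secret name, and placeholder passwords are illustrative:

    # Create the htpasswd file ahead of time (bcrypt-hashed entries)
    htpasswd -c -B -b okd4.htpasswd admin 'CHANGE_ME'
    htpasswd -B -b okd4.htpasswd devuser 'CHANGE_ME'

    # Store it as a secret the OAuth config can reference
    oc create secret generic htpasswd-secret \
      --from-file=htpasswd=okd4.htpasswd -n openshift-config

    # Point the cluster OAuth at an htpasswd identity provider backed by that secret
    oc apply -f - <<EOF
    apiVersion: config.openshift.io/v1
    kind: OAuth
    metadata:
      name: cluster
    spec:
      identityProviders:
      - name: htpasswd_provider
        mappingMethod: claim
        type: HTPasswd
        htpasswd:
          fileData:
            name: htpasswd-secret
    EOF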
D: I will apply that. It complains that I used apply instead of create, but I'm just in the habit of using apply to update objects, so you can ignore that complaint there. And then the last thing I need to do: this admin user that I just set up a secret for, but which doesn't exist yet, I'm going to give him cluster-admin rights. And now I'm going to be brave, and I'm going to delete...
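Those last two steps, sketched; the user name is a placeholder, and deleting the kubeadmin secret is what removes the temporary account:

    # Grant the htpasswd-backed admin user cluster-admin before removing kubeadmin
    oc adm policy add-cluster-role-to-user cluster-admin admin

    # Be brave: remove the temporary kubeadmin credentials
    oc delete secret kubeadmin -n kube-system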
A: Well, it also says the admin user doesn't exist.

D: That's correct, but it creates it in the background.

A: What?

D: Yeah, it's not intuitive or obvious, but it does, and it works.

A: Okay.
D: So there we go. I just logged in with my new, somewhat more secure cluster admin account, and you can see our four green check boxes; we've got a happy cluster. It will complain about alerts until you set up a Slack channel or something to send your alerts to; it's actually pretty easy to do, you create a receiver and walk through it. But I have used up most of my allotted time, so I'll stop playing now, and I think...
A: All right, well played. And can you do one more thing for me, just because I think people keep asking me these questions? Go back to the console and show the operators that are installed in your installation.
D: The OperatorHub operator might not actually be up yet, because it does take a while; you know, that initial install took us another 23 minutes, and it does take things a while to settle down. Let me show you what it does look like, because I have another cluster that I stood up this morning.
A: It seems less healthy.

D: Yeah, I think I did something to upset it. But here are the operators that are available.
D: Quite a few, you can see. If you want CodeReady Workspaces, the upstream of it, Eclipse Che, is in here.
D: I might, especially if you don't mind going a couple of minutes over, because the first thing I need to do is deploy... oh, actually, no, I can't, because I've already got it. Let me make sure I've got Ceph deployed in this cluster, so we're going to go to the rook-ceph namespace.
D: Okay, and unless you want to do something different about it, you install, and we're going to keep the stable channel. It is going to create the eclipse-che namespace, and we're going to let it have an automatic strategy for its approval. If you switch that to manual, then when the installer installs, you have to go to the installer and then say yes, you can actually install.
D: Well, if you think about it, you know, I'm doing everything as a cluster administrator. So if you're not a cluster administrator, but, you know, you want to request something...
D: That's part of what we've got going on here, because there are all kinds of configurable RBAC capabilities within this thing.
A: So when you install this operator as a cluster admin, does that mean that anybody who logs in with an account can then instantiate it afterwards?

D: Absolutely, yes, absolutely. People will be able to get in and create workspaces. Again, you know, it's got lots of role-based access control, so you can control who can do what, but yes, anybody that you've got created.
D: And when we create our cluster, I'm going to tell it to use that for Postgres, and I'm going to tell it to use that for the workspaces. Also note, each workspace is going to get a gigabyte of provisioned storage; that may or may not be enough, depending on the type of development that you're doing. That's pretty minimal.
D: This takes a couple of minutes, and then Keycloak is going to provision itself after Postgres is done. So now Keycloak is provisioning, and Keycloak actually goes through a couple of phases: it has an init phase that it runs through, so you'll see that pod come up and then terminate, and then be replaced by another Keycloak pod that will be your final configuration. And you won't see the Che controller come up until both Postgres and Keycloak have completed their provisioning.
A: That's all right, I have plenty of coffee today. And Michael has just pointed out: maybe you still have quay.io blocked via DNS, and...
D: You know what, I don't. That was a good catch. I snuck that in while Neil was talking: right here I blasted a command to my DNS server to remove the sinkholes for quay.io and for the registry server.
D: All right, so Keycloak is bootstrapping itself now, so you'll see some activity go there. All right, and there it is: now you see another Keycloak instance provisioning, and it will take over from the first one here in a minute.
A: In other news, Christian says that his full-blown AWS cluster has finished installation, so when we're done, we'll pop over and let him prove that, and then we'll grab Dusty when he's back and we'll hit up the DigitalOcean stuff.
A: Any of you who are joining us for the DigitalOcean demo, we'll probably get started on that one a few minutes after the hour. We're running pretty close to on time, which I think is amazing indeed, and we'll probably lose that thread at some point, but hey.
D: Programming skills, yeah, yes, indeed. So the first Keycloak instance, you see it terminating now; it's getting itself out of the way. The plug-in registry has fired up now, and you see other activity. There's our Che controller right here that is creating; we've got a devfile registry, we've got a plug-in registry.
A: I wish I had a fan here; the temperature is popping up here in Canada on the west coast. It's probably going to hit 32 today, so...
D: I'll create a folder here for you guys, so you don't have to see all the cruft on my screen. I'm going to go here and show the certificate. This is Safari-specific, obviously, so follow the instructions for your favorite browser; Safari is not my favorite, but here it is. Grab that, and then what you're going to do, once you've got that certificate, is add it to the trust store of your operating system.
D: And I'm going to say, yeah, allow these permissions, and now it's going to ask you to create an account. Now, another important safety tip if you do what I did: there is an admin account that Che creates. Well, I named my cluster administrator "admin", so I need to give this a different name or I will cause some pain for...