Description
Jaime Magiera and Josef Meier demonstrate automated installation of OKD on vSphere with User Provisioned Infrastructure (UPI).
A: All right, let's go ahead and get started. Josef, did you want to lead off with yours, or do you want me to go first?
B: What's the plan, Jaime? Let's synchronize.
A: Sure. So let's have you walk people through a detailed explanation of your setup with DNS, DHCP, and vSphere, and then I'll show my automation.
C: All right, I'm going to just pop into the other rooms. Just know that you're recording, you've got about 10 people watching you — so take it away, and you don't have to go for six hours. I set it up so you could go as long as you wanted, but if you both do six hours, you're doing good and you're basically doing tech support. So that's the other thing: remind people that you're not doing tech support — you're demoing. Take care, thanks.
B: Thank you, Everett. You're asking about vSphere: you want workers in different clusters — can you use IPI and move forward after the cluster is created, or should you use UPI? Do you want to share workers over different clusters, or what is your purpose?
A: Good question, to be honest. But let me respond to Larry real quick first. Larry, 3.x is no longer supported as of May, so you'll want to move over as soon as possible. There are some migration guides available on the web, and if you have any particular questions about migration, we might be able to handle them, or folks in the event chat could handle them as well. But you'll definitely want to move over as soon as possible.
A: And okay — well, while we're waiting for Everett, why don't you go ahead, Josef, and... oh, here we go, we got a response. Yeah.
A: A scenario. So are you thinking of moving the VM to a different vSphere installation and having it work?
A: Is this UPI or IPI? Oh, you're asking... okay. So essentially a worker is bound to its control plane, and if you want to use a worker on a different cluster, you basically need to redeploy it. That's one of the foundations of FCOS, Fedora CoreOS: when you want to make a change, you basically just redeploy the node. So you would just redeploy the node by taking the metadata.
A: You could do this via UPI. If you did UPI, take the metadata that was generated — the Ignition config file — and insert that into the VM. There's also a flag you have to set; I can't think of it off the top of my head. But basically, when you boot up the VM the next time, it'll reprovision and make that connection to the other control plane that you want to move it to.
B: I could show an installation based on the OKD repository — maybe the first steps.
A: Yeah, go ahead, and then show the automation stuff that sort of simplifies all of that.
B: We currently have an IPI Azure description and a UPI description in different flavors. I could show you how I start with the vSphere Terraform version. Does anybody of you use Terraform? Maybe you can just count that first.
B
Okay,
two.
Okay!
I
think
it's
worth
two
yeah
three!
Okay,
three
is
a
three
is
a
set.
I
could
okay,
I
will
spend
a
few
minutes
about
that.
So
the
first
thing
is:
you
should
clone
this
repository
here
say
okd
repository
I've
done
that
in
advance.
I
I
built
a
cluster
an
hour
ago
from
that
this
is
a
repository,
I'm
in
the
vsphere
terra
from
folder
a
few
days
ago.
I
I
can
try
to
make
it
bigger.
Maybe
that
helps
oops.
It
scrolls
away.
B: Give me a second... maybe like this. I hope you can see something.
B: So, Jaime, if somebody asks anything, maybe you can throw it to me, because I can't see the chat.
A: Absolutely, yeah.
B: So this is the repository; I'm starting from scratch.
B: This is the same one you see in the GitHub repository. If you go to Guides → UPI → vSphere → Terraform, you will land in this repository here, and you will see that there is a file in the repository that's called terraform.tfvars.example.
B: I will show you this one — here it is. At first you have to fill out the variables. Don't get frightened — it's nothing special. You have here your cluster ID and your cluster domain. I can show you what I built from that; it is here.
B
The
cluster
name
is
c1
my
example.
Here
the
domain
is
home,
lab
net
yeah
and
you
have
to
fill
that
out.
I
have
to
find
my
put
it
like
this.
Then
you
have
to
yeah
tell
them
whether
this
vsphere
server
is
your
vsphere
credentials.
B: This is the installation I bought with my 150 bucks VMware User Group subscription — this version of vSphere I'm using in my home lab. Here you see the data center, and the storage is called datastore2, and you have to fill that out in this terraform.tfvars file. You have to say how many masters and how many workers you want deployed, and you have to provide a few more Ignition-configuration-related pieces of information.
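[Editor's note: a sketch of the kind of terraform.tfvars Josef fills out here. Exact variable names vary by repository version, and every value below is a home-lab placeholder.]

```bash
cat > terraform.tfvars <<'EOF'
cluster_id         = "c1"
cluster_domain     = "c1.homelab.net"
base_domain        = "homelab.net"

vsphere_server     = "vcenter.homelab.net"
vsphere_user       = "administrator@vsphere.local"
vsphere_password   = "changeme"
vsphere_datacenter = "dc1"
vsphere_datastore  = "datastore2"

control_plane_count = 3
compute_count       = 3
EOF
```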
B
You
can
leave
this
tool
fixed
and
here
you
have
to
provide,
say
url
of
a
of
web
server
where
you
serve
the
bootstrap
ignition
file
yeah,
because,
as
maybe
you
remember
from
my
slides
that
in
the
first
step,
the
bootstrap
node
will
fetch
its
ignition
file
from
somewhere
and
that's
what
you
have
to
provide
here.
You
have
to
provide
the
location
of
web
server
where
the
bootstrap
ignition
policies.
B: I will show you how you create that in a few minutes. Then, yeah, here you have to provide the MAC addresses of your VMs. The bootstrap VM has this MAC address here; for the control plane, three masters, three MAC addresses; and three workers, three MAC addresses for the workers. And that's pretty much everything you have to provide here — that's the most work you have to do.
B: Then we have a folder here, the installer directory. In this folder you have to provide the Ignition files. I downloaded the openshift-install binary from OKD's web page — maybe you remember it.
B: If you go to the OpenShift OKD site, you have the releases here on the right, and you simply have to choose the installer binary for the version you are interested in; you can download it. In this case I would download, yeah, maybe this openshift-install — because I'm on Linux, I would download this one.
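[Editor's note: the download step in command form; the release tag is just an example from the 4.5 era.]

```bash
RELEASE=4.5.0-0.okd-2020-10-15-235428   # example tag from the OKD releases page
curl -LO "https://github.com/openshift/okd/releases/download/${RELEASE}/openshift-install-linux-${RELEASE}.tar.gz"
tar -xzf "openshift-install-linux-${RELEASE}.tar.gz" openshift-install
./openshift-install version
```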
B: I have done that previously for an older version, and then you untar it until you have your installer here.
B: Let me set this... the installer version is OKD 4.5 — that's okay for this demonstration. Afterwards you have to provide the install-config.yaml file. I can't show it in detail because my vSphere credentials are in it, but I will copy it to the location where the installer is.
B: Okay, now we have an install-config OVN file; I will rename it to install-config.yaml. Okay — the install-config contains information like, again, how many workers and masters you have and what the domain name is, and you can provide vSphere credentials.
B: So if, later, in your running cluster, you want to dynamically create more workers, OpenShift — or OKD — will use the vSphere API with the credentials provided here in this phase to provision more workers on the fly. Also, an autoscaler could use the vSphere credentials provided here to create more nodes dynamically.
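[Editor's note: a skeletal install-config.yaml for vSphere UPI, matching what Josef describes; all credentials, names, and keys are placeholders.]

```bash
cat > install-config.yaml <<'EOF'
apiVersion: v1
baseDomain: homelab.net
metadata:
  name: c1
compute:
- name: worker
  replicas: 3
controlPlane:
  name: master
  replicas: 3
platform:
  vsphere:
    vCenter: vcenter.homelab.net
    username: administrator@vsphere.local
    password: changeme
    datacenter: dc1
    defaultDatastore: datastore2
pullSecret: '{"auths":{"fake":{"auth":"aWQ6cGFzcwo="}}}'
sshKey: 'ssh-ed25519 AAAA... user@example.com'
EOF
```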
B: The next thing you have to do — you have a few possibilities here. I create the Ignition config files directly. You also could create the manifests of the cluster operators at this stage, configure them, and afterwards build the Ignition config files from these manifests (a sketch of both paths follows).
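[Editor's note: the two installer paths as subcommands; install-dir is a placeholder directory.]

```bash
# Path 1: go straight to Ignition configs.
openshift-install create ignition-configs --dir=install-dir

# Path 2: stop at manifests, customize them, then build Ignition configs.
openshift-install create manifests --dir=install-dir
# ...edit install-dir/manifests/*.yaml and install-dir/openshift/*.yaml here...
openshift-install create ignition-configs --dir=install-dir

ls install-dir/*.ign   # bootstrap.ign  master.ign  worker.ign
```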
B
That's
useful,
if
you
don't
want
to
create
a
cluster
and
configure
it
afterwards,
but
if
you
want
to
create
a
cluster
that
is
pre-configured
with
your
with
your
implementation
details
this
example,
I
don't
do
anything
like
that.
I
straightforward
create
my
ignition
files,
and
here
we
have
the
most
important
one.
It's
a
bootstrap
ignition
file.
We
can
have
a
look
inside
of
it
looks
like
it's
a
huge
json
file.
B: This port here, if you see it, is always the port of the — sorry — the Ignition configuration server that is running on the bootstrap node and also on the control plane. The small Ignition file that's used for the master VMs will constantly poll for its Ignition file from the control plane — or, in the first phase, from the bootstrap node — on this path, and it tries and tries and tries to get this configuration file until it gets it, and then it provisions itself. The size of this Ignition file...
B: I don't know the exact size it's possible to pass to a VM in vSphere for the Ignition files, but it's much smaller than what's needed, and this two-phase Ignition fetch is used to overcome this limit in vSphere, also for the bootstrap VM.
B: We have to use a small stub; it's called append-bootstrap. It's even smaller than the master Ignition file. It contains, say, the address of a web server — I used a simple Apache web server on this helper node here — and this Apache serves the bootstrap Ignition file.
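[Editor's note: a sketch of such an append-bootstrap stub; the helper URL is a placeholder. Ignition spec 3.x calls this "merge"; older 2.x configs used "append", which is where the name comes from.]

```bash
cat > append-bootstrap.ign <<'EOF'
{
  "ignition": {
    "version": "3.1.0",
    "config": {
      "merge": [
        { "source": "http://helper.homelab.net/bootstrap.ign" }
      ]
    }
  }
}
EOF
base64 -w0 append-bootstrap.ign   # this encoded form goes into the VM's guestinfo
```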
B: You see here that I already provisioned VMs with this method; the cluster is already set up.
B: I hope you can read it — okay, can I make it bigger? Here we see all the VMs, and this is a running cluster, yeah. It should look like this in the end: you see a dashboard, and — what is this, the samples operator? Okay, don't worry about that — you normally have three green check marks and you are fine. Then you have your first running OKD cluster.
B: I don't know if I should destroy this one and create a new one. Are you interested in that?
B: Okay, great — because I think lots of people are struggling in this first phase: the bootstrap VM comes up, it gets its bootstrap Ignition file, but afterwards things get stuck, yeah. And maybe it's also interesting how you can debug that, Jaime? We can maybe do a debug session — produce a common problem and try to find a solution — because you will find videos on the internet that show you the perfect world.
B
There
are
lots
of
videos
for
that,
but
maybe
you
have
you
can
take
more
about
the
sessions.
If
you
see
how
to
troubleshoot
that.
E
B
B
B: Okay, I have to get used to this tool here. Okay, I hope you see my screen. For that to work, you have to create a MachineSet, as it's called in OpenShift, and the MachineSet has a providerSpec where you can tell OpenShift, yeah, which kind of provider you want to use to create machines.
B: In this section here you also have to provide the information about your vSphere cluster. You can provide the disk space and the memory of the VMs that are created in vSphere, CPUs and so on, data center, datastore and so on. And if you provide this MachineSet to vSphere, you can simply do two things. I don't have a MachineSet set up tonight — I can try to, but I don't think it will work. Okay.
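[Editor's note: a trimmed vSphere MachineSet sketch showing the providerSpec section Josef is pointing at; names, sizing, and workspace values are placeholders.]

```bash
cat > worker-machineset.yaml <<'EOF'
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  name: c1-worker
  namespace: openshift-machine-api
  labels:
    machine.openshift.io/cluster-api-cluster: c1
spec:
  replicas: 2
  selector:
    matchLabels:
      machine.openshift.io/cluster-api-cluster: c1
      machine.openshift.io/cluster-api-machineset: c1-worker
  template:
    metadata:
      labels:
        machine.openshift.io/cluster-api-cluster: c1
        machine.openshift.io/cluster-api-machine-role: worker
        machine.openshift.io/cluster-api-machine-type: worker
        machine.openshift.io/cluster-api-machineset: c1-worker
    spec:
      providerSpec:
        value:
          apiVersion: vsphereprovider.openshift.io/v1beta1
          kind: VSphereMachineProviderSpec
          numCPUs: 4
          memoryMiB: 8192
          diskGiB: 120
          template: fedora-coreos-33-template   # the FCOS VM template mentioned below
          network:
            devices:
            - networkName: "VM Network"
          workspace:
            server: vcenter.homelab.net
            datacenter: dc1
            datastore: datastore2
          credentialsSecret:
            name: vsphere-cloud-credentials
          userDataSecret:
            name: worker-user-data
EOF
oc apply -f worker-machineset.yaml
```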
B
At
least
we
see
something
you
will
see
here,
that
you
can
create
more
machines
by
simply
pressing
plus
minus,
and
if
I
would
have
filled
out
my
vsphere
credentials,
I
would
see
that
if
I
press
save,
set
vsphere
immediately
will
create
new
vms.
B
You
have
to
also
to
provide
a
template,
a
vm
template
for
fedora
cos
in
your
machine
set.
That
was
one
of
the
fields.
I
think
it's
somewhere.
B
I
it
should.
I
don't
see
it,
but
it's
it's.
It
should
be
here
one
of
this
fields
and
then
you
will
see
the
face
here:
the
provider
state
it's
taken
from
vsphere,
it
says
powered
on
powered
off,
then
the
machine
will
fedora
cos
will
try
to
join
the
cluster.
B: If you want to provide some specialties on the newly created workers, you have to provide Ignition files for that, and there is — I don't know if it is documented, but I found it, and I will show you, because it's rather useful — in the namespace openshift-machine-api...
B: This is the Ignition file here. It will force a new FCOS machine to constantly pull the Ignition config from this URL.
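[Editor's note: one way to inspect that stub — the user-data secret in openshift-machine-api that MachineSet-created workers boot with.]

```bash
oc -n openshift-machine-api get secret worker-user-data \
  -o jsonpath='{.data.userData}' | base64 -d | python3 -m json.tool
```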
B: If you use resources like, for example, pod disruption budgets, then it can happen that when the cluster autoscaler tries to delete nodes, these PDBs — pod disruption budgets — are blocking the eviction of nodes, because otherwise you are, yeah... I don't know the English word... violating the contract of how many pods must run in your deployment. That's exactly what a PDB does: it defines a minimum number of pods that always must run. But the autoscaler works pretty well.
A: Any more questions before we move on to sort of automating the UPI process? I did put two polls in: "Are you currently running OKD?" and, if yes, "What version are you running?" It'd be interesting to see what folks have and get a sense of what folks are running. And so now let's talk a little bit about automating the UPI process.
A: So if you're doing UPI, you know, as mentioned before, you're going to want to have a load balancer, and possibly a proxy if you're on a private network. And so there's a link to a script that I wrote — I'll put this in the chat. This is a project I've been working on for a while; it's called OCT, and it's basically automating the UPI process so that you can continuously test OKD cluster installs, and everything after that that goes with it.
A
And
it
allows
you
to
do
everything
from
basically
generating
the
ignition
config
files
to
downloading
the
version
of
fedora
core
os
that
you
want
to
use
as
your
base
operating
system
on
the
nodes
and
running
the
openshift
installer,
and
I've
got
a
list
of
the
prerequisites
I
went
through
these
before,
but
essentially
you'll
want
to
have
your
dns
entries
that
we've
talked
about
for
your
bootstrap,
for
your
master
for
your
worker
api
and
api
ins
and
for
your
apps,
the
wild
card
for
your
for
your
apps,
and
so
your
load
bouncer
is
going
to
have.
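[Editor's note: the usual UPI records, sketched as BIND zone entries; names and addresses are placeholders for a home-lab network.]

```bash
cat >> c1.homelab.net.zone <<'EOF'
api.c1        IN A 192.168.1.10   ; load balancer VIP
api-int.c1    IN A 192.168.1.10
*.apps.c1     IN A 192.168.1.11   ; ingress VIP
bootstrap.c1  IN A 192.168.1.20
master0.c1    IN A 192.168.1.21
master1.c1    IN A 192.168.1.22
master2.c1    IN A 192.168.1.23
worker0.c1    IN A 192.168.1.31
worker1.c1    IN A 192.168.1.32
EOF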
A
Basically,
you
need
your
load
balancing
to
handle
the
api
in
ingress.
So
you'll
need
two
different
pools
for
that
and
again,
if
you're
on
a
private
network
you're
going
to
want
to
use
a
proxy
for
outgoing
traffic,
that's
for
the
installation
itself
to
download
the
containers
off
of
quay
or
those
for
from
the
testing
releases,
and
also
for
regular
operation
of
your
cluster.
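[Editor's note: a minimal HAProxy sketch of the two pools Jaime describes, plus the machine-config server port; all addresses are placeholders.]

```bash
cat >> /etc/haproxy/haproxy.cfg <<'EOF'
listen api
  bind *:6443
  mode tcp
  balance roundrobin
  server bootstrap 192.168.1.20:6443 check   # remove once bootstrap completes
  server master0   192.168.1.21:6443 check
  server master1   192.168.1.22:6443 check
  server master2   192.168.1.23:6443 check

listen machine-config
  bind *:22623
  mode tcp
  server bootstrap 192.168.1.20:22623 check
  server master0   192.168.1.21:22623 check
  server master1   192.168.1.22:22623 check
  server master2   192.168.1.23:22623 check

listen ingress-https
  bind *:443
  mode tcp
  server worker0 192.168.1.31:443 check
  server worker1 192.168.1.32:443 check
EOF
```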
A: It's for your pods to do any outgoing traffic, like running a yum update or a composer install, or retrieving resources out on the net. And so Squid is a good one for that — Squid is something you can utilize pretty easily and configure pretty easily. So let me share my screen here.
A: There we go. So here is the repository for the tool that I have been working on, OCT. It's a command-line tool to simplify the process of building and destroying OKD clusters in vSphere. It utilizes the govc command and the oc and kubectl tools that come with OpenShift. So govc is a separate project that this tool utilizes, and then oc and kubectl are provided with the OKD and OpenShift installs. And here are the command-line arguments; we'll go through all of them.
A: But basically, it allows you to automate all of the stuff that Josef was just talking about in terms of having your configuration file, and it also has a bunch of extra features. So I've got a list of the functions here. For example, the tool checks if you have oc installed, and if you don't, it'll pull a version of the tool down to your machine — to a bin folder in your working directory. This was mentioned before by Vadim, and I'll expand on it a little bit.
A: So if you're running a 4.6 cluster and you happen to have the oc tool, you can use it to download the installer and the oc binary for a particular version, like 4.7 or a nightly release or whatnot. That's something a lot of folks aren't really aware of, but it allows you to do testing very efficiently, just by pulling the installation tools and the relevant oc. And this script also checks whether you have the govc tool.
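[Editor's note: the pull-the-tools trick in command form; the release tag is an example pullspec.]

```bash
oc adm release extract --tools \
  quay.io/openshift/okd:4.7.0-0.okd-2021-02-25-144700 \
  --to=./tools
ls ./tools   # openshift-install and openshift-client tarballs plus checksums
```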
A
If
you
have
the
govc
tool,
which
is
a
tool
for
working
in
the
command
line
remotely
or
locally
with
vsphere,
so
you
can
create
vms
import
templates,
ova
templates
and
do
all
sorts
of
stuff
very
simply
with
the
govc
tool.
And
so
my
script
works
with
that
and
there's
a
couple
of
functions
within
the
script
that
do
the
the
heavy
lifting
the
first
one
is
install
cluster
tools
and
that
one
installs
oc
control
and
the
open
shift
installer
binary
for
the
version
that
you
want.
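[Editor's note: a few of the govc operations such a script can lean on, with the GOVC_* environment variables set as earlier; names and OVA file are placeholders.]

```bash
govc about                                   # sanity-check the connection
govc import.ova -name=fcos-template fedora-coreos-33-vmware.x86_64.ova
govc vm.clone -vm fcos-template -on=false bootstrap
govc vm.power -on bootstrap
```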
A: Then there's one called launch-prerun. Launch-prerun does the work that Josef was talking about: you have that configuration file, the installer generates Ignition files, and then eventually you use those — the Ignition files the installer generates.
A: You generate those Ignition config files, and the installer utilizes the SSH account that was created — the core account — to go in and trigger the installation and whatnot. Launch-prerun helps you: it basically generates those files for you and modifies them, like inserting what they call a pull secret.
A: It takes care of doing all that for you. So once you've created a template of the config file, it actually copies the template into a fresh one, inserts the necessary information that you need, and then gets everything set up so that you can do a deployment. And the deploy-node function in the script actually does the part of generating a VM in vSphere for each of the types of nodes that you need — worker, control plane, and bootstrap — and then inserts the appropriate Ignition config into it, and can also boot it up.
A
So
this
is
for
the
individual
node.
This
class
gets
called
by
the
individual
node
build
cluster
is
what
calls
deploy.
Node
build
cluster
basically
takes
all
of
the
information
and
then
calls
deploy
node
for
each
node
that
you
need
deploy.
Node
can
also
be
used
for
deploying
standalone
fedora
core
os
nodes.
So
if
you
want
to
play
around
with
fedora
core
os,
we
talked
about
that
a
little
bit
in
the
main
opening
session.
A
This
tool
can
be
used
to
automate
that
process
as
well
and
then
there's
destroy
cluster,
which
allows
you
to
very
easily
tear
down
your
cluster
and
then
manage
power,
obviously
bringing
the
nodes
up
or
down
and
then
also
clean.
So
what
clean
does?
A: It actually cleans up those files that we were talking about before that get generated when you go to do an install — the master Ignition, the metadata.json file that gets created, and all that stuff. So clean will actually clean that stuff up. And so I think what I'll do is actually demonstrate that right now, then I'll do a destroy, and then we'll go from there.
A: So here is a cluster that I have running. It's called logos, and it's got a bootstrap, two master — sorry, three master nodes — and two worker nodes. And so what I'm going to do first off is clean up this mess here that gets created. So I call the script and I go clean.
A: Okay, so I have this append-bootstrap and install-config template, and this script is being run on a machine that I use as an installer machine, which has Apache running on it. So for UPI — if you're doing UPI vSphere — you're often going to be pulling the bootstrap Ignition off of a web server.
A: Oh, for some reason it's not working; I'm not sure why. But let me see real quick what I missed here... but basically you can delete all of this stuff. Oh, sorry, that's why — destroy needs the cluster name; I forgot. Okay, so this allows you to work with multiple clusters, and I need to put the cluster name. And so we do...
A: ...this. Okay, and so now you can watch these actually disappear as they get deleted. And this is again using the govc command-line tool to connect to vSphere and delete those nodes.
A: And Josef, let me know if there are any questions in the chat, since I can't see what's going on. Yeah — and so now we've got a fresh environment.
A: All we have is the append-bootstrap, which we can reuse because it doesn't have any unique data — it just has the web address and then basic parameters — and then we've got a template here. I won't cat it out, but basically it's the template that you see in the instructions for an install.
A: Yeah, it's basically this template, with the SSH key and credentials in it, and the script knows to copy that template into a fresh version of the config file and to utilize that. So I'm going to show you the wrapper script that I created...
A: ...that calls all the different functions to do those different stages. So I have a script called build-logos; it's sort of a wrapper script, and, as you can see, it calls all of the different functions. I'm disabling this one right now, because watching it is like watching paint dry — you're watching the import — but you can import, from a URL into your vSphere,...
A: ...the template — the OVA template — that you want to use for your OKD install. And you set your basic parameters up here: how many masters and workers; the template URL that you want to use, or the template name if you're going to skip that step; your library within vSphere; your cluster name, and where you want to put that cluster in your vSphere; what network it's going to use — I'm using "VM Network", the default network — and then the folder for the installs there.
A: This source line is just reading in my credentials for the tasks, and then again, here's the import — I've commented this out, but this is importing the template into the library that you wanted. This installs the tools — so I'm going to install the tools for a 4.6 OKD installation — then prerun with auto-secret.
A: Auto-secret is a flag that I created that inserts a sort of dummy pull secret, and it prevents you from having to go to the OpenShift portal to get a generated pull secret. It's a dummy secret, and you don't really lose any functionality by using the dummy secret. Eventually there is talk in OKD of utilizing the functionality that the pull secret provides.
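[Editor's note: the dummy pull secret trick, as it appears in the OKD docs; with OKD no Red Hat account is needed, unlike OCP.]

```bash
PULL_SECRET='{"auths":{"fake":{"auth":"aWQ6cGFzcwo="}}}'
# a script like this drops that string into install-config.yaml's pullSecret field
```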
A: But right now it doesn't have as many ramifications, just using the dummy one. And here we have the call to the build, where you provide, you know, the cluster folder, cluster name, node count — all of those things that we discussed. And this is a little trick that I do so that I can use reserved DHCP.
A: So I know the name and the IP number that the nodes are going to be at, but I'm not doing a static IP installation — a static IP installation is a little less flexible. And this turns the nodes on, and then this runs the openshift-installer here. So I'm, you know, in my installation folder, and I am going to run that script that we were just looking at: build-logos.
A: It's done that; it's modified the manifests to make sure the control nodes are unschedulable for worker pods and basically set up the control plane, so you don't have to do those manual steps. It copied the bootstrap Ignition to my /var/www/html for the web server, and now it's deploying those individual nodes. So, as you see here, the bootstrap is getting deployed, and now it's going to go through each one, and it's adding again that Ignition metadata so that when these are booted up, they will automatically start performing their tasks.
A: The bootstrap node will automatically start — it will be available for the installer to pull down the initial containers — and the worker nodes will be booted, do their Fedora CoreOS update, and then restart. So that's a process that folks may or may not be familiar with, where essentially, when your nodes first boot up, they install the updated version — the most recent version — of Fedora CoreOS and then boot into that again.
A: So you can see all of these nodes are getting built, and we'll wait just a minute for those to complete.
B: Jaime — yeah, you could show the console, yes, this window, when it starts, so we can see the bootstrap process, yeah.
A: So it waits until all of the nodes are created before it powers up. I wrote it that way because there's a flag that you can basically set if you, for example, wanted to build the cluster but not turn it on yet, or not run the installer yet, and whatnot.
A: So it's done as components, so that you have the flexibility there. So now we're going to... it'll probably take about two more minutes. So are there any questions at this point, or anything?
A: So if you're utilizing vSphere's... it depends on how your vSphere is configured — it will automatically put them in the best place for resources, if you have that enabled. There's another route that you could take — my script doesn't do that, but you can do this very easily, and actually an old version of my script used to do this — where you can set the datastore on each individual node, and it won't make a difference for the cluster.
A: I can share the code with you if you want — was it Mark Delaney? If you want, share me your email, and I'll share with you the old code that I have that actually does allow you, on a per-node basis, to select your datastore. And that is something that some folks might want to do if, for example, they anticipate their workers are going to have a larger size versus your control plane or whatever.
A: So — okay, this is going a little bit slower than I hoped. Are there any other questions about this?
A: Yeah, I believe it is at the very top.
A: 4.6. If it's not, I can find that for you, and I'm happy to post it if you leave some contact. Actually, we can post it in the blog post — we're going to be doing a blog post sort of after this session, a blog post of, like, all the stuff that we covered in this — and I can put that info in, if I can't find it real quick here.
B: Maybe we could SSH into the bootstrap node. I think it's interesting to see what it does.
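[Editor's note: the usual ways to watch the bootstrap node, assuming the SSH key from install-config.yaml; hostname and IPs are placeholders.]

```bash
ssh core@bootstrap.c1.homelab.net \
  journalctl -b -f -u release-image.service -u bootkube.service

# or have the installer collect logs if bootstrapping gets stuck:
openshift-install gather bootstrap --dir=install-dir \
  --bootstrap 192.168.1.20 --master 192.168.1.21
```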
A: Yeah — well, let's watch the bootstrap process first, and we'll see how folks feel about watching things scroll by. But let's... could you start with the bootstrap?
A: Okay, so now it's powering all of the nodes up; it's going through. So here's the bootstrap — so here is that first run of the bootstrap node, and it always takes six tries: on the sixth attempt it's able to get the Ignition config off my server. And now you see it's reading that Ignition config in, and configuring the network and all of that stuff, resizing...
A: Right, exactly. Okay, so now in the background — well, you can actually see it here on the screen — it's performing all of the tasks of updating the FCOS image, so downloading the latest Fedora CoreOS version, and...
A: ...then it will reboot into the updated version. And actually, let me get a master up.
A: Yeah, so the masters are running, and they are waiting patiently for the bootstrap to complete. And yeah, so this should be going.
A: Okay, so it's going through that process, and then you'll see it reboot again, and then a couple seconds later the master nodes will be pulling their info. And again, this is calling the pool for the control plane that's set up in the F5 that I have — and for Josef it'd be the HAProxy.
A: Yep, here we go — rebooting into the latest version. And you can see down here, the installer is waiting, for that initial 20 minutes, for the bootstrap to start and make itself available. And then once the bootstrap is done — a couple seconds after it reboots and turns everything on to provide the machine configs — this will change, and you'll then see it switch to waiting for the control plane, for 30 minutes, I think, is the amount of time it gives.
B: But I think you could use your kubeadmin password to call oc get pods, all namespaces, so we can watch what it does. That's also possible, I think — it should be possible now. Then we'd simply see the cluster operators and how they get into a running state.
A: Let's see here... let me get this.
A
Yeah,
let's
see,
I
have
a
a
script
that
does
this.
Let
me
just
remember
what
it
is.
B
B: Jesper is asking about which CoreOS — which FCOS version — you have to use for OKD. It depends... normally you can always start with the FCOS version that is mentioned in the OKD release. If you go to the download page — I always look at...
B
I
google
for
origin
ci
release
and
then
the
first
hit
is
the
page
where
all
stable
releases
of
okd
are
listed,
and
if
you
click
on
the
version,
then
you
will
see
which
fcos
version
is
installed
by
this
version,
and
you
always
can
start
with
this
version.
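[Editor's note: an alternative to clicking through the release page — asking the release image directly; the tag and output line are examples.]

```bash
oc adm release info quay.io/openshift/okd:4.6.0-0.okd-2021-02-14-205305 | grep -i machine-os
# e.g.:  machine-os 33.20210201.3.0 Fedora CoreOS
```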
A: Right. So if you click on a particular version like this, you can see... let's see, where is it...
A: Oh, you know what — it's on the GitHub page, is what it is. So if we go to Get Started and go to Releases on there — so it's the Releases page on the GitHub — you can actually see...
A: ...the different components that are there, and that includes the OS version that it used for the installation. And that is... where is it...
B: 4.6 — normally, in almost every release you see it. There are a few releases recently where they did not write it down.
A: So this is the one you want: machine-os. And — was it not there? Were we missing it? Or... no, it really isn't up there. Okay, that's strange. Okay, yes — so machine-os tells you what version of Fedora CoreOS it works with. There are some bugs, though: for example, with the most recent version of Fedora CoreOS, you don't want to start fresh with that. You want to start with a previous version, and then it will — okay, so — and that'll work.
A: ...where you can find things along those lines. And there's a known issues page here — let's see if it's in there... no, it's not, but on the blog there's an article about it, right? I think you put something on the blog about that.
A: So, as we can see here, the installation has now gone to waiting for the control plane to be configured, and we see one is booting up, and if we look at the others, those are booting up as well — yep, now these are going. And your workers are still going to be waiting, because they won't be able to get anything until the control plane is going.
A: And then the etcd cluster will configure itself.
B: You can export KUBECONFIG — uppercase — pointing to your auth directory. I saw it somewhere.
A: Yeah, I've got it — so I've loaded it in. And then, let's see, it's oc... what's the flag to list everything?
B: All-namespaces — "all" hyphen "namespaces", with two hyphens, yeah.
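[Editor's note: the commands being assembled here, using the kubeconfig the installer writes; the install directory name is an example.]

```bash
export KUBECONFIG=$PWD/install-dir/auth/kubeconfig
oc get pods --all-namespaces    # short form: oc get pods -A
oc get clusteroperators         # short form: oc get co
```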
A: Yep — so those are all of the operators that are getting installed, and you can also do oc get co, which is going to give you the list of those as well. And let's see where we are on the nodes... okay, so they are not ready yet, but they will soon be joining themselves to the cluster.
A: And here's a step that I need to automate — I should put this in my script. Once the installation is done, you will have to approve certificates, and the process of approving those can take a while, so it's kind of handy to have the certificate approval process sort of there. There it is — you have to approve all the certificates. There won't be any yet, I don't think, because...
B: It's the certificate signing requests.
A: Yeah, and there's a handy little tool that you can use once this is done — well, we don't have to wait for it to be done — but essentially you call this, and that will get all of the CSRs and approve them. They added a nice little flag, I noticed recently, in the documentation — --no-run-if-empty — whereas before you would get an error if there were none. So xargs actually doesn't run if it gets nothing back. So yeah, that is good. So anyway...
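[Editor's note: the approve-everything one-liner from the install docs, with the flag Jaime mentions.]

```bash
oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' \
  | xargs --no-run-if-empty oc adm certificate approve
```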
A: So now we're waiting for the install to happen, and yeah — are there any questions on what we've shown?
A: Love it — I wrote all of this, yeah. I wrote this over a period of time, basically about a year and a half, because, you know, I was doing the installation process over and over and over again for testing.
A: You know, my production cluster has been up... I've got a production OCP, a production OKD, and then a testing OKD, and for rebuilding those — and having, you know, disaster recovery, essentially, to get them up quickly — I wanted something that simplified the process. And this is the actual script code here, you know, and if there's any features that folks would like to see, I'm happy to write those in — and, you know, there's a lot of folks doing UPI.
A: Yes — yeah, it'll work for both; there's no difference. The only thing is that for OCP, to get that support, you want to have the pull secret. So in the code — let me... and sorry for, like, flipping through really fast; I hope I'm not making people dizzy — but if you look in my code, where is the prerun... right. So basically, if you don't say to use the dummy pull secret — or auto pull secret, as I call it — it actually says:
A: "Please enter your pull secret", so you can paste it into a dialogue in the script. So it doesn't make it completely automated. I think what I'm going to do is...
A: ...add another else to this, where it'll read, in your config file, if there is a pull secret already there, and if there is, then it'll just use that and duplicate it. Pull secrets are good for — was it just 24 hours, or 48 hours? I can't remember; I think it's 24 hours that the pull secrets are good for. Actually, I don't think they expire — there's a certificate on the other side that expires.
B
Jasper
is
asking
you
about
he's,
saying,
awesome,
work
and
here's
a
question
he's
a
bit
confused
why
you
are
not
using
ipi
because
sure
yeah.
What's
the
difference.
A
Sure
so,
with
the
reason
that
I'm
not
using
ipi
is
because
we
wanted
to
have
the
f5
there's
a
couple
reasons
we
wanted
to
have
the
f5
as
front
facing
for
the
cluster,
because
we
could
then
route
requests
through
that
load,
balancing
and
have
things
such
as
if
the
cluster
in
its
entirety
goes
down
still
have
monitoring.
Notifications
still
have
redirecting
those
urls
to
outage
pages
and
whatnot.
A
Also
the
ability
to
use
the
the
other
functions
that
we
have
in
the
f5
that
are
a
little
bit
superior
to
the
load
bouncing
within
vsphere.
A
If
you
use
the
internal
load
bouncing
within
vsphere,
there's
also
in
terms
of
ipi,
the
ability
to
add
more
network
customizations
and
also
there's
some
benefit
to
ipi
in
terms
of
sort
of
portability,
because
we
have
this
script
and
because
we
have
the
f5
and
whatever
we
can
duplicate
this
in
other
places
that
maybe
aren't
necessarily
vsphere
or
don't
necessarily
have
or
have
slightly
different
infrastructure
and
whatnot.
A: So not relying on everything being internal to vSphere was the way that we wanted to go right now. And also, at the time — when I first started this — it wasn't clearly documented how to set your subnet for your cluster and whatnot. They've added a lot of functionality and a lot of documentation recently about setting your subnets for the cluster and for the pod network and stuff like that; a lot of that wasn't clearly defined early on in the 4.x releases.
A: Okay — and it's done. So now — it completed in 17 minutes — so now we'll see the workers are booting up, and if we go to here and go check... there's no certificates — no, not yet. So we see now that the masters are ready, and they're running the most recent version of 4.6 — OKD 4.6 — and Fedora CoreOS, and now the worker nodes are going to come up. That's one thing I didn't mention: in your call to the script,
A: ...when you put the release version, you can do the whole release version, like this — like copy that string for a very particular release version, like from the nightlies or whatever — or you can just put a major.minor string like that, like 4.6 or 4.7, and what you'll get back is the most recent version for that major.minor.
B: Jesper is saying that he's still confused about the cluster API subnets compared to application subnets.
A
The
rest
calls
to
an
api
address,
api
dot,
cluster
name,
dot
domain
name
and
those
mapped
to
a
pool,
whether
you
do
it
upi
or
ipi,
that
domain
name
maps
to
a
pool,
load,
balancing
pool,
and
that
can
be
a
any
subnet.
They
don't
necessarily.
I
don't
think
that
you
necessarily
have
to
have
the
same
subnets
for
in
terms
of
ip
numbers
for
those,
but
you
want
different
pools
because
requests
to
the
api
address.
A
The
other
subnet
is
the
actual
subnet
that
the
pods
are
running
on
and
the
pool
for
that
to
connect
to
that
is
another
address,
and
that's
the
apps
dot
cluster
name,
dot
rest
of
the
domain
name,
and
that
is
anytime.
You
spin
up
a
pod.
It's
gonna
have
the
the
or
spin
up
an
application.
The
application
is
gonna,
be
have
a
route
that
is
application,
name
or
whatever
dot,
apps
dot,
cluster
domain
name,
and
you
can
do
mapping
to
that.
A: Okay, we'll see — so now we see, when we do get nodes, those are ready. We're going to check for certificates — none yet, because the workers are still sort of rebooting into the OS; they are rebooting right now, yeah. Here we...
B: On this subnet question — Larry is answering, I think.
A
Okay,
so
it's
pools
basically
there's
two
different
pools
and
those
may
or
may
not
necessarily
map
to
different
ip
subnets,
but
it's
different
pools
for
different
tasks.
Okay,
so
now
this
is
up
and
there
we
go.
We
just
approved
the
csrs
for
the
two
worker
nodes
and
here's
the
other
one.
There
we
go
and
then
there's
another
one,
another
set
of
certificates,
that'll
pop
up
and
there
they
are
again-
and
I
just
approved
them
so
now.
A: ...if we do get nodes — in a couple of seconds, usually in about a minute, the worker nodes are then... there they are. So with the script, you know, in just a couple of clicks — if you wait patiently to approve the CSRs — within an hour you can have an OKD cluster running. And so let's do get nodes — still not quite yet. Any questions about any of this?
B: The thing is that you can customize lots of — everything — with UPI, yeah. You can export all the YAML manifests with the installer, right? You can patch everything and have the installer create Ignition files from the patched manifests, so that your cluster, right from the start, is completely pre-configured. And that's, I think, not possible with IPI.
A: Right. And — oh, here we go — we've got one ready, and I bet when this returns the other one will be ready, and we will have a working cluster. Now, it's going to take some time: if you do oc get co, you'll see that the machine operators are not done yet — there's still one that needs to finish here, and ingress always needs to finish, and monitoring.
A: That will happen over a period of a few minutes; they are getting configured right now.
B: Larry — yeah, I'm sorry — no, Larry, see, you only have to sign the certificates during the first installation.
A: So if we go — well, here's an example, here's a brief one. So if we actually go to the code of — oh yeah, right, in prerun — so here's an example of some things happening in terms of manifests. So there are manifests that get created when you run the create manifests call from the installer. Let me open it up a little bit... it's a little bit...
A: Yes — so when you do create manifests, there's some manifests that actually need to get deleted for UPI, and so in my script I actually delete the ones that need to get deleted. And then there's also one that needs to get changed — this one here, cluster-scheduler — which has the part where it says: are the masters schedulable for pods, like application pods? And to say no to that, you change it to false. That's something that you want to do on a standard UPI install.
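[Editor's note: the standard UPI edits being described, as in the install docs; install-dir is a placeholder.]

```bash
openshift-install create manifests --dir=install-dir

# Drop the machine/machineset manifests (UPI provisions the VMs itself).
rm -f install-dir/openshift/99_openshift-cluster-api_master-machines-*.yaml \
      install-dir/openshift/99_openshift-cluster-api_worker-machineset-*.yaml

# Make the control plane unschedulable for application pods.
sed -i 's/mastersSchedulable: true/mastersSchedulable: false/' \
    install-dir/manifests/cluster-scheduler-02-config.yml
```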
A: When you run the installer, it deletes the openshift directory and the manifests directory, so it...
A: Here's what I can do — let me use my tool to do this quickly: oct... yeah, I'll do this. Let's do this: clean. Okay. My...
A: Is the authentication still there? I mean, I was keeping that, because if I do create manifests, I'm pretty sure it doesn't... let me do it.
A: I actually accidentally deleted the auth, sorry. But in short, yeah — basically it's XML, or sorry, JSON or YAML files for the manifests, and you basically just change the YAML for them.
A: Well, I can do — is it still there? No, it's not, so I lost that. But, you know, the YAML files are standard YAML files, as you would expect, and the modifications are pretty simple. They even cover, in the documentation, some modifications that you can do — at the bottom of the...
A: No, no, no — that's fine; all it is, is the password — the access is gone, but that's fine, I can redo it. So... where is the... yeah, it is this one. Basically, at the end there are examples...
A: I thought it was this one... of creating all sorts of modifications to your config files to do things on install. What was the example that they give... oh, setting your NTP server. So there's an example of using a config to set the NTP server on the nodes when they boot up, and things like that. So there's a lot of examples in the documentation for that.
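[Editor's note: a sketch of that documented NTP example — a MachineConfig, dropped into the manifests, that writes /etc/chrony.conf on workers; the server name is a placeholder.]

```bash
cat > install-dir/openshift/99-worker-chrony.yaml <<'EOF'
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  name: 99-worker-chrony
  labels:
    machineconfiguration.openshift.io/role: worker
spec:
  config:
    ignition:
      version: 3.1.0
    storage:
      files:
        - path: /etc/chrony.conf
          mode: 420          # decimal for 0644
          overwrite: true
          contents:
            source: data:,server%20ntp.example.com%20iburst%0A
EOF
```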
A: Well, I lost the config stuff when I cleaned it.
A: Yeah, but it may or may not have been... because I've noticed that it can take a long time, so I don't think we want people to sit through that.
B: Yes — so I have to take a look first... here it is, okay. After the first installation — I'm on OKD 4.5 — I have a default MachineSet. I always deleted this one; it's from a question someone asked me, so it's not really a useful MachineSet.
B: Okay — you should normally only start an upgrade if nothing is degraded. In this example, I think the openshift-samples operator in the older versions of OKD sometimes gets degraded, but the upgrade should succeed nevertheless, yeah. And I will do an upgrade now; I go to...
B: If you have an internet connection, like I have, you can choose the next version you can upgrade to. In my case I'm able to go directly to version 4.6; I will choose that one.
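[Editor's note: the CLI equivalent of clicking through the console upgrade; the target version is an example.]

```bash
oc adm upgrade                                    # list available update targets
oc adm upgrade --to=4.6.0-0.okd-2021-02-14-205305 # or: --to-latest
watch oc get clusteroperators                     # follow the rollout
```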
B: Don't... sorry, the scrolling is...
B: Yeah — no, everything is fine; I can upgrade. On OKD 4.5, some repositories on the Fedora CoreOS nodes were enabled, and if the repos are enabled during the upgrade process, it tries to pull packages from the internet, which sometimes can be a problem, so you have to disable them before you upgrade.
B: Where is it... the samples operator... where is it... openshift... The machine-config operator is the last one, and this operator upgrades the operating system at the very end.
B: In this situation you will see that always one master and one worker are getting unscheduled, the nodes are automatically drained, and then the new OS version is applied and the system automatically boots into the new Fedora CoreOS version. In this case I have Fedora CoreOS 32, released last year in June, and after the nodes are upgraded, we will see that we have Fedora CoreOS version 33, from January of this year, I think — and you have to do nothing; it's running completely on its own.
B: Okay, it's already over there — it's upgrading! It's installing a new version of etcd and automatically does the migration from the old etcd to the new version. You can have a look, yeah — if you watch it like I do here, always, if something pops up, you will see all the pods that are spawned to do the upgrade. Here we see a new etcd; some quorum guards are created.
B: It's not really — it does not really mean that this etcd is of version four, or that a Docker image was tagged four — no, it means that something has changed.
B: Also, if you delete an etcd pod, you will see that it says "upgrading", because the operator always tries to get to its desired state, and even if it's not upgrading but only getting to its desired state, you will see that it's getting a new version. Yeah — now we have three nodes. Sorry.
A: A question from Jesper. Jesper says: when you have installed with UPI and have the vSphere template and credentials specified in the MachineSet, as talked about previously for scaling, would it roll VMs or would it upgrade the existing VMs?
B: It will create VMs with the new... ah, a good question; I have to think about that. I think at first it will create a VM with the older OS version, and once this one is created and has joined the cluster, it will automatically do an upgrade to the target OS version. So, in the end, all nodes have the same version, regardless of where you started.
A: So — let me ask this: are you asking if new nodes... So a MachineSet is used at two different times: it's used for the initial building of the cluster, and then also for adding nodes by scaling it. So if you're scaling and adding new nodes, those new nodes will be brought up to the same level across the MachineSet, and it would be new VMs. So that's...
A: And Jesper says: as I understand it, for IPI, upgrades happen by creating new VMs, and if they join successfully, the old VM will evict containers, and then it will spin up containers in the new VM and continue. Good question. I don't think so — I think for IPI it's the same: if you're updating the OS, it's rebooting that same VM with the new OS and any new changes from the machine config. It's not spinning up a new VM, I don't think, in IPI.
A: This is — oh, I did regenerate my YAML configs. So, just so folks can see — if you want to stop sharing, then I'll go ahead and share.
A: Oh right — going here. So this is... there's a step where, when you create the Ignition config files, it then deletes this folder called openshift and this folder called manifests.
A: These manifests then get put into the nodes — for things such as disk encryption, or configuring chrony to use a particular NTP server, and anything that you can think of. So this actually factors into why I was encouraging folks to familiarize themselves with Fedora CoreOS.
A: Fedora CoreOS does everything through manifests — or I should say, everything through Ignition configs. So you create a YAML file that's similar to a manifest, and you put it through a tool that they have, and then it creates the JSON-based Ignition config, and then you apply that to a Fedora CoreOS node. All of these things that we're talking about — and a lot of the stuff of the OS — can actually be configured through these. So if we go down here, this is like pre-creating the Ignition configs; here's the manifests.
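[Editor's note: the YAML-to-Ignition flow being described, sketched with butane (formerly fcct); the key is a placeholder.]

```bash
cat > node.bu <<'EOF'
variant: fcos
version: 1.4.0
passwd:
  users:
    - name: core
      ssh_authorized_keys:
        - ssh-ed25519 AAAA... user@example.com
EOF
butane node.bu -o node.ign   # node.ign is the JSON Ignition config the node boots with
```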
A: So if you go to manifests, these are the manifests that go into creating the Ignition config. And an example of a modification: I found that my nodes were timing out — my worker nodes were timing out when trying to connect to the control plane when they would first boot up — and I think we talked about this a little bit in the channel. So what I found is, you modify, in the master and worker IGNs, a timeout value. Yeah, you do that.
A: Okay, nice. Let's see, which one is it... it's in the openshift one... there we go. Oh no, actually it's in... I have to... actually, you can't do it in the manifest; you have to do it in the Ignition that gets generated afterwards. I'm not sure why, but basically you can modify the timeouts that the nodes have when trying to connect to the machine config from the API server.
B: I can show something similar, if you have a running cluster, with the machine configs. I don't know if everybody knows that — yeah, we love it; we use it a lot, Jaime.
Okay,
so
do
you
see
my
screen?
Yes,
okay,
so
here
we
have
one
of
the
coolest
features
that
has
to
do
with
core
os
this
other
machine
configs
under
compute
machine
configs.
You
find
yeah
some
manifests
that
are
telling
openshift
or
okd
what
you
want
to
configure
on
your
host
yeah
formally.
If
you
wanted
to
install
something
or
write
a
file
or
change
your
file,
you
had
to
ssh
into
the
node
on
ok
d3.
It
was
very
common
to
use
ansible
to
change
something
on
all
nodes.
B: You don't have to do that anymore: with Fedora or Red Hat CoreOS you can change almost everything with a MachineConfig manifest. Let me show you an example — I'm searching for some random one... yeah, here we have an example. Let's fold that away. This is a MachineConfig; it's applying configuration to all workers — you can set the role here.
B: This is... yeah, it's some... yeah, I don't know what this is — it's the content of a file. And if you apply that MachineConfig here — it's only a short Ignition snippet — you can write anywhere on the host where you have read/write permissions; normally, I think, /etc, and the other one was /var — the var folder — you can write into with this method. And what happens then, if you apply it, is that you get... I will sort by the date here.
B: You will get a new rendered file, yeah. We have here a rendered-worker and a rendered-master file, and I always sort by the date because, say, every time you apply something here, you will get a new rendered file. These files contain all configurations — the base configurations and the additional ones here — merged together in one big Ignition file. Let's have a look into it: here you see we have some SSH files...
B: You see lots of services. What is this? This is the hyperkube service. And you can change what new nodes will see by applying MachineConfig objects. One thing we have done in my company: we have an air-gapped cluster, and we wanted to pull the images not from the internet but from our internal registry.
B: I will open a terminal on the host. Normally you never have to go with SSH to your nodes; you can use this mechanism here — it's upgrading, so... I have a terminal now. Now I will enter a chroot command, and now I'm on the host, yeah. I'm not using SSH to go to my master-0; I'm using this method here. I'm now on the host — oops, it's upgrading, for example.
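[Editor's note: the no-SSH route onto a node that Josef is using; the node name is an example.]

```bash
oc debug node/master-0
# inside the debug pod:
chroot /host
cat /etc/os-release   # confirm the Fedora CoreOS version on the host
```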
B: You can tell Podman — always, if someone wants to pull an image from Quay or from docker.io or whatever — that it in fact should go to a different registry, and this process is completely transparent, yeah. So in this way, with the Podman configuration file, you can tell Podman to in fact pull images from a completely different registry than the one that was addressed in the manifest — and we write this file with machine configs.
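[Editor's note: a sketch of the mirror stanza such a MachineConfig can write into /etc/containers/registries.conf; registry names are placeholders.]

```bash
cat <<'EOF'
[[registry]]
  prefix = ""
  location = "quay.io/openshift/okd-content"
  mirror-by-digest-only = true

  [[registry.mirror]]
    location = "registry.internal.example.com:5000/okd-content"
EOF
```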
B: Now maybe you ask: yeah, but during the cluster installation this file will be overwritten — how do you control that? At the very end of the cluster installation this file has to be written, and this can be done with these numbers.
B: It's a kind of prioritization you can do here. And we have done lots and lots of configuration this way on the hosts, and we absolutely do not use Ansible for changing existing clusters. We don't even have to go to nodes with SSH, because, you know, normally you don't have to do that. If you want to change your kubelet parameters, I think you even have a configuration object for that.
Operator
to
do
that
so.
B
B
C: So, for the folks who are listening in, I hope this has been — this is still — really helpful. I think it's great that we've got these two guys here doing this. Are there other things that you would like to see us document better? Have they gotten it? I can see some of the questions in the chat — are those things that we should be updating in the documentation guide?
C: Yeah, I know, because in a couple of the other sessions there are things that are just... in the OpenShift documentation they're as clear as mud, too. So it may be a thing that we end up with better documentation in OKD and then kind of move some stuff there. So yeah — so, Larry,
C
If
you
find
those
references,
whether
they're
in
the
openshift,
docs
or
okd,
just
make
log
an
issue
or
make
a
pull
request
against
that
chunk,
and
we
will
make
sure
that
we
get
it
merged
and
and
get
it
in
we're
really
trying
hard
to
get
other
other
other
eyeballs
on
this
documentation.
Help.
C
C
C: Yeah, I'm definitely game — even grammar checkers are friends of mine, because, as you know, we all... I type too fast; for other people, English is a second language. And if the documentation was auf Deutsch, Josef, you'd be rocking it.
C: So we've got some issues with that. We're doing pretty good, I think, so far, but we have a couple more stubs, I think, from the home labs group that we're probably going to get put in — make Vadim put a stub in for his, and Craig for his, and Sri for his.
C: But the home labs are pretty tricky, because it's really specific to what people have for hardware.
C: Cool — Jesper's asking another question of Josef: it's on a blog, about a newish Quay feature for transferring images to offline networks. Do you have a link to that blog post?
B: It's not really a feature — more a workaround for a bug. I don't know what you mean by "newish Quay feature for transferring images", yeah... No — Jesper, I think what you mean, if I understand it correctly, is that there is a bug that affected one of the core libraries that have to do with Docker images, and this library is used by lots of tools — by Skopeo, by Podman, by whatever — and because of this bug...
C: It could get turned into a feature and a product, and we could sell it back to somebody — I don't know. But anyways — if you're still waiting for something to complete, is that in the background?
B: Yes, we are still waiting for an upgrade to finish, so that we can show the OS upgrade. I don't know when it will start; it can take 10 minutes, maybe.
C: Yeah, I think it's been kind of a long day, and we believe you that the upgrade will work — even though the time left says five hours. I just did that in case...
C: ...somebody wanted to do a complete live deploy somewhere on the hardware. So you're not obliged to stay another five hours, because I know you can see the timer at the top in the back room. But if you guys don't have any more questions, and anyone else... Next up: we're going to try and do these things quarterly — I think it feels good, feels right, to do this. So, Larry, your team... this is a chess game.
C: You're in checkmate here — your team is on the hook for doing some sort of deployment demo next up. And Jesper, we'll get you in on a Saturday, since Saturday is better for you — I don't know what time it is where you are; I think you're in Finland or someplace like that. So we'll make you guys do some of the demos too, and maybe have fireside chats as well. So I wanted to really thank both Josef and Jaime — Jaime especially, for getting in the background and helping with organizing stuff this week.
C: Well, I had some family stuff going on, so that was really wonderful — to have my back. And Josef, for all the work you do around getting the blog posts done. It's getting late there, and it's a Saturday, and I know you guys took all the time to make this happen. So thank you very much.