Description
2021-03-20
Narrator: Jaime Magiera (UMich)
from OKD Working Group's OKD Testing and Deployment Workshop
https://www.okd.io/blog/2021/03/16/save-the-date-okd-testing-deployment-workshop.html
http://okd.io
Okay, great. Well, thank you very much, folks, for joining this overall community gathering and for joining the sessions later. For folks that are just tuning in, there will be sessions, broken up into four different sessions actually, for the different types of installations that you can do. So what I'm going to be providing right now is just a quick walkthrough of an installation on vSphere with user-provisioned infrastructure. And what does that mean?
Well, user-provisioned infrastructure means that instead of the installer configuring a load balancer within vSphere, or configuring the IP numbers, or any of that, this is all done with infrastructure that the user provides on the outside before they run the installer. The prerequisites for that are basically handling DNS, DHCP, a load balancer and, optionally, a proxy.
For DNS: right now we support three master nodes in the control plane, as it's called, and you're going to need an entry for each of those, an entry for each of the desired workers, an entry for your API endpoint and your API internal endpoint, which the nodes use to connect to each other, and then a wildcard entry.
A wildcard DNS entry of the form *.apps.<cluster name>.<your domain>, so that once you've deployed apps on there, by default they would be reachable at <app name>.apps.<cluster name>.<your domain>. To give you an example of that for user-provisioned infrastructure:
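Pulled together, the entries described above might look like this in a BIND-style zone file. This is only a sketch; the cluster name, domain, and addresses are placeholders, not the ones from the talk, and the API and wildcard records would typically point at the load balancer:

```
; hypothetical zone entries for cluster "demo" under example.com
api.demo.example.com.      IN A 192.0.2.10   ; external API endpoint (load balancer)
api-int.demo.example.com.  IN A 192.0.2.10   ; internal API endpoint, used node-to-node
master0.demo.example.com.  IN A 192.0.2.21
master1.demo.example.com.  IN A 192.0.2.22
master2.demo.example.com.  IN A 192.0.2.23
worker0.demo.example.com.  IN A 192.0.2.31
worker1.demo.example.com.  IN A 192.0.2.32
*.apps.demo.example.com.   IN A 192.0.2.10   ; wildcard for deployed apps
```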
On my end, I'm utilizing the DNS that is provided at the University of Michigan, which is a system called BlueCat running on Proteus, and so this is a way of very easily configuring DNS and DHCP.
And so you can see, for my demonstration cluster, which you'll see more of in the session that I'm doing: basically you can set up your DNS, and this is what it would look like, right? You've got your masters and your worker nodes at set IPs in DHCP.
I didn't fill in the details there; that's something that you'll want to do in most cases. The way OpenShift clusters work, you can do static IPs or you can do DHCP, but you cannot do both. And this is, let's see if I can find it here...
I don't have it in front of me, but basically, once you have chosen to go one route, you can't go back to the other. So if you're going to do static IPs, you can do that by setting some kernel parameters with something called Afterburn. This is something that you pass in the configuration of your nodes: a configuration string that's handed to the kernel with your static IP. Or, alternately, you can rely on just the DHCP on your network and whatever address is handed out.
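As a sketch of the static-IP route: on Fedora CoreOS nodes this is usually a dracut-style `ip=` kernel argument, which the Afterburn/ignition-dracut tooling consumes at first boot. The addresses, hostname, and interface name here are placeholders:

```
ip=192.0.2.21::192.0.2.1:255.255.255.0:master0:ens192:none nameserver=192.0.2.53
```

The fields are client IP, gateway, netmask, hostname, interface, and `none` to disable DHCP on that interface.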
I took a third path, and I'll get into more details in my session: using reserved DHCP and setting the MAC addresses on the nodes. There are some advantages to that for UPI that I'll talk about. You're also going to need a load balancer outside, so that incoming requests to the API and the ingress get passed to the respective machines, right?
So in terms of load balancing, we've got a load-balancing proxy called a BIG-IP from F5 Networks, and in my configuration I use a BIG-IP, which allows you to define pools of machines. So here you can see:
This is the API pool, and this is the worker pool, and so this will load-balance requests to their respective pools. There's also one thing you don't see here, but that I'll be showing in more detail: you can also do some checks. For those of you that are familiar with the internals of Kubernetes, you know that there are healthz and readyz REST calls that you can make to get the status of your cluster, of the nodes in your cluster. And in the F5,
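The talk uses an F5 BIG-IP, but the same two pools and their health checks can be sketched with haproxy as a freely available stand-in. This is a fragment, not a complete haproxy.cfg, and the names and addresses are placeholders:

```
# hypothetical haproxy.cfg fragment mirroring the two BIG-IP pools
frontend api
    bind *:6443
    mode tcp
    default_backend api_pool

backend api_pool
    mode tcp
    option httpchk GET /readyz          # same kind of external check as the F5 monitors
    server master0 192.0.2.21:6443 check check-ssl verify none
    server master1 192.0.2.22:6443 check check-ssl verify none
    server master2 192.0.2.23:6443 check check-ssl verify none

frontend ingress
    bind *:443
    mode tcp
    default_backend worker_pool

backend worker_pool
    mode tcp
    server worker0 192.0.2.31:443 check
    server worker1 192.0.2.32:443 check
```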
you can actually define those types of checks as well, so it will be performing those checks externally. An advantage of this is that if your entire cluster goes down and the internal notifications aren't working, you have an external source of notification and monitoring to see that. I'll get into more details of that in the other session. Another thing that you would need is a proxy, if you're going to be on a private network. This is something that OpenShift has been growing into; when it was originally at version
three, I should say, there wasn't as much focus or support for private networks, and that's been increasing. But if you're going to be doing a private network, you will need a proxy for the calls out of your containers once you have your cluster up, and also for the installation process as well, pulling down those containers that are part of the installation process.
So in terms of a proxy, you can use Squid. Squid is a freely available proxy that is very easy to set up and has a simple configuration file, and I'll be providing some examples of that
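As an illustration of how simple that configuration file can be, a minimal squid.conf for this kind of cluster proxy might look like the following. The network range and port are placeholders, and a real deployment would want tighter ACLs:

```
# hypothetical minimal squid.conf
acl cluster_net src 192.0.2.0/24   # nodes allowed to use the proxy
http_access allow cluster_net
http_access deny all
http_port 3128                     # port the cluster proxy settings would point at
```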
in the session that I have. If you look at the documentation on the OKD website, there is a link to "Installing", and then subsections. Here is the section on installing on vSphere, and then there's another subsection under that, installing on vSphere with user-provisioned infrastructure, and that is what I've been working with. This has a lot of great information.
I would encourage folks, whether they're using the standard install or the user-provisioned install functionality, either one, to check out the UPI documentation for the platform that they're using. The reason that I suggest that is that the UPI documentation shows you some of the things that are needed and some of the underlying details of an OpenShift install, and it can be really helpful for understanding overall how the process works. It's sectioned quite well and shows you what you'll need in terms of your nodes, about creating the user-provisioned infrastructure, the ports that you'll need, and whatnot, so definitely check this documentation out. One of the things that they've done is break it out into several sections, with more levels of detail the higher the resolution of control that you want in your install.
That is, a higher resolution of manipulation of the install process. The install usually takes about 30 to 40 minutes, and in the session that I'm doing I'll be talking about how you can automate that process, literally to be able to just run a script that generates the necessary install files and whatnot, loads those into newly created VMs, and then kicks off the OpenShift installer, so that you get a
very-near-to-non-UPI installation experience, and actually some extras. Let me bounce over here to provide an overview of some of the files that are involved in a UPI installation. What you'll see, after you've generated what are called ignition config files:
You'll see a bootstrap ignition config. Ignition is basically the metadata that you put into the metadata of the node to tell it to connect to the bootstrap server or, in the case of workers, to connect to the control plane, to download the necessary components to join the cluster. So you'll see multiple ignition configs: for the bootstrap, for the master, for the worker. After you've run the installation, there are some hidden files: an install log and a state file that records the state of the cluster.
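For illustration, the master and worker ignition files the installer generates are small pointer configs along these lines (Ignition v3 format; the cluster domain is a placeholder), telling the node where to fetch its full configuration from the control plane:

```
{
  "ignition": {
    "version": "3.2.0",
    "config": {
      "merge": [
        { "source": "https://api-int.demo.example.com:22623/config/worker" }
      ]
    },
    "security": {
      "tls": {
        "certificateAuthorities": [
          { "source": "data:text/plain;charset=utf-8;base64,..." }
        ]
      }
    }
  }
}
```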
Now, there's one thing that I want to point out for UPI installations that is true across the board, and it's something that sometimes surprises folks: the openshift-install binary actually ingests and deletes your install config. You'll have a general install config in which you configure the parameters for your cluster, and the documentation references what you need to have in that. One thing that happens, though, is when you run the installer,
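For reference, a vSphere UPI install-config.yaml is roughly shaped like this. All values here are placeholders, and some sections are omitted; the OKD documentation spells out the exact fields:

```
apiVersion: v1
baseDomain: example.com
metadata:
  name: demo                    # cluster name
compute:
- name: worker
  replicas: 0                   # workers are created by hand in UPI
controlPlane:
  name: master
  replicas: 3
platform:
  vsphere:
    vcenter: vcenter.example.com
    username: admin
    password: "..."
    datacenter: dc1
    defaultDatastore: ds1
pullSecret: '...'
sshKey: 'ssh-ed25519 ...'
```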
it actually eats that file up, so you'll want to always make a backup of it. (Trying to find an example of it here... here we go.) You'll always want to make a backup of it, so that you can duplicate the process again without having to do a lot of work. The tool that I'll be demonstrating, which I wrote, actually allows you to have a template; it duplicates that template and then goes from there.
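As a concrete sketch of that backup habit: the paths and the placeholder config below are illustrative, and the commented-out installer invocation is the step that actually consumes the file.

```shell
# Keep a copy of install-config.yaml before running the installer,
# since "openshift-install create ignition-configs" deletes its input.
mkdir -p demo/cluster
printf 'apiVersion: v1\n' > demo/install-config.yaml      # placeholder config
cp demo/install-config.yaml demo/install-config.yaml.bak  # backup that survives the install
cp demo/install-config.yaml demo/cluster/                 # working copy for the installer
# openshift-install create ignition-configs --dir=demo/cluster
```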
So you don't have to do anything by hand. And that is the overall process of installing with vSphere: basically, you deploy your infrastructure, you generate your files, then you create the nodes with the metadata from those ignition config files, and then you run the installer.
So that's a general overview. Again, if you want more specifics on that, and you want to see an automated example of it, then please check out the session that I'll be hosting with Joseph. And with that, I'm going to stop sharing myself, and then we'll move on.