From YouTube: OKD WG Bare Metal Deployment, Unedited
A
I can hear you both, and you both seem good, so I'm going to leave you for now. It is recording; you can see the little record button here. So that's good, and I will bounce back in a few minutes.
A
I'm just going to make sure each session is set up correctly, and then you can go. As I said at the end of the last thing, go as long as it takes, and when you're done, find me in chat somewhere and let me know, and I will end your session for you. Sounds good.
B
I'm just gonna check the chat, folks. I think we have four attendees.
B
Oh sorry, no, no, I will. I just wanted to see if my browser had an issue. I don't have a tab open that says loading.
C
If you just want me to share this and walk through it, or advance slides for you, just let me know; I'm fine with that.
B
Yep, go ahead. Okay, good afternoon folks, thanks for joining our session. We'll do a brief intro here; Andrew's here.
B
What we're going to do today: the description of the session here mentions the metal API, so just to clarify, what we're going to do today is OKD, and Andrew is going to show, using Linux with libvirt/KVM, how to get it installed on such a Linux box.
B
So that's just to clarify, since "bare metal" often gets used: this is an OKD/OpenShift platform-agnostic install. It's agnostic just in the sense that you run it on any Linux, but it's not on the...
B
So our goals today: we will show the actual install using Linux. Andrew has, I think he said, Fedora 33, so we'll show that. And then there are lots of recommendations and gotchas; Andrew was chatting with me today, and he found a lot of things that he will probably share with you about getting this working. Then we'll wrap up with some Q&A from you all about your questions about spinning up OKD this way. So, Andrew, do you want to do a brief bio intro? Sure.
B
So we're kind of learning as we go. Let me try this again.
C
John, I agree, this is a whole new experience for sure. I'm just seeing a black screen, Justin. This platform has a lot of potential: I like the separation of the stage and the sessions, I like how everybody can break out, and there's chat for everything. But I don't know if it's just unfamiliar or slightly glitchy, but it's been an exercise.
C
All right, so apologies for the rough start; such is life sometimes with technology, and especially with streaming platforms. So, I am Andrew Sullivan. I am a technical marketing manager with the Red Hat cloud platforms business unit, so my day job is really focused around OpenShift and, broadly speaking, OpenShift on various virtual platforms, as well as OpenShift being a virtual platform: so OpenShift Virtualization, KubeVirt, that type of stuff.
C
So there's some contact information there. Anybody is always welcome to reach out to me, andrew.sullivan@redhat.com, or via Twitter. I also have a weekly live stream on openshift.tv: the Ask an OpenShift Administrator office hour. That happens Wednesdays at 11am Eastern time, so anybody's welcome to stop by. We have a whole bunch of different topics; last week we talked about etcd, this week we talked about VMware.
B
Okay, thanks. Justin here again, the guy that has the glitchy screen share. By the way, Andrew's demos are really awesome. Do you have a direct link to those streams on your Twitter feed? Like, can people get them there? I usually don't link them on Twitter; they're all on...
C
They are on OpenShift YouTube. So if you just go to openshift.tv, you'll find links to Twitch as well as to YouTube, where they're all at. Yeah, John, on OpenShift YouTube we're working on it: we have an intern who will be starting in the next few weeks who is going to work on automating the process of creating playlists for all of our different streams, and all the other metadata and stuff that's going on there. So hopefully it'll get a lot easier to find them in the future.
B
Awesome. Because you mentioned that, I had to follow up so people would know. Good thing John is already familiar with them. They are really extremely beneficial, just to see you and others (because there's a bunch of others who also go on there too) demoing the technology live. So, I'm Justin Pittman. You can find the links there on my LinkedIn. I've been in tech for a while. What I do for Red Hat is: I currently do enablement for our partner sales.
B
So if you know any of our partners, it could be F5 or Dynatrace or Dell, I'm usually involved in those types of opportunities for our technology. And then at home I use a bunch of stuff that's not necessarily Red Hat productized but might be upstream; like, I'll use oVirt or something like that in my home lab. Anyway, good to be here with you all.
C
I think I'm good whenever we want to go and demo. If you want to cover any of the other material around kind of the overall flow or anything like that, that's completely up to you, and I can talk. Well, let's do that.
B
This may look somewhat familiar to you, but let's just describe it for anyone watching live here or in the recording. So OKD, and Kubernetes in general, do have certain external dependencies. Say you're trying to access an application that's hosted inside the cluster: you need that application to be routable (that's the first thing) through, let's say, a load balancer, and the client can only get to it through DNS resolution, right. A lot of those components, especially at scale, will be run externally from the cluster.
B
So to help us today, Andrew and I talked about how best to demonstrate those external dependencies. One way to do it is what we call a helper node. Actually, one of Andrew's peers, Christian Hernandez, initially came up with the idea of this helper node that has a bunch of these external dependencies as services. So it comes with a load balancer, which is HAProxy.
B
So we'll have a link at the end so that you can get to the Ansible playbooks for the helper node, and it usually really simplifies the process of building out. Andrew, did you want to mention that libvirt does have a subset of these services now? Yeah, I'll talk about that.
B
Okay, all right. So after the helper node, the basic workflow (if you could go forward one slide, Andrew, thank you): the basic workflow for what we call user-provisioned infrastructure, which is a nice way to say bring your own infrastructure.
B
The workflow is: you will use the installer, and you have to feed that installer several different pieces of information about how you want the cluster to operate. A big one is the operating system for the nodes; in this case, for OKD, Fedora CoreOS is the operating system for the nodes.
B
If, for example, the nodes can't download them (maybe you have a bit of a restricted network in some way), or you need to configure the boot-up process like we're going to do here, Andrew's going to show you that you can store those on a web server. So the image gets stored on a web server on the helper node, accessible from the bootstrap node, which will then ingest and download that Fedora CoreOS image. That's just one part of the flow; this diagram is meant to help you visualize it.
B
That is one piece: how a node gets its initial operating system. Other pieces of the installation are pretty detailed and documented elsewhere (we have some links to those documents), but basically there are a couple of things, like the Ignition configs, that describe how those machines are supposed to boot up inside the cluster. Those can be manipulated, but the installer does generate those Ignition configs as well.
B
So that's why this is presented visually here. The big thing to remember is, for now, meaning OKD 4.7, a bootstrap node will be spun up first. You'll see that in the demo today: the bootstrap node comes up first, and it pulls down things like the Ignition config, any kernel boot parameters, and the operating system image that's needed. So we're just trying to give people a workflow here; the first time you see an OKD or OpenShift install without any visual diagram of what's going on, it gets rather confusing, or you get lost.
B
At least personally, that's what I find. I don't know your opinion on that, Andrew; you've done so many of them, maybe it's so easy for you now. But for newbies, I think it's helpful to understand: okay, you have the installer binary, but there are some other things that are needed, like a node OS image, and other things that need to be ingested by the installer or generated by the installer, like Ignition configs. It's just helping people visualize that process. Any other thoughts about that, Andrew? Yeah.
C
...openshift-install. When we do all these other things, basically we're setting up the prerequisites: all the things that the cluster is going to need, like DNS, right, and an HTTP server, so it can access the resources it needs. Then we create those Ignition configs, and when we turn on the nodes they read in that Ignition, and then basically each node instantiates and configures itself.
B
Better, because... is that a reference back to OKD and OpenShift 3, the Ansible install scripts? Yeah.
C
The OpenShift 3 install process was basically a huge set of Ansible playbooks, which is really nice because, well, Ansible is familiar to most everybody at this point, and you could easily go in and troubleshoot, look at the nodes, find logs, and see what's going on. And they were RHEL nodes, so you could easily connect to them and see all of that. With 4 it switched to where now it's CoreOS, right, so Fedora CoreOS, which is harder to connect to.
C
They changed the installer so that it's only responsible for basically getting the cluster up and running, and everything after that is a day-two operation. So the installer is much more successful, or more frequently successful, I should say, but it can be more difficult to troubleshoot if you're not familiar with it, and we've done a couple of streams about that. I'll touch on some of those things in here; fingers crossed, it will go well and we won't have to do a lot of troubleshooting.
B
But I'll show you: Andrew was troubleshooting late into the evening; I saw those messages late. This is the nature of things sometimes, if it's a latest build, right, which I think is what you bumped into last night, and a couple of others in the other sessions here today bumped into similar issues with the latest builds. But you know, that's what happens when you're on Fedora CoreOS instead of the downstream, like RHEL CoreOS.
B
Speaking of the demo, Andrew, maybe we jump to the next slide. I don't want to bore people with slides, right; the next slide we can always come back to later. Actually, you know, the recommended sizing: do you mind, let's go to the recommended sizing first, just so people understand. Yeah. So there was some discussion.
B
If people didn't see it in chat earlier: you can get a single-node OKD on 16 gig, all right. Now, I have tried single node on my laptop with 16 gig; you had really better not have anything else running, basically, right. So it is possible, but what you're seeing now are the recommended specs for a fuller cluster, and we added it up for you (I kind of just threw this together).
B
This is like double 16 gig, right, if you're going to have more than one node. And Andrew's box, I believe, is even bigger than this, right. I think you have a... yeah.
C
Even a bigger box. I'll talk about that in a moment, but effectively it's a desktop PC running a Ryzen 5 2600 with 64 gigs of RAM, with Fedora 33.
B
The good news is it doesn't have to be ginormous, so you can get by, and the demo that Andrew is going to do is a fully featured cluster, and there have been inroads to reduce the footprint over time. Probably another thing to think about is storage; and if you can, what is that term? It's not sparse provisioning, it's... thank you, thin provisioning, so that you don't bump into errors with KVM or QEMU complaining about your storage going beyond what you have locally. Vadim in chat also recommended that you at least have an SSD. Andrew and I have been in many conversations where the etcd instances inside OKD just expect a certain amount of response time: low latency, a certain amount of IOPS. They just expect that from the disks. So don't try this with old spinning rust.
B
Those are just some basics for how to get this up and running, what kind of box you need. Yeah.
C
I would definitely recommend, at a minimum, an SSD, and if you have an NVMe disk available... A week and a half ago, so not this past week, but the 10th, I believe it was March 10th...
C
...the better and the happier it will be. For a lab, which is more or less what we'll be deploying today, everything is going to be in one box, deployed with libvirt. You'll see I'll be using Virtual Machine Manager for some parts and the CLI for some parts, but you can probably get away...
C
You know, at least for what I'm doing today. You wouldn't necessarily want to use it for production; you wouldn't want to deploy massive applications that are scaling to hundreds of pods on there. But definitely take into account storage and storage latency if you're doing any sort of production type of deployment.
B
Okay, we're about 30 minutes in, or 25, because we were dealing with that. So I think it's time to let Andrew show us the remarkable demo that you all have waited for. Ready, Andrew? I...
C
...am. So hopefully you're able to see my window here. I actually run a Mac desktop, but I have a VNC session into my other desktop there.
C
So if you want to see the exact steps that I took and all that, because I am going to skip over some things (like, my helper node is already set up, so if you wanted to see how I did that, it's all inside of that gist). Now, so I don't distract myself by seeing myself moving the mouse around, I'll hide one of these windows. All right.
C
So let's start with... well, let's start at the beginning. I'm going to use this as a bit of a reference and kind of walk through the different parts. I'll start here by saying that you'll note I'm going to be deploying more or less a five-node OKD cluster: three control plane nodes (you can see I've worked up the name there, but that's okay), we will temporarily need a bootstrap machine, and then I have the helper machine.
C
As Justin said, right, we can use libvirt for some of that. So let me bring up Virtual Machine Manager here. I've got a number of other machines; these are just laptops and other stuff that are sometimes powered on, that I sometimes use in my lab. If we look at the details for this machine, however, all I did was create a new virtual network. So if we click down here on the plus: give it a name, whatever we want to call it.
C
I am going to NAT it, and then pick a subnet that we want to use for it; I used 192.168.110.0/24. And then we don't need DHCP, because we're going to be using static IPs in this particular configuration.
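For reference, a minimal sketch of a NAT network like the one described, defined from the CLI instead of Virtual Machine Manager. The network name and bridge are assumptions, and omitting the dhcp element is what disables DHCP, as discussed:

```
cat > okd-net.xml <<'EOF'
<network>
  <name>okd</name>
  <forward mode='nat'/>
  <bridge name='virbr1' stp='on' delay='0'/>
  <!-- no <dhcp> block: static IPs are set via kernel args later -->
  <ip address='192.168.110.1' netmask='255.255.255.0'/>
</network>
EOF
virsh net-define okd-net.xml && virsh net-start okd && virsh net-autostart okd
```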
C
Now, libvirt does effectively use dnsmasq behind the scenes as well, and you can go in and (I don't remember which section it is) configure it to give out static DHCP leases, configure it to do DNS resolution for all of those nodes, and all that other stuff inside of there. I didn't do that today, just because I wanted to show the helper node. So, pretty straightforward there. The only other thing that I did there...
C
It is an NVMe device; I just have a second NVMe device in the box, and I mounted it into this location. I think this one's running ext4 instead of XFS, because Fedora seems to be shying away from XFS for some reason. But yeah, pretty straightforward. So I need a network...
B
I was going to ask folks, while Andrew shows us this, if they have some input on what they were targeting. We'd be interested to hear: are you targeting an install on Ubuntu, or is it for your home lab, or is it for dev? We were talking about this before, in prepping; we would like to hear your feedback as we go through this about what you're thinking. John says general OKD knowledge, which is also great.
C
Yeah, I'll trust you to keep an eye on the chat, because I can't really see that whole part of the window. So the other thing to be aware of here is, so I'm using...
C
...Fedora 33 is one of the operating systems that is now using systemd-resolved as the default resolver. The result of that is, basically: I want to be able to have my box, my workstation here, resolve over to the machines that we're going to be deploying with OKD inside them. I already know that I'm going to call that internal, temporary domain that I'm using for my OKD deployments okd.lan. So, effectively...
C
Virtual bridge one here is my 110 network. So essentially I need to tell systemd-resolved: hey, when you see an okd.lan domain name, I want you to send those DNS queries to this location, and the same thing with reverse DNS and all of that. That's what this set of commands here is going to do, so I'm literally going to copy and paste those and have them execute, and then we can do a resolve... sorry.
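The exact commands are in the gist; a sketch of the same idea using resolvectl, assuming virbr1 is the bridge for the 110 network and the helper's DNS listens at 192.168.110.7 (both assumptions):

```
# route okd.lan lookups (and the matching reverse zone) to the helper's dnsmasq
sudo resolvectl dns virbr1 192.168.110.7
sudo resolvectl domain virbr1 '~okd.lan' '~110.168.192.in-addr.arpa'
resolvectl query api.cluster.okd.lan   # verify forward resolution
```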
C
api.cluster.okd.lan... it will, well, theoretically, it will resolve over there, and I'll have to see why that's... It could be that my DNS service isn't running over here on the helper, so we'll take a look at that now. So here I'm connected to the helper. This particular interface... I guess I should have changed the colors on these, so that they're easier to see. So this particular one is in... so "beret" is the name of my Fedora machine, "helper" is the name of my helper machine, and...
C
Christian Hernandez created the helper node. So if you go to the helper node... let me dig up that link real quick, I'll share it in the chat, okay. If you change the tag on that, there is a helper-node v2 beta, and that whole thing is containerized. Christian has gone through and taken it from being, you know, services deployed as normal services to a system; they're all now in pods, and they can all now be used that way.
C
...it's not liking this particular item; I'll have to take a look at that here in just a moment.
C
So if I do, you know, test.apps.cluster.okd.lan, that should resolve, same as over here, to the helper node, which has that load balancer going on. The other thing that we need is our node entries. I said before that we're doing static IP addresses, so I went ahead and added those IP address and name resolutions into the dnsmasq config, and that's really all we've got. So let me double-check why this is not... enp1s0.
B
Hey Dianna, I saw you joined; welcome. Do you know, in Hopin, is there a way to make Andrew's screen share bigger as an attendee or a viewer?
A
It really doesn't, because it's got our faces down below, so they're seeing it at 75 percent. So if you can make your font bigger in the black terminal, that would probably make people like me (with my glasses on top of my head, as opposed to on my nose) happier.
B
Well, plus, I wanted to clearly see, as we look at this DNS, what it was saying. By the way, Andrew, you will get a chuckle out of a comment that John made. John said: systemd-resolved was a PITA.
C
Yes, yes, it was. Hence the ranting that I was doing on my team's Slack channel the other day about having to figure out how it was working.
C
And why. And it took me, I know why, probably two hours of searching the internet and testing things to finally figure out those three silly commands. And, you know, the Fedora team did a great job when they first announced Fedora 33: hey, we have this new thing and it's great for split DNS when you're connected to VPNs and all this other stuff. But there's nothing about how to configure it. So, yeah.
C
Ctrl+Shift+Plus from inside of a VNC session, yeah, all right, every time. So hopefully that's easier to see. Okay, good: so I did get dnsmasq working. I don't know why it didn't start by default. If you were paying attention there, I just did a simple systemctl start dnsmasq, and if we do the same thing here: I did that nslookup, and it goes right over and it resolves my test. So I can also do a...
C
Test.Apps.Cluster.Okd.Lan
and
it
resolves
right
over
to
our
helper
node,
so
a
couple
of
things
to
note
here,
so
you
saw
when
I
first
set
up
everything
I'm
using
this
okd.lan
subnet
or
a
domain
name
rather
so
cluster
is
actually
going
to
be
my
extremely
creative
right,
because
you
know
I'm
I'm
a
tme,
not
a
not
a
designer
extremely
creative
cluster
name.
So
this
will
be
unique
to
each
okd
deployment
that
you
have
and
then,
after
that,
we
have
these
different
names
for
the
different
functions
inside
of
there,
with
apps
being
a
wild
card.
C
Over
to
that
load,
balancer
that
it's
pointed
at
so
that
covers
dns
the
next
one
that.
C
So after that, we start getting into the actual load-balanced endpoints. The first one is the API server; this is what api.<cluster name>.<domain name> points at. You can see it's 6443, and on the back end we're passing it to our bootstrap (because the bootstrap comes up first) and then our three control plane nodes. This being HAProxy, I don't need to modify its configuration as this bootstrap node goes up and down and as I reload the clusters, because it'll automatically say: hey, you're not responding, right.
C
So one interesting thing to note here: the machine config server is how the machine config operator serves up the configuration, the machine config, to the nodes. This is an unauthenticated endpoint. Basically, you can go in, you can pull that HTTP or HTTPS traffic, but it is not authenticated, and you'd be able to see that machine config. This is effectively the api-int, the API internal, endpoint. So if you are separating your traffic, if you want to have kind of maximum security protection... again, this is a lab.
C
...in the cluster, or even possibly on an internal-only network, that's available there. So moving on down here, we then have our ingress endpoints. These two are going to be the api... or, excuse me, the *.apps wildcard, so we have one for HTTP and one for HTTPS. And notice that it's not in here, because I set it up at the top (I apologize, on my page-up, so it's going to jump around on you): you see my default mode here is TCP, so it's doing layer 4 load balancing.
C
IP... yeah, it should resolve, but I didn't put that name in there. And here's our HAProxy stats page; you can see there are no nodes that are up at the moment, and then...
B
Well, they used to require... like, Docker used to require some root. So was that just like an old leftover?
C
...it does, and what I probably did: I had su'd over to root, because I was probably modifying some config and wasn't paying attention to who I was logged in as, and created this pod. Anyway, it works super easy; I'm just redirecting port 8080.
C
...inside of my web docs, and I just linked it. Let me bring up (oops) bring up that page again. So you see, I attached a volume of /var/www/html to the pod's htdocs, or HTTP root, and inside of here you see I've got two different directories; I'll do a quick tree in here. The first one is ignition; we'll use that in just a moment. This is where we'll dump the Ignition files that our nodes are going to need when we install them. The other one is this install directory.
C
So these are the three metal files that we need for our deployment. So why is that, and what do I mean by that? With libvirt, there is no UPI, there is no IPI type of integration, right. That means that we have to do what Red Hat and what OpenShift call a platform-agnostic, non-integrated deployment.
C
Now, if you're deploying to oVirt or something like that, there is that integration, right: you can deploy an installer-provisioned infrastructure (IPI) cluster, and it will connect to your oVirt manager, it will talk through the APIs to provision the VMs, it will configure the CSI provisioner, right; it will do all that stuff out of the box. But that's not what we're doing here, because this is libvirt and we don't have that integration. Except when we do, and I'll talk about that in just a moment too.
C
So I'm jumping ahead here by showing you this, but a little bit further down (and I'll revisit this in a moment) I'll show the links to download those. I just do a quick wget, then move them over there and make sure that the permissions are correct, so that they can be retrieved by the nodes when we need them, which is the next step. There we go. So, from my libvirt host: that means I'm on this beret host, and you can see I'm in the okd directory.
C
I tried doing it with the very latest bits, the latest 4.7 bits, and was having issues with the bootstrap. So if you happen to be on the GitHub page, if you're looking through the issues, you'll see there are some comments from me regarding that. So I'm using 4.6, a build from back in mid-February, because that's the one that I was able to get to work.
C
We need to pull down two files: we need openshift-install and we need the OpenShift client. So we pull those two down; pretty straightforward. All we're going to do is unpack them and then move the three files that are within (openshift-install, oc, and kubectl) into /usr/local/bin. You don't have to put them in the path; I just find it easiest to do that. So if we come back over here...
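A sketch of that fetch-and-install step. The release tag shown is an example from the 4.6/mid-February timeframe mentioned, not necessarily the exact build used on stream:

```
RELEASE=4.6.0-0.okd-2021-02-14-205305   # example tag; pick yours from the OKD releases page
wget https://github.com/openshift/okd/releases/download/${RELEASE}/openshift-install-linux-${RELEASE}.tar.gz
wget https://github.com/openshift/okd/releases/download/${RELEASE}/openshift-client-linux-${RELEASE}.tar.gz
tar xzf openshift-install-linux-${RELEASE}.tar.gz openshift-install
tar xzf openshift-client-linux-${RELEASE}.tar.gz oc kubectl
sudo mv openshift-install oc kubectl /usr/local/bin/
```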
B
Sometimes, I don't know about the containerized httpd, but sometimes the permissions... Well, is that pod running as root, and does it have permissions to all those files, right? Yeah.
C
So I just did a podman logs against the pod, just to see what the logs are saying. The logs seem to think that it's fine.
C
Well, no, always a difficult one; something's always got to break. That's okay, we can work around that, because I happen to have a separate web server available, and we'll just point to that. So I'll...
B
See
see
sorry
cli,
probably
I'm
mispronouncing
your
name.
I
cannot
pronounce.
Your
name
is
firewall
d
running
on
this
box.
B
While you do that, I think it's good to rehash something, Andrew. You probably were about to get to this, so I'll jump the gun and just mention it... wait a minute, John says in chat: a 503 means HAProxy isn't seeing the server. Is that the web server behind the proxy? Yeah, you know what I'm doing here.
C
Oh
perfect,
that's
not
embarrassing
at
all.
Yes,
port
80
is
h
a
proxy
that
is
going
to
the
cluster
that
we're
going
to
deploy
port
8080
is
the
web
server
that
we
want
to
use
here.
So
thank
you.
I
I
appreciate
your
help.
There
john.
C
I
know
never
a
dull
moment
all
right,
so
our
web
server
is
up
we're
good
there.
I
have
pulled
all
of
our
install
files.
If
I
go
here
to
the
install
we
see,
we
have
our
three
files
so
from
our
libert
host.
Now
we.
B
Here I want to wrap up my previous thought, because I think this is still a good point to mention. You know, some folks (maybe not here, because we seem pretty techy, but in the recording) will say this is a lot of heavy lifting: I download all these files, I had to set up all of this configuration...
B
Isn't there a push-button thing? It sounded like you were going to hint at this.
C
So if we walk through this, it's pretty straightforward. First, we want to make sure that we have an SSH key in here, so that we can connect over to the VMs once they've been deployed; remember, there's only key-based authentication, with the user core. And then the pull secret is actually just a dummy secret; you can see here we're just using this non-useful text.
C
...workers. But we still want to set this to zero; this is basically indicating that it's not responsible for provisioning any of those. We'll have three master replicas, or three control plane replicas, and then down here these will be defaults. So the cluster network: this is what it uses to assign pod networks to each one of the nodes. It'll take this /14 and carve it up into /23s, and assign a /23 to each one of the nodes in the cluster, and that's where those pods get assigned.
C
The service network is the set of IPs that are used for services; when you create a service, the IP it gets assigned comes from here. This networking.machineNetwork.cidr is very important: we want to make sure that this value matches the subnet that we're deploying our virtual machines to, so 110.0 is what I'm using here. The reason this is important is because OKD (CoreOS) looks for an interface on this subnet, and that is the interface that it will configure, for example, the SDN on. It's also used, for example, if you're using a proxy.
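A sketch of the install-config.yaml fields being discussed. The values mirror the demo's naming (cluster, okd.lan, 192.168.110.0/24); the pull secret and key are placeholders, and the network defaults are the standard ones:

```
cat > install-config.yaml <<'EOF'
apiVersion: v1
baseDomain: okd.lan
metadata:
  name: cluster
compute:
- name: worker
  replicas: 0              # UPI: the installer does not provision workers
controlPlane:
  name: master
  replicas: 3
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14    # carved into /23s, one per node
    hostPrefix: 23
  serviceNetwork:
  - 172.30.0.0/16
  machineNetwork:
  - cidr: 192.168.110.0/24 # must match the subnet the VMs live on
platform:
  none: {}
pullSecret: '{"auths":{"fake":{"auth":"placeholder"}}}'
sshKey: 'ssh-ed25519 AAAA... core-access-key'
EOF
```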
C
I'll get to... the faster version of that is: regardless of whether we're doing this kind of non-integrated, what used to be called bare metal UPI, or an IPI-type version, we always want to create an install config.
C
Create ignition configs... no, not ignition configs: install config, right. So using IPI, I can use the interactive installer and step through each one of these things that's happening here, and I can choose the one that I want to use, or the infrastructure that I want to deploy to. And actually, I need to pull the most recent version of the CLI tool.
C
...take this guy and paste it here. So this is effectively the how-to for setting up and using this IPI libvirt side of things.
C
So, the important thing here (and I've done this with the host that I'm working with, so we'll skip past all of this "configure libvirt to accept TCP connections"): effectively, what we're doing is telling libvirt to accept an unauthenticated connection across TCP.
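A hedged sketch of what that amounts to on the libvirt host (lab only; this is deliberately insecure). Exact steps vary by distro and libvirt version; on recent Fedora, the socket-activated variant looks roughly like this:

```
sudo tee -a /etc/libvirt/libvirtd.conf <<'EOF'
listen_tls = 0
listen_tcp = 1
auth_tcp = "none"        # unauthenticated TCP: lab use only
EOF
sudo systemctl enable --now libvirtd-tcp.socket
```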
C
That's
how
it
connects.
So
if
I
copy
this
string
here
and
we'll
copy
that-
and
if
I
go
to
virtual
machine
manager
and
do
a
add
connection,
I
can
do
a
custom
url
paste
that
in
and
you
notice
it
didn't
authentic.
It
didn't
ask
for
any
kind
of
authentication
right.
These
two
are
the
same
host.
So
if
I
were
to
go
here
and
disconnect
this
one
and
then
connect
it
again,
it
prompts
me
for
authentication.
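The connection string pasted in is of this general form (the host address here is assumed):

```
virsh -c 'qemu+tcp://192.168.1.50/system' list --all
```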
C
...thing that we were talking about with Fedora 33 and systemd-resolved: this is the same thing if you're using other operating systems that use NetworkManager with the DNS overlay. So this is how you tell NetworkManager to configure your resolver to point to that local network inside of there, and note that the doc deliberately uses this .126.1 address, so make sure that you use that IP address.
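A sketch of that NetworkManager overlay as the libvirt IPI dev docs describe it; tt.testing is the example domain those docs use, and 192.168.126.1 is the network that flow creates:

```
sudo tee /etc/NetworkManager/conf.d/openshift.conf <<'EOF'
[main]
dns=dnsmasq
EOF
sudo tee /etc/NetworkManager/dnsmasq.d/openshift.conf <<'EOF'
server=/tt.testing/192.168.126.1
EOF
sudo systemctl reload NetworkManager
```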
C
And what it's going to do is go through (I should have turned on the debug logging), and behind the scenes it's creating the resources that it needs. Oh, I forgot: I created the ignition configs, and if we look at our... so here's our ignitions and all of that; I meant to create the install config, not the ignition configs. And if I do a simple create cluster now...
C
So it's going to pull down the Terraform provider, it's going to do everything that it needs to get started, and if I check over here, you can see that it's automatically created a new network, right. This is where it's using, you know, the internal, essentially round-robin DNS "load balancer", quote unquote, to resolve all those things. Here in a moment it'll start up a virtual machine and do all that other stuff. It also created a new storage pool, so we have this cluster one.
C
There's a bug in the current one where, here in a moment, it'll finish and it will create two virtual machines (it'll create a bootstrap and it will create a control plane node), and the bootstrap only has four gigs of RAM, which is not enough. So it turns on, and when the OKD bootstrap process goes through, it pulls down a new image and then uses rpm-ostree to switch to that image, and that image is larger than the temporary space, the /var/run, or /run, rather, space...
C
That
is
available,
so
it
runs
out
of
space
and
it
never
succeeds.
You
can
get
it
to
go
further
if
you-
and
I
think
I
almost
had
one
running
earlier
today,
if
you
catch
it
before
it
boots.
So
basically,
if
you
immediately
turn
off
those
virtual
machines,
edit,
both
of
them
to
give
them
at
least
16
gigabytes
of
ram
and
then
turn
them
back
on
and
then
wait
long
enough,
you
should
be
able
to
get
a
cluster
at
that
point
and
it
will
do
it
will
be
ipi.
C
I would recommend, again, doing the create... an ignition config, excuse me, an install config (not ignition config: install config), and setting it to have more than just the default of one control plane, one...
B
But, so, let's pause for a moment just to recap, because that was a lot to digest. There are two methods. The method that you are showing everyone now is the automated, integrated, quote unquote IPI method against libvirt, where the installer, the OKD installer, is calling libvirt, making all those calls, creating the VMs. It already downloaded everything it needed. Oh, there it is, yep: the VMs.
B
So
the
previous
method
that
you
had
started
to
take
us
down
was
the
non-integrated
non-automated,
where
you
basically
have
to
download
all
the
images
and
set
up
all
the
configuration,
etc.
So
just
to
be
just
to
restate
that
these
were
two
different
install
methods
and
the
reason
to
not
use
the
automated
one.
Is
it's
brand
new
as
of
4.7,
and
there
is
that
memory
bug
that
you
were
mentioning
correct,
yeah.
C
And the goal is (and I'm sure that we'll get there before long), the goal is: sure, it would be great to have IPI, and especially if we can do a single-node IPI on something like this, where you just say openshift-install create cluster and it points to that local libvirt, deploys a virtual machine, and gets everything up and running all inside of that single node. That would be great.
C
I would love for that to happen, and I actually intend to continue seeing if I can get that to work on my node, even after this particular event. So...
C
Yeah, so I said... so, can you set the memory in the install config? Unfortunately, no, and the reason why is because it's not actually defined as an option inside of there. So if I do openshift-install explain installconfig, we can just step through this, right. We see we have our platform here, so if I do installconfig.platform, and then we have libvirt...
C
...those values. Libvirt's... see, we don't have any options in the install config to set them. So for the worker nodes, you could do it by creating the install config, then doing a generate manifests, then going into the manifests and modifying the MachineSet that it creates to put in the right amount of memory. But the control plane nodes are created by the installer, and the installer has no... right, we just walked through that tree; there are no parameters in there for adjusting it. So, yeah. We have seen hidden parameters...
B
Yeah, John, I think you were making some comments about vSphere: you can set the sizing. I think with most of the other platforms you can. I was about to say oVirt has that too. Yes.
C
At
that
yeah
the
the
library
ipi
feels
a
little
neglected.
Sometimes
so
it's
it's
not
there,
but
definitely
with
overt
slash,
rev,
definitely
with
vsphere.
Definitely
with
openstack
as
well
as
I
believe,
the
hyperscalers.
You
can
change
all
of
those
things
so
with
the
hyperscalers
like
aws,
you
would
change
the
instance
type
that
it's
using,
but.
C
Oh, I say that, and look: I have it set up for my other one, so I should go ahead and modify these. So we're going to use our okd.lan domain name. Coming down through here, we need to make sure that our machine CIDR is set correctly, so this is 110, and then everything else we can pretty much leave at the default. Again, definitely make sure that you have your public key for SSH inside of here, so you can connect over to the nodes.
C
I'm now going to copy that into this cluster folder; it's just an empty folder I created as a holding spot. The reason why I always do this is because, when you do the next step, it quote unquote ingests the install-config.yaml and deletes it. So if you want to go through multiple iterations without having to constantly recreate that file by hand, I always put it someplace and then copy it in. So we'll put that guy inside of there, and then the next step that we want to do... all right.
C
So
at
this
point
we
have
created
our
install
config.
We've
staged
it
in
in
our
folder
right.
Remember.
We
already
have
the
the
bits
that
we
need
for
like
the
kernel
for
the
net
ram
fs,
all
that
other
stuff
staged
on
our
web
server.
So
now
we
need
to
generate
manifest.
C
C
So
if
we
go
into
our
cluster
directory
here
and
do
an
lsla
see
these
two
files,
those
are
files
that
it
uses
to
make
decisions
for
you
right
so
more
or
less
pick
up
where
an
install
left
off
or
something
like
that.
So
this
is
why
I
always
recommend
you
know:
create
a
subfolder
copy,
your
install
config
into
there
and
then
do
everything
inside
of
there
when
you're
done
with
it
just
rm
dish,
rf
that
whole
folder,
so
you
can
clear
it
out
and
start
completely
fresh
each
time.
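A sketch of that staging workflow, keeping a master copy of install-config.yaml since the installer consumes and deletes it (directory names assumed):

```
mkdir -p ~/okd/cluster
cp ~/okd/install-config.yaml ~/okd/cluster/
openshift-install create manifests --dir ~/okd/cluster
openshift-install create ignition-configs --dir ~/okd/cluster
# done with this attempt? clear it out and start fresh:
rm -rf ~/okd/cluster
```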
B
Or
what
I
do
is
I
create
a
new
folder
each
time
and
switch
over
to
it
is
that
okay.
B
I put it in the chat earlier, but I do take the time... This is actually a good time to ask you, Andrew: sometimes it gets a little confusing which of the steps to execute for the installer. You showed us: generate the install config, generate the ignition configs, generate the manifests. And sometimes it's not clear when to do which, like the manifests.
C
So that's a really good point, and it's something that, you know, even the OpenShift documentation does a terrible job of pointing out. If you're doing IPI with a hyperscaler (AWS, Azure, Google, et cetera), then you can absolutely go in and do an openshift-install create cluster and just go; you don't have to worry about anything.
C
If
you're
doing
an
on-prem
ipi
deployment,
I
always
recommend
doing
a
create,
install
config.
First
answer:
all
the
questions
so
on-prem
ipi
open
shift,
install,
create
install
config
it'll.
Ask
you
all
the
questions.
So
what
do
you
want
to
install
to?
I
want
to
install
to
overt
okay:
what's
your
over
manager
endpoint?
What's
your
over
manager
credentials?
C
Which
cluster
do
you
want
to
use?
Which
network
do
you
want
to
use
which
storage
domain
you
want
to
use
so
on
and
so
forth,
and
we'll
spit
out
that
install
config
at
the
end,
then
you
want
to
modify
that
install
config
to
make
sure
that
that
networking
dot
machine
network
dot
sider
is
correct
because
by
default
it'll
be,
I
think
it's
10.00.
C
...in that install config, and it'll begin the deployment process. If you want to have... if you want to do a...
C
Now,
if
you're
doing
on-prem
upi,
I
tend
to
recommend
always
doing
well,
it
varies,
but
almost
always
I
end
up
doing
I
create
the
install
config
by
hand.
Sometimes
I'll,
do
a
kick
it
off.
With
the
open
shift
install
create,
install
config,
I
keep
wanting
to
say
ignition
configs,
which
is
wrong,
install
config.
C
It
gives
me
that
template
and
then
I
can
go
in
and
edit
it
by
hand
to
remove
the
platform
section,
for
example,
so
create
the
ignition
config
or
install
config
rather
create
manifests
specifically
because
we
want
to
mark
the
schedule
of
the
masters
as
non-schedulable
unless
you
want
that
three-note
compact
deployment-
I
very
rarely
do
that
just
because
I
I
almost
always
have
other
my
default.
Config
is
a
lot
of
times
the
test,
couper
and
stuff
like
that.
So
I
want
to
have
dedicated
physical
notes.
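The non-schedulable tweak mentioned is a one-line edit to a generated manifest; a common way to script it, with the file name taken from the standard UPI docs:

```
sed -i 's/mastersSchedulable: true/mastersSchedulable: false/' \
    cluster/manifests/cluster-scheduler-02-config.yml
```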
C
So that's why, like, I have all of those other random laptops, old laptops and stuff like that; I turn those into OKD nodes for KubeVirt, and then from there, you know, go on with the rest of the process. So yeah, we don't make it easy in the documentation: when to use each one, and why to use each one.
C
So if we were to look at that, it would have all of those files base64-encoded in there, along with the certificates and all the other things that are inside of there. So, okay. So let's switch back over to our gist that I was using.
C
So now I need to put those ignition configs onto our helper node, onto the web server, and this is so that, when our nodes boot up, right, they're able to reach out and pull down that ignition config; that way, they can do that initial configuration. We're getting better, with the various IPI installs, at being able to attach those without having to host them on a web server, but with the non-integrated method we do still have to do that.
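A sketch of staging the ignition configs on the helper's web root, with paths matching the directories shown earlier (hostnames assumed):

```
scp cluster/*.ign helper:/var/www/html/ignition/
ssh helper 'chmod 644 /var/www/html/ignition/*.ign'
```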
C
And if we switch back over here: this is our helper node web server, and if we check ignition, we see we have our ignition files, and I need to set the permissions correctly. I do note over here that you need to adjust the permissions, so I'll do that real quick.
C
Step six, and this is where it gets interesting with libvirt. I had a lot of fun automating this and playing with some new things that I didn't know about libvirt, yesterday and yesterday evening.
C
The first thing that we're going to do is create the disks that we need for our virtual machines. So I am not on the helper node; I am on my libvirt node, and I'm going to paste that command in. All we're doing here is a simple loop where, for each one of the nodes, we do a qemu-img create of a 120-gigabyte qcow2 image inside of (remember) my mounted storage pool. All right, so now we have all of those files inside of there.
C
I would say it's the minimum size that you will want for any kind of production cluster, any kind of long-running cluster. Remember that this space, this 120 gigabytes, will be used for everything Fedora CoreOS is doing, as well as any container images that are downloaded, as well as any scratch space, as well as any emptyDirs and logs that are generated, right. All of those things go inside of there, and effectively, if that fills up, it's a bad day, right.
C
...directory. I really miss being able to do a keyboard copy and paste, but VNC doesn't let me do that. We have our files inside of here; they're available to us now. The reason why I want them locally on this box is because, in the next step, we're going to tell it to do a local kernel boot of our virtual machine to do the install. So we're going to use a couple of variables.
B
And just the second reason, from my brief headaches with libvirt: /var/lib/libvirt has certain permissions.
B
I don't even remember if it's just standard; it might be extended-attribute permissions. But if you don't put things there, or if you try to use a different directory, libvirt will start to complain. A VM might boot once and then not; I've seen one not be able to reboot after a while. So that's a special directory, for sure, for things like storing images for libvirt.
C
Yeah,
it's
I've
had
the
same
thing
and
permissions
are
always
an
issue.
It
seems
like
so
my
three
variables
here
right
kernel
is
pointing
to
the
file
that
we
just
described
up
here
same
thing
within
it,
rd
kernel
args
is
important.
So
remember.
I
said
that
we
needed
to
point
it
to
where
the
root
fs
is
at
and
it's
hosted
off
of
our
helper
web
server.
C
So let's break the string down, and all of this is in the documentation: if you go into that agnostic doc, "installing on any platform", all of this is described inside of there, so don't worry that you have to remember what I'm saying here. The first thing is the IP address that we want this node to use, double colon, and then: what is the gateway for this subnet?
C
What is the netmask? What is the hostname, including fully qualified domain name, that we want to assign to it? The interface that we want to configure with all of this information. "none" (I don't remember what that one is, but it's always none), and then the DNS server that we want it to use. So, because all of this information, except for these two bits, the node IP and the node name, is the same...
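Assembled, the dracut-style argument being described looks like this (all values assumed for illustration):

```
# ip=<node ip>::<gateway>:<netmask>:<fqdn>:<interface>:none:<dns>
ip=192.168.110.10::192.168.110.1:255.255.255.0:okd4-control-plane-1.cluster.okd.lan:ens3:none:192.168.110.7
```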
C
So all I'm saying is: your bootstrap ignition file, right. So coreos.inst.ignition_url is on our helper node, port 8080, ignition/bootstrap.ign. So this is where it gets interesting, in my opinion. What I've done is used virt-install to assign... so we see we have a disk here, which is our bootstrap; we just created that, remember.
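A sketch of a virt-install invocation along the lines described, doing the direct kernel boot with the variables built above (the paths, names, and os-variant are assumptions):

```
virt-install --name okd4-bootstrap --vcpus 4 --memory 12288 \
    --disk /var/lib/libvirt/okd-pool/okd4-bootstrap.qcow2 \
    --network network=okd \
    --os-variant fedora-coreos-stable \
    --boot kernel="${KERNEL}",initrd="${INITRD}",kernel_args="${KERNEL_ARGS}" \
    --noautoconsole
```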
C
So essentially I determine what the node name is, without the dash (because that throws an error), and I create basically the same variable name that we created up here. Then, down here, where I actually want to use it, I reference it by using that name here with the bang in front of it. So I had some fun doing research on bash last night, because I had never done that method before.
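The bash trick being referenced is indirect expansion, ${!name}; an illustrative example:

```
okd4bootstrap_args="ip=192.168.110.9::192.168.110.1:255.255.255.0:..."  # per-node variable, dash removed from the name
node="okd4bootstrap"
ref="${node}_args"
echo "${!ref}"    # expands to the value of $okd4bootstrap_args
```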
C
Yeah, because it's reading the file name; that's why.
C
So the first thing I'll do is open up this VM and kind of peruse through the different settings that are inside of here. So, CPUs: I'm giving these four; the control plane and bootstrap each get four CPUs and 12 gigabytes of memory. That's just to make sure they have more than enough to do the things they need to do. And then the magic that's happening here is underneath the boot options.
C
I'm doing this direct kernel boot, and I'm having it boot off of that kernel and that initramfs image, with all of those kernel args that we built, to do the static IP assignments and where to install to and all that other stuff. After that, it's pretty standard, right: the disk that we created, the network adapter (we don't really care what MAC address it gave it or anything like that, because we're using static IPs, not DHCP), and everything after that is standard.
C
So I'm just going to let it default-create a disk here (we can select it, we can put it into an alternate pool if we choose) and then give it some sort of name. One thing to note: we want to make sure that we put it onto the right network. And I'm going to customize the configuration before install; this is where I would want to go in and make sure that my boot options are set correctly. So we just click that, and then we browse to our /var/lib...
C
The reason why I'm doing it this way is because it's much faster. You can host those files, you can have it PXE boot, for example; there are a number of different ways of doing it. I find, especially with a single libvirt host like this, that hosting those on the local file system makes it tremendously quicker to do the install process.
C
So I'm going to do this part manually, just so we can see what's going on inside of here. I'm going to start with our bootstrap machine, and I'm going to turn it on. So when I turned it on, it did that kernel boot, right: it read in the kernel, it read in the initrd, now it's going through, and it will have read in... so there are all of our kernel command lines, right, all that other stuff. So it reached out, and it is pulling down the rootfs and the ignition file.
C
Yep. So, Neil: we have gone through, we've set up and staged everything that we need on the helper node, we've created our ignition configs, and we've staged those.
C
So
at
this
point
I
just
installed
the
bootstrap
but
we're
getting
ready
to
install
coreos
to
the
rest
of
them
as
well,
so
very
much
to
justin's
point
it's
nice
to
have
the
console
up,
so
we
can
see
if
something
goes
wrong.
You'll
also
notice
that
once
it's
finished
instead
of
doing
a
reboot,
it
shut
down
the
machine.
C
So
that
is
quite
convenient
and
one
of
the
one
of
the
things
that's
nice
here.
So
I'm
just
going
to
turn
the
rest
of
these
on
and
let
them
do
their
thing.
I
think
in
the
gist.
I
note
I
have
automation
over
here
that
says
just
over
shell
start
and
then
optionally
add
in
a
sleep
if
you're
using
slightly
slower
storage.
So
this
is
an
nvme
device.
Now
you
may
want
to
consider
staggering
these
because
they
can
take.
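A sketch of the staggered power-on the gist automates (node names assumed):

```
for node in okd4-control-plane-{1..3} okd4-worker-1; do
    virsh start "${node}"
    sleep 30   # optional stagger for slower storage
done
```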
C
So remember, at the very beginning, I said that effectively the install process is super complex, and we've spent, what, an hour now going through all these different steps of staging everything and making sure all the processes are set. Well, now all we do is turn these VMs on and it sets itself up, with one exception: approving CSRs. So I do need to go in, and I need to remove that config that we had just done, so...
C
The
reason
why
it
didn't
restart
when
it
told
it
to
reboot
is
because,
by
default
libvert
when
you
set
the
kernel
boot,
it
sets
the
on
reboot
to
destroy,
so
it
shuts
down
instead
of
actually
rebooting,
because
if
we
didn't
remove
those
kernel
parameters.
C
Now, if we look at our VM definition here, you can see our boot options have been removed, and if I check the XML here, our on_reboot is now set to restart instead of destroy, like it would have been before. So now we simply power it on, right. We can check over here with our... I mean, so, virsh start. I'm going to stagger these just a little bit, because they do...
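One hedged way to make those two per-node edits from the CLI, using virt-xml (the gist may script this differently):

```
virt-xml okd4-control-plane-1 --edit --boot kernel=,initrd=,kernel_args=
virt-xml okd4-control-plane-1 --edit --events on_reboot=restart
```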
C
So what happened here: see, I was connected to core@bootstrap and then it terminated; that's because it went through and rebooted itself while I was busy running my mouth. What happens is, on the first boot, it goes through and uses this release image: it pulls down the newest rpm-ostree image, switches to that image, and then reboots the node. So that's what we just saw.
B
At this point, go ahead, Justin. While we watch the bootstrap, I think a couple of points have been brought up too. One is from Neil; he just joined, and his question was about UPI, and why is it required to use static IPs? You talked a little bit about this earlier, but just to repeat while we're doing it this way: so, it's not required to use static IPs...
C
We work around this with IPI because, with IPI, when we provision a new node, whatever IP address the node happens to get assigned from the AWS DHCP server or whatnot, the cloud controller will update the load balancer with that node's information. We don't have that with UPI; remember, there's not that same level of integration with the infrastructure, so instead we need to have access over to it.
B
I think some of this may also go back to why we're doing this install method: OKD 4.7 has the new automated IPI that will automate a bunch of this, versus us stepping through with you through the, essentially, non-automated, non-integrated installation against libvirt.
C
...some of the more enterprisey load balancers (I don't know if Citrix or F5 or something like that) can do, like, a DNS reverse lookup to find the machine on the back end; HAProxy can't. So, to your question about load balancers doing DNS-based selection: HAProxy, I don't think, can. The last time I checked, which I'll admit has been like a year ago, it could not, so maybe that's changed.
C
I should probably look into it, but yeah, that is why I've always done either static IPs or DHCP reservations.
C
It's, you know, probably my fault, right. I should have checked; I should be checking periodically, or even pinging our partners to see if it's something that they're working on or can do, instead of having checked once a while ago and just assuming that it is the way that it is. Yeah.
B
I'm looking at HAProxy's site to see what they say about integration with DHCP.
B
I think, you know, just to give everyone background: Andrew and I kind of went back and forth on this. We did want to show the most simplified way to install at first, but when we saw that the automated, integrated installer had, for example, that first issue, then we decided: okay, well, let's do the UPI method. It provides a more under-the-covers view. Would you say, Andrew, you get to really see the mechanisms that are working, right? Yeah, definitely, yeah.
B
I mean, there are all kinds of different ways to do this. Let's not even get into PXE-booting stuff, because even though that's possible, and it would definitely get a fleet of nodes operational, we kind of figured this would be a small group that probably doesn't have enterprise stuff like PXE servers and enterprise-level load balancers and things like that. So we wanted to try to say: hey, if we do just one box, the Linux box hosting OKD, what is a way that could be streamlined to get it working?
C
As Justin and I were preparing for this, I kept telling him, like, you know, we have this world of possibilities, and this is one way out of about 300 of doing it. And I switch between them, because I try to be familiar with a lot of the different ways, so it was hard for me to narrow it down.
C
So if we switch over to watching the bootstrap logs, we see we have all of these pod status messages going by. These are actually done in groups of four that will repeat: you can see there's cluster-version-operator, kube-apiserver, kube-scheduler, and kube-controller-manager, and what we're waiting for is for all four of them to be in a status of ready.
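Two standard ways to follow this phase, both from the stock UPI flow (hostname assumed):

```
openshift-install wait-for bootstrap-complete --dir cluster --log-level debug
ssh core@okd4-bootstrap.cluster.okd.lan journalctl -b -f -u bootkube.service
```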
C
...that ignition config and, among other things, it uses the etcd operator, the cluster etcd operator, to instantiate a single-node etcd, and then instantiate the cluster, or the machine config operator. The control plane nodes then come up. Those three nodes come online, they look to that machine config operator instance, they get their configuration, and then, after they configure themselves, they start talking to the bootstrap. And the bootstrap says: using the etcd cluster operator, I'm going to increase my etcd node count to three.
C
So: adding two control plane nodes, then decreasing it to two by removing the bootstrap, and then increasing it back to three, bringing it up to the three control plane nodes. The bootstrap then hands over all of the Kubernetes control plane operations to those three newly instantiated control plane nodes. So that's effectively what's happening here in the background. That's what we're watching with all of these pods, right, all these messages scrolling by: this is it handing over all of that information and waiting for the control plane nodes to take ownership of it. So we can see, right...
C
So at this point, what's happened is, it's going through, and you see all of these different YAMLs that are being applied. So on that new control plane (remember, it's just Kubernetes, so it's a Kubernetes control plane) the cluster instantiated, or deployed, Operator Lifecycle Manager, OLM, and it is now using operators to instantiate all of those OKD services. So there: one of our services has succeeded.
C
The cluster, just like the bootstrap, isn't saying: okay, now you need to do this; okay, now you need to do this. It's just monitoring and reporting back the status of what's going on in the cluster, and we can actually get this information (this right here, "working towards... 5 percent complete") by querying the cluster directly. So that's what I want to show over here in this one, while that's going. So, inside of my cluster folder: remember, this is where I placed the install config file.
C
This is where, when we created the manifests, it was done inside of here. This is where we have our ignition files, when they were generated. Very importantly, we have this auth directory; inside of here are two things. One is the kubeadmin password: if you walk away, if you forget to record it or anything like that, the kubeadmin password is stored in this file. And then there's the kubeconfig file that we can use to connect, as a system admin, to the...
C
So at this point, the control plane is handed over. If I were to do an oc get clusterversion, we can see: here's our "working towards", the same thing that we see over here. It's at 88 percent.
C
So first it will request two of these, one for each of the nodes. Here's the line, right: you see it's requesting one for each of the nodes, and then, once those are approved, there'll be two additional ones that come up. So here's the worker.
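The CSR flow from the command line looks roughly like this; the request name is illustrative:

    # Pending requests show up here, two per joining node
    oc get csr

    # Approve a specific request by name
    oc adm certificate approve csr-8b2xw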
C
And that guy is shut down.
C
I've never had that happen before, but it's coming up. The cluster will deploy with a single worker; what we'll see is that the ingress operator is angry, because it wants to have two replicas by default. We can fix that by just changing it to having one replica; of course, then it's not highly available.
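A sketch of that single-replica fix, using the default ingresscontroller resource:

    # Tell the ingress operator one router replica is enough (not HA, fine for a lab)
    oc patch ingresscontroller/default -n openshift-ingress-operator \
        --type=merge -p '{"spec":{"replicas":1}}'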
B
I'm just kind of curious, because we have two minutes to the top of the hour: what does the resource utilization look like on your host box, the libvirt/KVM box?
C
So, System Monitor here. Usually, in an idle state with this five-node cluster deployed, it's sitting...
If we do oc get node, and then oc describe node on one of our control plane nodes, way down at the bottom we have this Allocated resources section. You can see that requests only total five gigabytes of memory. So that tells me that the slide you had shared earlier, the one with the eight gigabytes of RAM...
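The commands behind that, with a placeholder node name:

    oc get nodes

    # "Allocated resources" sits near the bottom of the describe output
    oc describe node <control-plane-node> | grep -A 10 "Allocated resources"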
C
Okay, here we can find... oh.
C
It depends on where I'm at, so remember...
B
So now that the nodes are booting up and the operators are coming online... oh, I see Diane joining back here. I think this is really the time to just take a moment,
B
Maybe
for
recap
any
any
q
a
that
we
have.
How
are
things
going
diane
in
the
other
sessions.
A
We
have
had
success
in
the
home
in
the
home
lab
groups,
they
got
three
different
home,
lab
deployments
walked
through
and
demoed
amazemont
and
in
the
the
which
one
was
the
one
that
just
ended.
A
You
guys
are
bare
metal,
so
it
was
the
vsphere
folks
finished
theirs
and
they
did
cut
out
a
little
bit
early
because
it
was
going
to
be
another
10
to
20
minutes
of
staring
at
the
screen
to
watch
an
upgrade.
So
those
two
are
done
and
you
know
just
keep
on
keeping
on
as
as
you
want
to,
and
what
I'm
curious
about
is,
if
there's
things
that
people
who
are
listening
in
here
think
that
we
should
be
adding
to
the
documentation.
B
No, that explains it. This was awesome again, Andrew, because I've noticed that for first-timers, whether it's OKD or OpenShift, the docs can really lead you astray. So, Diane, I think this is a great effort, to have maybe more streamlined or more verbose documents on deploying OKD, because, you know, you explaining the process is kind of what's missing in some of the documentation, Andrew.
C
Yeah
yeah,
so
I
I'll
also
say
that
so
it's
funny
my
team,
the
tech
marketing
team.
We
have
a
bi-weekly
meeting
with
the
ux
group
where
they
just
go
through
and
say
hey.
This
is
what
we've
been
working
on.
You
guys
interact
with
a
lot
of
customers.
What
do
you
think
so?
One
of
the
things
that
we've
been
trying
to
set
up
is
basically
the
same
thing
with
the
docs
folks.
C
A
Yeah
yeah,
you
know
nobody's
going
to
argue
about
that.
Our
documentation
is
always
a
work
in
progress.
I
think
this
we're
going
to
try
and
do
these
hop-in
sessions
there's
some
variation
like
once
a
quarter,
and
I
think
we're
also
going
to
try
on
thursdays
to
have
like
open
office
hours
for
okd,
that
is
that
are
all
community
driven.
So
this
has
been
this.
A
This
is
really
useful
for
us
who
are
in
the
working
group,
and
if
I
I
know,
I
probably
I'm
speaking
to
the
converted,
because
I
think
all
of
you
who
are
I
can
see
online
here
now
are
all
in
the
working
group
now
and
if
you're
not
kareem,
if
you're
here
and
you're
not
coming
it's,
it's
usually
just
a
skeleton
if
you
haven't
joined
yet
do
so
that
it's
usually
a
time
zone.
A
Issue
too,
I
think
we
have
people
from
all
over
the
world
trying
to
come
in
and
see
this
stuff,
but
the
working
group
has
been
great
in
terms
of
giving
feedback
and
stepping
up
and
doing
stuff
so
yeah,
my
my
heart
of
hearts.
A
I
would
love
to
get
one
product
manager
from
the
openshift
team
to
come
on
a
regular
basis
to
these
just
to
hear
some
of
the
things
that
people
are
doing
and-
and
I
think
a
lot
of
the
pms
are
also
deploying
home
labs
and
things
like
that,
and
I
think
I
heard
a
a
threat
that
all
of
them
have
to
manage
the
internal
cluster
for
a
period
of
time.
Each
pm.
It's.
C
Not
it's
not
a
threat,
it
is
a
reality
yeah.
We
actually
right
now
we're
going
through
a
process
of
pairing
product
managers
with
tech
marketing
managers,
because
I
love
our
product
managers
and
they
are
extremely
knowledgeable
and
deeply
technical
on
their
focus
areas
and
having
them
branch
out
and
learn
more
about
openshift
as
a
whole
is
nothing
but
amazingly
positive.
So
I've
already
seen
some
benefits
there.
C
There
they've
already
experienced
some
pain,
trying
to
set
up
authentication,
so
so
they've
already
created
some
jira
issues
themselves
saying
this
is
way
too
hard.
We
should
make
this
easier.
A
So
it's
really
it
in,
and
I
think
and-
and
I
will
make
all
of
the
recordings
of
these
sessions
available
to
everybody
to
use
it-
does
take
24
hours
to
render
all
of
these
videos.
A
So
if
you're,
you
know
anxiously
waiting
for
it,
it
probably
and
my
internet
got
restored
as
of
monday
morning
at
my
house
I
will
be
able
to
upload
them
on
wednesday
or
on
monday
afternoon,
so
look
for
them
there
and
I'll
make
a
post
to
the
mailing
list
when
they're
there,
but
this
stuff
is
just
hugely
useful
for
everybody
and
just
really
grateful
for
you
guys
time.
A
C
While
we
were
chatting
there,
if
you
were
watching
my
screen,
I
found
out
where
the
issue
with
the
the
single
worker,
zero
name,
is
here
so
up.
When
I
was
doing
this
variable
assignment,
I
did
not
replace
that
one
which
then
propagated
down
into
the
others,
so
they
do
have
distinct
ips
right
all
that
other
stuff.
They
just
have
the
same
host
name.
C
But yep, just waiting, watching this thing go through and do its thing. A lot of times you'll see authentication... that one worries me. That could be because we only have the one worker.
C
I know what I could do: I could reload that node.
C
So I'm going to set the node name correctly this time. The issue is, remember, we did a static IP config, and we set the node name there.
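That static config rides in on the kernel command line at install time, via the dracut-style ip= argument; a sketch with illustrative addresses and an assumed interface name:

    # ip=<address>::<gateway>:<netmask>:<hostname>:<interface>:none
    ip=192.168.122.21::192.168.122.1:255.255.255.0:worker-1.okd.example.com:ens3:none nameserver=192.168.122.1

The hostname field is the one that has to be distinct per node, which is exactly the value that got reused here.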
C
Because you can't really change the IP address or hostname of a CoreOS node easily, the recommendation is really to treat it like an appliance: blow it away and reload it. So that's exactly what I'm going to do. Rather than doing it all automated from the CLI like I did before, I'm just going to do it manually from here.
C
Okay, I got it. So we'll set that guy there... where'd you go... hit apply there, and then, I don't think...
C
You can see it started and installed, and then it stopped. So let's flip this back to restart, and apply; then, on our boot options, we uncheck this and apply, and then we'll start our node. Just like during the install process, what's going to happen is it'll turn on, it'll come up, it already has its network config and everything, and it'll look to the machine config operator at that API endpoint.
C
You can see it trying to get to it from here. It pulled down its ignition config, and now it's going through and reconfiguring itself. In a moment it will reboot again, because it pulled down a new rpm-ostree image, so it's going to flip over to the new one and then reboot, and at that point is when we should have to approve the CSRs for it to join the cluster. But that'll take a minute.
C
You can see we do have a degraded operator; that's the ingress. Remember, I said that with only one worker node it will deploy; it's just that the ingress will be angry, because by default it wants to be scaled to two, and that means there need to be two worker nodes available for it.
B
So
this
is,
this
is
great.
I
I
think
this
is
where
I
totally
get
that
you
wanted
to
see
that
second
worker
node
up,
but
you
successfully
showed
us
the
deployment
of
okd.
I
think
we
had
some
good
questions
so
far.
Any
questions
that
folks
have
that
just
joined
us
or
had
been
sticking
with
us
from
the
get-go
about
the
install
to
to
we
used
andrew
use
fedora
here
any
any
questions
that
still
don't
make
sense.
C
Yeah, John, you're correct: I could create a brand-new VM, do the exact same process, and add as many worker nodes as I need into the cluster.
C
Here, so this helper node is running bind, it's running dhcpd, right, all those other things, instead of dnsmasq. So if I go to...
C
C
So
if
you
were
sharp
eyed
when
I
was
reviewing
my
my
stuff
here
under
the
virtual
networks,
I
have
this
br0
and
it
just
connects
directly
out
to
the
same
network
as
the
rest
of
my
my
host.
So
if
we
had
wanted
to,
I
could
show
you
know
doing
it
from
there
and
you
see.
I
just
have
these
dhcp
reservations
set
up
for
each
one
of
them
and
then
just
as
with
the
other
one,
I
have
an
aj
proxy
config
for
that
and
then
I
have
we
go
to
varnamd
and
then.
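A reservation in dhcpd.conf looks roughly like this; MAC, address, and names are illustrative:

    # One fixed lease per cluster node, keyed on the VM's MAC address
    host worker-1 {
        hardware ethernet 52:54:00:aa:bb:cc;
        fixed-address    192.168.122.21;
        option host-name "worker-1.okd.example.com";
    }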
C
Go to... there we go, tftpboot. So if you're familiar with PXE, you can actually have it automatically boot nodes based off of the MAC address. Normally, because I churn through OpenShift clusters especially, sometimes three or four a day when I'm creating demos and stuff like that...
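A per-MAC PXE entry for Fedora CoreOS, as a sketch only; file names, URLs, and the install device are illustrative:

    # pxelinux.cfg/01-52-54-00-aa-bb-cc  (filename is the node's MAC, prefixed with 01-)
    DEFAULT fcos
    LABEL fcos
        KERNEL fedora-coreos-live-kernel-x86_64
        APPEND initrd=fedora-coreos-live-initramfs.x86_64.img coreos.live.rootfs_url=http://192.168.122.1:8080/rootfs.img coreos.inst.install_dev=/dev/vda coreos.inst.ignition_url=http://192.168.122.1:8080/worker.ign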
B
I'm taking note of this, because we're going through the details of adding an alternate method, PXE boot. I'm kind of curious: you also said that you would need to approve a CSR, right? So it's not just that the node boots; someone, some operator, has to approve it.
C
Yeah. So with UPI, regardless of the platform that you're using, you always have to approve the CSRs for newly added nodes that you want to join. So... oops.
C
So that would be the third point, John, to your question, yeah.
C
See how I have a pending one? If you were again quick-eyed with my worker-1 here... see, oh, it's only been up for, like, 30 seconds. My cursor disappears, but it's only been up for a minute or two, because it booted, then it got the new rpm-ostree image, applied that and rebooted, and then it came up, and now it wants to join the cluster.
C
So this node-bootstrapper one is the first thing that will request one for each new node, and then, if we give it a second, we should see another pending one show up. These CSRs do disappear, by the way; after, I think, 24 hours they'll automatically drop off the list there. Here's our pending one, and we see it's for worker-1 this time. So we'll do the same approve, and now, if I do oc get node, we have our worker-1.
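In a lab, where every pending request is expected, they can all be approved in one pass; this is the usual go-template pattern from the OpenShift docs:

    # Approve every CSR that has no status yet
    oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' \
        | xargs --no-run-if-empty oc adm certificate approve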
C
It'll deploy another ingress router, and then that error... let me make that bigger, and let me make this bigger... and then this operator being angry: that one's updating now because of the new node, but the ingress will eventually stop being angry and it'll recover.
C
Yes, this one is also upset. That's because we just added the new node, and so it's going through, and all of those operators are going to adjust for the new node and deploy additional services. Once that new node is fully joined, it'll go back into a healthy, happy status and it'll be ready to go. One thing to note with 4.7...
B
So
back
to
the
deployment
documentation
that
diane
had
mentioned,
it
sounded
like
you,
wanted
to
make
a
few
edits
or
or
changes
to
your
deployment
before
you
submitted
it.
A
Well, let's do this: let's get you to make progress on Mike's stuff today, so I can at least get one new chunk of docs in there, and then we will get people to comment on it. That would be wonderful. We could get that one in, and that would make Diane's day.
C
It will, after a minute, but the deployment's good; we're ready to go.
C
As you can see, in an idle state right now, it's at 48 gigabytes of RAM and somewhere between 40 and 60 percent of CPU, and this is an older CPU, a Ryzen 5 2600. So that's three or four years old now, something like that.
B
So, with five VMs running a cluster, I mean, this is a decent size. Like you said, older hardware; someone can spin this up wherever, at work or in a home lab. You don't have to break the bank to spin this up on something like a Linux box.
C
Really, the only "investment", quote-unquote, that I made is an NVMe device. I think I spent eighty dollars on a 512-gigabyte PCIe NVMe device; just make sure your motherboard is capable of taking one. And it solved all of the performance and deployment issues; you saw it took us, what, less than 30 minutes total to deploy the cluster.
C
Before that, I had an LVM device with lvmcache, so two spinning drives in a RAID 1 with an SSD cache, and it would deploy, but you couldn't really do anything with it, and the deployment took the better part of an hour instead of half an hour. So yeah, I asked, and my wife was very kind and said, sure, that's...
B
...authorization to purchase. That's great. Yeah, so I don't see any more questions in the chat, and the cluster is up, you showed us that. I think we had a great back-and-forth, and I totally understand about the document; wanting to get that up so that other people can run through this would be great, because I think it's very clear and well written, and now they have this recording to step through as well if they have a question.
A
Yeah, I think this was excellent. This is like the perfect way to end the day, with this up and running, and I think you guys nailed it on the head here. So thank you so much for taking the time today. I know it's a Saturday, and we may all be in lockdown and all that, but, you know, you still need to go out for a walk and get some fresh air.
A
Now,
wherever
you
are,
I
think
your
east
coasters,
both
of
you,
andrew
and
justin,
is
that
right,
yeah,
yeah
yeah,
so
there's
still
a
little
light
out
there
in
the
background,
even
though
you're,
probably
in
your
basements
as
I
am
so
I'm
not
seeing
any
other
questions,
john,
thank
you
for
joining
us
and
sticking
with
us
all
day
and
all
your
wonderful
feedback
love
this,
and
so,
when
we
do
this
again
next
quarter,
john
and
other
folks,
we're
going
to
get
you
guys
to
do
walkthroughs
like
this,
you
don't
talk
too
much.
A
That's
why
we
put
you
in
chat
between
you
and
neil
john.
That's
yeah.
Both
of
you
are
going
to
have
to
be
on
the
hook
for
the
next
one.
A
So
I'd
like
to
do
these
like
once
a
quarter,
something
you
know
as
cause
depending
on
the
release
cadence
for
okd,
but
this
really
is
very,
very
helpful,
and
if
people
have
any
issues
that
they
want
to
do
against
the
the
documentation,
that
would
be
great
if
there
was
a
platform
or
a
target
that
we
didn't
hit,
throw
a
an
issue
that
just
adds
a
stub
or
make
a
pull
request
for
that,
a
stub
on
that
and
in
the
in
the
repo,
and
we
will
endeavor
either
to
make
you
do
it
or
find
people
who
will
collaborate
with
you
and
make
it
happen.
A
This
is
it's
really
been
great.
I
think
the
feedback
people
are
giving
is
just
wonderful.
So
thank
you
both
for
doing
this
today,.
B
Thank you, Andrew, especially, because he was up late last night with a couple of issues with the installer. I saw in my chat, pretty late, that he was banging away at the keyboard getting things fixed.
C
It's a lot of fun, and, like I said, I learned a few things today, so it's worth it for me. So thank you.
A
Awesome
well
you're
getting
thunderous
applause
and
clapping
in
the
chat
so
well
done,
and
we
will
see
you
all
at
the
next
okd
working
group
meeting
and
hopefully
you
both
can
join
us
at
kubecon
eu
with
the
speaker's
passes
and
we'll
we'll
we
should
have
an
okd
section
in
the
community.
Central
booth
at
that
is
being
hosted
as
we're
sponsoring
kubecon.
A
So
that's
may
4th,
but
mostly
we
want
you
to
come
and
join
us
at
the
okd
working
group
so
come
in
and
hang
out
with
us
and
we're
we're
thrilled
that
you
were
here
today.
So
thanks
guys.
A
Thank
you.
I
I'm
gonna.
Do
this!
I'm
gonna
make
you
next
time.
I
don't
know
what
region
you're
coming
from,
but
we'll
figure
out
something
this
and
and
hopefully
the
feedback
will
come
in
and
we
can
get
get
all
the
community
members
doing
at
least
one
of
the
next
one.
So
cool
turkey.
Okay,
there
you
go,
that's
a
challenge.
We
have
a
couple
of
really
big
openshift
customers
out
there
in
turkey
too.
So
yeah
you've
got
colleagues
there.
There
are
other
folks
to
pull
in
so
all
right.
A
Well,
thank
you
guys.
You
you
have
the
honor
of
being
the
the
slowest
deployment
today.
I
think
it
was
that
you
were
the
slowest.
I
think
it
was.
You
just
talked
the
most,
so
I.
A
We'll send you the transcript, yeah, that'll be good. Perfect. So I will let you go. You've got to promise me you'll go out and get a breath of fresh air, wherever you are; enjoy your evening, morning, afternoon, whatever time zone. And, like I said, the internet dude is coming to fix my cable, my fiber optic, on Monday, so I should be able to upload all of these Monday afternoon, and I will push it out to the OKD working group.