From YouTube: OpenShift Administrator’s Office Hour (Ep 3): Assisted Installer with Special Guests
Description
Join Andrew Sullivan, Chris Short, and the occasional special guest for an hour designed specifically to help the OpenShift admins out there. Come with your questions, leave with solutions.
https://openshift.tv
A: Good morning, good afternoon, good evening, wherever you're hailing from. Welcome to another episode of OpenShift Administrator Office Hours on OpenShift.tv. I am Chris Short, executive producer of OpenShift.tv. I am joined today by two of my fellow Red Hatters: the one and only, something of GitOps, Christian Hernandez. I'm not sure what to call you yet; I'm not sure what you want to be called, but...

A: Pending. And then, as I referred to him in an internal email, the role of Andrew Sullivan today will be played by... was it "otter-sounding"?

A: Yes, so, like, better-sounding... different-sounding, but, you know, smoother, right? So yeah. But to be honest, Andrew's off helping one of our customers right now, and we are talking about something that Rhys is... well, Rhys, introduce yourself and then we'll get to what we're talking about. Real quick.

C: Yeah, thanks Chris. So good morning, good afternoon, good evening, everyone. My name is Rhys Oxenham. I am probably as surprised as you are to be invited back again to deliver another session. We are gonna be going through the Assisted Installer, which is something that I've been working on considerably, and we're gonna go into that in a lot more detail. I'm going to try and do a live demonstration of this and, all being well, do it on real bare metal systems.

C: So there's gonna be a little bit of a delay in a few different areas, but yeah, some exciting stuff.

A: Yeah, no, like, I can't wait, and if everything hits the fan, I have a nice clean server across the office here that we can use this thing for.

A: Like a plan, jump all over it. So what is the goal of the Assisted Installer, basically? Rhys or Christian, either one of you can answer that question, I guess.
C: Yeah, I'm happy to give a stab at it as I would explain it. So we have tried for a very long time to make the consumption of OpenShift, you know, a very good experience, right? We know that customers are looking to deploy OpenShift across a wide variety of footprints, be that on top of virtual platforms, be that on top of public clouds, private clouds, whatever they are. But we also know that a lot of customers are looking at deploying on top of bare metal.

C: There are lots of reasons why customers want to deploy on top of bare metal, whether they have particular requirements for their applications, be that performance, or they might want to exploit underlying implementations like GPUs or, you know, FPGAs, whatever it might be. And so we have to also make OpenShift really, really consumable on top of those platforms. We have a number of different investments around things like the bare metal IPI implementation.

C: So, you know, automating the infrastructure right the way from the ground up. But we want to go that little bit further and make it even easier to deploy an OpenShift cluster, and we're doing that in the form of a web-based utility, which I'm going to show you. You can see it live in action, driving all of that, you know, right from bootstrapping the machine up, deploying OpenShift, sharing everything you need.

B: Yeah, yeah. So my quick question, I guess my open-ended question here, is: what are the differences, or how would you categorize bare metal IPI versus the Assisted Installer, right? Because there's some overlap in terms of the technologies we use in both situations. So when would you use one over the other, or what are the major differences?
C: The main difference for me is just the ease in getting that set up and, you know, getting up and running in the first place. So with bare metal IPI, you have to find a system that you want to run the OpenShift installer on, just like you'd be deploying on top of any infrastructure: public cloud, VMware, OpenStack, whatever it might be. You then have to tell it, well, these are all my nodes that I want you to use.

C: You know, you have to fill out the IPMI out-of-band management configurations, usernames and passwords. You need to make sure your OpenShift installer machine has networking access to those machines, so it can provision them, run the bootstrap VM on top of them, and so on. Whereas the Assisted Installer requires nothing but the ability to attach an ISO to the machines; you power them on manually through the out-of-band management platform.

C: They provision, create a cluster, and away you go. There are very minimal requirements to getting this up and running, and the beauty of it is that you can do it all via the web UI. You don't have to download and run the OpenShift installer or generate your own install-config. Everything is done inside of a web UI.

B: So literally "assisted": it assists you each step of the way, versus the regular IPI method, where you have to download the CLI and then you have to kind of pretty much prep everything beforehand. This literally assists you in each step of the way.

B: Let's kick the tires on this puppy, yeah. There's an expression here in the US, I don't know if you've heard it over there in the UK, but this is, I think... Chris Short's in the "Show Me State", right? It's like we're in the...
C: No, no, all right. So the beauty of the Assisted Installer is that it's now directly integrated within cloud.redhat.com. So this is hosted by Red Hat; you don't have to download and install anything on premise. It is just out there; it's all, you know, SaaS-based. Now, I'm signed in to this with just my employee account, and it's got a whole list of various different clusters deployed across the world. Now, you're seeing lots of clusters here just because I'm part of an employee group; we can ignore these for now.

C: Now, all I'm going to do is say I'm going to create a cluster, and at this point it's agnostic, right? It's not going down the Assisted Installer route yet, but I will show you how we get there. So I just want to deploy OpenShift Container Platform; I'm not going to deploy OpenShift Dedicated here.

C: This is just an employee environment, and, like normal, it's going to ask: well, where do I want to deploy? Onto public cloud platforms that we support, inside of my data center, or indeed on top of my laptop. Now, the entryway, or the path to getting into the Assisted Installer, is to say "Run on Bare Metal". So I'll hit that, and this is where we get the two options. You're probably expecting to see three options here, because we've talked about that third option, that bare metal IPI infrastructure.

C: That is, you know, not a fully supported mechanism right now; that's still in the dev preview or technology preview realm. It can be used, but it's not yet supported. That's why you only see this user-provisioned infrastructure path here and the assisted bare metal installer. So this is how you get into it, and this is where you get into the pipeline of deploying the bare metal cluster with the Assisted Installer.

C: So when it comes to setting up my cluster name, I want it to match what my DNS domain name is inside of the bare metal infrastructure. And what we'll see, if I jump over to my jump host: all of my machines... I'm not sure whether you can see it, these are all on an internal address, but pemlab.rdu2.red...
B: Question, yeah, so quick question: where is this? So you said that you can deploy this on a pre-existing DNS, right? Like what you have, right, like I have my own DNS server, yeah, or you can use /etc/hosts. Where's that sliding scale? How much... can I have something in the middle, right? Can I have, like, DNS hosted somewhere else? What's that sliding scale in terms of what's allowed?

C: Sure. You can either update your corporate DNS afterwards to point at the IPs the Assisted Installer used, or you can update your /etc/hosts, and that's what I'll do at the end of this, just for convenience and to show you how it works. But right now, there's not really a sliding scale where you can say, well, no, I don't want you to do that.

C: It's kind of... that's the way it works, and after deployment you can choose to make it easier for your clients to access the cluster by using the IPs that the Assisted Installer used. And you can define them if you want to, and I'll show you how you define them, but yeah, it's not really a choice.

B: Gotcha, gotcha. So this uses the same mechanism, let's say, as vSphere IPI or OpenStack IPI, where the mDNS kind of just takes care of the cluster itself, and then, as a day-two task, let's say, or even now as a day-one task, or a separate task, you can point your DNS to this cluster. Exactly, right.
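Since the cluster manages its own internal DNS, the only names a client needs resolved externally are the API and the application ingress, both pointing at the cluster's VIPs, so the /etc/hosts shortcut boils down to a handful of entries. A minimal sketch of generating them (the cluster name, base domain, VIPs, and route hostnames below are illustrative placeholders; because /etc/hosts has no wildcard support, each *.apps route you plan to use needs its own line):

```python
# Sketch: build the /etc/hosts lines needed to reach a cluster installed this
# way. All values here are made-up examples -- substitute the names and VIPs
# your own installation reports.
API_VIP = "10.11.173.189"
INGRESS_VIP = "10.11.173.187"
CLUSTER, DOMAIN = "pemlab", "example.com"

def etc_hosts_lines(api_vip, ingress_vip, cluster, domain,
                    apps=("console-openshift-console", "oauth-openshift")):
    """/etc/hosts cannot express the *.apps wildcard, so each application
    route you intend to use (console, oauth, ...) gets its own entry."""
    lines = [f"{api_vip} api.{cluster}.{domain}"]
    lines += [f"{ingress_vip} {app}.apps.{cluster}.{domain}" for app in apps]
    return lines

for line in etc_hosts_lines(API_VIP, INGRESS_VIP, CLUSTER, DOMAIN):
    print(line)
```

Appending the printed lines to /etc/hosts on a client machine is then a one-off convenience step; updating corporate DNS with the same records is the cleaner long-term option.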
A: Yeah, so Carlos Santana is asking for, like, a drawing, a diagram or something. Like, is this inside of a VPC, you know, whatever? Maybe you can talk about that kind of infra layout. But then he mentions that, like, infrastructure will never let you edit corporate DNS. So this is kind of a good thing, right?

C: No, you don't! No, you don't at all. So long as the cluster has IP addresses that it can use, you're good to go. Nice, yeah.

C: Yeah, indeed. So the Assisted Installer, as I mentioned, is really designed for bare metal clusters. You can use it for virtual clusters if you want, it really doesn't matter, but it's really designed for the bare metal path, because that's typically the more difficult configuration to bring up, for obvious reasons. Inside of my lab, I just have three bare metal machines, so we're really looking at deploying a converged worker-and-master configuration, so the nodes are simultaneously, you know, all roles.

B: Yeah, so even though it's sort of automated, you still need to have your out-of-band set up beforehand; it doesn't just magically have your out-of-band all set up for you. So, exactly.

C: And all I've got here is just a jump box that I'm using to allow me access to these machines, and also, as we'll explain, how we use the ISO to provision them. All these machines are currently in the Raleigh area; I'm in the UK. If I was to attach a virtual media ISO to all these machines from my machine, it's...

B: It will take a while to go over the pond, exactly.
C: Exactly. All right, so let's get into it. So, because I want to somewhat match the DNS environment that I currently have in my environment, I'm going to call this "pemlab"; it's just the name of the environment. OpenShift version: we can only select 4.6 nightly at the moment, so this is a nightly build; it's obviously pre-release bits at the moment. And the really cool thing, now that this is integrated within cloud.redhat.com, is that it already knows my pull secret. So, yeah.

A: So that's already...

C: And what we now have is an additional pane, and there's lots of different options in here, and we're going to go through these. But the first thing it's going to tell us to do is to generate a discovery ISO. Now, this is the clever part.

C: When I want to add my machines to this potential deployment, I simply need to provision them using a discovery ISO. We attach the ISO over the virtual console, virtual media interface, or whatever it might be. Or, as simple as it sounds, you could burn the ISO and attach it using a real CD-ROM if you really wanted to. But the fact is, these machines only need to come up with this discovery ISO. They then connect back in to cloud.redhat.com using this dynamically generated, you know, generated just for this cluster, ISO.

B: Yeah, someone just mentioned in the chat that out-of-band is technically optional, right? Because you can literally, as I joked, burn this to a DVD or a CD and have the intern go and, you know, boot it off of the CD-ROM or DVD drive.

C: Whatever you wanted to do. All we need is to get the machine to boot this ISO.
C: All right, so there's a few additional things it's going to want to ask us now. First of all, SSH public key. Now, this... I'm just going to put my key in. The benefit of this is that if these machines have any problems booting up, or they can't access this console, perhaps there's a networking issue somewhere, perhaps a DNS issue... because, you know, these machines still need to be able to connect out to the internet. They need to reach cloud.redhat.com.

C: They need to pull down container images, so there needs to be basic DNS inside of the environment, but we don't have to worry about full DNS already pre-established in the environment. As I said, the OpenShift cluster will manage the core DNS of the cluster. This says, you know: hosts must be connected to the internet to form a cluster using this installer; each host needs a valid IP address assigned by DHCP. Right, so obviously it needs an IP.

C: It needs to have a route to cloud.redhat.com, so DNS capable of resolving that, and, crucially, all machines need to be on the same layer 2 network. Now, this is again because the cluster is self-managing all of its virtual IPs. Those virtual IPs have to be on the same subnet, same layer 2 network. So, okay, SSH public key, I'll just pull this in.
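That "same layer 2 network" requirement can be roughly sanity-checked before booting anything, by confirming every node address sits inside one machine network. A small sketch using Python's standard ipaddress module (the node addresses and subnet prefix here are made up for illustration):

```python
# Sketch: pre-flight check that all node IPs fall inside one subnet, a rough
# stand-in for the "same layer 2 network" requirement -- the virtual IPs must
# be reachable from every master. Addresses below are illustrative only.
import ipaddress

def on_same_subnet(node_ips, cidr):
    """Return True if every node IP is inside the given machine network."""
    net = ipaddress.ip_network(cidr)
    return all(ipaddress.ip_address(ip) in net for ip in node_ips)

nodes = ["10.11.173.110", "10.11.173.111", "10.11.173.112"]
print(on_same_subnet(nodes, "10.11.173.0/24"))                    # True
print(on_same_subnet(nodes + ["192.168.1.5"], "10.11.173.0/24"))  # False
```

A shared subnet does not strictly guarantee shared layer 2 adjacency (routed subnets exist), so this is a cheap first check, not a proof.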
C: All right, so this discovery ISO is now ready for me. What's...

C: Yeah, exactly. So it throws it into an S3 bucket; it's ready to go. I can copy it there. The one bit I did just gloss over accidentally is: there was an option that, if I have an HTTP proxy inside of my environment, I can set it before it generates that ISO. And so, when my machines come up, if they need to use an HTTP proxy to access cloud.redhat.com or any of the container image repositories, sure, you're good to go. Yeah.

C: And this shouldn't take too long at all, and what I'll do then is I'm going to attach this to all three of the physical machines.

C: Again, you could do this in a number of different ways. My three machines, they're just Dell blades, essentially, little FC430 nodes, and I'll just use the virtual media interface to grab those. Nice, and there we are, that's done. So I'll start off with my first machine: connect virtual media. And again, I apologize that this is likely a little small on your machine, but I'll show you exactly what I'm doing: I'm mapping a CD/DVD, I browse for the file discovery.iso, and I map that device.

C: I will say virtual CD/DVD, apply that, and I will then go to Power/Thermal and I will power on the system. And yes, I definitely want to power on the system. And what you'll see is that this machine should turn on in a second... there we go. This machine is now turning on, and it takes a little while, of course. Remember, it's still pulling this.

C: It's still a real bare metal machine, and it's going to get to the point where it's going to try and boot the virtual CD ISO, and it's going to be pulling it across the network. Now, this jump host I'm using is in the same rack, it's a VM inside of the same rack, so it shouldn't be too bad, certainly a lot quicker than pulling over the Atlantic, but it still does take a little while.
B: I remember when out-of-band management first came out and I got that; I was like, this is a game changer. I don't have to run down to the data center.

C: When that comes up, it connects out to the installer, and the Assisted Installer pane then allows you to set the rest of the configuration, and then we can actually say we'll install a cluster now, and away it goes. But we'll see that happen in a few minutes. All right.

C: So now it's loading the kernel; it's gonna do the RAM disk and the Ignition config as well. But this is unfortunately going to take a little while, just because of, again, bare metal machines. And indeed, see, this is RHEL CoreOS, and because we're pulling all three of these ISOs at the same time... sorry, the same ISO to three machines at the same time.

C: Indeed. So yeah, you're gonna see these three machines eventually boot up into the CoreOS image, and what we're gonna see, if I just quickly revert back to this window (I can close that now), what we're going to see is that, essentially, this window pane is just going to sit there waiting for these hosts to appear. So they're going to boot up, they're going to look at... it's essentially just an Ignition config, which defines exactly what the machine needs to do, you know, needs to start.
C: You know, a basic container that connects into this. It's going to start getting information about all of the machines that get provisioned. We can see all the specifications; we can make sure that they're, you know, the right nodes, that they have the right CPU, memory, disk, networking configuration, and we can start doing some additional tasks, which I will show.

B: A prerequisite here: since the hosts are actually essentially phoning home, right, they are connecting to our SaaS, cloud.redhat.com, there's an implicit "this needs..." your service needs access to the internet, right? So, correct, it actually needs to be able to reach out and communicate with cloud.redhat.com.

C: Yeah, so they need an internet connection. That doesn't have to be direct; it can be via a proxy, and that's why, when we generated the discovery ISO... you know, unfortunately, I'm sorry, I skipped over it: there's an option where you can say, well, this is my HTTP proxy. So you can provide that information and they can get out. But obviously, when these machines come up, the first thing they're going to try and do is DHCP.

C: ...to reach, you know, cloud.redhat.com and the various different image registries, where we're going to need to pull the OpenShift images from during the installation.
B: Yeah, there's a question, and I think I know the answer, but I'll let you answer it. It says: can you run this on premise, or is it Red Hat hosted? So is this hosted-only, or is there a plan to bring this into... let's say, I assume they're asking for, like, disconnected or...

C: Yeah, so right now this is only on cloud.redhat.com. Of course, like everything that we do, it's all open source, and so you can go out on GitHub and you can see how you can deploy instances of this. But in terms of a supported configuration in a sort of disconnected environment, or something you just want to deploy inside of your own infrastructure...

B: Sure, we got you. Oh, and someone actually just mentioned, yeah, you can actually run this as a container. I guess that's the open source version, as you mentioned. Yeah, okay.

C: Yeah, let's see what's going... oh, as I clicked on this, it appeared. So here you can see our first node has just come up, and what we'll see, you know, if we dive into it, you can see some additional information about it. So, yes, a Dell machine; it's an FC430 blade.

C: You know, CPU, memory, BMC, some additional information about the disks, disk sizes, the networking that we have inside of this machine, and detected IPs and subnets. And you'll see that all three of these machines will all start to come in. So this second one, node 12, should be our next one, and then, what do we have here?
B: There was also a question, and I know you're multi-threaded right now, but there was a question about whether you can take the contents of the ISO and just boot the bare metal servers off of PXE.

C: There's no reason why you couldn't, you know, deliver the ISO in that format. It only runs that ISO temporarily, just to get it up and running, and then we, you know, redeploy CoreOS onto the root disk anyway, so...

C: This system, I did actually have a little bit of trouble with the other day; it was complaining about some kind of networking issue between...

A: You want them to, yeah... it's flashing, flashing.

A: No, it's funny: my first job out of the Air Force... it was right before we moved up here, or right after we moved into our new home in Wake Forest that we bought while we were down there, I became the person that lived closest to the data center, the primary data center, so it...
C: So this one should go a little quicker, because it's going to be the only one actually pulling that ISO. But whilst that is doing that, I can revert to this, and I can show you some of the additional things we have in here. So we've been through, you know, this shows you all the information about the machine that is there. You'll also see that it's brought through a hostname. Now, those hostnames match the names of the machines you would have seen in the iDRAC.

C: Now, the only reason why that has happened is because my DHCP server allocates hostnames, so it provides that, you know, through an option. But your DHCP server, when you're using this, doesn't have to provide hostnames. If your DHCP server isn't issuing hostnames, you'll simply have a random UUID displayed here, and to identify the machine you can, of course, use lots of different things.

C: Perhaps serial numbers, BMC addresses, so you know exactly which machine is which. But you can then set a hostname in here: "edit host", and it's got a discovered hostname, and you can override it. So, if you want to make sure that this has a known hostname, something you obviously want to use, you can set it here, and then all of the nodes inside of the cluster will be able to contact this machine through the hostname you set here. No...

B: ...DNS requirements, yeah, using the mDNS.
C: So, just waiting for this guy still. And yeah, there's a few additional things you can do here as well. You can disable it, you can view host events, or you can delete it. So, viewing host events: it's not full right now, but when you are actually deploying and you're actually running through a deployment, this will be filled with really useful information about what is going on inside of that machine. So, you know, for troubleshooting purposes, you can absolutely dive into this and see exactly, you know...

B: And someone asked, and I don't know if you're gonna get to it when the third node is added, they asked early on, like: how do you tell the ISO whether something's a master, whether it's a bootstrap? But yeah.

C: You can, and I'll absolutely talk about that. So the role, which we'll cover now... so here you can see "Role": it can either be automatic, master, or worker. Now, if you've only got three machines, by default all three of them are going to be both workers and masters. But you could, you know... I could be provisioning six, seven, eight, however many nodes I would like, and I can address which ones are which. There's also some logic in here.

C: You could leave them all on automatic, and then the Assisted Installer would, based on, you know, best fit (what resources do I have, what's the specification of the machines), decide which ones would make the best masters and which ones would make the best workers, and so it will automatically assign them. But it makes no difference here; I can just leave this on automatic, because I only have three machines.
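The best-fit logic he describes isn't shown on stream, but the general idea can be sketched as a toy: keep the smallest machines that still meet the control-plane minimums as masters, and leave the larger boxes free for workloads. This is purely illustrative, not the Assisted Installer's actual algorithm, and the minimum figures are assumptions:

```python
# Toy illustration of "automatic" role assignment -- NOT the Assisted
# Installer's real algorithm. Pick three machines that meet assumed
# control-plane minimums as masters; everything else becomes a worker.
MASTER_MIN_CPUS, MASTER_MIN_RAM_GIB = 4, 16   # assumed minimums

def assign_roles(machines):
    """machines: list of (name, cpus, ram_gib) tuples. Returns {name: role}."""
    eligible = sorted(
        (m for m in machines
         if m[1] >= MASTER_MIN_CPUS and m[2] >= MASTER_MIN_RAM_GIB),
        key=lambda m: (m[1], m[2]),            # smallest eligible first...
    )
    masters = {m[0] for m in eligible[:3]}     # ...so big boxes stay workers
    return {name: ("master" if name in masters else "worker")
            for name, _, _ in machines}

fleet = [("node10", 8, 32), ("node11", 8, 32), ("node12", 8, 32),
         ("node13", 32, 128)]
print(assign_roles(fleet))
```

With exactly three machines, as in the demo, every machine ends up a master regardless of the heuristic, which matches the converged three-node layout described above.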
B: What happens to... so any time I do an install, right, there is an implicit... like, even if you're doing a compact cluster, there's an implicit fourth node, right? Yeah. What role, or where, is the bootstrap, right? Is that in the platform itself? Is that in the install...?

B: Cool, so that was actually early on, I asked, I'm like, hey, can we make the bootstrap, like, a container? Can I run it, like, on my laptop? It'd be cool, like, I just do a podman run. But I think this is a lot cooler implementation, especially if you're, like, low on resources, or if you're doing, like, a bare metal install, or if you, like, really only have three nodes. It'll pivot that... that one of the masters, sorry...

B: ...bootstraps into a master, so that's really cool. Exactly.

C: And, you know, you think about some of the configurations that, you know, customers and partners are looking to do: your minimal footprint, right, at the edge. You don't want to have that fourth node there, or you don't, you know, if you can help it, right. So just having three nodes, that's incredibly useful. All right, how's this getting on?

C: ...that additional node coming in here in just a couple of minutes, and then we should be able to proceed with the rest of the installation.
C: Yeah, all right, we now have our three masters, or our three nodes, reporting into the Assisted Installer UI. Now again, I'm just going to leave this role on automatic. I could force it to be a master; I couldn't force them to be workers, because that would mean there were no masters and the install would definitely fail. So I'm just going to leave this on automatic for now. But, as I said, there's logic built into the Assisted Installer UI to try and do that best-fit placement of the roles...

C: ...should I have more than three nodes. Okay, so now we can move on, assuming there's no additional questions on what I've shown so far. Now we have to provide the base domain. Now again, just like any install-config, you have to provide a cluster name and a base domain; the name we've already defined as "pemlab", because I wanted to somewhat match the DNS environment that it's going to be going into.

C: Now then, it's going to ask me some additional config questions: what does my network configuration look like? Now, "basic" is very, very simple: we use the out-of-the-box subnet allocations, so, you know, the cluster IPs that would be used and the various other subnets that we use inside of an OpenShift cluster.

C: Now, I only have one network configured inside of this environment, but if you had multiple, you could say, well, I want to use this network to do all of the... which is where I want to put all of my APIs and the ingress going on. So I'm just going to select this default network; it'll verify that all of those are available on all three nodes.

C: Now, this is important because, as I said earlier, the cluster is responsible for doing all of the load balancing, all of the IPs, keeping those VIPs up and running, and managing internal DNS. So it has to make sure that the network that you select, for where those IPs listen, is common across all three of your masters.
C: Now then, the last question is around: what are the IP addresses I want to use for the API and the ingress? Now again, if you want to use known IP addresses... for example, you've already communicated with IT and you've said, well, I need DNS entries for API and ingress and I just need to know what IP addresses you've set up for those... you can plug them in here, and the cluster will bring those IP addresses up. And it will just... we've received a question.

C: You can rely on your external DNS to point all the traffic to those IPs that we put in here. Nice. But what's really cool, one of the newest features of this, is I can allocate these virtual IPs by my DHCP server. So, if you're unsure of what IPs you want to use... and remember that I just want to put this into an environment where I have very minimal external requirements, and I'm going to update my /etc/hosts to point to this environment when I'm done... this is really cool.

C: You know, it's not the same for all of our customers, I completely get that, but in environments like this it's handy. But of course it has the option where you can fix them, should you want to. What I'm going to do at this stage is say "validate and save changes", because I want it to try and grab IPs. You'll see that you have this spinning wheel here, where it'll try and grab some IPs from my DHCP server in the lab.

B: If someone... and I guess you alluded to the answer being no, but if someone does have a DHCP server, but they do, like, MAC address filtering, is there any way to have them automatically assigned, or do they have to go the static route at that point?
C: I don't know, although I'm sure we could find out from some of the engineers. I don't think that we know what they are up front, but they are, if I recall, just sort of... they just use the libvirt-type MAC address...

C: ...range of... okay, yeah. So now we've got these IPs: 10.11.173.189 for the API and .187 for the ingress traffic, yeah, allocated by the DHCP server. But it's the cluster's responsibility to keep renewing this lease.

C: And again, those IPs will simply listen using keepalived. Right, last question: do I want to use the same host discovery SSH key, so the same SSH key, for the resulting cluster? Yep, I absolutely do. What you'll then see is the cluster is ready to be installed, and you'll also see that these three have turned to "known". So these machines are ready to go, so I'll go ahead and click "install cluster", and what you'll see is that in a second one of these will turn to be the bootstrap machine.

C: Take a couple of seconds and you'll see it, you know, lists a few different things about the environment that have been coded for me, because I selected the defaults: so standard cluster network, standard service network, and so on.
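For reference, the "standard cluster network / standard service network" defaults correspond to the networking stanza you would otherwise write into an install-config yourself. A sketch with the usual out-of-the-box values (the machine network line is an assumption based on the lab subnet seen in the demo, and the exact defaults can vary by release):

```yaml
# Illustrative install-config-style networking stanza; the Assisted Installer
# fills these in for you when you accept the defaults.
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14   # pod network
    hostPrefix: 23
  serviceNetwork:
  - 172.30.0.0/16         # service (ClusterIP) network
  machineNetwork:
  - cidr: 10.11.173.0/24  # assumed lab subnet, from the demo's VIPs
```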
C: We can go in and view cluster events, and this is fantastic: this will show you all the information about what's going on, as and when you want it. So you can see, you know, node 11 is writing the image to disk. So that's actually writing down the CoreOS image directly to the root disk, and if I was to go onto the environment here, node 11, you'll see in a few seconds... you'll see this machine essentially rebooting, so it's no longer reliant on that discovery...

A: I'm about to type one of my idioms.

C: Right, so I mean, there's not really too much to see. What's going on now: node 11, that's our temporary bootstrap machine. So you're going to see this one, you know, keep provisioning additional container images; they're going to come up, because, you know, this is the one that's going to help orchestrate the deployment of the OpenShift cluster, the temporary two-node OpenShift cluster, on node 10 and node 12. So we're going to see these two reboot first into their disk.

C: Yeah, sorry, it's not fun. Yeah, what's written here is not really too important. I'm essentially just saying that these machines will have their root disk provisioned first by the Assisted Installer, and they will come up as the two-node cluster. So I just wanted you to see that these two machines will reboot first. So you can see, node 10 is currently in its BIOS; node 12 has just been rebooted. So those two will come up with the OpenShift cluster first, a temporary two-node cluster.

C: You know, again, two-node clusters are not supported, but it's only temporary, because we're using this third node, in our case node 11, as you can see here, as the bootstrap machine, and then, once the OpenShift installation is complete on node 10 and node 12, it will then pivot and become that third cluster node.
B
So,
just
a
bit
of
a
technical
question
when,
when
this
node,
when
it's
a
temporary
bootstrap
is,
is
that
running
off
the
iso
or
okay,
okay
and
then
the
last
part
of
it
is
to
actually
write
the
master
configuration
to
the
disk
and
then
it'll
reboot.
So
it
doesn't
yeah,
it
doesn't
reboot
as
bootstrap
and
it
does
some
magic
for
it
to
reboot
as
master
it.
It's
all
running
off
the
live
iso.
C
C
So you can see there as well some additional steps showing exactly where it is. Node 10 knows it's in the rebooting step, which is step four of seven; node 12 will also be rebooting. Node 11, which is our bootstrap — you see "waiting for control plane," so it wants to make sure that the OpenShift control plane has been established on those two nodes before it will actually proceed any further. And again, all of the cluster events will continue to—
C
You know, surface information about exactly what's going on at all the various different stages. Node 12, which was one of the last masters, provisioned its root disk, and now it's rebooting, and now it's reached the configuring stage. So you'll see, if I go back, this node 12 is now in the fifth stage, which is configuring — it's actually bringing up that OpenShift control plane on that machine.
C
So, whilst we're waiting for some of those things: you can abort installations at this stage if things go wrong, and there are options for retrying the installation — it's actually really powerful. There's also the ability to download the installation logs. Mind you, we'd be downloading them halfway through, so they're not really a good indication of what's going on — and in fact you've only got two of the machines, which will be our two masters, so far.
B
C
Yeah, so this is just output from one of my master machines. You'll see, for example, it's all about the root disks: writing out the CoreOS metal image — downloading it from, essentially, OpenShift CI — writing that out to the file system, and then essentially rebooting that machine. So we downloaded these logs a little bit early in the process, but you can continue to download these files and get more insight into what's going on — or into why, for example, the install failed.
C
C
You know — it can't retrieve, or can't access, the container registries, because of perhaps a DNS problem or something like that, or did it just time out? There are lots of different reasons why this can fail, and having access to those install logs is incredibly, incredibly useful, especially being able to download them directly from this UI.
B
Yeah, yeah. Well, even though we endeavored to make things as automated and as self-contained as possible, it'll still inevitably fail somewhere, right, because we can't account for every environmental issue. One of the issues that people actually run into — and I say this as someone who wrote the helper node, right, so a lot of people know about that—
B
I actually get a lot of GitHub issues, and it turns out — and actually Chris, who had to step away for a second, had a similar issue — that the disks were just too slow, right. So etcd—
B
A
Yeah — while you were moving, it was like, dude, what is going on? I had the three master nodes just — like, one would reboot every once in a while. I was like, I have never seen this before; what could possibly be going on? And he was like, what kind of disks are these? And I'm like — no, they're HDDs, 10Ks, yeah.
A
A
B
E
No, I had a previous commitment that somehow I missed on my calendar. So — not that I had any worry that you all actually needed me for any reason, but—
E
So I did want to comment on the etcd performance thing. I don't know if you all noticed, but I sent a long diatribe to our internal OpenShift SME mailing list about that the other day. etcd, and the way that it functions, is extremely latency-sensitive — and not just for disks, but also for the network — and the lower that latency is, the better an experience you'll have, and the higher it goes, the worse things get.
E
You really want to get that down. I think our unofficial recommendations are, what, 10 milliseconds of disk latency and, I think, five milliseconds of network latency.
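As a rough illustration of what such a disk-latency check might look like — this is a sketch, not an official Red Hat or etcd tool, and the 10 ms figure is just the informal threshold mentioned above — you could measure p99 fsync latency like this:

```python
import os
import statistics
import tempfile
import time

def p99_fsync_latency_ms(path, samples=100):
    """Write-and-fsync repeatedly, return the p99 latency in milliseconds."""
    block = b"x" * 4096
    times = []
    fd = os.open(path, os.O_WRONLY | os.O_CREAT, 0o600)
    try:
        for _ in range(samples):
            os.write(fd, block)
            t0 = time.perf_counter()
            os.fsync(fd)  # etcd's WAL fsyncs on every commit, so this is the hot path
            times.append((time.perf_counter() - t0) * 1000)
    finally:
        os.close(fd)
    return statistics.quantiles(times, n=100)[98]  # 99th percentile

with tempfile.NamedTemporaryFile(delete=False) as f:
    target = f.name
try:
    p99 = p99_fsync_latency_ms(target)
    # ~10 ms is the informal ceiling discussed here for etcd's disk latency
    print(f"p99 fsync latency: {p99:.2f} ms "
          f"({'ok' if p99 < 10.0 else 'likely too slow for etcd'})")
finally:
    os.unlink(target)
```

In practice you'd run something like fio against the actual etcd data disk on each master candidate; this only shows the shape of the measurement.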
A
E
Yeah, the assisted installer — it's nice. I'm excited about it, and I'm excited to see it in action and really getting the tires kicked, so to speak.
A
C
Yeah, so essentially, when the ISO boots up — in theory, while we're doing the discovery, there's no reason why we couldn't actually run a disk benchmark or something like that, just to check, and put a warning here. I mean, that sounds like a great RFE. So I could see — yeah, I could go into my node and say: okay, this machine—
C
B
A
Right — and if they're not—
B
Yeah — someone says faster disks for masters — and I'll actually reiterate what Andrew said: also network latency, right, because the consensus algorithm has to run over the network. So—
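To make the network half of that concrete — purely an illustrative sketch over loopback, not a real cross-node measurement — the same percentile idea applies to round-trip time between peers:

```python
import socket
import statistics
import threading
import time

# Tiny TCP echo server on loopback; in reality you'd measure between the
# actual control-plane nodes, where the etcd consensus traffic flows.
srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)
port = srv.getsockname()[1]

def echo():
    conn, _ = srv.accept()
    with conn:
        while True:
            data = conn.recv(64)
            if not data:
                break
            conn.sendall(data)

threading.Thread(target=echo, daemon=True).start()

samples = []
with socket.create_connection(("127.0.0.1", port)) as cli:
    for _ in range(200):
        t0 = time.perf_counter()
        cli.sendall(b"ping")
        cli.recv(64)  # wait for the echo before timing the next sample
        samples.append((time.perf_counter() - t0) * 1000)  # ms

p99 = statistics.quantiles(samples, n=100)[98]
# ~5 ms between control-plane nodes is the informal figure mentioned above
print(f"p99 round-trip: {p99:.3f} ms "
      f"({'ok' if p99 < 5.0 else 'high for etcd peers'})")
```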
E
B
D
E
Or why it's highly, highly complex, right? Well—
A
You know, independent workloads that happen in each cloud, kind of thing, right. But those independent workloads could, in theory, be the same, and you can use BGP anycast and make it look like you have two different data centers at the same time. So that's entirely possible — multi-cloud done that way is the multi-cloud that I think of, not necessarily "I'm using multiple clouds at the same time just because I can," kind of thing.
B
Yeah, and not to go too far off topic — in the end, multi-cloud is really dependent on the application, right. People are trying to solve it with infrastructure, and you can, but it's all in the app. So if your app can't run in multiple clouds, then it doesn't matter how you set up your underlying infrastructure.
B
C
Yeah, I agree, exactly. All right — sorry to interject — so both of my master nodes are now installed, right. Those two are good to go. This one is still waiting for everything to properly come up on those two machines, and then it'll go ahead and proceed, and it'll then pivot and become that third master node. You see "waiting for control plane"; these are "installed," so these have completed their installation successfully.
C
Okay. Now, whilst that third machine actually goes ahead — and we might actually see this rebooting... no, it's not — I want to show you "Download kubeconfig." Directly from the UI we can say "I want to download the kubeconfig," and I'll bring this up, and you can see it there now. I can't contact this yet, primarily because I haven't set my DNS to point at this environment. I could go in and edit the DNS records inside of my lab—
C
—the corporate DNS records — to point to the IPs that it allocated me inside of this cluster, but I'm going to wait until the OpenShift cluster is fully up and running to do that. But it's very convenient to be able to grab this kubeconfig directly from the UI. Nice.
C
And what I will do, whilst we're waiting for that third machine to come online, is run export KUBECONFIG.
D
D
C
E
So — Reese or Christian... Reese, you're typing, so I'll poke Christian. There's a question from C Santana: how does file system partition configuration work — for example, for infrastructure nodes that need some special disk setup for logs or registry images?
A
B
Well, yeah — the partitioning happens automatically, I know. I don't know what that layout looks like; I would have to just dig in and see. I know you can, as Chris alluded to, add additional disks as a day-two step for where the containers live. I don't think it's configurable.
B
E
D
E
Very much to that point, right: by default, we don't configure it at all. It assumes it gets the entire installation disk — which can be controlled when you install CoreOS; you can specify, I think it's "inst disk" or something like that, as sda, sdb, whatever you want to use for that — but it will assume that it can occupy the entire disk, and there is no partitioning. There is no—
E
We used to recommend things like graph storage — for the Docker daemon — being on its own partition or its own disk, et cetera. The official recommendation now is that none of that is a recommendation anymore. Implementing that at install time can be done, but it's not straightforward. And Reese, I don't know if any of that is incorporated into the assisted install or not — I don't think it is.
C
E
A
B
A
Yeah — thanks, Twitch. My bad.
A
"Don't know — as a warning to others." Yes, yes. Christian posted "don't know why." I think it was Universe that mentioned it — it put your message on pause, and I was like, wait, why? It doesn't tell you why; it just says "AutoMod," that's it. Is it the word "christian"? I don't know.
A
C
All right, folks. So where we are now: this third machine, our node 11, which was our bootstrap machine, has now since rebooted. This machine has a bunch of NICs on it, so it's just going to go through and try to DHCP on them all by default. It's eventually going to give up — which it just has — and you will see this machine then bring up all the necessary control plane services.
C
You know — etcd, all the various different Kubernetes bits and pieces — to be that full third master node. And you'll see that now we're at 95% complete. This is in the configuring stage, which we previously saw on the two masters, and hopefully in just a few minutes this will go to 100% complete, and it's going to give us some additional options just at the top, where we can go ahead and actually connect into our new bare metal cluster that's been deployed with the assisted installer.
B
C
B
Yeah, yeah — there's like a little triangle next to — oh yeah, a triangle.
C
Yeah, this one is up, and it's just going to take a few more minutes to get exactly where we need to be. What I can probably show is the latest installation logs — let's see, 31 kilobytes; that's going to be our bootstrap machine — and you can see you've got the bootkube logs.
C
The agent logs — the agent is a service that we bring up on each node when it boots up as part of the assisted installer, and this will give us logs for pretty much everything that it's trying to do: you know, connect out to cloud.redhat.com, get everything set up. Bootkube — well, we all know what bootkube is.
C
That's the service that establishes, or helps us bring up, that cluster in the first place. And then, of course, we have the installer logs — these are the OpenShift installer logs, and we can see everything that's going on. This is a pretty long file; we're not going to go through all the text in here, but it's obviously really helpful for troubleshooting anything that goes on. There's plenty of information available just at the click of a button from the UI.
E
C
As you can see, these are all the pods that are running. I mean, this is a machine that I have provisioned — this is one of the ones that was installed by this — so I have direct connectivity, and obviously it took my SSH key, because it didn't ask me for a password; that got injected during the provisioning. So you can still go in there.
C
If you want to, you can view the logs — you could watch bootkube running if the bootstrap machine were still up and running, but we're kind of past that stage.
B
Yeah, someone mentioned — and it was already answered — that this is RHEL CoreOS only, right? It's not RHEL 7. Correct, correct.
B
By the way — RHEL 7, 7.9, right.
A
And then I remember the big sea change that seven was, and now, with eight — like, seven's retiring, and it's like—
C
B
A
B
C
E
And you might have already covered this — my fault for being late — but does this leverage an external load balancer, or no?
E
C
Yes, I believe we did, yeah. All right — so the installation has now finished; we're kind of good to go. We're going to have a problem here, because again, I randomly allocated my API and my ingress IPs, so they're not going to have a corresponding DNS record. If I click on this, it's simply not going to work. But what's really cool is, if I say "I'm not able to access the web console," it's going to give me all of the entries that I can throw directly into /etc/hosts.
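For reference, those entries look roughly like this — the addresses here are placeholders from the documentation range, and the cluster name and base domain are made up, so use whatever the wizard prints for your own cluster (in the demo, the ingress ended in .187 and the API in .189):

```
# /etc/hosts additions for a hypothetical cluster "demo" under example.com
192.0.2.189  api.demo.example.com
192.0.2.187  oauth-openshift.apps.demo.example.com
192.0.2.187  console-openshift-console.apps.demo.example.com
```

This is only a stopgap for a lab; in any real environment you'd create the corresponding records in DNS instead.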
C
So — there's a reason why I use nano. When I was in my teens, I guess, I got really into Gentoo Linux, and all of the documentation for Gentoo uses nano, so I just started to use it. And I know every single person I've ever run into completely slates me for using nano, but it's just what you're used to — you're used to all the keyboard shortcuts for doing everything — so I'm just going to stick with it, I think.
C
So all I've done there is paste those in. You can see .187, which is the ingress — and I've got some fixed entries in there — and the API on .189, which corresponds to the IP addresses that got allocated by my DHCP server. So I can save that now, and right away I should be able to do oc get nodes — come on, Reese — and those three machines are there, and you see they are all both master and worker. And you'll see—
C
We have a discrepancy here: these two machines have been online for 16 and 19 minutes respectively, and the third one only two minutes 44, because that was the third, temporary bootstrap machine. All right — oc version: yeah, a 4.6 nightly, as expected; my client is 4.5, but that's irrelevant. So now I should be able to launch the OpenShift console. "Connection is not private" — to be expected.
C
It's going to do that a second time as it goes to the OAuth endpoint, but this is a good sign. Let me make that a little bit bigger and get rid of that, and that. So by default it's kubeadmin, and the password I can grab directly from here — I can copy it, paste it, and away I go. So you can see we're up and running: OpenShift 4.6, provider is bare metal. Again, because it's following the bare metal IPI path, it has already deployed all of the pieces.
C
You know — it's got keepalived, it's got CoreDNS, the mDNS, everything all ready to go, up and running for me. You'll see that some of these machines — some of the pods and stuff — are still coming up. It's got the bare metal hosts inventory, so I can dive straight into this now. The important thing to note here is that it does deploy all of the Metal³ (metal3) pieces for full bare metal management, but Metal³ is currently disabled in this release.
C
The main reason for that is that the Metal³ configuration today relies on a second, dedicated provisioning network. With the assisted installer we try to make it very, very simple and easy to use and only require a single network. So until we catch up — I think around about the 4.7 time frame is when we're going to drop the requirement for that second provisioning network — we can't enable Metal³ right out of the box.
C
So what I'll be able to do in here is go and edit this bare metal configuration, set the BMC — the out-of-band management — configuration, and OpenShift will be able to manage the underlying infrastructure, just as it does on a full bare metal IPI configuration.
C
But yeah — we're there, and yeah.
A
B
E
B
A
C
Update management, yeah — because Metal³ uses OpenStack Ironic, and OpenStack Ironic does have some capabilities to do, you know, out-of-band configuration.
C
D
C
Yeah, awesome — and that's really, technically, super—
B
—simple: install OpenShift on OpenShift, right, using OpenShift Virtualization, because they're just virtual machines... sort of. Well, I mean, if you use the agnostic installer—
E
You can deploy OpenShift to OpenShift Virtualization virtual machines. So why am I talking about it the way that I'm talking about it?
E
Clearly understand the expectations, right. Am I creating child clusters where the parent physical cluster is, for example, creating routes that route to applications in the child clusters — excuse me, sorry for bumping the microphone — or am I creating virtual machines, you know, OpenShift instances, that are connected directly to an external network, just like a traditional deployment? So it's not as straightforward as just "oh yeah, deploy OpenShift right into OpenShift." It's—
E
E
So — but yes, there is nothing that technically prevents you, especially if you have configured a direct L2 — layer 2 — network connection for those virtual machines. Sure, deploy it into a virtual machine, connect it out just like you would with a traditional hypervisor — not a Kubernetes-based hypervisor — and connect to it as a standard OpenShift cluster. But if you expect any kind of integration between parent and child, that's the complex part, or the missing part.
B
Yeah — the way Andrew answered this is indicative of the questions we get asked, because sometimes we'll get asked questions and go, "yeah, that's — pause — technically possible," and then people take it as "oh, I can do it and expect the same performance," or "I can do it and—"
B
But yeah — or "we recommend doing this," or "as long as you keep this in mind, you can do it." And Andrew talks like someone who's been burned on that a few times, just like, "well—"
E
It's funny, right — Chris, you were in the military, right — there are kind of two different types of people that you'll encounter sometimes: those for whom, if it's not explicitly permitted, it's denied...
E
B
C
A
C
You could have lots of different clusters in different places, and, you know, I've deployed this on bare metal — there's no reason why you couldn't follow the same path if you're deploying on top of your virtualization cluster. It doesn't really matter: just attach the ISO and away you go. You can view all of the information here; you can delete the cluster if you want — though deleting it here just deletes it from the assisted installer window.
C
It doesn't actually decommission the actual cluster itself. But you can imagine you've got a list in here; you can go into them, and you can launch the OpenShift console directly from here. You've got everything you need — your kubeconfig, passwords — it's all stored by the assisted installer configuration. So it's incredibly convenient.
B
Someone asked — and this is way off topic, but—
B
It says: what's the estimated date for the 4.7 release? Right — so, I think—
E
E
Yeah, so there you go — 4.5 was June... no, July. July.
D
E
A
E
—do now. So, Reese, are you at liberty to discuss anything roadmap-related, or—
C
No, I would say not at this stage. We've just made this available — you know, what I've just done, anyone can do.
C
Yeah — just because I have an employee account doesn't mean you shouldn't see that same option inside of your cloud.redhat.com interface. But this is the first time that we've made this available. It's obviously been a work in progress — we've been working on this for a number of months. I've been doing some internal demos to show it, and the rate at which this has improved—
C
You know, new features and RFEs that have been brought in from folks — in my team and in lots of different areas — have been implemented, and it's really, really strong. It has a strong roadmap. We talked about the possibility of having this available inside of your own data center; that's something that we're really looking at as well — that PXE boot.
A
C
Yeah, yeah — there could be things like that, but at this stage, no, I am not really at liberty to discuss the roadmap for it.
E
That's fair. I will say that I saw the first video that you created of this, and saying that there has been tremendous change, improvement, and expansion of capabilities is a bit of an understatement. I don't know the team that's behind this very well — I know their product managers a little bit, because I used to work with them on the virtualization side — but they're doing some incredible work. Not that the rest of the—
C
E
—isn't, but really, really, really impressive. So, there's a question in the chat, and I think it will probably get a somewhat subjective answer, but I'll throw you under the bus first, Christian: in general, for a fixed amount of resources, would you recommend fewer large virtual machines or more smaller virtual machines?
E
B
Yeah — so the first thing that you need to keep in mind is that there is a ceiling in terms of how many pods per node Kubernetes — and, by extension, OpenShift — supports. So it depends how dense you think your workload is going to be, right. If you're going to run containers that are five gigs each, then yeah — you won't hit that ceiling.
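That trade-off can be sketched numerically. A back-of-the-envelope comparison — assuming OpenShift's default maxPods of 250 per node (tunable via a KubeletConfig) and made-up pod sizes — shows how carving one resource pool into more, smaller nodes changes which limit binds first:

```python
def max_pods(node_count, total_cpu_m, total_mem_mib,
             pod_cpu_m, pod_mem_mib, max_pods_per_node=250):
    """How many pods a fixed pool supports when split into node_count nodes."""
    by_cpu = (total_cpu_m // node_count) // pod_cpu_m
    by_mem = (total_mem_mib // node_count) // pod_mem_mib
    # Whichever ceiling binds first: CPU, memory, or the kubelet pod limit
    return min(by_cpu, by_mem, max_pods_per_node) * node_count

# A hypothetical pool: 96 cores / 384 GiB, pods requesting 100m CPU / 256 MiB
pool = dict(total_cpu_m=96_000, total_mem_mib=384 * 1024)
print(max_pods(3, **pool, pod_cpu_m=100, pod_mem_mib=256))   # 750: three big nodes hit the 250-pod cap
print(max_pods(12, **pool, pod_cpu_m=100, pod_mem_mib=256))  # 960: smaller nodes are CPU-bound instead
```

With dense, small pods, a few large nodes waste capacity against the per-node pod cap; with five-gig containers, as noted above, you'd hit the resource ceiling long before the pod one.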
B
Yeah, that's right — that's the first step: let's just put everything in one giant container. Which I wouldn't recommend, but people do that; we can't change that. I'm a fan of actually leveraging Kubernetes and OpenShift just how they were designed: they were designed to scale out, not necessarily to scale up.
B
I've always been a fan of having a larger number of smaller VMs and letting Kubernetes do the scheduling — do what it does best in scheduling those workloads. But again, going back to it: it all depends on your workload, right — or which ceiling you're going to hit. Are you going to hit the capacity ceiling first, or the total number of supportable containers? And that's also networking, right, because there's just a finite number of—
E
Yeah, I'll add a couple of additional considerations. One: it's not just CPU and RAM on the node and on the pods themselves — it's also storage, right. You mentioned the size of the images, but there's also how much of that local storage they're using.
E
So, for example, if you instantiate a container image and it's writing gigabytes of log data every minute, that's a lot of local storage, and that local storage needs to be sized appropriately — both gigabytes and IOPS and latency — for that workload. That could affect how dense your pods can be, which then affects the size of those nodes and the quantity that you will need. Same thing with network, right. If I'm running — I'll pick on CNFs, right, containerized network functions—
E
If I've got a CNF that's consuming an SR-IOV or DPDK device and eating tens of gigabits of throughput, I might not be able to pack those very densely. Even just a database node — or a database pod: if my database pod has massive amounts of traffic, or a web server pod, or a Redis pod, or whatever it may be, that's something to take into account — the amount of traffic that the node is capable of supporting. And I—
D
E
—virtual deployments: that's important, because your virtualization deployments are consolidating those nodes as well, right. I may have 30 nodes in OpenShift, but if they're on four physical nodes in my hypervisor, I still have to be aware of those things. And the last thing — before I talk over you, Chris, so I'll hand it back to you — the last thing is failure domains, right. Understanding the application architecture and how it handles failure is important.
E
So if your application is built assuming that it's going to fan out horizontally and have, you know, 40 instances to be able to tolerate node failure or something like that, and then you've only got three nodes — where suddenly it loses 33 percent of its capacity because one node failed — that's something to take into account. So be aware of your application, or at least make sure the application is aware of your infrastructure architecture, so that they can make decisions as well.
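The failure-domain arithmetic is simple but worth spelling out (a small sketch; the node counts are arbitrary):

```python
def failure_impact(nodes, failed=1):
    """Fraction of capacity lost, and the headroom survivors need to absorb it."""
    lost = failed / nodes
    headroom = failed / (nodes - failed)  # displaced load spread over survivors
    return lost, headroom

for n in (3, 10, 40):
    lost, headroom = failure_impact(n)
    print(f"{n:>2} nodes: one failure loses {lost:.0%}; "
          f"each survivor needs {headroom:.0%} spare capacity")
```

With three nodes, one failure takes a third of the capacity and each survivor must absorb 50% more load — hence the mismatch when an app built to fan out across 40 instances lands on three nodes.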
A
It always comes back to: you have to know your application stack. You have to know what's going on inside your data center. We can expose all the metrics you want, but if you don't understand that the message queue in your data center is the most important thing — the most I/O-intensive, network-intensive thing in your entire stack — and treat it accordingly, you're going to hit a bump in the road real quick, right. You've got to know your infrastructure.
E
Yeah — and I think it goes both ways, right: the infrastructure team, or the platform team, needs to understand the application, and vice versa. You know, I used to give a talk about — strangely enough — vulnerability in IT, right, and Brené—
E
—Brown, who gives a phenomenal TED talk about vulnerability in relationships — and of course it's often in the context of spouses, or long-term partners, et cetera — but the same thing applies, oftentimes, in work relationships, and in particular between teams like developers and infrastructure. The business is best served by the infrastructure team being honest and open with the developer team: hey, the infrastructure can't do this, or it doesn't do this well — can you make up for that?
E
Can you compensate at the application layer? And vice versa: hey, it's really hard, really complex, really expensive to do this at the application layer — can the infrastructure do it? And it's scary, right, because oftentimes those teams are measured by very different things. Developers are measured by the number of features added, the number of bugs fixed, et cetera; infrastructure is measured, usually, by efficiency.
E
The minimum amount of resources to keep things running and nobody complaining. So being open and honest about that is — it's scary.
A
You've got to work together, though, right. I have a talk about this — heaven is not a cloud — like, it's important to bring all the people to the table, and I mention in that talk specifically finance, because they know how to talk AWS better than you do, right. When it comes down to brass tacks and dollar bills, they own that piece. You've got to bring in more than just dev and ops sometimes, too — that's a lesson I've learned.
E
I see JP Dave talking about, you know, the ramming speed in there — which reminds me, I saw a post about CD Projekt Red, whose CEO swore up and down that they would never have crunch for releases.
E
They just announced that they're doing crunch. Well — I'm not, you know... we have a relationship with engineering, and I'm aware of what's going on there. I'm not aware of, like, a—
E
Very much to the answer that you provided in chat: our teams work hard, they do their jobs really well, and it makes it when it makes it. Yeah. And—
B
B
B
"I'm going to buy this off the shelf" — and, you know, he said it kind of abrasively, but I mean, it's true: you can't buy this off the shelf, throw it at a problem, and have the problem magically go away, right. So you do have to — and judging by the questions that are coming through, it makes sense, right: how do I size this?
A
A
How do we fully utilize it? Yeah, yeah. So, just to — I'm going to post... I can't post screenshots in the stream, which sucks, but I'll post it on Twitter. That's fine, how about that. I am in the process of installing, using the assisted installer, on my R820 across the house.
D
A
E
I know we're coming up on the end of our time together. So — have you talked about next week?
E
So next week, the admin hour is not happening. What? I know, I know.
A
You know how hard it is? No — do you understand how hard it was, just morning-routine-wise, not to have Langdon's show last week? Like, it broke—
E
Only once — well, once so far. So the good news — something that I think is very relevant for the admin hour audience — is that we are having the first of the OpenShift product management team presentations: what's new in OpenShift 4.6.
A
Yeah — so this is actually the same thing that goes out to our sales and our solutions architects and our product managers, our TMMs like me, our PMMs — the whole nine yards. You're going to get it at the same time. This month, with 4.6, everybody learns what's new at the same time — this is an internal meeting that we're now opening up to the world.
B
E
So, importantly, for Red Hat folks who will still be joining on BlueJeans: you can still ask questions, you can still do all that other stuff. The PM team and some others will be there to help answer internal questions, and there will be a whole team of folks here on the live stream doing the same thing — we'll be answering questions, and we'll be transferring questions back and forth.
A
It will not be broadcast, so you, as the viewer, will not see the internal questions — that's the only kind of firewall we're putting up in between. I just want to give you the presentation; I don't necessarily want you diving into all the internal questions and answers. I want you to ask your own questions and get answers.
A
There's going to be some bespoke questions — like "a specific customer needs this kind of configuration; is it satisfied by this version?" — that kind of thing. There are going to be questions like that that aren't really germane to you, and I don't want to show you that stuff. What I want to show you is the actual presentation, and I want you to be able to have your questions answered. There will be people here on OpenShift TV from the PM team to help us all.
A
A
All right, folks — thank you all for joining. Coming up immediately after this — sorry, that was next week: next week we'll have the "What's New in 4.6" show, starting at the same time this show started today, in the 10 a.m. Eastern, 1400 UTC slot. But coming up next is the OpenShift Commons briefing: "Modernize Your Application Development for OpenShift" with Joget.
E
B
To promote my own thing — I am starting a new series, a bi-weekly series called the GitOps Happy Hour, which will premiere August — sorry — October 8th.
B
B
It'll premiere October 8th at 3 p.m. Eastern. Basically every two weeks, in a kind of similar format, we'll be talking GitOps, and we'll be bringing in a special guest here and there.
B
B
B
A
A
Yeah, so this will be fun. Join us here in a few minutes for OpenShift Commons. Thank you for joining us today for this wonderful, extended OpenShift admin office hours, and we'll be back next week with the "What's New" briefing in this time slot — it'll be fun, we'll be there. And stay tuned; when in doubt, check out the—
A
If you want to try OpenShift, do what we did today: head on over to that try link that I just dropped in chat, and subscribe to our calendar so that you know what shows are coming. That's all we've got for this show, and I'll see you here in a few minutes for those tuning in to OpenShift Commons. Thanks, y'all.
E
Special thanks to Reese and Christian for filling in while I was delayed.