Description
OKD Live Deployment Marathon Kick-off
OKD4 on Cheapest AWS cluster possible and Full AWS Deployment
with OKD-WG Co-chairs: Christian Glombek and Diane Mueller (Red Hat)
Deploying Eclipse Che on OKD4 - Charro Gruver (Red Hat)
August 17, 2020
A: Welcome to the start of a wonderful OKD deployment marathon. We've got a number of folks from OKD, which is the community distribution of Red Hat's OpenShift, and we are going to try today to demo on as many platforms as we can coerce our members into doing, all day long. If you don't know OKD, you can pop over to the okd.io website and read all about it.

A: We've been working diligently on it and have recently done our GA release for OKD4. It is out there in the wild, and that's what we're going to be demoing today. So to kick us off, we decided to start with AWS: the cheapest AWS cluster possible, and a full AWS deployment to follow on that. Christian Glombek, who is the co-chair of the working group, is going to set us up and show off the first bits here, and we're going to drive through today.

A: Putting a surgeon general's warning on the day: it will be fluid, because these demos are live, and we may sneak in a few more in between things if people run short. We're going to try and do the Q&A while the clusters are booting up, because there is a lag and there's really not much to see on the screen.

A: So if you're watching, you can ask questions in the chat wherever you are, whether you're in one of the live streams or in BlueJeans itself. Without any further ado, I'm going to let Christian take over, share his screen, and walk us through the cheapest AWS deployment we could figure out. So I'm going to let you share your screen.
B: Okay, so in order to get a minimal install going, we really just have to change a few parts of the install config. Usually what you do is openshift-install create install-config. I've already prepared that, so I'm not going to run it again. This is where you're going to be asked for your AWS account credentials and where you want to install; this is the installer-provisioned infrastructure path.

B: Instead, I'm just going to show you what the install-config.yaml looks like after it's created with that command; you then go ahead and edit it a little bit. So what we're doing here is: the number of worker replicas is scaled down to zero, and the master replicas are just one now. I've also changed the type of the AWS node to m5.xlarge.

B: Just to be sure we get that: we still need quite a bit of RAM for the install, about 12 gigabytes while installing, and then when it's running it's going to be around six. You could additionally put in a smaller instance type for the worker nodes if you want to scale up workers later, but we're just going to install the single-master cluster right now. That's kind of it. So after openshift-install create install-config, we're going to do openshift-install create manifests.
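For reference, the install-config.yaml edits described above come down to something like this sketch; the base domain, cluster name and region are illustrative placeholders, not the values used in the demo:

    apiVersion: v1
    baseDomain: example.com          # illustrative base domain
    metadata:
      name: okd-cheap                # illustrative cluster name
    controlPlane:
      name: master
      replicas: 1                    # single control-plane node
      platform:
        aws:
          type: m5.xlarge            # enough RAM for the install to complete
    compute:
    - name: worker
      replicas: 0                    # no worker nodes provisioned by the installer
      platform: {}
    platform:
      aws:
        region: us-east-1            # illustrative region
    pullSecret: '...'
    sshKey: '...'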
B: And after that we can go ahead. Actually, if you don't want to edit anything and just take the defaults (it's not going to be the smallest, cheapest cluster), you would just run the create cluster command right off the bat. Because we wanted to edit the install config, I had to do the steps in sequence here, but create cluster will also create the install config and the manifests if none are present. So: openshift-install create cluster. We can start now.
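The sequence of installer commands being described, assuming an assets directory called okd-cheap (the directory name is just an example), looks like this:

    openshift-install create install-config --dir=okd-cheap   # answer the interactive prompts
    # edit okd-cheap/install-config.yaml (replicas, instance type) as described above
    openshift-install create manifests --dir=okd-cheap
    openshift-install create cluster --dir=okd-cheap           # or run this alone to take all the defaults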
B: So really, all I've done is scale down the replicas: the control plane nodes to one, and no worker nodes at all.

B: Usually you also have to change the label for the ingress controller; normally it runs on the infra nodes, which get scheduled on a worker, so I should have changed that to master. Anyway, let's see where we get with this. That's an additional step for the ingress controller, changing the label to point at the masters as well. And now we have to wait a little bit; the install is running.
A: Someone was asking what the O stands for in OKD, and I think that's interesting: the O stands for nothing. When we shifted to Kubernetes from the older version of OpenShift, which was a Ruby on Rails and MongoDB platform as a service, and moved it over to being on Kubernetes, we had to rename the project to be more in line with other Kubernetes distributions, and legal, marketing and trademark issues made us use a three-letter acronym, much like EKS, or OKE, or OCP, and the other acronyms that are out there for Kubernetes distributions. So the O actually doesn't stand for anything, not even Origin. I like to joke that it stands for "OK, Diane," because everybody agrees with me on that, but it took a lot to figure out that acronym, and that's where we have our OKD panda and everything else. So, a technical question for you: someone is asking, do we need to generate Ignition files for the install?
B: I'm going to take this one. This is what the installer does for you. In the second step I just ran, openshift-install create manifests, it generates all the Ignition files within MachineConfig Kubernetes objects. But if you run it step by step, you still have the opportunity to change those. If you do want to provide your own custom Ignition, you can edit, or add to, the generated Ignition config that is output by the create manifests step.
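If you want to inspect or customize the rendered Ignition configs yourself, there is an explicit installer target for that as well; a small sketch, with the same illustrative directory name:

    openshift-install create ignition-configs --dir=okd-cheap
    # writes bootstrap.ign, master.ign and worker.ign into the assets directory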
B: There is also the question: curious if the install could be done with spot instances. Yes, I think you can. I didn't install any worker nodes right now, but if I had changed the type to a spot instance type for the worker nodes, I could later scale up workers that would be spot instances, so that is possible as well. I'm not sure how that would work for a master node, to be honest, but it's definitely possible with workers.
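As a rough sketch of what is described here (untested in this demo, and the MachineSet name is a placeholder), a worker MachineSet on AWS can be switched to Spot capacity by adding a spotMarketOptions block to its provider spec:

    # request Spot instances for the machines created by this worker MachineSet
    oc -n openshift-machine-api patch machineset <worker-machineset-name> --type merge \
      -p '{"spec":{"template":{"spec":{"providerSpec":{"value":{"spotMarketOptions":{}}}}}}}'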
A: All right, and there was one other one, and this is the question that everybody always asks: are those steps documented anywhere yet?

B: We have a draft document right now, and there's a bit of documentation floating around as well. This was originally done by Vadim Rutkovsky, who can't be here today, so together with him I will definitely submit the document to the OKD repository, so we'll have that properly documented as well. Maybe just to note: this is really just for testing purposes. It's not an HA cluster.

B: You won't be able to upgrade easily, because the etcd quorum will not be kept. So yeah, we'll still document it, but it's just a testing setup for when you really want to do a cheap test on AWS; it's not supported for any real workloads.
A: It is the OpenShift Kubernetes Engine; I think that's what that one is supposed to stand for, and it is another variation of the product that is simply the Kubernetes pieces of it that you can get support for. I will look for that on the corporate website.

A: I have not seen one and have not prepared one on that side yet, or been asked to before, so I will see if I can find one that's at least OCP versus OKE and share that with the group shortly. I think there is an OKE versus OCP one, but I'm not sure about OKD, and there shouldn't be too much difference between OCP and OKD, except for... and maybe you want to talk a little bit about Fedora CoreOS.
B: Yeah, sure, I can do that. That's actually the main difference. I'm not sure what the difference between OCP and OKE is in particular, but OCP and OKD are essentially the same cluster code. The only real difference we have is that we use Fedora CoreOS instead of RHEL CoreOS as the base operating system. We manage the operating system updates the same way, through the cluster, so there is one lifecycle for the cluster and the base operating system.

B: So if you update OKD or OCP, you'll also get an OS update: the nodes will automatically pull down the new image, lay it onto disk, and reboot into it, obviously in a safe fashion, one after the other. If there are any blockers, say one node doesn't come up or something, the update will fail. But yeah, we have this CoreOS technology, which today is essentially a fusion of rpm-ostree and Ignition; rpm-ostree gives you an image-based operating system and is essentially the creation tool for it.
B: I think Colin Walters, the creator of rpm-ostree, likes to say it's like Git for OSes: you really have a commit hash that represents your on-disk state, and then you can upgrade from one to the next atomically. We do that through the cluster, and in the case of OKD we do it on the base of Fedora CoreOS, which you can also use standalone; Fedora CoreOS ships the Docker and Podman engines.
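The "Git for OSes" model can be poked at directly on any Fedora CoreOS node with the standard rpm-ostree commands:

    rpm-ostree status    # shows the booted and any staged deployments, each identified by a version/commit
    rpm-ostree upgrade   # stages a new tree; the node switches to it atomically on the next reboot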
B: So if you just want to run a single container, or single-node workloads in containers, then that's the right operating system for you; it's really geared towards running containerized workloads. And even Fedora CoreOS and RHEL CoreOS aren't that different. It's just the package sets they use: one is the Fedora package set and the other one is the RHEL package set, so it's mostly the same packages, just different versions and a different kernel.

A: And in about two hours, I think at 1500 UTC, our third demoer will be Dusty Mabe, who works on Fedora CoreOS and helps with the community management for it, and he's going to demo OKD on DigitalOcean. So if you have more questions about Fedora CoreOS, we can pepper Dusty at 1500 UTC, in two hours' time. If you want a deeper dive, come back and join us for that.
B: It is still running. The bootstrap API is up by now, and now I can share my screen for a second here; that helps, just to let you see what's going on.

A: And Ashraf, you were asking about Azure: Azure is our second-to-last presentation today, so we will have a demo of deploying on Azure. The only one that we had to cancel was GCP, and that was only because Vadim wasn't coming, and it's not much different than the AWS one.

B: Okay, perfect. So this is what you're going to see when you run the create cluster command; it's just going to take a while. First we have to wait for the bootstrap node, for the Kubernetes API on the bootstrap node, to come up, and that has happened now, and the bootstrap is driving the install. That may take up to 40 minutes. Usually it doesn't take that long, but yeah, we're about halfway through, I think.
A: There's one more question, and these are great questions, everybody, because after this we're going to grab all of the questions and turn them into an FAQ. So you're helping us develop our FAQ for OKD. One of the questions was: can I convert an OKD cluster to an OCP cluster?

B: So yeah, that's definitely a thing we want to do, and it should already be possible, really, if you force an upgrade to the OCP release. I've never tried that, but it should be technically possible. We want to actually test that in CI at some point to make it a good story, but yeah, it should work; nobody's tested it so far.
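For the curious, the untested "force an upgrade" path mentioned here would presumably look something like the following; the target release pull spec is a placeholder, and, as Christian says, nobody has validated this flow yet:

    oc adm upgrade --allow-explicit-upgrade --force \
      --to-image=quay.io/openshift-release-dev/ocp-release:<target-version>-x86_64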
B: I think I also just pasted a link in the chat, which is our working document here for the cheaper AWS cluster, the single-node one. There are also a few more tricks in there, like running the infra on the master node and setting up spot instances for workers.

A: Now, one other thing I would add: if you're interested in this (because, as you can see, this is a working group and we work) and you'd like to join the working group, I've just put the link to groups.google.com for the OKD working group, or you can go to okd.io and find a link there. But if you have questions that we don't answer, or you want to work on, say, a migration path from OKD to OCP with us, we would love to have you come join us.

A: Yeah, there is one; I know it's KubeCon week, but we did not cancel, because the work never ends, and we would love you to join and help out. Please feel free to sign up for that mailing list and that group. And that's a good question.

B: I'll also quickly paste the link for the Fedocal, the Fedora calendar, where we have our OKD meeting. So please also join the Google group, but this is a calendar you can also subscribe to, and we'll have all of our meetings on there. Frank had another question: do you need a full pull secret to deploy OKD? No, you don't. You do need a fake pull secret, though; you can read about that in the OKD README.
B: It's in the README, and it's really just a JSON struct with a fake auth entry in it. We're still very close to the cluster code, to the OCP code, also in the installer. The installer is actually the only part right now that isn't exactly upstream, so we've had to fork it a little bit; we're going to re-merge those soon, I hope, and it's not too different anyway.

B: But that is one of the things we didn't want to pull out entirely, because it's definitely necessary for the OCP product. For OKD we don't really need it, but we haven't found a super nice solution to dealing with that. So if you don't have a pull secret and don't want to create one with Red Hat, you can use a fake one, and then you won't be reporting telemetry data to Red Hat. If you use a real Red Hat pull secret, then you'll be providing telemetry data, which Red Hat collects.
A: Which would be awesome, because we would love to have some OKD instances show up in that as well; just personally, I'd love to see OKD show up in some of that data. One of the things with an open source project is that there is no gatekeeping on OKD at all. We know there's a ton of deployments out there and there's a lot of interest in it, but other than the working group and people asking us questions in either the Slack channel or on different tech support channels, we don't really know a lot about how OKD is used in the wild. So later I'll share the survey, and if you are using OKD or planning to, I'll have you fill that survey out, and I'll share it along with all the videos as well. The other thing someone had asked about was the release cycle, the difference between OCP's release cycle and OKD's release and life cycle. Do you want to talk a little bit about that?
B: Yeah, sure. We don't adhere to the OCP release cycle at all.

B: Really, we just wait for OCP to become stable before we go to the next minor version. The switch from 4.5 to 4.6 we'll be doing at about the same time as OCP; we won't be going ahead and using master before OCP has tested it out enough to say it's stable. But we do releases roughly every two weeks from the current stable branch, which is 4.5, and we do that in the weeks alternating with the ones in which Fedora CoreOS does its releases.

B: So they do bi-weekly releases as well, and then we kind of have a one-week soak period to see whether the new Fedora CoreOS works well, and one week after that we'll release the new OKD on top of that new Fedora CoreOS.
B: I'm not an expert on the monitoring side, so I don't know the answer to that question, unfortunately.

A: I will find someone who can answer that for you, Steve, so hang out for a little bit. Do you know that answer, Charro, by any chance?

C: I can venture an uneducated answer. The solution in the 3.x days was to provision additional monitoring infrastructure for your applications; what was provisioned with the cluster was intended to be cluster specific, and so it was never meant to be modifiable. In 4.x I believe it's a similar situation, but there's an operator for that, and you can provision additional rules within the configuration of that operator.
C: I know when you stand up a 4.x out of the box, you can start creating additional alerting rules. I don't know if you can create them for applications or if it's still infrastructure specific, but there is an operator that you can deploy that will enable that. And with that said, I'm prepared to be wrong.

A: Thanks, Charro. Charro is one of our long-standing working group members, and he is a new Red Hatter, so welcome to the fold, Charro.
B: Yeah, so I think that is just the fake pull secret that you can use. You essentially need a JSON with the auths field and then at least one entry in it, called "fake" here, though it could be any name, and that entry holds an auth field with anything in it. It's not going to be checked.
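Concretely, the fake pull secret being described is just a JSON document of this shape; the entry name and the base64 value are arbitrary and are never validated:

    {"auths":{"fake":{"auth":"aWQ6cGFzcwo="}}}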
A: Okay, so we'll fill the time, and this is going to be the interesting thing the whole day, because basically I told everybody to try and do live deployments, and so filling that time while we're waiting for OpenShift and OKD to do their thing will be interesting. So if you have more questions, please do ask them in the interim. What we'll probably keep doing is I'll keep asking Christian and Charro questions, and unfortunately Vadim is not available to play with us today, but there are a lot of other folks on the line who will be coming on today, with different levels of expertise on different platforms. Yeah, there you go: "destroying bootstrap resources."

B: Yeah, that usually happens when it's successful. So let's see whether we will have something to look into here soon.

A: Perhaps you can tell us a little bit, Christian, about what you're working on now for the next release of OKD.
B: Oh yeah, sure. For the next release of OKD, we will essentially converge much closer to OCP, even closer to OCP. Right now we still have two repositories forked in the payload you're running for 4.5. With the next version, 4.6, OCP will have moved to Ignition spec 3, and OKD has been using Ignition spec 3 for its entire existence.

B: So we're really closing a gap here with the machine-config operator and also on the installer side, so we'll be much closer to the actual product. Essentially, OKD has been using Ignition spec 3 all the time, as I said, and for OCP we were using spec 2, and the migration turned out to be a little bit finicky, so that took its time, but now we're finally able to just move everything to spec 3.

B: So yeah, that's going to be good, and it's also going to be less of an effort to maintain, I think. The forks have introduced quite a few bugs we've seen in the past, because the rigor of checking on our forks is just not that big, but with that we're moving to essentially exactly the same code.
B: Hopefully in the release after that, another thing we're going to focus on is really bringing all the operators from the Operator Hub, all the operators that are supported on OCP, and bringing a community version of those into OKD. Not by default, of course, but having them in an Operator Hub catalog and installable on demand.

A: I think that will be the next big community effort that we've been talking about in the working group meetings: to review those, prioritize them, and start pecking away at that list of things. So there are a couple of other questions coming in, and I know everybody keeps asking me for a comparison matrix between OKD, OCP and OKE, and yes, we will get one somewhere.

A: I think I saw one for OCP versus OKE, so, just repeating myself from earlier, I will endeavor to see if I can pull that one out of somewhere in corporate marketing, but I haven't seen one comparing OKD yet; I'll get on that. James is asking a question which is probably going to get asked a lot today: what's causing the slowdown in the install, and what could be done to make the deployment faster?
B: Yeah, so that is a big issue, because we're installing quite a lot; OpenShift is not a minimal cluster per se. We have a lot of operators, just a lot of resources we apply to the cluster that, you know, help it manage itself, essentially, which makes it so stable, but it's also kind of big because of that. And this seems to be a problem right now with the... well, it may actually still come up.

B: It's going to keep trying for some time now to reach the API. So yeah, we're working on that. It's a long-term goal within all of OpenShift, not just OKD but also the product, but because we have the demand of really being secure and having a big feature set by default, we're not really super close right now to minimizing that footprint. It's a long-term thing.
A: There's one question, and I don't really know the answer to it personally: how and when does Red Hat engineering use OKD versus OCP, in non-production or production? You're on the Red Hat engineering team; is there any OKD use inside, other than testing?

B: I don't think we run any services on OKD right now. We do have something planned in collaboration with the Fedora community to have a cluster there. Internally at Red Hat, we have our SRE team that manages our own clusters and the managed customer clusters, and those are all OCP. So I don't think we have that right now, but with the Fedora community we will have at least one quite big cluster at some point in the future for things to test out.

B: But I myself don't have a lot of ops experience, to be honest. I usually just develop and write the code, and then we have this great CI system that really tests everything out. But I think, especially since we plan to test these OKD-to-OCP upgrades, kind of upgrading into a subscription, we may see more OKD usage within Red Hat as well.
A: So Paris is asking the question that everybody always asks as well: what is the status of OKD CodeReady Containers? That was kind of the inference with Charro's Che demo, but maybe, Charro, if you've got some insights?

C: If the question was specific to CodeReady Containers for OKD, sort of the minimal single-node cluster that you just download and run, I know there is work progressing toward that, but I don't know what the current state of it is. I actually haven't looked in a while, but I know Praveen, one of the leads on CodeReady Containers, was actually working on something so that the same thing would work with OKD.

A: Yeah, so we'll see if we can get a status out of Praveen, but the workaround has been these simple single-node cluster installs, and some of the work that hopefully we'll get some time for Charro to talk about, using Che with OKD. So let's see, we're going to... yeah.
B: Just to add to that real quick: I think we've had a proof of concept for CodeReady Containers on the base of OKD. It's just that I don't think it's really been decided by the team that actually does that to deliver it continuously.

B: Maybe we should push on that a little bit as well, because it is not that different, especially if you run it on a laptop. I can see why people would want it, but for the CRC team, which has limited resources, I think it may be difficult to deliver on that right now, even though I think we should still follow up on it. It's not really a thing that exists right now.

B: There has been one testing release, but it's not like they do that for all our releases.
C: We also have a proof of concept for actually upgrading a single-node cluster as well, which has been one of the limitations: if you download and run CRC, it effectively has a limited life and you need to pull another image of it. But hopefully we'll be getting some progress on those things in the future, so that it feels more like the Minishift experience that people were probably used to. But going back to the previous point about the enterprise class of OpenShift...

B: Yeah, I think CRC tries to get one release out per OCP release right now, and they obviously don't do as many releases as we do, so having all of OKD's releases also done as CRC may be a little too much for that team. I don't think it's too big an ask, but yeah, we should definitely get to a point where we can just get side-by-side releases based on OKD and OCP.
B: Yeah, well, it's not running on my laptop, right; I'm just polling the API here, but it's not up yet, so let's hope it will come up. We still have around 20 minutes for it to finish, I guess, or even more; it said to wait up to 40 minutes.
A: So Mike is asking sort of what the purpose is of today and whether or not it's just a day-long prep for KubeCon. We normally do an OpenShift Commons gathering the day before KubeCon, and because KubeCon changed its date so many times virtually, I decided that I wasn't going to try and keep up with them on scheduling. And because OKD did a GA release, we decided that we were going to celebrate by forcing everybody in the working group to do a demo of their favorite platforms, or whatever hardware or clouds they had access to, for a day-long thing. One, to capture some of the videos and the how-to bits of it for our website and for our YouTube playlists, but also really to build a little bit more awareness of OKD out there in the universe.

A: It's not the most well-known Kubernetes distribution at the moment, but hopefully we'll get there; OpenShift is pretty damn popular these days. And someone's asking: you should have used a more powerful AWS flavor; it would not have been cheap. You're right, but that is going to be the next demo, which Christian's going to do, which is going to go over again, which is why we say the whole day is very fluid.
B: Yeah, we can, actually. I mean, it's not going to be that different, because the only thing I'll leave out is the editing of the install config, but other than that it's really the same. So maybe we can drop it, I don't know. I can do it again, of course, but we'll have another half-an-hour wait period.
A: Let's get through the cheapest one, and then maybe with the full AWS one you can just leave it running and we'll come back to it sometime after Charro starts. That would be a good way to segue into what Charro's going to do next.

A: We're getting one done, the cheapest, and that matters because in the working group we tend to have the open source folks, who don't have the biggest budgets or the most permission to use their hardware. But I think the goal is to give people lots of alternatives and ways of doing things.
B: So it's still just polling the API to see whether it comes up; it will do that for up to 40 minutes, or actually 30 minutes at this stage. And it has bootstrapped, and now it's just waiting for the one control plane node we have to come up. That will do a few reboots, because it will pull down the Fedora CoreOS version we've referenced in our payload and pivot into it; it's called pivoting, the way we do it.

B: The rpm-ostree commit will be delivered to the node in a container, and then we have a binary, the machine-config daemon, which has a pivot command, and that will unpack that rpm-ostree commit from the container, put it onto disk, and reboot into it.
B: This can take a little while. We're getting close to the 30 minutes now, but soon it will either say it failed or timed out, and then it will destroy the resources here, or it will say success and give me the domain to log into. Unfortunately there's not a lot to see while this is going on; it's just trying to get onto the API.
A: So in terms of faster deployments, how does this compare to, like, a vanilla Kubernetes? Have you tried? I'm just curious here, because I know there's a lot of extras in OpenShift.
B: It's only a one-time thing, the install, right; when you scale up nodes after that, that is much quicker. But our initial install does take longer than just a vanilla Kubernetes.

B: So we do have a lot of, you know, space for improvement here, and it's definitely a thing our customers for the product also want. It's just that, because it's only once at the very beginning, it is not that super important, but obviously it is a little bit annoying, especially if you do presentations like this, waiting that long. And yeah, we're on it.
A: And, you know, Mikey is making a great suggestion: the installer needs to be more exciting and engaging, and I think someone should take up the challenge of doing an ASCII version of the panda and inserting it into the installer somewhere. That, I think, will be the next one: some ASCII art in there.
B: So it is an HTTPS-encrypted connection, but it is self-signed, so you'll have to accept the risk and continue.
A: So, while we're here for a second, can you go into the Operators view and show which operators are running in this minimal configuration?

B: So yeah, and we don't have any operators installed from the Operator Hub, I think.
B: We only install the core operators, but even the core operators bring quite a big set of functionality here. I think in the future the way we will minimize this is to split out a few more of those operators into operators on the Operator Hub, so they can kind of be installed on demand rather than always being included.
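The same list of built-in operators can also be inspected from the command line:

    oc get clusteroperators    # shows each core operator with its Available/Progressing/Degraded status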
A: And there are two more questions coming in, but I get to press the easy button.

A: There we go, that was our first demo. It ran and you got into OKD; well done. A couple more questions, but first, if you want to destroy that and set up your other one, the full one, now that we've done that. Frank asks: is there a link from which I can download the CA bundle for this cluster, to import it into my browser?
B: The CA bundle for this cluster... so I don't understand, to be honest.

B: Ah, all right. Well, so you don't have to click the "accept the risk and continue" button, you could use a CA that is signed by an authority that is accepted by the browsers, and then you wouldn't have that problem. As this is a self-signed one, you just have to trust it yourself.

B: There is documentation about that in the OKD and OCP docs. I will quickly stop sharing and set up my...
F: Yeah, what he's looking for is probably in the generated files from the installer; it might be somewhere under there, something he could actually use to import into his browser. But yeah.
B: So this time I'm really just running one command here: openshift-install create cluster, right away. There's no install config prepared here; it's going to generate it. It's going to include the two commands that I just ran separately, the create install-config and create manifests; it will do all that for you, so you just have to put in your SSH public key, the platform you want to install on, and the region.

B: Oh no, actually it is running. Okay, perfect. So yeah, that's essentially all you have to do to start that install process.

B: Perfect. And now we are at the same point again where we just have to wait, and we have time now to answer questions again.
A: All right, so why don't we do that. And Charro, I know you're up next with the bare metal, which always sounds to me like a heavy metal band kind of deployment, and I saw the guitars behind you, so it might be appropriate if we pause now, let the AWS thing go, and let Charro queue up for his deployment and share his screen.

A: So thanks very much there, Christian, for hanging out with us, and I hope you can spend some more time today, because I'm sure we'll be repeating some of these questions.
C: Okay, I'm Charro Gruver. I am a new architect for Red Hat Services here in the southeast.

F: It's like it's in your lip. Yes, yes.

C: Well, like Diane has said a couple of times, these are live demos, so we're fully expecting a Bill Gates moment. It might not be a blue screen, but we might see a stack trace of death and all kinds of other interruptions. But I'm Charro Gruver; like I said, I've been with Red Hat for one week, but I've been a consumer of Red Hat products, both upstream and subscription based, for most of my 20-year career in IT.
C: This is going to be simulated bare metal, in that I'm actually using libvirt to run the machines, so that you guys can actually see what's going on, right, because it would be hard to get console views of bare metal machines in this current configuration.

C: This is a user-provisioned infrastructure deployment, so the installer is not going to be provisioning the machines for us; these machines are already provisioned. If you look in this terminal right here, I've given you sort of a virsh list view of the machines that are currently provisioned; you can see we've got a bootstrap node that is not running.
C: Now, I'm using VirtualBMC, which is a tool that comes out of the OpenStack world, to simulate the IPMI management of these virtual bare metal machines, and these machines are going to boot into iPXE. Using the MAC address of the machine as it boots, it's going to pull the appropriate iPXE boot configuration file, which sets its kernel parameters, sets the Fedora CoreOS install URL, and sets the Ignition file that it's going to use to start from. I'm using fixed IPs for this particular lab.
C: So the first thing I'm going to do, over here in the left terminal, is power on the bootstrap node.
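With VirtualBMC fronting the libvirt domains, powering a node on is a plain IPMI call; a sketch, with an illustrative address, port and credentials:

    ipmitool -I lanplus -H 127.0.0.1 -p 6230 -U admin -P password chassis power on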
C: Now I'm going to attach to its console, and what we're going to watch here is an iPXE boot. It's a chained boot, so it first pulls just a boot.ipxe file, which is what's being served up by the DHCP server for it to pull from TFTP; that then chains it to look for a file that is named after its MAC address.

C: It pulls that file; you see it got its kernel and its initial RAM disk. The kernel parameters that were passed to it gave it its instructions for installing Fedora CoreOS, and you can see right now it's actually pulling that FCOS image across.
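A per-MAC iPXE boot file of the kind being pulled here typically looks something like this sketch; the URLs, target disk and paths are illustrative, and the exact coreos.inst arguments depend on the Fedora CoreOS release in use:

    #!ipxe
    kernel http://10.11.12.1:8080/fcos/vmlinuz initrd=initramfs.img \
      coreos.inst=yes coreos.inst.install_dev=sda \
      coreos.inst.image_url=http://10.11.12.1:8080/fcos/fedora-coreos-metal.raw.xz \
      coreos.inst.ignition_url=http://10.11.12.1:8080/ignition/bootstrap.ign
    initrd http://10.11.12.1:8080/fcos/initramfs.img
    boot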
C: Now, we've got an HAProxy load balancer, it's this guy right here, okd4-lb01, that is already running and is configured to sit in front of this new cluster as it comes up.

C: This will take a little bit, with the scrolling logs; like I said, it's pulling down the image. One other thing I'll point out while we're waiting for the bootstrap node to complete its install is that we're also doing a mirrored install today, which hopefully makes this go a little bit faster than pulling all of the images across the wire.
C: And so the install is actually going to pull its images from the Sonatype Nexus. Right now I've got quay.io in a DNS sinkhole so that it can't resolve, and because it can't resolve, it's going to assume it's an air-gapped installation and it will pull from the configured mirror. All right, Fedora CoreOS is booting. Now it's going to overlay the rpm-ostree, and when it finishes it will boot one more time and it will start the bootstrap process.
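For a mirrored install like this, the mirror is declared up front in install-config.yaml; a sketch of what that stanza could look like, with a hypothetical Nexus hostname and repository paths:

    imageContentSources:
    - mirrors:
      - nexus.example.com:5001/okd
      source: quay.io/openshift/okd
    - mirrors:
      - nexus.example.com:5001/okd-content
      source: quay.io/openshift/okd-content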
C: And if you do this at home and you monitor the logs like this, don't be alarmed by these "failed, failed, failed" entries that you see coming out in the logs. This is the bootstrap process waiting for its resources to go live, and so it will continue to loop until the various resources come up. And you can see the API just came up, so our API is now live, and we're waiting for the bootstrap process to complete.

C: This all in takes about 10 minutes, from the bootstrap node firing up to the bootstrap process itself completing; the installation itself will complete after about another 25 minutes, so we've got some time now to take some questions, if folks want.
C: I think it still is; I know it has been for a while, that if you don't create the sinkhole and it can resolve the external host, it will pull the images from quay.io.

A: Let's see, a couple of questions. Just to double-check the link to the documentation on this: is this the same as the stuff that you did in the OKD4 UPI lab setup?
C: Yes, yes. There's a new branch called ipxe, and when we're done today (I've got a little more cleanup on the documentation to do) I'm going to merge that branch into master. The old tutorial, the CentOS 7 based one, I've branched master to a centos7 branch, so anybody that's still running CentOS 7 would want to use the centos7 branch.

C: I've upgraded my entire lab to CentOS 8 and have enabled iPXE even for the hardware, for the bare metal itself, so that just by creating an iPXE boot file with the MAC address of, you know, a new piece of metal...
A: The other feeds are a nanosecond behind us here in BlueJeans. And Brian Jacob Hepworth is saying that he really likes the Fedora CoreOS news and seeing that.

A: I'm going to do another pitch for people to join the OKD working group while we are waiting here, because that's what I'm charged with: getting more folks in. So if you're liking what you're seeing here, or if there are features missing, or other platforms that we should be demoing or testing on, or that you're using OKD on or wishing to do so, please join the OKD working group.
A: The mailing list is here, I just put it in the chat, and it is an open Google group, and we have a lot of meetings; we meet bi-weekly and we have a meeting tomorrow. I'll throw in the Fedora CoreOS one too. And Ashraf, thanks for joining us; we will do the Azure one that you requested earlier, which is our second-to-last demo today, I think. Here's the Fedora calendar link.
C: And I'm going to take it out of the proxy configuration as well, so that we forget everything that we know about the bootstrap node. Now we'll watch the install.

C: Now, there's something odd about this install monitor: it will say 42% complete here in a minute, it may barf a couple of errors as some of the resources restart, and it will also reset the clock.
C: So it plays with you a little bit: you'll get up to 74% complete and then all of a sudden you'll see 12% complete, and then it will quickly wind its way back up. I'm making a bold assumption here that that is actually the result of it monitoring some of the resources that, through this process, update themselves, and so that percentage complete becomes a little bit variable.

C: So if you see that running this at home, don't be alarmed; it is actually working towards completion, and you need to be patient, because from this point it does take about another 23 minutes.
C: Sure; well, actually it turned out not to be much work at all, and in fact, if we end up with enough time, I can deploy a hyper-converged Ceph instance into this cluster to give us a storage provisioner. Because that's really where I think the folks that might have struggled with getting Eclipse Che up and running hit trouble: it does need persistent volumes, both for Postgres...

C: Something else I'll mention here; I'll run this again.
C: So you see, we've got three master nodes that are running, but they're also designated as worker nodes. That's an artifact of how we're provisioning here, because the install config that we used does not designate any worker nodes, so the installer by default makes the masters schedulable when the installation is complete. That's something that we're going to change: we'll add the three worker nodes and then we will make the masters unschedulable.
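Once real workers have joined, making the masters unschedulable again is a one-line change to the cluster Scheduler config; a sketch:

    oc patch schedulers.config.openshift.io cluster --type merge \
      -p '{"spec":{"mastersSchedulable":false}}'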
C: I just sort of forgot that I hadn't actually been introduced, so I'll just... oh yeah. I can't... why does it say the camera isn't usable, whatever. Anyway, the microphone works; I'll figure out why the camera doesn't in a little bit. I'm a DevOps engineer at Datto. I'm here as an OKD working group member, and I'm going to be assisting Dusty in a little bit, once he and I get to our part of this.
C: Yeah, so here I'll walk you through a few of the things that were prepared ahead of time; I said a lot of words to describe it. Especially with the way I'm doing this, with fixed IP addresses, one of the things that you have to provision are DNS records, a few key DNS records.

C: You can see I've got in here the provisioning for several different clusters that I run, but this is the one that we're presently looking at, right here. So each of the master nodes, worker nodes, and etcd nodes requires an A record.
C: The master and the etcd entries obviously are sharing the same node, so they're going to have A records with the same IP address. You also need three SRV records for etcd, and then you need a PTR record for reverse lookup for each of the physical nodes; so for your masters and your worker nodes you'll need pointer records. As you can see, the DNS setup is not onerous, but it is required.
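As a sketch, the zone entries being described look roughly like this; the names, addresses and base domain are illustrative, and the exact set of required records follows the UPI documentation for the OKD release you install:

    ; forward zone entries for cluster "okd4" under example.com
    api.okd4        IN A    10.11.12.2     ; load balancer
    api-int.okd4    IN A    10.11.12.2
    *.apps.okd4     IN A    10.11.12.2
    okd4-master-0   IN A    10.11.12.10    ; one A record per master and worker
    etcd-0.okd4     IN A    10.11.12.10    ; etcd name points at the same master
    _etcd-server-ssl._tcp.okd4  IN SRV 0 10 2380 etcd-0.okd4.example.com.
    ; plus PTR records for each master and worker in the reverse zone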
C: As you can see, the boot file itself is very simple: I'm echoing some information just to make sure the right host booted, and then chaining in an iPXE file that is literally named after the MAC address, with hyphens replacing the colons.
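That chain file can be just a few lines; a sketch, where the web server URL is illustrative and ${net0/mac:hexhyp} is iPXE's hyphen-separated rendering of the MAC address:

    #!ipxe
    echo Booting host with MAC ${net0/mac}
    chain --autofree http://10.11.12.1:8080/ipxe/${net0/mac:hexhyp}.ipxe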
C: All right, we are, in theory, at 84% complete. I expected it to reset the clock at least once while it's doing this. But how do you determine the percentages? Because I don't see anything on screen that would tell you percentages.

C: Oh, right here, can you see the... oh, okay, there it is. Okay, it helps when you highlight it; there's a lot of word soup on screen. Yes, there is. This is how I keep the install from being boring: I give you lots of journalctl output and logs to look at, because otherwise there's not a lot to look at. No, no.
C: So how did you come up with this setup? I mean, you're doing the bare metal, right? So, yeah, how did you come up with it? Oh gosh. Because, like, I remember that bare metal is like the least fleshed-out deployment method of them all, so the fact that you came up with something is impressive all on its own, so that's worth the story, I'm sure. Yeah.

C: You know, back at the end of 2017 I got addicted to the Intel NUC machines, and you know, those little form-factor boxes are not cheap, comparatively, but considering the amount of compute that you can pack into one of them for a home lab setup, they are pretty affordable, and if you buy the right chipset you can put 64 gigabytes of RAM in one of those little suckers.
C: You can run quite a bit on them, and my idea was actually to get an OpenShift cluster running on the NUCs. Then I stumbled across this thing called nested virtualization with libvirt, and while I don't do OpenStack, I had a curiosity about it, and that's how I came across Virtual BMC. So I decided to basically bump it up a level and use libvirt virtual machines with Virtual BMC to simulate bare metal, and then it was just sort of:

C: I want to make this work. So I powered through making it work, to get a bare metal install of OKD up and running, and submitted a few tickets to the Fedora CoreOS team, who were very, very gracious to help out somebody that didn't know what they were doing. I had never, you know, touched CoreOS before, so that was quite a bit of a learning experience. And thanks.
C: From that point, the latest iteration of it now uses the fcct tool to inject some customization into the machines. Actually, while we're still waiting for that... oh there, hey, quick, here's the reset I was talking about: see how we went back to zero percent complete? Don't panic.

C: I don't know why it resets the clock like this; maybe somebody in engineering could tell us, but it is still progressing, I assure you. That is very confusing and kind of frightening.
C: Actually, it looks like it resets after it downloads an update, so it probably loses all of its state when it does that. Yeah, that's my suspicion, because it does go through several iterations of updating some operators, so it's just probably losing its state every time that happens, which is unfortunate, and I'm not sure that makes sense, but that's the best I've got. It still works. Yes, that's the important part, so don't freak out when it goes from 80 to 90 to zero.
C: So right here, if you guys can... I don't know if this is readable, but you can get to it on my GitHub page. So this, if you zoom it up just a little bit, just zooming up one level, there we go, then it's readable. This is a shell script that I wrote that actually does the provisioning of the, quote-unquote, bare metal for me. And right here,

C: this is a YAML file that gets created where I'm injecting the customizations that I want each of the machines to have. So in this case, what I'm doing is creating, basically, a rename of the primary NIC to nic0, so that it doesn't come up as some funky enp-blah-blah-blah name.

C: I want it to be more than predictable; I want it to be predictable and known. So I'm using the MAC address of the machine to explicitly name that network interface device as nic0, and that way I always know what it's going to be and where it's going to be, and then I inject into that its specific configuration, so I'm setting, you know, its...
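A Fedora CoreOS Config (the YAML that fcct transpiles into Ignition) expressing that kind of customization could look roughly like the sketch below; the MAC address, IPs and file paths are illustrative, and this is one way to do it rather than necessarily the exact files used in the demo:

    variant: fcos
    version: 1.0.0
    storage:
      files:
        # systemd .link file: rename the interface with this MAC to nic0
        - path: /etc/systemd/network/25-nic0.link
          mode: 0644
          contents:
            inline: |
              [Match]
              MACAddress=52:54:00:a1:b2:c3
              [Link]
              Name=nic0
        # fixed IP configuration for that interface (NetworkManager keyfile)
        - path: /etc/NetworkManager/system-connections/nic0.nmconnection
          mode: 0600
          contents:
            inline: |
              [connection]
              id=nic0
              type=ethernet
              interface-name=nic0
              [ipv4]
              method=manual
              addresses=10.11.12.10/24
              gateway=10.11.12.1
              dns=10.11.12.1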
C: All right, we're back up to 84% complete. At this point I'm going to go ahead and fire up the worker nodes. It is safe to do so now; actually, I could have done it a while back, but I'm going to go ahead and do it now.

C: There's one of the workers; it's going to do the same thing that you guys saw the bootstrap node doing. It's pulling the CoreOS image right now.

C: And then it's going to go through the same process, except that it will retrieve its Ignition file once it processes the initial Ignition, overlays the ostree, and starts its process to join the cluster. It's going to get its Ignition file from the cluster, and that will give it the personality of a worker node.
C: And if you watch the left-hand side of the screen closely, you should see it hit a point where it's waiting, and then you'll see it very quickly pull that Ignition config, and at that point it will start to join the cluster.

B: So, just to give you a quick update on the AWS cluster: it's still waiting for the cluster API to come up. I do have to leave now for like 15 or 20 minutes; I'll be back after that, and I hope my cluster will be up by then.
C: It's alive. All right, and as before, self-signed certs, so in whatever OS and browser you're using, you are going to have to accept those certs.

C: It's okay, self-signed certs are fine. All right, now, it creates a temporary cluster administrator for you, and it dumps that password at the end of the install process, which you can use to gain access to your cluster. And there we are. Now, there will still be some operator updating going on, and your control plane will still be settling out.
C: And I'm going to do a couple of housekeeping things here. One is I'm going to remove the Samples Operator, because, unless something has changed recently (unfortunately Christian isn't here; we can ask him later), the Samples Operator, since you don't have an official Red Hat pull secret at this point, won't be fully functional and can in fact impede updates to your cluster.

C: So I yank it out; I'm not using it anyway, at least at this point. I'm also going to create ephemeral storage for the image registry, because it will also be in a Removed state, since it doesn't have a persistent volume.
C: So I'm patching its configuration with an emptyDir specification for its storage, and I'm going to create an image pruner to run at midnight every night, because the console will gripe at you until you have an image pruner configured. So anything older than 60 days (not 60 minutes, that would be aggressive) it's going to prune at midnight every night.
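The housekeeping steps just described map to a handful of patches; the first two are the standard moves for a UPI install without persistent storage, and the pruner fields shown are one plausible configuration rather than necessarily the exact one used here:

    # put the Samples Operator in a Removed state
    oc patch configs.samples.operator.openshift.io cluster --type merge \
      -p '{"spec":{"managementState":"Removed"}}'

    # give the internal image registry ephemeral (emptyDir) storage and mark it Managed
    oc patch configs.imageregistry.operator.openshift.io cluster --type merge \
      -p '{"spec":{"managementState":"Managed","storage":{"emptyDir":{}}}}'

    # schedule an image pruner run every night at midnight
    oc patch imagepruners.imageregistry.operator.openshift.io cluster --type merge \
      -p '{"spec":{"schedule":"0 0 * * *","suspend":false,"keepTagRevisions":3}}'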
C: Our workers are schedulable, but... That's not bad. Well, it's not, but there is a gotcha in here which, of course, I never tripped over: your ingress pods will deploy on a schedulable node.

C: Well, if your load balancer is only configured to look at certain nodes... here, you see I've got port 80, port 443, and port 6443, and they're all directed to the master nodes.

C: So the key here is either to span your load balancer, which I don't really want to do, because that's a lot of extra cruft in the load balancer configuration, or to designate some infrastructure nodes, and that's the path that I chose to take. So what I'm going to do real quick is designate my master nodes to also be infrastructure nodes.
C: Why doesn't it do that by default? Because the best practice is to create a couple of worker nodes that you set aside as infrastructure nodes.

C: I don't know. Good, okay, just making sure, because I've seen these recommendations listed in the documentation, but there doesn't seem to be any particular reasoning to back them up. Like, historically speaking, I've seen clusters typically do the masters as infra nodes, because that way they handle essentially the stuff that keeps the cluster itself running, and the worker nodes are free to work on developer and user workloads. Yeah, I think one of the things you need to consider is how beefy you make your master nodes.
C: You know, if you've got heavy, heavy, heavy ingress operations, given everything else that the master nodes are doing, that might be a little overwhelming for them. In my particular lab environment, the master nodes are heavyweight enough: each of them has 30 gig of RAM and six vCPUs, so I feel pretty confident designating them as infra nodes.
C: When I run this now, they're infra and master nodes. Now, at this point nothing got evicted off of them, so if you want to boot things off of them that you don't want running on there anymore, you need to either go through and evict all the pods that are running on each of those nodes manually, or reboot your master nodes, which is a bit more of an aggressive way of doing it.
C: Now I'm going to patch the Ingress Operator to tell it that it's okay for it to run on those master nodes, and if you can't read the eye chart here, I'll explain what it's doing: it's setting a node placement policy, giving it a match label of the infra node role. That alone is not enough; you also have to set some tolerations, because the master node is now tainted.
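Those two steps, labeling the masters as infra nodes and re-homing the default ingress controller, look roughly like this; the node name is illustrative:

    # add the infra role label to each control-plane node
    oc label node okd4-master-0.example.com node-role.kubernetes.io/infra=""

    # let the default ingress controller schedule onto the infra-labeled (tainted) masters
    oc patch ingresscontroller default -n openshift-ingress-operator --type merge \
      -p '{"spec":{"nodePlacement":{"nodeSelector":{"matchLabels":{"node-role.kubernetes.io/infra":""}},"tolerations":[{"key":"node-role.kubernetes.io/master","operator":"Exists","effect":"NoSchedule"}]}}}'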
C: It's not in a ready state yet. As soon as this one is in a running state, the second one will begin terminating. Don't panic if your other one sits in a pending state for a while, because it has an anti-affinity rule that it won't run on a node that already has an ingress pod running on it, so it has to wait for one of those terminating pods to complete terminating before it will schedule on the master node.
C: If you look at the directory that you used for the installation, there are, you know, the Ignition files that it created and the metadata, and it creates an auth directory.
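That auth directory is where the temporary credentials land; the placeholder below stands for whatever assets directory was passed to openshift-install:

    ls <install-dir>/auth/
    # kubeconfig  kubeadmin-password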
C: Right there. But if you get rid of the kubeadmin user, doesn't everything that links to the kubeadmin user break? It's a temporary account. So here's what we're going to do: I created an htpasswd file ahead of time; my tutorial has instructions for how to do that.

C: So I've got an admin user and a dev user with passwords already in there. You saw me just create a secret.
C: I will apply that. It complains that I used apply instead of create, but I'm just in the habit of using apply to update objects, so you can ignore that complaint there. And then the last thing I need to do: this admin user that I just set up a secret for, but which doesn't exist yet, I'm going to give him cluster-admin rights, and now I'm going to be brave and I'm going to delete kubeadmin.

C: Well, it also says the admin user doesn't exist. That's correct, but it creates it in the background. Yeah, it's not intuitive or obvious, but it does, and it works okay.
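Sketched out, the identity-provider setup just walked through is the standard htpasswd flow; the usernames, passwords and secret name here are illustrative:

    # build the htpasswd file (bcrypt hashes) with an admin and a dev user
    htpasswd -c -B -b users.htpasswd admin 'changeme'
    htpasswd -B -b users.htpasswd devuser 'changeme'
    oc create secret generic htpasswd-secret --from-file=htpasswd=users.htpasswd -n openshift-config

    # point the cluster OAuth config at that secret
    oc apply -f - <<EOF
    apiVersion: config.openshift.io/v1
    kind: OAuth
    metadata:
      name: cluster
    spec:
      identityProviders:
      - name: htpasswd_provider
        type: HTPasswd
        mappingMethod: claim
        htpasswd:
          fileData:
            name: htpasswd-secret
    EOF

    # grant the new admin user cluster-admin, then remove the temporary kubeadmin user
    oc adm policy add-cluster-role-to-user cluster-admin admin
    oc delete secret kubeadmin -n kube-system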
C: So there we go: I just logged in with my new, somewhat more secure cluster admin account, and you can see our four green check boxes; we've got a happy cluster. It will complain about alerts until you set up a Slack channel or something to send your alerts to; it's actually pretty easy to do, you create a receiver and walk through it. But I have used up most of my allotted time, so I'll stop playing now.
A: All right, well played. And can you do one more thing for me, just because I think people keep asking me these questions: go back to the console and show the operators that are installed in your installation.
C: The OperatorHub might not actually be up yet, because it does take a while; you know, that initial install took us another 23 minutes, and it does take things a while to settle down. Let me show you what it does look like, because I have another cluster that I stood up this morning.

C: Quite a few, you can see. If you want CodeReady Workspaces, the upstream of it, Eclipse Che, is in here.
C: I might, especially if you don't mind going a couple of minutes over, because the first thing I need to do is deploy... oh, actually, no, I can't, because I've already got... let me make sure I've got Ceph deployed in this cluster, so we're going to go to the rook-ceph namespace.

C: Okay, and unless you want to do something different about it, you install, and we're going to keep the stable channel. It is going to create the eclipse-che namespace, and we're going to let it have an automatic strategy for its approval. If you switch that to manual, then when the installer installs, you have to go to the installer and say yes, you can actually install.
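Clicking through OperatorHub like this ultimately creates an OLM Subscription behind the scenes; an equivalent sketch in YAML, assuming the community Eclipse Che package (package name, namespace and channel may differ depending on the catalog version):

    apiVersion: operators.coreos.com/v1alpha1
    kind: Subscription
    metadata:
      name: eclipse-che
      namespace: eclipse-che
    spec:
      channel: stable
      name: eclipse-che
      source: community-operators
      sourceNamespace: openshift-marketplace
      installPlanApproval: Automatic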
C: That seems painful. Well, if you think about it, you know, I'm doing everything as a cluster administrator. So if you're not a cluster administrator but you want to request something... that's part of what we've got going on here, because there are all kinds of configurable RBAC capabilities within this thing.

C: So when you install this operator as a cluster admin, does that mean that anybody who logs in with an account can then instantiate it afterwards? Absolutely, yes. The workspaces: people will be able to get in and create workspaces. Again, you know, it's got lots of role-based access control, so that you can control who can do what.
C: And this will take a couple of minutes, and then Keycloak is going to provision itself after Postgres is done. So now Keycloak is provisioning, and Keycloak actually goes through a couple of phases: it has an init phase that it runs through, so you'll see that pod come up and then terminate, and then be replaced by another Keycloak pod that will be your final configuration. And you won't see the Che controller come up until both Postgres and Keycloak have completed their provisioning.

C: Not terribly long, a couple of minutes. Cool. It feels like a long time when you're staring at the screen.
A: That's all right, I have plenty of coffee today. And Michael has just pointed out: maybe you still have quay.io blocked via DNS?

C: You know what, I... no, I don't. That was a good catch. I snuck that in while Neil was talking: right here I blasted a command to my DNS server to remove the sinkholes for quay.io and for registry.svc.ci.openshift.org.
C: All right, so Keycloak is bootstrapping itself now, so you'll see some activity go there. All right, and there it is; so now you see another Keycloak instance provisioning, and it will take over from the first one here in a minute.

A: In other news, Christian says that his full-blown AWS cluster has finished installation, so when we're done we'll pop over and let him prove that, and then we'll grab Dusty when he's back and we'll hit up the DigitalOcean stuff.
A: Any of you who are joining us for the DigitalOcean demo, we'll probably get started on that one a few minutes after the hour. We're running pretty close to on time, which I think is amazing, and we'll probably lose that thread at some point. But hey.

C: Okay, programming skills, yeah. Yes, indeed. So the first Keycloak instance, you see it terminating now, so it's getting itself out of the way. The plugin registry has fired up; now you see other activity. There's our Che controller right here, and it is creating... we've got a devfile registry, we've got a plugin registry.
A: I wish I had a fan here; the temperature is popping up here in Canada on the west coast. It's probably going to hit 32 today.

C: Let me create a folder here for you guys, so you don't have to see all the cruft on my screen. I'm going to go here and show the certificate. This is Safari specific, obviously, so follow the instructions for your favorite browser; Safari is not my favorite, but here it is. Grab that, and then what you're going to do, once you've got that certificate, is add it to the trust store of your operating system.
C: Now it's going to make me certify that I am me one more time.

C: And I'm going to say yes, allow these permissions, and now it's going to ask you to create an account. Now, another important safety tip: if you do what I did... there is an admin account that Che creates. Well, I named my cluster administrator admin, so I need to give this a different name or I will cause some pain for myself.