Description
OKD Working Group
Testing and Deployment Basics Main Stage Session
2021 03 20
Hosted by: Diane Mueller (Red Hat)
Agenda:
Welcome and the logistics for the day
Brief introduction to OKD4
Walk through the documentation for OKD4 deployment to vSphere using UPI
Walk through a DNS/DHCP server configuration for most deployment targets.
AMA session
Guest speakers:
Charro Gruver (Red Hat)
Vadim Rutkovsky (Red Hat)
Jaime Magiera (UMich)
Josef Meier (Rohde & Schwarz)
A
We're live now, and we're just getting our sound and everything else adjusted. We're going to give everybody about five minutes to come in and hang out, so hopefully you can all see us and hang out with us for the day. We're really excited to do this first broadcast of the OKD Testing and Deployment Workshop, and we're really glad that you've joined us. We've got myself (I'm Diane Mueller), Vadim Rutkovsky, who's calling in from Brno, and Charro Gruver. And where are you, Charro, in the universe? North Carolina? Roanoke, Virginia. And Jaime Magiera is with us from the University of Michigan, and there are a number of other guest speakers here today who will introduce themselves as they come along. We're just excited to try out this new platform, so bear with us if we have slight technical difficulties; this is a new one.
A
We used it for DevConf.CZ with good success, and we have some really interesting deployment configurations that we wanted to share with you, and we really want to hear about your deployment adventures and requests. So this is going to be some fun today. Let's just see the event. I'm in the event chat right now, so if you can see us, all four of us, Sri, can you just make sure?
A
Yep, all right, perfect! So now we can't say anything bad about any of our friends and family, but we never would do that.
A
Anyways, hey Everett. So we're going to wait until just the top of the hour, because we did say this thing was going to start at nine o'clock, and we're going to do some chatting and hang out. And so, all four now, perfect; Jaime dropped out for a moment, perfect, all right. And while you are getting ready, if you haven't already joined the mailing list, I'm going to pop that mailing list here into the chat, so you have something to do while you're watching us.
A
This is, if you want to get on other events or come to the working group meetings, the link to go to. And I would be remiss if I didn't also invite you to join OpenShift Commons, so I'm going to grab that and throw that into the chat as well.
A
And here you go with that one. So if you haven't joined OpenShift Commons, you can go there and do that. All stages, Kareem, are going to be recorded.
A
Everything we say and do today will be recorded, including the sessions, and I will upload them to the YouTube channel as soon as Hopin (which is the platform we're using) gives them to me, so you'll be able to watch them and play them back. And I'll say this again: in about a minute or two, we'll show you where you can find things and get started.
A
So I'm going to share my screen now, guys, and we're going to rock and roll and see if I can do this all.
A
All right, you should all see the OKD working group site here, okd.io. And so, if you're joining me, I'm assuming everybody can hear me. I cannot see the chat right now, Jaime, so I'm in full-screen mode.
A
So I'm just going to motor on here and expect someone to interrupt me if something goes south. Jaime is my co-organizer; Jaime Magiera from the University of Michigan has kindly offered to help and is in the background here with me today, so I'm totally grateful for Jaime's support. If you don't know me, I'm Diane Mueller, the director of community development here at Red Hat for the cloud platform group and one of the co-chairs of the OKD working group. And so, a little bit about the logistics.
A
Today we are using a new platform, Hopin, and so you have the ability to ask questions in the chat, and we have the ability to share our screens.
A
We've set this up so there's two hours in the morning where we're going to walk through, as I'm telling you now, a little bit about what OKD is. Charro Gruver, who was formerly with Old Dominion Freight, is now a Red Hatter.
A
Thank you for joining us, Charro. And then we're going to get a walkthrough of the OKD 4 release process and a bit of a tour of GitHub and where the resources are that we're going to be using in the release processes, by Vadim Rutkovsky; he's a Red Hatter and an engineer here. And then we're going to get a walkthrough of an install deployment to vSphere using the UPI approach, by Jaime. And we're not actually going to do it.
A
We're going to just walk through the bits and parts of it, so everyone in all of the other sessions can see where they can find the resources. And then we're going to bring on Josef Meier from Rohde & Schwarz, who's been an active community member as well, one of the great folks who's been working on the okd.io site and who has just turned on the blogging capabilities.
A
And so, someone asked in the chat already where things would be after this: I will post a blog on okd.io with links to all of the YouTube videos, as well as to our Google group. I'll send a note there afterwards as well, with any slides and YouTube videos for all of this. And then we're going to ask you to divide and conquer yourselves into the breakout sessions; we have four of them set up today.
A
We're going to do the deep dive into vSphere on UPI, and Jaime and Josef are going to lead that. And then we do have (hopefully it's not TBDs, a typo on my part) Andrew Sullivan and Justin Pittman from Red Hat, who are going to walk through the bare metal UPI with you guys. And then Charro is going to lead the single-node cluster deployment workshop with Bruce Link from BCIT, the British Columbia Institute of Technology.
A
It's a community college, and it's, you know, awesome to have Bruce with us today. And then there's going to be an interesting one on the home lab setup. Many of you might have read Craig Robertson's home lab blog post, so he's going to walk through a bit of that, and then Sri Ramanujam from Datto is going to walk through his version of that. And after each of these little walkthroughs, there will be Q&A via chat, and we are going to answer your questions.
A
He's also in the background here in the chat, and he has graciously set it up in his repo, and we're going to be trying to get you to log issues against the docs and make a pull request, or, if you have another home lab or a different approach to something, to make a request to add that documentation in. So we're trying to fill out our install and deployment documentation. My end goal here is to get all of you to participate in the working group and to help us drive our documentation updates.
A
So let's go forward one more. So today, like I said, in the Hopin chat, please ask your questions, and the moderators (myself, Vadim, Jaime, and others) will be popping around in each of the sessions; we're all empowered to do that, and then we will try and relay them to the folks who are delivering the content and get them answered. And then we'll just do some deep dives into this. So that elmiko repo that you see up there, I'll post it into the chat after I stop sharing these slides.
A
That's where the working drafts of the OKD deployment configuration guides that we're going to be using today live. The OKD docs live at docs.okd.io. And after today, we hope that all of you join the Google group, if you haven't already, and come to our meetings. If you haven't found them already, in the Kubernetes Slack there are two other channels, openshift-dev and openshift-users, where you can ask questions. And as always, we would love it if you joined OpenShift Commons, and OpenShift Commons is organization-based.
A
So if you go to the participants list and you see your company's name already there, your company has already joined, and I can just add you in and add you to the Slack channels there as well, easily. And if your company's name isn't there, just click on the join form and fill it out, and we will get you rocking and rolling.
A
So with that, I'm going to stop talking (I never stop talking), but I'm going to stop sharing, and we're going to switch over and get Charro Gruver on. Let me exit and see if I did that right. And so now, Charro is going to stun us with his "What is OKD?" So take it away, Charro. And you're almost in full screen. There you go.
B
Welcome to the Testing and Deployment Workshop. I'm going to give you a quick overview of OKD. We're going to start with an overview, talk a little bit about where we are, the current state of OKD, spend a little bit of time talking about operators and the OperatorHub, and then finally leave you with a whole bunch of links and references for how you can get in contact with us. So, OKD.
B
What is it? It is a community distribution of Kubernetes, but it's actually more than that. It's a community distribution of Kubernetes that is actually built off of the OpenShift code base. So it is the same code that you are running in your data center or on your cloud provider, if you are a Red Hat OpenShift subscriber. This code base is what we build OKD from, with very little, if any, variation.
B
As Diane mentioned earlier, you can reach us at okd.io to find documentation, reference materials, and CodeReady Containers built from OKD, which I'll talk about here in just a minute.
B
The whole premise behind OpenShift, as sort of a Kubernetes-plus distribution, is that automation is king: automation for installation, automation for patching and updates, and automation for resiliency and recovery in your data center or your cloud platform.
B
With a twist of additional automation provided by that OpenShift plus-plus, as I mentioned: Fedora CoreOS is the underlying operating system on which the whole thing is built, and Fedora CoreOS itself brings a lot of what provides the automation and the resiliency to OKD as a Kubernetes distribution.
B
You can run this on just about any platform you can imagine. We're not quite to arm64, but you can bet there are people who care about arm64 and would love to see this thing running on the edge. So right now: all of the major cloud platforms, Amazon Web Services, Microsoft Azure, GCP (the Google Cloud Platform); you can run this on OpenStack; you can run it on oVirt; and, as some folks are going to demonstrate today, you can run it on VMware and bare metal.
B
We've been taking a lot of community contributions that are improving this platform and making it much richer as we continue to evolve. We've got active collaboration established now with both the Fedora communities, for our underlying operating system, as well as folks who are contributing to the OperatorHub, which we'll talk a bit more about here in a couple of slides. There are quite a number of bespoke operators that are now available for OKD to provide all kinds of value.
B
These operators add value for your clusters. And one of the things that this platform really allows you to do, outside of subscription-based OpenShift, is enable early adoption of upcoming technologies, especially with the underlying Fedora CoreOS. You get a preview of what's coming down the road, which sometimes can bring an extra level of excitement, but always brings an extra level of functionality that you can take advantage of.
B
And just recently, probably six months ago, I believe it was this summer, we finally got CodeReady Containers released for OKD4. CodeReady Containers for OKD is based on the CodeReady Containers code base for OpenShift. So, just like everything else that we do at Red Hat, it's all open source, and it all accepts community contributions.
B
It enables you to run a single-node OpenShift cluster on your laptop or workstation, so it gives you all the goodness that you get with an OKD4 cluster and Fedora CoreOS on your local workstation. You just have to add your code. One quick note on that: the CodeReady Containers release that is currently hanging off of our okd.io site is still built on 4.6.
B
So, those of you who are watching us live, look for the 4.7 CodeReady Containers to be available here within the next week or two, hopefully within the next couple of days. Vadim and I are working on a couple of things that we've got to get in place so that the build will run: some fairly significant changes to the underlying Fedora CoreOS, and a couple of the operators where we need to update the code base to support CRC. So look for that to come very soon.
B
So, operators are like a bundled system administrator that is always with you, always watching the application and ensuring that it continues to run. The core of OKD is built on operators, so everything that provides the functionality, from etcd up, is controlled by operators. But operators also bring value-adds. If you need Rook Ceph as a storage provisioner, well, there's an operator for that. Your internal image registry is an operator.
B
If you need a Kafka cluster, there's a Strimzi operator. If you need a service mesh, well, there's an operator for that too. So operators are a way to bundle the capabilities that give your applications the added richness, resiliency, and capability that you need, so that, as a provider of software solutions, all you have to focus on is your code, and you let the operator take care of everything else.
B
The OperatorHub is where you can go to retrieve these and install them into your local cluster. When you have a cluster up and running, the OperatorHub will be there, and from the console you can navigate to the OperatorHub and go shopping. When you stand up an OKD 4 cluster, all of the operators that are available you will be able to install free of charge. There are no subscription-based operators in there; they will be the community-supported versions of the operators that you would see if you had a subscription-based OpenShift cluster.
B
So if you need Grafana or Strimzi or service mesh with Istio, they will all be there and installable from the OperatorHub. Now, another quick caveat on that: we are still working with several of the operator providers to ensure that the community version of the operator is available in OperatorHub. So for some of them, you do still have to go to the GitHub repo, or wherever the operator lives, to get the installation materials for it.
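Under the hood, installing an operator from OperatorHub creates an OLM Subscription object in the cluster. As a rough sketch (the operator name, channel, and namespaces here are illustrative assumptions, not taken from the talk):

```yaml
# Hypothetical Subscription for a community operator from OperatorHub;
# the name and channel vary per operator.
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: grafana-operator
  namespace: openshift-operators
spec:
  channel: v4
  name: grafana-operator
  source: community-operators
  sourceNamespace: openshift-marketplace
```

Applying a manifest like this is equivalent to clicking "Install" in the console's OperatorHub page.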
A
All right! Well, next up we have Vadim Rutkovsky dialing in from Brno, and he's going to walk us through the current release process and tell us a little bit about where things live in GitHub. So, Vadim, do you want to share your screen and take it away? Hello.
D
My name is Vadim, and I work for Red Hat. My day job is being an engineer for OpenShift, but in the evenings I like to tinker with OKD and other community distributions. So today we'll take a guided tour of how OKD gets built: where things happen, where the source is, and why we need such a complex CI to make it happen.
D
Our final step in the release process is uploading the binaries of the installer and oc to GitHub, but in order to make that happen, we first need to build them from the source code. As with everything else, things happen in the GitHub organization called openshift, where the code for both OCP and OKD is created. For those who are not familiar with the acronyms: OCP stands for OpenShift Container Platform, the product which Red Hat officially supports and provides subscriptions for to clients, and OKD is a community distribution which is related to OCP as well.
D
A repo contains several simple things. First of all is the Dockerfile, because all pieces of OKD and OCP are container images, so the Dockerfile explains how to build it: we build from scratch, then we copy the files in the manifest and label this image as an operator. That means, as Charro has explained previously, everything in OKD and OCP is based on operators. We have a top-level operator called the cluster version operator, and all it does is apply all the pieces to your cluster, and they assemble into an OpenShift release.
D
When the OpenShift console is started and it finds this configuration, it applies the OKD branding, and in the help message you would have the different documentation base URL. All those changes we're looking at are built by CI. You can see we get green marks, meaning it has passed, and everything is done via pull requests.
D
CI builds the image we have submitted and runs some tests if they are present. For instance, other repos might have additional end-to-end verifications for AWS, the same test for AWS but as an upgrade test (using a previous state and a new release with this image), GCP, vSphere, and so on and so forth, depending on what kind of repo it is. And once we're done, it would promote it as an official image in the 4.8 stream.
D
The whole release's parts are stored as part of an image stream in our CI, and once we're done, the CI also tracks that image stream and builds new releases out of it, meaning it is able to compile them into one single image, which refers to a bunch of other images and fetches some metadata from them. For instance, if we use the oc adm release info command on some release image, we would be able to extract URLs to each particular commit.
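For reference, inspecting a payload looks roughly like this (shown as an illustrative terminal transcript; the release tag is a placeholder, and the commands need a cluster-capable oc binary plus network access to the registry):

```console
$ # Show metadata and component images for an OKD release payload
$ oc adm release info quay.io/openshift/okd:4.7.0-0.okd-2021-03-20-000000

$ # Include the source repository and commit each component was built from
$ oc adm release info --commits quay.io/openshift/okd:4.7.0-0.okd-2021-03-20-000000
```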
D
You can see on this picture, for instance, which commit this image has been built from. And since CI can do that, users can also create their own release payloads based on the payloads we've released already, replacing particular images with fixes they would like to test, or changes they would like to have applied.
D
This script shows us that the CoreDNS image in the latest origin (so-called OKD) payload is the same CoreDNS image as in one of the releases of OCP. We fetch the pullspecs for those images, and then we use the command oc image info to display information about labels, layers, environment variables, and so on. And if we diff the output for these images, the only difference is the name they are pushed under; all the rest is the same, because these are the same images.
D
This is the release controller page; that's the front end of our CI, which detects when a new image lands in the image stream so that we can prepare a new release. Let's look at something more greenish, like this one. The diff against the previous release shows that two new images have landed, based on the metadata they contain.
D
We can also build links to the pull requests which caused this change, and so on. And these nightly images can eventually be promoted to stable. For instance, this latest 4.7 release used to be a nightly release with the same date, and to perform a stable release we have a small instruction.
D
All the rest is done by CI itself, which automatically updates the update graph and runs additional tests, so we can see that users on the previous releases can upgrade, and what the best path for that is. Since OKD is slightly different from OCP, as it uses different images in some cases, we also have a different issue tracker.
D
We use openshift/okd issues to track OKD-specific problems. However, since most of the images are reused from OCP (for instance, the console is copied as-is from OCP), any kind of UI issue you're hitting would be reproducible on OCP as well. That means you should file a proper OCP bug, because you can be sure that it happens for OCP as well, and you would get direct developer attention to fix it. And once we're there, we also request you to do one more thing.
D
Let me pick one of my favorite ones. Before an issue gets closed, we also ask you to provide a so-called log bundle, with a lot of logs from the failed installation or from the broken cluster itself. That archive should contain all the logs we need to find out what's happening: which part do we need to fix, how, or what's missing.
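Two commands commonly used to produce such a bundle (shown as an illustration; hostnames and directories are placeholders, and both require access to the affected environment):

```console
$ # For a failed installation: collect bootstrap and control-plane logs
$ openshift-install gather bootstrap --dir=demo-cluster \
    --bootstrap bootstrap.demo.example.com \
    --master master0.demo.example.com

$ # For a broken running cluster: collect a must-gather archive
$ oc adm must-gather --dest-dir=./must-gather
```

Either command produces an archive that can be attached to the issue.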
D
And after we're done, we also upload the client tools to GitHub and send a message to the OKD working group. To reiterate: in the end, OKD is a community distribution. That means all the images we're listing here have their GitHub repos, meaning you can rebuild them, tinker with them, replace parts, and collaborate with us to make it better.
D
So we coined the term "sibling distributions" for this kind of arrangement, and we heavily rely on automation, CI verification, and also user feedback when we are releasing OKD. And that concludes my demo. Thank you.
A
Awesome, thank you. And, you know, someone mentioned, I think just for future reference, that when you have a black background and you're showing something, it's a little blurry in the refresh rate, so maybe a bigger font size next time. We're all learning here too, so we'll figure it out, but that was great. So if you have questions, folks, please ask them in the chat. We're running pretty much on time, maybe even a little ahead of time, which is great.
A
My theory is that this whole upfront section, which is going to set us up for the sessions, will take two hours. We may have a little extra time at the end, in which we will all go grab coffee or more water, and some of us will stay online and answer your questions, if you have them. And then, for me on Pacific Standard Time, at 11 o'clock we will switch over, the sessions will go live, and you will be able to join then.
A
So if you're wondering: I can't advance the sessions to start earlier with Hopin, so they will start on time, especially since some people are coming just for a session, so we don't want to start them early. So with that, Jaime, I'm going to bow out, because we can only have four faces on stage at a time, and let you do your talk and bring Josef in, because he's got the talk after you, and then we will kick Charro off.
A
I think I'll just ping you when Josef finishes, and I'll come back in. So, just so everybody knows, those are the logistics of the day. So, Jaime, if you would like to share the screen and give us a tour of the documentation, we'll cue you up; take it away.
E
Okay, great. Well, thank you very much, folks, for joining this overall community gathering, and for joining the sessions later. For folks that are just tuning in: there will be sessions, broken up into four different sessions actually, for the different types of installations, basically, that you can do. What I'm going to be providing right now is just a quick walkthrough of an installation on vSphere with user-provisioned infrastructure. And what does that mean?
E
Well, user-provisioned infrastructure means that instead of the installer configuring a load balancer within vSphere, or configuring the IP numbers, or any of that, this is all done with infrastructure that the user provides on the outside, before they run the installer. And the prerequisites for that are basically handling DNS, DHCP, a load balancer, and, optionally, a proxy.
E
Josef is going to get more into the details of doing these specific things, but in short, you're going to need a DNS entry for the bootstrap machine, and you're going to need three entries for your master nodes.
E
OpenShift clusters right now support three master nodes in the control plane, as it's called. Then you're going to need an entry for each of the desired workers, and also entries for your API endpoint and your API internal endpoint, which the nodes use to connect to each other, and then a wildcard DNS entry of the form like this, so that once you've deployed apps on there, by default they would get the app name, dot apps, dot cluster name, at your domain. And to give you an example of that, for user-provisioned infrastructure.
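The record set just described can be sketched as a BIND-style zone fragment (the cluster name "demo", the domain, and all addresses are invented placeholders):

```
; Hypothetical records for cluster "demo" in zone example.com
api.demo           IN A 192.168.1.5    ; external API endpoint
api-int.demo       IN A 192.168.1.5    ; internal API endpoint
bootstrap.demo     IN A 192.168.1.10
master0.demo       IN A 192.168.1.11
master1.demo       IN A 192.168.1.12
master2.demo       IN A 192.168.1.13
worker0.demo       IN A 192.168.1.21
worker1.demo       IN A 192.168.1.22
*.apps.demo        IN A 192.168.1.6    ; wildcard for application routes
```

The api and *.apps records typically point at the external load balancer rather than at any single node.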
E
On my end, I'm utilizing the DNS that is provided at the University of Michigan, which is a system called BlueCat running on Proteus, and this is a way of very easily configuring DNS and DHCP. And so you can see, for my demonstration cluster, which you'll see more of in the session that I'm doing, basically you can set up your DNS, and this is what it would look like: you've got your masters and your worker nodes at set IPs, and DHCP.
E
I didn't fill in the details there; that is something that you'll want to do in most cases. The way OpenShift clusters work, you can do static IPs or you can do DHCP, but you cannot do both. And, let's see if I can find it here (I don't have it in front of me), but basically, once you have chosen to go one route, you can't go back to the other.
E
So if you're going to do static IPs, you can do that by setting some kernel parameters with something called Afterburn. This is something that you pass in the configuration of your nodes: you pass a configuration string that's handed to the kernel with your static IP. Or you can rely on just the DHCP on your network and whatever address is handed out.
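For the static-IP route just mentioned, the kernel arguments look roughly like this (a sketch only: the vSphere guestinfo property name is how Afterburn commonly receives them on VMware, and every value below is a placeholder):

```
# Hypothetical static-IP kernel arguments, passed on vSphere via the
# guestinfo.afterburn.initrd.network-kargs VM property.
# Format: ip=<ip>::<gateway>:<netmask>:<hostname>:<interface>:none
ip=192.168.1.11::192.168.1.1:255.255.255.0:master0.demo.example.com:ens192:none nameserver=192.168.1.2
```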
E
Alternately, I took a third path, and I'll get into more details in my session: using reserved DHCP and setting the MAC addresses on the nodes, and there are some advantages to that for UPI that I'll talk about. And you're also going to need a load balancer outside, so that incoming requests to the API and the ingress get passed to the respective machines.
E
So in terms of load balancing, we've got a load-balancing proxy called a BIG-IP, from F5 Networks, and so in my configuration I use a BIG-IP, which allows you to define pools of machines. So here you can see.
E
This is the API pool, and this is the worker pool, and so this will load balance requests to their respective pools. And there's also one thing you don't see here, but that I'll be showing in more detail: you can also do some checks. For those of you that are familiar with the internals of Kubernetes, you know that there are healthz and readyz REST calls that you can make to get the status of your cluster, of the nodes in your cluster, and you can use those in the F5.
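The talk uses an F5 BIG-IP, but the same pool-plus-health-check idea can be expressed with HAProxy, a freely available alternative. This fragment is a sketch, not the presenter's setup; the hostnames and IPs are invented, and port 6443 with a /readyz check is the standard OpenShift API layout:

```
# Hypothetical HAProxy fragment for the API pool
frontend api
    bind *:6443
    mode tcp
    default_backend api-pool

backend api-pool
    mode tcp
    option httpchk GET /readyz HTTP/1.0
    option log-health-checks
    server master0 192.168.1.11:6443 check check-ssl verify none
    server master1 192.168.1.12:6443 check check-ssl verify none
    server master2 192.168.1.13:6443 check check-ssl verify none
```

Analogous backends would cover ports 80 and 443 for the ingress/worker pool.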
E
An advantage of this is that if your entire cluster goes down and the internal notifications aren't working, you have an external source of notification and monitoring to see that, and I'll get into more details on that in the other session. Another thing that you would need is a proxy, if you're going to be on a private network. This is something that OpenShift has been growing into since it was originally in version 3.
E
In version 3, I should say, there wasn't as much focus or support for private networks, and that's been increasing. But if you're going to be doing a private network, you will need a proxy for the calls out of your containers once you have your cluster up, but also for the installation process as well, for pulling down the containers that are part of the installation process. So in terms of a proxy, you can use Squid; Squid is a freely available proxy.
E
It is very easy to set up and has a simple configuration file, and I'll be providing some examples of that in the session that I have. And if you look at the documentation on the OKD website, there is a link to "Installing" and then subsections, and so here is the section on installing on vSphere, and then there is another subsection under that, installing on vSphere with user-provisioned infrastructure, and that is what I've been working with, and it has a lot of great information.
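A minimal Squid configuration of the kind mentioned might look like this (the listening port and subnet are placeholder assumptions; a real deployment would tighten the ACLs):

```
# Hypothetical squid.conf for a private cluster network
http_port 3128
acl localnet src 192.168.1.0/24
http_access allow localnet
http_access deny all
```

The cluster is then pointed at the proxy through the proxy settings in the install config.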
E
I would encourage folks, when they're trying either the standard install or the user-provisioned infrastructure functionality, either one, to check out the UPI documentation for the platform that you're using. The reason I suggest that is that the UPI documentation shows you some of the things that are needed and some of the underlying details of an OpenShift install, and it can be really helpful for understanding overall how the process works. It's sectionized quite well and shows you what you'll need in terms of your nodes, about creating the user-provisioned infrastructure, the ports that you'll need, and whatnot. So definitely check this documentation out. And one of the things that they've done is this.
E
They've broken it out into several sections, with more levels of detail depending on how high a resolution of detail you want to control in your install. So there's a section, "Installing a cluster on vSphere with user-provisioned infrastructure and network customizations," and that one, for example, will provide you details about setting static IPs and disk partitioning and some of the other parts of the install process.
E
The install usually takes about 30 to 40 minutes, and in the session that I'm doing, I'll be talking about how you can automate that process, literally to be able to just run a script that generates the necessary install files and whatnot, loads those into newly created VMs, and kicks off the OpenShift installer, so that you get a very-near-to-non-UPI installation experience, and actually some extras.
E
Let me bounce over here to provide an overview of some of the files that are involved in a UPI installation. What you'll see, after you've generated what are called Ignition config files, is a bootstrap Ignition config, and Ignition is basically the metadata.
E
That metadata is what you put into the metadata of the node to tell it to connect to the bootstrap server or, in the case of workers, to connect to the control plane, to download the necessary components to join the cluster. So you'll see multiple Ignition configs: for the bootstrap, for the master, for the worker. After you've run the installation, there are some hidden files: an install log, and a state file that holds the state of the cluster.
E
Now, there's one thing that I want to point out for UPI installations, which is true across the board and sometimes surprises folks: the openshift-install binary actually ingests and deletes your install config. So you'll have a general install config in which you'll configure the parameters for your cluster, and they reference in the documentation what you need to have in that. One thing that happens, though, is when you run the installer.
E
It actually eats that file up, so you'll want to always make a backup of it. (Trying to find an example of it here... here we go.) You'll always want to make a backup of it, so that you can duplicate the process again without having to do a lot of work. The tool that I'll be demonstrating, which I wrote, actually allows you to have a template; it duplicates that template and then goes from there.
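A minimal sketch of that backup step (the file contents and paths are invented placeholders; the commented-out installer invocation is the step that would consume the file):

```shell
# Create a throwaway cluster directory with a stand-in install-config.yaml
mkdir -p demo-cluster
cat > demo-cluster/install-config.yaml <<'EOF'
apiVersion: v1
baseDomain: example.com
metadata:
  name: demo
EOF

# Back up the config BEFORE running the installer, which deletes the original
cp demo-cluster/install-config.yaml install-config.yaml.bak

# openshift-install create ignition-configs --dir=demo-cluster   # consumes install-config.yaml
echo "backup saved to install-config.yaml.bak"
```

With the backup in place, regenerating the Ignition configs later is just a copy plus a rerun of the installer.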
E
You deploy your infrastructure, you generate your files, then you create the nodes with the metadata from those Ignition config files, and then you run the installer.
A
All right, and we have successfully brought Josef Meier into the fold here today, and we are running on pretty good timing here. So I'm going to let Josef talk about some DNS/DHCP issues and best practices. So, Josef, do you want to try unmuting yourself and sharing your screen? Yep.
C
Currently we are there: OKD helped us a lot, in my company, in getting in touch with Kubernetes and gaining the skills for that, because vanilla Kubernetes is not an easy thing, and we used OKD to learn all of that. And now we are at a stage where we are moving parts of our Kubernetes clusters to OpenShift, for having more production loads and getting the support from Red Hat.
C
And now I'll try to show you a little bit about what I thought were the heaviest steps in the beginning, if you start with user-provisioned infrastructure: DNS, DHCP, and an external load balancer.
C
Balancer,
can
you
see
my
my
screen
share?
I
hope
so
yep.
C
Thank you. This is a diagram of the home lab I'm running here at home. I'm using VMware vSphere; I bought a license for it that's very suitable for home-lab users. It costs around 150 bucks and it's called the VMUG (VMware User Group) Advantage edition: you pay 150 euros for a one-year license, which I think is pretty affordable for home labs. Why do I use vSphere at home? Because I like to have an environment in my home lab that's similar to the one I use in my company. It's running on a Ryzen PC, a very capable one, with 16 physical cores and 32 threads with multithreading enabled. You don't need that many cores.
C
Don't be frightened by that, but I like to have possibilities, to add more workers and play with new things. One thing I tried at home is OpenShift Virtualization, based on KubeVirt, and this requires a little bit more horsepower than you normally have on your desktop or on your laptop.
C
So I have one PC for the VMs. Then I have another computer running as network-attached storage; I'm using the TrueNAS Core community edition, but this year TrueNAS Scale will go into general availability.
C
I like that, because that one will have a small Kubernetes cluster running on it, where you can deploy your Helm charts. I like to have some components outside of my OKD cluster, because I am constantly deleting and creating clusters to test new things, and I like to have the possibility to keep some components outside of my cluster.
C
So, yes, I have a DSL modem, a router that's connected to the internet, and the first thing you should know, if you want to set up a DNS and DHCP server at home, is that you should be sure no other instance of these services is running in your subnet. In my case, I had to turn off the DHCP server and the DNS server running in my DSL router. Well, I have to say, in between, because during the installation you certainly need internet access, and you need the DHCP and DNS servers if things go wrong; but once the custom-built servers are running, you don't need those services in the FRITZ!Box anymore.
C
What do you have to achieve here on the VMware vSphere server? There are a few VMs created during the installation process, and just for your information, I used the instructions from the GitHub repository of OKD; it's located at github.com/openshift/okd.
C
I use Terraform for that: a tool that can stand up infrastructure, using a domain-specific language, and there is also a Terraform provider available for vSphere. I have seven VMs: one bootstrap VM, three masters, and three workers. You don't need that many VMs or nodes normally; this is just my standard setup. I don't want to have to worry about limited CPU and memory space; I want to go and have fun.
C
That's why it's seven VMs. The first step in the installation is that the bootstrap VM starts creating a temporary, "fake" control plane.
C
This takes only a few minutes, I think; it depends on how fast your internet connection is. If you have a local registry in your home lab, things can be faster because of the improved network speed you normally have there. The second step is that, in addition to the bootstrap node, you have your master nodes. The master nodes constantly use the load balancer that's running on the Raspberry Pi to get the ignition configuration files from the bootstrap node, and when the bootstrap node is in a later stage of its installation, it serves this ignition file, via a local web server, to all the masters that are constantly polling for it.
C
Once they get these ignition files from the bootstrap VM, they provision themselves. They normally boot twice, at minimum once, into a new version of the operating system, the Fedora CoreOS version, because initially you start with the VM template that's stored in vSphere. Beginning from that operating-system version, the VMs run waiting for the ignition file; they get it and fetch the new OS version, the one that's pinned for a certain OKD release. They boot into the new OS version and join the temporary control plane that's running on the bootstrap node. Once that control plane is running, in the next step the bootstrap node stops serving the ignition file; the load balancer sees that and turns off the bootstrap communication.
C
In this phase you could, in theory, delete your bootstrap VM, because you don't need it anymore. Then the worker VMs, which have been running all the time and are also fetching an ignition file for the workers, get it, at this point, from the control plane that's running on our masters.
C
They are constantly polling for that, and once the masters' control plane is set up, a web server will again serve the ignition file for the workers. The workers fetch the ignition files, load the current version of Fedora CoreOS, boot into it, and finish the installation; afterwards you have a running OKD cluster.
C
To achieve that, the load balancer and the DNS/DHCP server have to be set up a little bit in advance. I created some documentation about this process; don't get frightened, it's lots of text, and I will only sweep over it quickly.
C
Because I'm not using the DHCP and DNS server only for the home lab, but for my whole home environment, I turned on dynamic DNS, so new devices automatically register themselves with the DNS server and I don't have to maintain a list there manually. For that I have done the following.
C
I have my homelab.net domain, where everything in my home is registered, and then I have a subdomain, c1.homelab.net (c1 means cluster one), where all my Kubernetes nodes, my OKD nodes, are running. Inside it I have a DHCP range, and I use static IPs for the most important nodes. Because I try lots of installation strategies, I like to have fixed IPs for the most important VMs, and for those I use static IPs, for sure.
C
If I create dynamic nodes through MachineSets later, I can use DHCP for them, so I use a mixed scenario.
C
First, you have to do the usual things. I use Raspberry Pi OS on my Raspberry Pi; I update the package list and give it a static IP. Then I install isc-dhcp-server; in my case I use that for the DHCP server. I set up the Ethernet port configuration and do the basic configuration here in this config file, and the first section is for dynamic DNS; there is nothing special about that.
C
This section is served by the DHCP server every time a new node requests an IP address; your /etc/resolv.conf file will be filled from parts of these options. Here we have the definition of our subnet, sorry, our DHCP range, and here are the static IP sections, where I use the MAC address that is configured by Terraform in vSphere to serve the VMs that are registering themselves, or rather asking for fixed IP addresses.
C
I do that for the bootstrap, master nodes, and worker nodes, and that's it for the DHCP server.
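As an illustration of the kind of isc-dhcp-server configuration being described here, the following is a hedged sketch; the subnet, MAC addresses, and host names are invented placeholders, not the actual values from the talk.

```shell
# Sketch of /etc/dhcp/dhcpd.conf for a home-lab OKD subnet (placeholder values):
# one dynamic pool plus fixed leases keyed on the MAC addresses that
# Terraform assigns to the vSphere VMs.
cat > dhcpd.conf.example <<'EOF'
ddns-update-style interim;                      # enable dynamic DNS updates
authoritative;

subnet 192.168.10.0 netmask 255.255.255.0 {
  range 192.168.10.100 192.168.10.200;          # dynamic pool
  option routers 192.168.10.1;
  option domain-name-servers 192.168.10.2;      # the Raspberry Pi itself
  option domain-name "c1.homelab.net";
}

host bootstrap { hardware ethernet 00:50:56:aa:bb:00; fixed-address 192.168.10.10; }
host master0   { hardware ethernet 00:50:56:aa:bb:01; fixed-address 192.168.10.20; }
host worker0   { hardware ethernet 00:50:56:aa:bb:11; fixed-address 192.168.10.30; }
EOF
echo "wrote dhcpd.conf.example"
```

The mixed scenario from the talk maps onto this directly: cluster VMs get `host` blocks with fixed addresses, while anything created dynamically later simply draws from the pool.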
The next thing is setting up the DNS server. A few more files are involved in this, because I use BIND for that. I started with dnsmasq, but I was not convinced by its features, so I threw it out very quickly and use BIND for this as well. You can find lots of information on the internet, and there's nothing special about the configuration I use.
C
We have an access-control list here, where I say that every IP from my home-lab subnet can access the DNS server. I also configure it as a forwarder: if a domain name is not known to this DNS server, it forwards the request to, I think it's Google's, DNS servers on the internet.
C
Here I turn off a few security switches, because I had problems with them and I didn't have the energy to find out how to make it really secure, but I will improve that next time; it's a side task I gave myself. Here I define my zones: I have a homelab.net zone, and I have the zone where OKD will run its VMs.
C
It references a file where I configure the records as described in the official documentation, and I have a reverse zone set up, because it's best practice. You don't really need it for a home lab, but I'm using it because I wanted to try out how to set this up. And here, yes, this is the zone file for my home lab in real life.
C
Lots of entries are here, because I use things other than OKD that also use this DNS server, and this is the setup for my reverse lookup, the reverse zone file. So here comes the interesting part, because here we have the zone file for c1.homelab.net, cluster one, and you will see lots of records here.
C
These are the records that are required by OKD to work. We have a wildcard CNAME here, so everything under it goes to the load balancer. This is the internal API Jamie talked about; we have an external API record; and here we have the workers, the masters, the bootstrap node, and the load-balancer node. That's pretty much it.
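For reference, here is a sketch of what such a BIND zone file typically looks like for a UPI cluster. The names and addresses are placeholders, but the record names (api, api-int, and the *.apps wildcard) match what the official documentation requires.

```shell
# Sketch of a BIND zone file for c1.homelab.net (placeholder IPs).
cat > c1.homelab.net.zone.example <<'EOF'
$TTL 300
@         IN SOA  ns1.homelab.net. admin.homelab.net. (2021032001 3600 900 604800 300)
@         IN NS   ns1.homelab.net.

lb        IN A     192.168.10.2    ; load balancer (HAProxy on the Raspberry Pi)
api       IN A     192.168.10.2    ; external API endpoint -> load balancer
api-int   IN A     192.168.10.2    ; internal API endpoint -> load balancer
*.apps    IN CNAME lb              ; wildcard for the ingress router
bootstrap IN A     192.168.10.10
master0   IN A     192.168.10.20
worker0   IN A     192.168.10.30
EOF
echo "wrote c1.homelab.net.zone.example"
```

A matching reverse zone, as mentioned in the talk, would carry PTR records for the same addresses so that reverse lookups resolve back to the node names.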
C
The next section is the load balancer. It's the third and last component I have set up on my Raspberry Pi. HAProxy is the load balancer; I like it a lot, because it is rather easy to set up and it's fast. Don't get confused by this section; it's pretty much the default.
C
You have a dashboard where you can see which backend nodes are available and responding, and which are not.
C
Here we have the load balancer for the API, and here we have the load balancer for the ignition configuration file server. Maybe you remember: in the first step the bootstrap node serves the ignition files, and afterwards the masters serve them as well; once the masters serve the ignition file, the bootstrap node stops serving it, and that switchover is controlled by the load balancer. And here we have some routes, port 80 for HTTP to the load-balanced backends, and it's the same for HTTPS.
C
I have added all my nodes to these backends, because I like to move the OpenShift or OKD router around between the nodes to test things. That's why I have not only the workers in the list, but also the masters.
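A condensed sketch of the HAProxy configuration being described follows. Ports 6443 (API), 22623 (machine config / ignition), and 80/443 (ingress) are the standard OKD frontends; the IP addresses are placeholders, and the /readyz health check shown on the API backend is an illustrative option (it comes up again later in the Q&A), not necessarily what was used here.

```shell
# Sketch of haproxy.cfg for an OKD UPI cluster (placeholder addresses).
cat > haproxy.cfg.example <<'EOF'
frontend api
  bind *:6443
  mode tcp
  default_backend api-be

backend api-be
  mode tcp
  option httpchk GET /readyz HTTP/1.0          # probe the apiserver readyz endpoint
  server bootstrap 192.168.10.10:6443 check check-ssl verify none
  server master0   192.168.10.20:6443 check check-ssl verify none

frontend machine-config
  bind *:22623
  mode tcp
  default_backend mc-be

backend mc-be
  mode tcp
  server bootstrap 192.168.10.10:22623 check   # drops out once it stops serving ignition
  server master0   192.168.10.20:22623 check

frontend ingress-http
  bind *:80
  mode tcp
  default_backend ingress-http-be

backend ingress-http-be
  mode tcp
  server worker0 192.168.10.30:80 check
  server master0 192.168.10.20:80 check        # masters included so the router can move
EOF
echo "wrote haproxy.cfg.example"
```

The bootstrap-to-master switchover described above happens naturally here: once the bootstrap stops answering on 22623, its health check fails and HAProxy routes only to the masters.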
C
If you reboot your node, you should check whether all systemd services are still running, or whether errors are being thrown. You can use /var/log and syslog to troubleshoot if something goes wrong, but my experience is that if you follow this guide, it's not as much as it looks like, and normally it should work rather fast. In the end you have the external components: load balancer, DNS server, DHCP server. I think this setup is rather common in lots of companies.
C
I could imagine that's why I use this, and not the easier-to-set-up IPI installation method: because I want to try out things that I also have available in my company. And that's pretty much everything I can tell you about that.
A
Perfect, and thank you for that. When we were discussing how to run today, one of the things we figured was generic to everybody's deployment was figuring out these bits, so, Josef, thank you very much for taking the time to walk us through your experience with that. We'd love to hear everybody else's experiences, and we'll probably rinse and repeat that a little bit during some of the sessions here. We have a bit of time right now, and I think what I would like to do is take a few minutes to see if anyone's got questions in the background, in the chat.
A
That would be great if you could do that now. We're all, as I keep saying, learning this new system, so hopefully you don't have to log back out and log back in to get to the backstage. But, Charro, if you can join us again; I don't think Christian has managed to join in there. If you have questions, that would be great. In the interim, Vadim, I know you showed off Charro's wonderful issue, with the good documentation and the right comments and tags and everything.
A
Maybe you could take one more moment and walk through that, and talk a little bit about why that's a good issue, and then we'll wait and see if people have some questions in the chat.
D
Sure. The core of the problem is that there are different assumptions about what people try to deploy versus what they expect to happen, but there are lots of errors along the way. There are simple errors which we can prevent, and there are a lot of things from internal infrastructure which we have no awareness of.
D
We hit this problem during the 3.x days, where we would ask people to show some random pieces of logs, the package version of this and the package version of that, and that took quite a while. So in the OKD 4 and OCP 4 design, the goal has been to collect the necessary information right on day one, and we came up with two different tools to have that provided to us for any issue.
D
One of them is the log bundle collected by the installer, which basically gives us all the logs from the bootstrap node; if masters are available, it will also fetch their logs. All of that is centered around Kubernetes, meaning we can collect a lot of information from the installer using Kubernetes primitives; for example, the installer version is stored in a ConfigMap. Since that gets stored in the log bundle, I don't have to ask a person which particular binary they have been running.
D
It's been recorded already, and that prevents a bunch of issues like "I think I ran the 4.7 installer, but in fact I made a typo and accidentally ran the 4.5 installer", and so on. That can happen, and no one is to blame, but what we want is actual auditing of all the events.
D
So Charro's issue already had a log bundle, which gives us a lot of information: which version are you installing, which installer has been used for that, what infrastructure is it running on, have the masters joined already? That saves a lot of time versus asking the person directly and finding out the truth that way.
D
Of course, some issues don't require a log bundle; they're clear as day. For instance, "OKD clusters don't have the branding set up": that's pretty easy for us to fix, and you don't have to provide a bundle, but if you would, that would be very nice. We also work to ensure that the log bundle and the `oc adm must-gather` archives do not contain any sensitive or private information. Something still might slip in, so please review the file; if, say, some secret with my vSphere password accidentally gets logged, that's another bug, of course. But if the issue has a log bundle, that gives us a lot of information, so we can start working without spending a lot of time figuring out different details. So that's basically that.
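The two collection tools Vadim mentions are invoked roughly as in the sketch below; the install directory and node addresses are placeholders, and the real commands are kept in a checklist rather than executed, since they need a live cluster.

```shell
# Checklist of what to attach to an OKD GitHub issue, per the discussion above.
# (Placeholder paths/IPs; both commands are the standard log-collection entry points.)
cat > issue-checklist.txt <<'EOF'
1. Installer log bundle (bootstrap + reachable masters):
     openshift-install gather bootstrap --dir=<install-dir> \
       --bootstrap <bootstrap-ip> --master <master-ip>
2. Post-install cluster diagnostics:
     oc adm must-gather
3. Review both archives for leaked secrets before uploading.
EOF
echo "wrote issue-checklist.txt"
```

Attaching the resulting `log-bundle-*.tar.gz` and must-gather archive up front is exactly what makes an issue like Charro's quick to triage.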
A
Awesome. I'm testing using a poll here right now, and I may have blown it already by showing the results before I actually asked the question. So, I don't know, can one of you try and test that? Can you still vote in that poll, or do I need to recreate it?
A
Okay, okay, here comes somebody testing it, so it's still live. Perfect, that's great. There was one question from Jesper in the chat, and I'm not sure which one of you would like to attempt to answer it, but it was: "I didn't quite get the note on IPI. What is the state of the OKD IPI vSphere installer? Is it fully supported?"
A
He says the docs are a bit scarce on the subject, as far as he can see, and that's why we're here today, too: docs are what we're all about right now. So who would like to take that one? Vadim, Charro, Jamie? Yeah, okay.
D
Well, OKD as a community distribution aims to provide all the install methods and all the functionality of the OCP platform, building on top of community bits, meaning you would get all the install methods. There is a discussion about the assisted installer and so on, so that's definitely on the table.
D
However, there are bugs. For instance, the Azure installer is in very poor shape, mostly because Red Hat as a company can communicate with Microsoft as a company and add RHCOS to the marketplace, so you are able to start a new RHCOS image from scratch and that doesn't take time. But OKD is built on top of Fedora CoreOS, meaning Fedora as a community has to talk to Microsoft as a company and convince them that Fedora CoreOS is a viable distribution that needs to land on the marketplace.
D
That's the core part of the installer; other parts are absolutely there. So what users have to do right now is upload the image manually and start a temporary VM, so that they can use the temporary image they have uploaded, and run the Azure installer there. From that point on, everything goes just as it would in GCP or AWS or anywhere else where we already have the required image uploaded, but this initial blocker, where Fedora CoreOS is not uploaded to the Azure Marketplace, is really stopping us now. When it comes to vSphere, as you can see, vSphere is incredibly popular here, and we've already added a vSphere test for every single nightly. It is limited; it may seem totally broken, but what we see is a limitation of our CI, which we'll be fixing with the infra guys, and it hasn't covered everything yet.
D
So we are not rushing to add UPI tests, because while CI might show that everything is perfect, it doesn't mean that users are actually succeeding on that front. We're relying on direct community feedback, meaning "you broke static IPs", for instance, and so on. So we collect logs and try to figure out: is it an infra problem, or is it an OKD problem?
E
We had a question from Mike McCune a little bit earlier in the presentation, on the stage, that I wanted to get to. Mike asked: "Jamie, the DNS records you reference, are those just A records, or do you add the pointer (PTR) records as well?" This is interesting; it actually brings up an interesting topic.
E
It's been particularly relevant the past couple of months. In my vSphere UPI setup I am using both forward and reverse records in my DNS configuration.
E
The reason for that was that I was utilizing reverse lookups from the node's OS to get the host name, the fully qualified host name. Around, what was it, the end of November or beginning of December, an issue was introduced upstream from Fedora where the precedence of the methods for determining the host name was rearranged: what I relied on, reverse DNS, was pushed to the back, and a mechanism further up front just named the node "fedora". So you ended up with a bunch of nodes all host-named fedora. There's a workaround for that right now, and there's also a script in my repo to go and launch onto the nodes and fix their host names, but yeah.
E
I still do rely on reverse DNS, and I think that's something that should be supported; a fix is working its way through the system to address the issue that was introduced. Vadim might also have some more info on that as well.
D
Yeah, it basically boils down to a NetworkManager bug. The problem here is that we, the OKD community, have our own goal of deploying a fully functioning cluster, but Fedora CoreOS has a broader scope, meaning that if they break us, we still have a voice, but our voice is not the only one. We have to carefully clarify why this is important for all Fedora CoreOS users. In the OCP situation it's entirely different; it's much simpler, because RHCOS is designed solely for OCP.
D
Things are much more complex here, so it might take some more time, but I think we have a great experience with NetworkManager in particular. We've reported a bunch of bugs to them and they are very responsive, and we can skip a few layers and go straight to Fedora Bugzilla and ask them, because they seem to be quite familiar with what Fedora CoreOS is and what the OKD use cases are, and the amount of testing we provide is very helpful for them as well.
A
I have to unmute myself. Bruce and Josef are having a little back-and-forth here about whether or not Josef has used HAProxy with readyz endpoints for health checks. I'm wondering, Josef, if you want to pop back into the panel and talk. There we go, I'll add you in; here you go. And Bruce, if you want to join too, since we're not sharing screens, we can have more people in. "No, I haven't."
A
That was a short answer. The other secret reason I wanted that: you're also having a side discussion there, that I can see, around Azure with John Fortin. So maybe, Bruce, if you want to join too, I'll add you into the background; unfortunately I can't add John. But Josef, do you want to reiterate a little bit about what you found with running on Azure?
C
It works; you don't have to change anything in the installer, no tricks. You just need a fast internet connection, because Fedora CoreOS, the base image, is not available on the Azure Marketplace.
C
You have to bring it to Azure, and this means that the installer first has to download it locally, where you run your installer, then expand it, to eight gigabytes I think, and upload it again to Azure, and this takes more than half an hour, on my side at least, and I upgraded my internet connection. The normal, easy way to go is to run the installer in an Azure Cloud Shell, because the internet speed, if you are inside Azure, is much higher than you normally have at home, and there I always get a 100% success rate installing OKD on Azure. It works pretty well, and the only change from a normal installation is that you have to download the Fedora CoreOS image. I don't understand.
C
I really don't understand why that is so, but I've accepted it, and that's also a reason why we use OCP, Azure Red Hat OpenShift, on Azure instead of OKD, because this doesn't seem to be getting resolved in the short term.
A
All right, cool. We're having a little technical difficulty getting Bruce into the back here to ask his questions, so we'll answer them there. All right, folks, I did run a poll, which I think is always fun, to test out the functionality here, and the results of how people are using OKD are pretty much 50/50 between production and home-lab use, which I think is interesting, because I love all you folks who are using it in production: John and Jamie, and Bruce, who's coming into the back, and there's Josef, yeah.
A
You guys really are, we love you. And then the home-lab folks are very interesting to me too, because they give us some really nice feedback and help onboard and train people up, so that's great. And nobody's experimenting at their company, which is what I would be doing with OKD: once I've seen all the issues and everything going on, I think I would be reserving it for some experimentation. And somebody is using it for something else.
E
Just to clarify what I'm using it for: I'm actually using it as a development platform for developers at the University of Michigan, to get familiar with OpenShift and to test and deploy their applications. And then I'm also doing builds, testing the build process, so I have a separate OKD cluster that's just getting rebuilt all the time, to do tests of installations and various things.
A
So skeleton just nodded in; that person must have been the other one, and that was their answer: not in production, but a dev/test environment, which to me is the experimentation phrasing I was thinking of, that it would be a really great place to do your dev and your testing.
A
So that's awesome, and we do know there are quite a few people who are using it in production. I'm always surprised, maybe that's just because I'm so risk-averse myself, but I've seen a lot of use like Jamie's describing, and Bruce, if he manages to get in, at .edu's as well. So I saw there were a few other folks here.
A
I think there's someone here from one of the Hong Kong universities, and other places, but that's also a nice way to get people trained up on using containers and understanding cloud-native technologies. There have been a lot of .edu's consistently in the community for a long time; I've been working on OpenShift since it was Origin, now renamed OKD, and it has really consistently been a good testing ground and training ground for people in the .edu space.
A
So that's wonderful, and it has been a wonderful collaboration with the Fedora community. I can see a few things in the chat where in the past we may have tended to blame Fedora kernels for being buggy when it was something on the OpenShift side, and I think the collaboration that goes back and forth between the OpenShift engineers, the community resources in OKD, and the Fedora folks has really been amazing and very healthy. So we're really kind of excited about that.
B
I'd use an OCP cluster with a subscription to Red Hat, but I might be a little bit biased.
A
Yeah, it's interesting, because OpenShift, as Charro said in the intro on "what is OKD", really is Kubernetes++; there's a lot more in it than just DIY Kubernetes. What we're seeing, I like to think of DIY Kubernetes as the pipeline for people who want to use OKD: you go and deploy Kubernetes, and then you realize you don't want to manage it.
A
And the operators concept really has changed the game, I think, in deploying, installing, and managing this container orchestration. And Vadim has some opinions too, so, once you...
D
I think a lot depends on what purpose you're filling with OKD, and what your use case is. Basically, I don't work for sales, but from what sales told me, where they always start is: what goal are you trying to complete? It doesn't mean we would start selling you Ansible or OpenStack or whatever; what matters is that you succeed, because that ensures the collaboration will be long-term, and it's the same for our community.
D
Say you would not use OKD; that's entirely fine, but we're interested in which goals you would like to have completed, because OKD might fit in there, or another tool might be useful. For instance, I run a plain Fedora CoreOS image because it already has Podman, and if I don't need to change it a lot, I can just run a couple of containers and leave them there; it gets automatically updated. Then you probably don't need a complex container infrastructure and management for that.
B
In all seriousness: I've only been with Red Hat since August of this past year, and at my previous employer we actually did run OKD clusters, a lot of OKD clusters, in our lab environment. I used it in a way very similar to how Jamie's using it at the University of Michigan: it was a platform to teach developers in a safe environment, where they could destroy the entire ecosystem and nobody got hurt.
B
It was a way for us to try out new releases of OpenShift before we updated any of our production environments. We ran OCP in production, and having the confidence that OKD is built from the same code base as OCP allowed us to do that experimentation.
E
Yeah, and I would echo that. Actually, at the University of Michigan we are doing OKD and OCP and also Fedora CoreOS, and I run Fedora CoreOS just as a single OS and do some development work on that.
E
I would encourage folks to check out Fedora CoreOS. We've given it a little bit of discussion here, but it is basically an operating system for container usage and container development, and it's going to be making some headway, I think, even in research areas in .edu and whatnot, because there are a lot of instances where people need an operating system to do research or whatever, with a lot of dependencies that work well with the container metaphor, that work well in a container environment.
E
So I would encourage folks to check out the Fedora CoreOS website, and there's also a really excellent working group there. I participate a little bit, and Christian Glombek is another person who participates. So definitely check out Fedora CoreOS as well.
C
We use OKD for development; we build software on it, we run lots of applications, so it's our workhorse for everything, and more and more applications from central IT, for internal customers or for external customers, are running on OKD. Currently, I don't know how many developers we already have on it; it must be more than 100 now, and it's constantly growing. Each week I have onboarding calls. I can't talk about the applications, but it's a very interesting task.
C
Together with Azure and all the added services you have there, databases and so on, it's an absolutely great experience. I also like that you have the same user experience on-premises and on other clouds; that's also big added value for me, since I know the stack is absolutely the same everywhere. Just below the operating system there are differences, you have Azure, you have vSphere, but everything going up the stack is the same.
A
All right, well, I just created another poll, because that's what I get to do, and I'm just curious: how many of you who are here today have actually joined the OKD working group? By that I kind of mean the Google group that gets you on the mailing list and gets you all the announcements; that's kind of Diane's theory of joining. And I'm just curious how many of you have; is this really the first time you've participated in an event?
A
By the meetings I mean the actual weekly or bi-weekly cadence of OKD working group meetings, and if you haven't joined, why aren't you? That's really the question. And how did you find out about this event if you're not on the mailing list? That would be a curious thing too, for me, as the person trying to corral all of you into participating and making sure you all have the information you need.
A
So I'm thinking we'll take 15 minutes before the next session starts, and everybody who's got a session, and I can see Andrew and Sri and other people there, you can join your sessions five minutes before they start, so the moderators can jump into that session, and people can join the sessions that they want. We'll see how populated some of them are.
A
If everybody's in the vSphere one, then we will still motor through each of the other ones, because we will try to get those sessions recorded, just like we did way back with the OKD marathon; the YouTube watching of those things was really key to getting more people on board. So I'm going to leave that poll running, if everybody is okay with that, and see if anyone has any more questions before we take this bio break.
C
Cool. I don't know if people know that a single-node cluster for OKD and OpenShift is coming in the next weeks.
C
Yeah, I think in the next, yeah, 12 weeks or so, if you can count it that way. And I think it's also a game-changer for OKD, especially for home labs, to try it out and then scale out to a full cluster, if that's possible; I don't know.
D
That reminds me of another important difference between okd and ocp: we as a community decide when to release and what to release. Basically, we may get the feature joseph has mentioned, bootstrap-in-place, meaning when you create a single-node cluster you can reuse one single machine as the bootstrap, and then it boots into a master.
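As a rough illustration of the bootstrap-in-place flow described above, a minimal install-config sketch for a single-node install might look like the following; the field names follow the upstream installer, but the cluster name, domain, and disk path here are placeholders, not values from this session:

```yaml
# sketch of an install-config.yaml for a bootstrap-in-place single-node cluster
apiVersion: v1
metadata:
  name: sno                 # placeholder cluster name
baseDomain: example.com     # placeholder base domain
controlPlane:
  name: master
  replicas: 1               # single node: one control-plane machine
compute:
- name: worker
  replicas: 0               # no separate workers; the master is schedulable
bootstrapInPlace:
  installationDisk: /dev/sda  # disk the machine reinstalls onto after bootstrapping
```

The key idea is the `bootstrapInPlace` stanza: instead of a throwaway bootstrap VM, the one machine bootstraps itself and then pivots into being the master.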
D
So this feature is available in 4.8, and we already have okd 4.8 nightlies, so you can try them out right now. But when it becomes stable, it's up to us to decide, because we might have entirely different requirements, like a huge focus on vsphere or UPI and so on. And if we decide that we want this feature sooner, before, well, ocp 4.8 becomes GA, we can do that, absolutely, and okd folks would be super happy about this.
D
If we want to delay this because of a known issue, then we can delay it as long as we want to. But you can give it a try already now and report back to us; we will be very, very interested in this, yeah.
B
A
There's one more question for vadim in the chat; joseph might have answered it, but: are some of the cluster operators still on docker hub? They read some place that after the docker hub limitations, all okd images were moved to quay.io.
D
Partially correct. All the operator images are already on quay, and we push our images there. There is, however, one component, the samples operator, which still fetches images from docker.io, because another red hat project called software collections is publishing to docker hub as well. So what we'll do is we'll contact them.
D
We'll ask them to move to quay.io, because okd clusters are using those images and getting hit by rate limiting, and ask them to push there. Effectively, we'll be consuming images only from quay or from any other source where people are not rate limited, because that's a huge issue for our home labs, in fact, and it breaks our vsphere tests. So that's in progress.
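For home labs hit by the rate limiting mentioned above, one workaround on the cluster side is an ImageContentSourcePolicy that redirects pulls to a mirror you maintain; this is a sketch, assuming you have already mirrored the repository to your own quay.io org (the policy name, source repository, and mirror path are placeholders):

```yaml
# sketch: redirect digest-based pulls from docker hub to a quay.io mirror
apiVersion: operator.openshift.io/v1alpha1
kind: ImageContentSourcePolicy
metadata:
  name: dockerhub-mirror          # placeholder name
spec:
  repositoryDigestMirrors:
  - source: docker.io/centos      # repository the cluster would normally pull from
    mirrors:
    - quay.io/myorg/centos        # placeholder mirror you keep in sync yourself
```

Note that this mechanism only applies to pulls by digest, so it helps operators that pin images by digest rather than by tag.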
A
Cool, and then patrick is asking what the minimum specs are for a single-node cluster, and charo is going to be showing that in the single-node cluster session and will talk about that too, if you want to attend that. And then, yeah, my goal for today, folks: I set up the sessions to be very long.
A
So if you don't end up taking the whole thing, I'll be online for everybody, but we can stop the broadcast when the last session completes its run and peters out, shall we say. So that's my goal, and my other goal is for everyone who's listening, who's a participant or an attendee.
A
If there's something missing in the docs: jamie, whomever, if somebody could throw the docs that are in mike's thing into the chat again. So while we take this break before we get started again, take a look at that. That's where I would love everybody; my fantasy island, this is where I live.
A
My fantasy island is if we could get a couple of folks who are not in the working group, or haven't logged an issue or done a pull request against anything, to surface themselves today and look at that documentation, even if it's a grammar mistake we've made or, you know, an additional stub for another deployment target that we haven't done. If you could take the time, take a look at that.
A
While we take the break, start, you know, seeing if you could help us out with these documentation issues that we have. We know we have them; we are not perfect, though we do think you're all psychic and you know what we're talking about.
A
So if we are not explicit enough or you need more detail, let us know, and we will be taking these and cleaning them up and then moving them to a proper location in the okd official repo once we've gotten to a good point there, and so we'll still be referencing them, probably in a few blog posts on okd.io, for the next little while.
A
But the goal is to get some of you to take a look at that, see what we're missing, make a pull request, put an issue in, fix our docs for us. Help us help you. And we'd love to know more about what your use cases are for okd, where you're deploying it, what issues you're running into. Really, this is your team here. And yes, joseph, thank you for the plug for the blog; we just got that added.
A
So you'll see my one blog about this event there, and if you have a deployment or a tip or a trick or something that you want to blog about, in that repo for okd.io there are instructions on how to add that. There are also, not quite yet, instructions, but if you are using okd in production, or at an edu site, or in your home lab, and you would like to list your organization as a participant in okd, we're going to be adding that into the notes, and maybe that's what I'll do this afternoon.
A
While you all are doing this, I'll add not just how to do a blog post but how to add yourself to the little yaml file that I'm creating there too, because we would love to know who you are and where you're using it, and start growing the community.
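Since the participants file mentioned above is still being created, its exact schema is not defined anywhere yet; purely as a hypothetical sketch of what one entry might look like, with every field name and value here a placeholder:

```yaml
# hypothetical entry in the okd.io participants yaml file (schema not final)
- organization: Example University   # placeholder organization name
  environment: edu                   # e.g. production, edu, home-lab
  deployment: vsphere-upi            # placeholder deployment target
  contact: someone@example.com       # optional placeholder contact
```

Check the okd.io repo for the actual instructions once they are published, since the real file may look quite different.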
B
C
B
Going
from
a
single
node
to
multi-node
we're
really
talking
about
the
control
plane
to
go
from
which
you
need.
If
you
want
high
availability,
I
would
highly
recommend
your
control
plane
being
more
than
one
node.
But
if,
if
you
start
with
a
single
node
cluster-
and
you
want
to
make
your
home
lab
bigger,
you
can
add
worker
nodes
to
it,
and
if
we
get
bored
and
run
out
of
time
in
the
single
node
workshop,
we
might
actually
try
that
I've
done
it
in
a
long
long
time.
So,
yeah.
B
A
Yep, yeah, so with that, I'm going to let everybody who's on the main stage pause, and we will be back. I will hang out here for a little bit, so people don't freak out and think we've dumped them, but I'm going to go grab a glass of water, and I hope all of you take your bio breaks, and we will be back in 15 minutes. First-session folks, five minutes before is when you can join your session, and jamie and I will be in the back trying to help the session moderators make sure they're set up correctly. And vadim and charo,
A
I think I've empowered you two, maybe not charo, but vadim for sure, and myself and jamie can pop back and forth into other things and help out as needed. And if you're really messed up and you've lost yourselves, go over into the reception area and hop in, and I will try and keep an eye on that. So good luck, have a bio break, and we will be back in 15 minutes.