From YouTube: OKD Working Group Meeting 10-26-2021
Description
The OKD Working Group's purpose is to discuss, give guidance to, and enable collaboration on current development efforts for OKD, Kubernetes, and related CNCF projects. The OKD Working Group includes the discussion of shared community goals for OKD 4 and beyond. Additionally, the Working Group produces supporting materials and best practices for end-users and provides guidance and coordination for CNCF projects working within the SIG's scope.
https://okd.io
A: Well, let's go ahead and get started. Let's do a quick agenda review. Is everyone happy with the agenda? Is there anything that you want to add, remove, or change?
A: All right, folks seem to be happy with it, so let's jump right into it. Christian, you are here and Vadim is not; could you let us know the latest in terms of OKD? Really, whatever you have.
A: Okay, Christian, we don't hear you, or you're not around; it's hard to tell. You are still muted, so... okay. Let's move on; Christian is going to connect again, I think. Let's move on to documentation updates with Brian.
B: Okay, so there's not actually a lot to talk about since last time. I did ping Diane earlier, trying to get the DNS switched. Everything's ready to go, and we're just waiting for the DNS to switch over to the new site. The new site is being served by GitHub, but when it transitions over, we're all good to go.
B: Other things: 4.9. If you've looked at the main documentation, you'll see that we now have versions rather than just going to the latest. When you go to the official documentation at docs.okd.io, you can now choose the version that you want. Initially it said the latest was 4.9, but I can just say that that's now been resolved. And then the other thing: there's a link to the code of conduct. Jamie, do you want to talk about that?
A: Sure. At the docs meeting we went over the code of conduct that Diane pilfered, I mean shared, from the Ansible group, who in turn (if you look at the bottom of that document) based it on five or six other codes of conduct. The documentation group signed off on it; we're happy with it. Michael is going to go through and change all of the references to Ansible to OKD, and anything else that needs to happen to make it our own.
A: If anyone has any questions on the code of conduct that could be answered now, we'd be happy to answer them, or you can submit a question later to the group, in the email, or anything like that. Any questions that folks would like answered now about our motivation and plans for having a code of conduct?
A: Okay, great. If anything comes up and you have any questions, feel free to reach out. Ideally, this will be done by maybe the end of the month, or at the latest by the end of next month, and then we'll get it up on the website. And then, at the beginning of every meeting (this meeting, the docs meeting) we will mention the code of conduct, just so that folks attending the meeting can be aware of it. And Sandro,
A
If
you
could
do
the
same
in
terms
of
your
meeting
as
well
your
subgroup
meeting
that
would
be
helpful.
Just
at
the
beginning
of
the
meeting
mentioned,
you
know,
you
know
this
is
you
know,
as
this
is
an
event
of
okd,
you
know
we
have
this
code
of
contact
and
we
ask
that
people
adhere
to
that.
A: Let's see, I think that's it for docs. And big kudos to Michael; he's not on this call, but he did a lot of work. And, of course, a lot of kudos to Brian. My theory is that the DNS outage at Red Hat last week is related to all of this. I'm sure it's totally good.
C: I don't think that's it. There are just two folks that I normally ping to get this stuff done: Will Gordon, who also just had a child, so he's been out on leave, and the other person is in the Czech Republic. I pinged him the week prior and I didn't [hear back], so I'll just ping him again and keep pinging until the DNS is pingable, and we'll get there.
D: Hey everybody, can you hear me? Does it work? Yes? Yes! Thank you. Okay, sorry about that. I have a new laptop and too many microphones, and BlueJeans insists on choosing the wrong one, apparently. Well, no, it works. So yeah, regarding our release schedule: I think there's been a new release, he cut the new release, and yeah... I was out last week, so I don't really have any other news than that.
D: Unfortunately, rmci is still blocked by some internal things with regards to our build pipeline, but yeah, we're working on it; it shouldn't be too long. So hopefully it's going to be an early Christmas present. I do plan on getting that done in the beginning of November.
D: Yeah, I'm not sure; I think so. Vadim and I have been working on creating all the job configurations for our CI build system, for Prow, of the OKD build system essentially, and we should be there soon. Once we get an upgrade running, finishing, succeeding to 4.9, I think we should be ready to switch over. I haven't synced with Vadim since the week before last, so I don't know if there are any current issues with that, but I'm not aware of any.
A: Right, in the past Vadim has created a "road to" document. So, for example, back in June he had a "Road to OKD 4.8" document.
A: It doesn't look like he's created one for 4.9, but we can create one, and Christian and Vadim and anyone else can throw things in: things that are blockers, or things that we would need to deal with shortly after the release. So that's a great suggestion, Sandro. And then also, I think you also mentioned a list of features; that would be something to get ahead of the game on.
A: That would be really cool, if we could get ahead of the game on that and, as 4.9 is coming, get something together that we can throw on our website, throw on social media, and throw in the chat about "hey, OKD 4.9 is going to feature XYZ cool things." So if anyone wants to help with that, you know.
D: I think that's a great idea. And actually, just before we cut the 4.8 release, which isn't too long ago if I remember, I went through that list again and actually created PRs to update the configs for all the branches that we have now, which is 4.10 and 4.9.
D
So
most
of
that
should
now
be
in
place
for
4.9
as
well
already
and
there
shouldn't
be,
but
we'll
we'll
have
to
check.
So
if
somebody
could
open
open
that
issue
and
essentially
copy
over
that
list
of
things
that
vadim
had
for
4.8,
then
I'll
go
through
it
again
and
make
sure
everything's
in
place
and
yeah
with
regards
to
features,
I
I'm
not
not
aware
of
that
list,
but
it
does
exist.
D: I think, and it should be the same features as OpenShift 4.9, yeah, OCP, so that should really be the same, and maybe we can just copy that over from some OpenShift blogs. Yes, there's...
C: There's a bunch of 4.9 blog posts and things like that that we could cross-reference, maybe in the next docs meeting, Jamie. I was thinking that it might be a nice thing to take the 4.9 release update and make you the author of it. Whether you do all the work or not doesn't matter, but it would introduce you to the greater Red Hat OpenShift ecosystem as being one of the co-chairs here. And we can grab some of the content, probably, from the OCP updates.
C: Vadim just texted me; he's babysitting. I asked him if he was creating that roadmap, the "road to 4.9" doc, and I haven't got a response yet, so perhaps he's got his hands full, literally. Yeah. So I think maybe next week in the docs meeting, we could take that up as a topic and figure out how we can do that on a regular basis, whenever we do have a major release, as the docs team:
C: take on doing that, and then we can rotate who authors it, so we can showcase different people from the working group each time. And, you know, I have tons of video of people from the product management team talking about the latest releases, but I think on the OpenShift blog there are a few 4.9 release updates; I think Rob Szumski wrote them this time around, I'm not sure, but we can grab those and talk about them next Tuesday.
D: Fantastic. One thing to get in for 4.9 would be a newer Fedora CoreOS release, because I think we're still stuck on a pretty old version now, and I think that last outstanding issue has now been resolved; at least, that was a race condition somewhere, I think. Yeah.
C: Yeah, so then we can sneak that into the update as well, and maybe get a quote from Timothy, as our resident Fedora person, on there. And maybe this is for Neil and the dado folks: the CRC build for the latest release, are there any gotchas for doing a 4.9 CodeReady Containers release for OKD?
H: I've had no time to work on that in the past month, so I don't actually know. If you have actual information, please go ahead.
G: Yeah, so that was actually one of the... I popped out of another meeting to come over here, because I wanted to talk to you guys, just to see how and when we wanted to accelerate spinning up that special interest group, that SIG, for CRC.
It actually breaks the CRC build, because the CRC build is checking for things like the etcd quorum guard and some stuff that went away in 4.8. So the 4.8 build for CRC just needs a few tweaks to complete the build of the single-node cluster. 4.9 is going to be a whole different ball game that I haven't even touched, because with the full support for SNO, I expect that really changes how CRC gets built for 4.9. And to be honest with you guys, it...
G: It becomes a lower priority for me, because I don't use CodeReady Containers and, if I'm being honest with this group of friends, I don't actually like it. So I've been building it kind of as a favor to the community, but it's not something I use, so it ends up kind of falling down on my priority chain, when I'd much rather be working on my bare-metal cluster. Yeah.
H: Do we have insight from the OCP direction on whether CRC is going to continue, now that single-node OpenShift and OKD are a thing?
G: You know, I don't know, because actually I'm this close, from my bare-metal lab, to being able to build, or to run, the bootstrap node on my MacBook, using the native... what, Hyper-V? No, that's Windows. Whatever the MacBook one is... the Hypervisor framework? Yeah, that thing.
G: Yes, that thing. And honestly, if that is working, and we already know we can do something similar on Linux distros, then we're really not that far away from being able to just spin up your own single-node cluster natively. And then you don't have all the constraints of it only being accessible from the workstation, or things like that. So I'm not sure.
C: Well, I will ask the product manager, Steve Spiker, about that, because I hadn't heard that feedback yet. And Charo, I think you probably can reach out to Steve too, but I haven't heard that it's being obsoleted in any of the messaging that I've been listening to, or listening for, so I'll check in on that. But that's...
C: Those are good insights, Charo. And so, if we're that close... yeah, my mic is acting up. We haven't heard about it going away, but yeah, the single-node option is really kind of what we're targeting.
B: Have you actually cut down the footprint? Because one of the selling points of CRC is that it actually turns off a lot of the admin stuff, to make it run in a smaller footprint, so it'll run on a single laptop. Because, to me, that is the main difference between SNO and CRC.
G: Right, and that's what snc.sh, the single-node cluster script, does; there's actually no hidden magic there, because that's actually what that thing does when it builds the single-node cluster. And so this may be something that our new SIG wants to take on: instead of embracing the paradigm of CodeReady Containers, maybe we challenge the paradigm of CodeReady Containers and see if there's a way for us to deliver,
G: like, you know, a packaged and opinionated Ignition config, or something that we can just enable people to spin their own up with. Because really, that's what CodeReady Containers actually does: it sort of hamstrings some of the operators, it force-turns them off or literally rips them out of the running cluster, before it then turns the cluster into a qcow2 image that gets embedded in the crc executable.
D: I think CRC and single-node OpenShift are slightly different use cases. CRC is, as has been said, supposed to be runnable on a laptop; it's virtualized. While single-node OpenShift, SNO, is really more of a production system that is just running on the bare-metal system. And yeah, now the machine config operator, for example, has support for running in that environment as a single node, so it doesn't have to be disabled anymore, like it used to be in CRC.
D: I don't know whether they still do it in CRC or not. But so, maybe for us it's more useful to have SNO, or maybe it isn't, but I think it is a good use case that we're not yet really covering here: if you just have, like, this machine over there, just one machine, you can run OpenShift, like a full OpenShift, on there. And having that for OKD, I think that would be...
D: That would be great, and that is the assisted installer that we'll have to rebuild for OKD, then. And that actually doesn't require a bootstrap node; it'll pivot from the bootstrap node into a master, a single-node master. The assisted installer also has support for compact clusters, which is like a three-node cluster, where you also don't need a bootstrap node; one of the machines will pivot into becoming one of the masters there.
D: So I do think that is very nice, and we should look at supporting that in OKD.
G: Yeah, and if I were going to throw something out for the SIG to be thinking about, it would be: how cool would it be if we could fire up whatever our OKD CRC is from podman machine? If you're not familiar with podman machine, it's relatively new and it's really slick.
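For context, podman machine boots a small Fedora CoreOS virtual machine with a couple of commands, which is what makes it plausible as a host for something CRC-like. A minimal sketch of the workflow being alluded to (the resource values are only illustrative, not anything agreed in the meeting):

```shell
# podman machine provisions and boots a Fedora CoreOS VM; in principle
# that VM is the kind of environment a single-node cluster could target.
# CPU/memory/disk numbers below are examples only.
podman machine init --cpus 4 --memory 8192 --disk-size 60
podman machine start
# Run a command inside the guest to confirm it is Fedora CoreOS:
podman machine ssh -- cat /etc/os-release
```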
A: Let me bring this in just a little bit here, because I think we're starting to get into the discussion that the actual subgroup should be having, right? Neil, Daniel, can you organize a meeting, do you think, to get interested parties all together to talk about this?
A: So we can talk to you offline about how to do that. Diane and I will try and get you everything you need to round people up and get things going. Okay.
C: I would just ask a question of this group, since Charo's brought it up that he's not really using CRC: is anyone in this group, on the call right now, an active user of the CodeReady Container for OKD?
A: Yeah, kind of. The only usage that I know of is that we had a slew of comments in the channel and a couple of discussion items posted in regards to it, I think in spring, right, or winter, but we haven't really had any since. And so it might be helpful to explain out to the users the difference between CRC and single-node, and find out if people really want one or the other.
B: I find CRC was just a bit too big and everything was a bit too sluggish, and then you quickly run into the problem, if you want to do too much in it, that you're either going to run out of memory or, I find, run out of disk space, right? Because on Macs the image isn't resizable; because of the silly version of hypervisor they use, you can't resize disk images. And anecdotally...
K: Yeah, anecdotally, we've been doing the same thing. We used to deploy a minikube as part of our dev stack, and nowadays we're moving towards tooling that will actually spin up a full little cluster for you in AWS. Just because, once you have the base cluster plus all the add-ons that you need for your specific environment, things like that...
I: You know, the local dev case... but fundamentally, something that has made it a little harder to justify is that it's getting harder to get computers, and it's getting much harder to get computers with actual capacity in them, and that has shifted the balance of things lately. Which is why Dan and I have de-prioritized CRC so hard: I don't even have a computer that's powerful enough to run the build personally, like, locally.
I: Even if I had the CRC stuff, I can't run it, because I don't have a computer powerful enough to do it. And for a lot of the newer developers, a lot of the ones on our teams that are using, you know, cloudy things, containery things... because of the shortages and stuff, that's the more common case now. So I don't know what else to say.
C: So then, my follow-up question to that, and thank you for the feedback, is: the single-node option, that's SNO or whatever I'm supposed to call it, is that too hefty also for local use?
G: I think the CRC team has just kind of been... they've been in their channel long enough that it's become a rut, and I think if we could pop some innovative ideas over their way, they would probably latch on to them and run, because they have other tasks that they have to do too. So CRC is not their full-time job.
I don't think it's unreasonable to say, you know, maybe you don't need all the metrics and monitoring operators and services deployed on a single-node OpenShift configuration in some cases. Maybe you don't need some of the other extra fancy stuff there, like... maybe you don't need the service catalog or whatever, you know, based on a profile that is passed to SNO deployments, or something like that. Like, I just don't know why this isn't...
D: ...is in the works. So, for example, we're going to throw Jenkins out of the core payload, I think with the next release, and obviously it's a process. But yeah, if you have suggestions for what could be cut down, please do voice them, and we'll try to get that to the respective teams, so they can work on making their component optional. And I know that cutting this single-node OpenShift use case down...
B: So, Diane, a suggestion might be: when Daniel arranges a meeting, can we invite any of the CRC core team from Red Hat into the working group? It sounds like there could be a win-win.
C: That's what I was thinking. I'll reach out to the PM, the product manager, Steve, and see if he can come and hear what we're saying.
C: First of all. And then, if there's some resource or roadmap for making it smaller, or making it more useful, and where we, as a working group, can be useful to them, to aid in their work.
A: All right, take care. All right, Daniel and Neil, if you're still interested in leading that group, we can get you set up on reaching out to people and getting everything together, so Diane and I will reach out to you for that. Let's move on now to issues in the repo. Have any issues stuck out to folks that we need to address, or do they point to something larger that we need to do? We haven't really gotten a lot in, and there's a couple of documentation ones,
A: actually, that came in; they're labeled as such. And I don't know... does anyone see anything in issues that you want to talk about real quick?
A: There are things that are more discussion, which leads us to the next section: is there anything in discussions that folks wanted to talk about?
H: So, at long last, I've been able to run through the IPI install on OpenStack, and so I wrote up a bunch of notes for myself. There were some...
H: There were some kind of major things that prevented the cluster from coming up, and then there were a bunch more minor paper cuts, where a lot of them were just that the docs could be better in particular areas. So I'm trying to figure out the best way to get all this written up.
Should I file, like, six or seven different tickets? Should I write up one document, and then people can decide whether they want it to be tickets?
C: My two cents would be to start an issue with a list of the items. Or, okay, just like one issue.
A: And that's where we're trying to direct folks: to things that are not necessarily bugs, but where there might be conversation about techniques or process, or, you know, something like that, or something you're not sure if it's a bug, or whatever. Yeah.
H: Cool, okay, yeah, I'll start a discussion. I'll write up kind of bullet points of what my experience was, and then we can go from there, figuring out how to best turn them into action. Thank you. Excellent. So I do want to say: other than a couple of things, it was a surprisingly smooth and delightful process.
H: So I'm really, really impressed at the work that everybody has done at getting this to such a pleasant experience. So thank you, everybody.
C: So what I would also love to do, maybe, is: let's do the discussion, but also do a little write-up, a blog-posty thing, or even do a recording of it, because that seems to be how people digest a walkthrough of it. So, at some point when you have spare time, that would be a great thing. I can record that with you, or help you get that done, just the step-by-step stuff. Oh...
H: I see, just like a screencast or something of me going through it? Sure, yeah, I'd be happy to do something like that. And I'm sure we could also, probably, since we did a non-trivial IPI setup, clean those notes up a bit and put them somewhere as associated reference material for such a screencast or whatever, so people can see what it is. Because that would probably be beneficial for people; it seems like the OpenStack one, even though it is one of the better-supported providers, seems to be the one that has some of the least comprehensive documentation.
A: So, well done. All right, we'll add this as a task. Next up: Mike, you have something here about RPMs.
L: Yeah, so (little privateer Neil, I promise, this is purely coincidental) the other day I was just kind of messing around, and I was kind of curious: were the oc and, like, openshift-install binaries available in Fedora to just rpm-install? They were not. There is a kubernetes package for the kubectl stuff. So I was like, okay, let's try packaging it up.
L: I wanted to see if I could package up the actual OKD binaries for a given stream into Fedora, and I was able to generate a COPR repo that does contain an okd and an okd-install binary. I did rename them, so they wouldn't conflict
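Consuming a COPR repo like the one described would look roughly like this; the repo and package names below are hypothetical, since the actual COPR isn't named in the meeting:

```shell
# Hypothetical COPR carrying the renamed OKD client/installer binaries.
sudo dnf copr enable someuser/okd-clients   # made-up repo name
sudo dnf install okd okd-install            # renamed oc / openshift-install
okd version --client                        # renamed client, same oc CLI
```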
L: if someone wanted to install the OpenShift binaries. But I was wondering if it was actually a useful thing to have available in Fedora, considering I'm not fully aware of the entire process, like whether a given binary can only install the specific version that it comes bundled with, or whether you can pick different streams to install with any given installer.
D: Well, it's not supported right now, I think, but that might be an interesting enhancement for all of OpenShift.
I: I'm thinking... so the thing I'm thinking of is... so Dan and I have been internally talking about:
it's not straightforward to just, like, make openshift-install do different things by default, and, you know, maybe some kind of config drop-in or whatever... like Mike's build of it, as a package, could actually read that, and that would make it meaningfully simpler to be able to do the right thing.
Like, I don't imagine that the installer code, like the core code of the installer, changes as much as the stuff that it fetches to actually do the deployment.
F: There's his, and the CoreOS, perspective on this.
F: We do that because, essentially, when we bake the version into openshift-install, it makes sure that the version actually works: we test that the version that you're going to boot is working, at least, and that you can get a cluster with this version. And from an OpenShift perspective, we don't support users updating their boot images, so switching the image they boot their cluster from.
F: So you should keep essentially using the same image to boot your cluster for the whole life of your cluster. For OKD, I don't know how much this would be tested, but I don't think this is tested either. So essentially, you still should use the same image you've used in the beginning to boot your cluster, I think.
I: Right, but that can all be... that's all part of a manifest of sorts, right? Like, if you say "I want to install OKD 4.8.0-2021-10-15", or some made-up date or whatever, right, that reference is a point in time at which you've released a bunch of blobs. It has a referential point to an FCOS image, and so on and so on and so on. Right? Like, that is a thing that exists.
F: I think I understand what you want to do, but I don't understand what you get from it. Like, right now, when you get a specific binary of openshift-install, whether it's OCP or OKD, you've got everything in it. So you know that this specific configuration has been tested. And if you want to do the overrides, then you go ahead; you have got some two or three environment variables that you can use to do the overrides.
F: But if you split this, then it means that once you've got, like, some random version of the openshift-install binary coupled with some random version of the manifest of whatever version you want to install, OKD or something else, then who's going to guarantee that this actually works?
F: When we ship that, we ship with the specific RHCOS, or FCOS, version, and we ship the specific version of OCP, and this one boots and works on all the clusters we test for. So this is, in a sense, non-negotiable: we know this works, because we test it before we ship it; otherwise we don't ship it. And if we split that up, then it means that you will use combinations which potentially don't work.
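The binding F describes, where a release payload pins the exact tested images and installer, can be inspected with the stock client tools; a sketch (the payload tag below is an example, not a specific release):

```shell
# List the component images (machine-os-content among them) that a given
# OKD release payload pins; the tag is illustrative only.
oc adm release info quay.io/openshift/okd:4.8.0-0.okd-2021-10-24-000000

# Extract the installer binary that matches that payload exactly:
oc adm release extract --command=openshift-install \
    quay.io/openshift/okd:4.8.0-0.okd-2021-10-24-000000
```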
L: So part of the reason why I brought this up is, when I said "coincidence" earlier: I went into the GitHub project trying to find where the agendas were (I forgot they were on HackMD), and at the very bottom of the to-do list there's a card for Fedora packages, or Fedora RPMs, that I came across after I actually got these things to build.
L: If I don't update that package, when people do an install it's still going to install the 10-24 one; but if I upgrade to the new one, now people, if they're trying to run with their same stuff, are going to be running off a newer stream. Also, I don't know if that's going to cause a problem or not. So, from a packaging standpoint, I was like:
is this a good idea to have in the first place, considering that's not the way, like, OpenShift itself kind of works? Or am I going to be causing problems for people down the line, if we were to start off and they were getting updates for the installer and the client that they weren't expecting to get, kind of thing?
F: The only option would be to have, like, the version of OCP... well, there's complete binding between the data and the installer bits. Because there's a bunch of manifests and everything that's generated, you cannot just use a random version of the installer with some other version of boot image and OCP release and expect them to work together. So...
D: Got you. I think there's some merit to both sides, but with OKD we use a slightly different process than with OCP, in that we have the boot image, but then we pivot right away into the machine-os-content that is part of the OKD payload. So we do have some more leeway, in that, as long as we can pivot from the boot image version of Fedora CoreOS, it doesn't really matter what we pivot into then, because it'll always be the right version for the referenced payload.
D: So, using this OPENSHIFT_INSTALL_RELEASE_IMAGE_OVERRIDE env var, you can override the payload, that's the release image, from the payload you're referencing there; not a different FCOS version. The first boot will still be the Fedora CoreOS version that is hard-coded in the installer binary that you're running, but then it'll pivot right away into the right OS version for that...
D: ...for that OKD version that you're trying to install. And that works as long as we still just use the same pivot mechanism, which is essentially an rpm-ostree rebase. Oh, I didn't know that you can override the boot image; that might be interesting. But it shouldn't really matter, because as long as we have something that boots and that has rpm-ostree in it, we can kind of use that to then pivot over into the operating system
D: that is part of the payload. And right now, how we build the machine-os-content is a bit messy, in that we don't really do it on our own CI. Vadim has set up CI for it, which works great, but still... we don't control that entirely.
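The override mechanism mentioned a moment ago is the installer's release-image override environment variable; a minimal sketch of how it is typically used (the payload tag here is illustrative, not a real release):

```shell
# Point an existing openshift-install binary at a different OKD release
# payload. First boot still uses the FCOS image hard-coded in the binary;
# the nodes then pivot (an rpm-ostree rebase) into the machine-os content
# referenced by this payload. The tag below is an example only.
export OPENSHIFT_INSTALL_RELEASE_IMAGE_OVERRIDE=quay.io/openshift/okd:4.9.0-0.okd-2021-10-24-000000
openshift-install create cluster --dir ./my-cluster
```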
D: There is this enhancement from Colin Walters, though, and that will really make things much, much easier, because then we can kind of just take your Fedora CoreOS and layer stuff on top, as a docker build or podman build, and that'll be the OKD machine OS. And I think that's gonna make things much more streamlined there, because yeah, it'll be much easier to build the machine OS in the first place, and also it'll be easier for...
D: If you want to have your own package installed on the image you boot from in the first place, it'll be easier to create that image. And I think in the CoreOS team there are more efforts going on to actually have the OSTree delivered in a container, and to create boot images from the container image.
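The layering idea attributed to Colin Walters's enhancement can be sketched as an ordinary container build on top of a Fedora CoreOS base; the base tag, package, and image name below are purely illustrative:

```shell
# Hypothetical sketch: derive a custom machine OS by layering content
# onto a Fedora CoreOS base image with a plain podman build.
cat > Containerfile <<'EOF'
FROM quay.io/fedora/fedora-coreos:stable
# Layer an extra package into the ostree commit (illustrative package).
RUN rpm-ostree install vim && ostree container commit
EOF
podman build -t localhost/my-okd-machine-os .
```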
D: So eventually, what we're aiming at is not having these boot images as part of the required things you need to mirror in the first place; you just mirror containers, and then, from one container, you kind of create the boot image yourself, with a documented, easy-to-do process.
D: So there are going to be some changes in that area, and I don't think it makes sense to make this such a huge problem. If the versions don't match entirely exactly, it might not work, and it's definitely not tested, but it could work too. And in the future it'll be easier to just create your own image from the payload.
L: If people do find this something that's potentially practical, I'm happy to go further on and actually try packaging it, and proposing it as a package for Fedora and whatnot. But obviously I want to get feedback on whether this is something that, at this current point in time, is usable or would be useful to have.
L: Like, I do have the OKD client, the oc binary, in the COPR, renamed okd, and I'm doing a lot of fiddling to get the bash completion to not conflict with oc. It's also not...
I'm also kind of hacking around the actual oc: when I was looking at it, the oc build process is some heavy nested Makefile work, and I kind of bypassed all of that by just calling the go build directly, as best I could. So I need feedback:
L: well, whether that's a good thing to do, or whether it has to be the standard make process. But I do have that bundled up, and it seems, from at least the initial testing, functional; I haven't actually done any against-cluster testing, though. So...
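Bypassing the repo's Makefiles as described might look roughly like this, going by the openshift/oc repository layout; note the Makefile normally also injects version metadata via ldflags, which a bare go build skips:

```shell
# Build the oc client directly with the Go toolchain instead of the
# repo's nested Makefiles, renaming the output as in the COPR packaging.
git clone https://github.com/openshift/oc.git
cd oc
go build -o okd ./cmd/oc
# Version fields may be empty without the make-time ldflags:
./okd version --client
```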
I
And as for openshift-install, I think that's one of those things that's going to wind up being useful to have as a modular package.
I
If you decide to go forward with packaging it, doing it with modular streams on the feature versions, you know, 4.8, 4.9 and whatever, and then setting them up with EOLs so that they retire fairly aggressively? Sure.
I
think it would be super useful and it'd make life easier for people, but between the two I would say having the client tools shipped is really, really important.
I
The installer is a big pile of insanity, so it might be worth having a longer discussion with the openshift-install developers about handling this case a little better, but I think it is certainly a valuable longer-term thing.
A
So, we've got to move on; we've got about seven minutes left. I think we're all basically saying to Mike: go for it. We can give feedback. Post your work in the working group repo as a discussion item, because then people from all over can respond to it. Give folks, you know, the commands that you have for testing and whatnot, and let's go from there.
C
If we're back to the agenda, I had one more question. Christian, I did see that you submitted a talk for DevConf and attached my name to it; I'm assuming that's an OKD-related talk.
D
Yeah, I actually submitted one talk about OKD for DevConf in the Czech Republic, and I also submitted a meetup for OKD, and that's the OKD one I tagged you on as well. All right, so it's just a kind of hybrid in-person/virtual event, an OKD working group meeting.
C
No, I'm pretty sure, if you could put my name on both of those. For some reason it doesn't let me; I can see that you put my name on at least one of them. If you could put my name on that. And I was just going to flag Jamie: I don't know if you have the budget to travel to the Czech Republic or the desire to do so, but maybe we could.
C
We could chat about that again and see if that's a possibility and do some socializing, either that or participate via the virtual stuff. We usually get that slot, both of those, at DevConf CZ. So thank you for doing that, and hopefully we'll actually be able to go.
A
Excellent, all right, let's move on; we've got a couple of quick ones to get through here. Diane, can you confirm that we can have stuff forwarded through the OKD Twitter? If you haven't yet, let us know. The idea being that we wouldn't get our own Twitter but would at least have stuff forwarded through there, or we would get our own Twitter, depending. But if you could have them post our content when we need it, that would be great.
C
Yeah, absolutely, anything you have to get out on Twitter. Okay, so I apologize, I missed the last docs meeting. I can post anything I want on OpenShift Commons relative to OKD, so that's not a problem. Maybe each time we do a blog or anything like that, we just automatically do that. I usually use the hashtag #okd.
C
I think the last time I talked to the pit folks, were we thinking about creating a Twitter handle for OKD? Was that another thing you wanted to do? We did, and...
A
C
A
Excellent, great. We have four more minutes left. There was discussion about a bare metal CI and testing group. I do now have some bare metal that I can test on; if anyone else does too, we'd have something that we can talk about with people, or at least we could create a testing matrix for bare metal so that we can get some results. Let me know, folks who are interested in that. Wanted to squeeze in one thing.
A
No? All right, cool. All right, so: task list. Basically, Diane is going to check on the two things, Twitter and the repo.
C
As issues and/or as tasks in GitHub? Have we started doing that? We haven't done them as tasks yet, but we could, yeah, for sure. Some tasks for me? Okay, and assign dmueller2001, my GitHub ID, to them, and let's see if that makes me do things better.
A
Okay, sounds good. Mike, you're going to post a discussion message with your work; Daniel's going to post a discussion message with his work. And what else did we have from this meeting?
H
Group meeting: do you want to just do that by email? Do you have my email address? Yeah.
C
I'll hit you up about recording the install on OpenStack; I think there's a good bit of outreach we can do to the OpenStack community once you have that done, Daniel, so kudos if you can find the time to do that. Mike Rochefort, the stuff that you're doing to get the Fedora work over there, is there an issue link for that?
L
C
A
And the last item is: Diane found one error in the video intro that I created for our meetings. Turns out that my conversion to a PNG, to import into my video software, screwed up the head of our little mascot. So I'm doing an export again from the initial file to PNG and then recreating the video, and it's all in Red Hat font and it'll look great. So expect all of our future meetings to have that intro at the beginning.
the
beginning,.