From YouTube: OKD Working Group: Meeting 09-28-2021
Description
The OKD Working Group's purpose is to discuss, give guidance to, and enable collaboration on current development efforts for OKD, Kubernetes, and related CNCF projects. The OKD Working Group includes the discussion of shared community goals for OKD 4 and beyond. Additionally, the Working Group produces supporting materials and best practices for end-users and provides guidance and coordination for CNCF projects working within the SIG's scope.
https://okd.io
A
Okay, all right. Don't forget to put your name in the attendees section of the meeting notes, so that we know you were here. Keeping attendance helps us know who was available for particular conversations, and if there are votes on things, we like to be sure everybody's been included and whatnot. All right, are we good on the agenda? Is there anything folks want to add, or are we good? All right, not hearing anything: Christian, go ahead with release updates.
B
Unfortunately, I don't really have a lot. Vadim is out currently; once he's back, we will start to write a standard operating procedure for creating OKD releases, so we can better disseminate that knowledge and have more people help out with it. I'm not sure how much the community can do in that regard, since those things require some permissions within our OpenShift org, but disseminating that knowledge is definitely going to be a good thing. So for now, no updates on the release part for the usual OKD releases.

For ARM releases, we're still working on it. Unfortunately, we're still blocked internally by some issue, but once we have resolved that issue and we actually have CI builds for all of OpenShift, then we can also mirror out and tag OKD releases. That is bound together internally with the CI work for the ARM platform, and that will enable us to also do the OKD release for that platform. We do now have, as Timothy I think already mentioned in the FCOS updates, the Fedora CoreOS images uploaded to AWS, so the AMIs are now available.

I think that was one of the main missing pieces there as well, because we need the installer to reference that AMI. So now we have those AMIs, and the only part missing is having the component on our Prow CI, which should be, yeah, I'm hoping it's going to happen this week, but it shouldn't be much more; we're really on the verge here.
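As an aside for readers: the installer finds the AMI through the Fedora CoreOS stream metadata. A minimal sketch of the shape of that lookup, run here against a trimmed, made-up sample of the schema (the real file is served at builds.coreos.fedoraproject.org/streams/stable.json, and the AMI ID below is fabricated):

```shell
# Trimmed sample of the FCOS stream metadata schema; the real file lives at
# https://builds.coreos.fedoraproject.org/streams/stable.json
stream='{"architectures":{"x86_64":{"images":{"aws":{"regions":{"us-east-1":{"image":"ami-0000000000example"}}}}}}}'

# Pull the first "image" field out of the JSON. With jq available you would
# instead do: jq -r '.architectures.x86_64.images.aws.regions["us-east-1"].image'
ami=$(printf '%s' "$stream" | grep -o '"image":"[^"]*"' | head -n1 | cut -d'"' -f4)
echo "$ami"
```

The same lookup works for any architecture key in the metadata, which is what makes the aarch64 AMIs usable by the installer once they are published.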
C
When you get that piece in the Prow system, could we do a very short blog announcing it somewhere?
B
Absolutely. I'm also going to prepare a presentation on the whole OpenShift-on-ARM effort, and maybe, if we get to doing the OKD releases right away, that will be focused on OKD too. So that is definitely my plan.
C
Yeah, if you're going to do a presentation on OKD on ARM, we could do that as a briefing or an AMA too and broadcast it out. And did I invite you to the office hours at KubeCon? I think you're in there. So if that could be part of the spiel there, it would be great to include that update.
A
You're welcome. Thank you, Christian. Next up, FCOS updates with Timothy.
D
Hey, so, Timothy from the Fedora CoreOS team here for FCOS updates. I have three items today, and most of them are forthcoming things. The first one is kind of a warning, but it shouldn't be a big one: we are trying to move away from the legacy iptables backend in Fedora CoreOS to the nftables-based iptables backend.

This is something that has been done for a while in Fedora, but due to various bugs it hasn't happened in Fedora CoreOS, and we want to have it happen at approximately the same time as the Fedora 35 rebase. So it's coming in a couple of weeks, and we'll try to do it for new nodes first and then for everything, because, while we don't really expect any issues here, we still want to give folks some time to try things and make sure that nothing breaks.

The short version is that the difference between the two is essentially that you're using different parts of the kernel, but it should be fully compatible, so there should be no breaking changes.
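For anyone who wants to verify which backend a node is on before and after the switch, a minimal sketch; the classification function here is ours, and on a real node you would feed it the actual `iptables --version` output:

```shell
# Classify the iptables backend from the version banner, which reads e.g.
# "iptables v1.8.7 (nf_tables)" or "iptables v1.8.4 (legacy)".
backend_of() {
  case "$1" in
    *nf_tables*) echo "nftables" ;;
    *legacy*)    echo "legacy" ;;
    *)           echo "unknown" ;;
  esac
}

# On a Fedora CoreOS node you would run:
#   backend_of "$(iptables --version)"
backend_of "iptables v1.8.7 (nf_tables)"   # → nftables
```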
D
The second item is about the Fedora CoreOS test day that we are putting on, I think next week, something like that. We'll provide just a few tests that people can do to make sure that Fedora CoreOS works well on their platform. This is slightly on the side, but related to OKD, because if you make sure that Fedora CoreOS works well on your platform, that will certainly help OKD work well on your platform.

And we are testing Next, which will be the Fedora 35 based stream and will be the most interesting from a testing perspective; the other streams don't have that change right now. The third point is around aarch64, which is coming. We have builds right now, they are ready, and we are enabling them on the download page, so you should be able to try them out in an easier way.
B
And this is just, Timothy, this is just the links, right? The builds already exist; they're just not linked out to yet.
D
Yeah, yeah, it's just official links, to have that officially on the download page, because you can figure out the links from the raw build browser and everything, but we don't necessarily publish officially all of the release builds that we do. So if you get the raw list of builds, potentially you are using builds that we don't consider ready. The links from the download page are fully tested and should be good for usage.
E
B
The reason we're not testing 4.9 yet is that it hasn't actually been released, so it's not yet a stable version of the OpenShift code base. That is supposed to happen very soon, but since, I think, we are still not able to upgrade 4.7 to 4.8, our stable stream is still stuck on 4.7.

For now we should definitely test the latest 4.8, though, because once that upgrade is figured out we will be updating stable to 4.8 and then, subsequently, once 4.9 is released, to 4.9 as well. For now, 4.8 is still the most current stable version; even though it's not yet in the stable stream, because we're missing the upgrade path, it should be installable as a fresh install. And for that I would just take the latest one, obviously.
E
Just wondering, because from the OKD virtualization SIG side we are running on bare metal UPI for now, so it's kind of complicated in its own ways. And we are kind of trying to use the latest version in order to provide feedback before it gets released, so that's why I ask.
E
B
If you're using experimental features, you can probably test the 4.9 builds as well; you should be able to install them. I wouldn't expect any problems that are really big, because, for 4.9 of the OpenShift Container Platform product,

the release is not far out, so builds by now should be pretty stable. We're not saying they're stable or fully usable, but for experimental testing, especially if you rely on features that have only been added recently, feel free to use those; I think they should be fine.
E
D
A
Excellent. Any other questions regarding Fedora CoreOS, or comments for Timothy?

All right, excellent. Let's move on to the next item, which is doc updates. Take it away, Brian.
F
Okay, so we've got the beta site up and running, and I've ported all of the content across to it. There's still some work on the content to be done, because things like the FAQ that's in the repo haven't been updated for quite a while. So I'm not saying that the documentation is finished, but we're now in a state where the new beta site has at least everything that's on the current site, and so it's really

a question of when we want to switch it live; as far as I'm concerned, we're good to go now. There have been a couple of questions asked this week. One of them is: why are we in the openshift-cs org and not the openshift org, and is this a problem? If so, should we be looking to move before we switch it live, because it's all going to be linked in with the GitHub Pages? And the other thing that I just wanted to ask about is the code of conduct.
F
A
C
All right, so I did. I don't know, Brian, if you saw the email where I responded to that thread with a short history of why the repos are the way they are; did you get a copy of that? I'm not sure. I probably should have sent it wider; I don't think it went out to the whole mailing list, I think it was just to an individual, so I apologize for that. Basically, when we started the open source side of OpenShift, it was called Origin,

if anyone remembers that, and the repos where the code lives are still all named origin, which makes things wonderfully confusing. We didn't change the name at the time for multiple reasons. One is that we had lots of end users who had written scripts and things that we didn't want to break during the 3.x era, and the other is that we didn't really have the resources to change even our own internal processes.

So we kept that, and you'll still see all of our open source code in the origin repo. The other reason, and I think Christian touched on this earlier in the call, is that getting permission from the engineering team to edit and create a landing page under the openshift organization was not going to happen; external users, or even myself, were not going to get that kind of privileges and permissions.

So we created the okd repo. The reason it's in openshift-cs, which stands for customer success, is that the folks who helped us build it owned that repo on GitHub, so that's where it was done, and it's not an optimal place for us to live, as the OKD virtualization folks pointed out.

That was, I think, the kicker; the OKD virtualization folks were the ones who were asking the question. It's not a perfect world, but now, as we're moving to your MkDocs version, Brian, and creating that in GitHub, once we get that done I think it's a very good time to revisit this. I have a theory about what we could do:

a github.com/okd repo was what I suggested in the email thread, and that would be much more open-source politically correct than openshift-cs, especially as we move to not needing the resources we needed from the customer success team, which has been renamed five times since the beginning of that time, but whose repo is still openshift-cs.

So it was a good question, and I just wanted to give the background on why it's there. It may sound lame, but that's the history. I would love to move it to an okd organization, but putting it underneath the openshift org just begs the question of multiple permissions that we'd need to get from the engineering teams and everybody else, and that would just slow down the process and make it less open, shall we say.

So, I'm seeing everybody plus-one the idea of creating okd.org; I would love to do that. We'll have to run it by the engineering folks, so maybe, Christian, in one of the next engineering sessions or team meetings we have, we'll do that.
F
C
Yeah, I would love to see this happen. I don't see anybody having any objections to it, and it would just make the whole thing simpler. And especially, the folks who used to be on the customer success team would love to get this out of their repo and move on to their new jobs, because they have all been promoted at least three times since they were customer success people.
E
C
Yeah, so, the code of conduct: well, we can't really point to the CNCF one. I had pulled the one from Ansible, and I have been lax about it, but I was going to post it as a discussion for this group, to look at the Ansible one and see if we could use it, because I know that's been vetted by Red Hat legal multiple times and it seemed pretty robust and to the point.

So I will file that as an issue and let the group review it, and then we can pull it in, Brian, to the landing pages that you're creating, if that works. That way, at our next docs meeting we can look at it and edit it any way we need to, and then at the next working group meeting we can approve it or debate its finer points.
F
Okay, and then just the final one: in the footer we do have some social media links, which point to various places to do with OpenShift. I think in the agenda Jamie did put something about a Twitter account, but there's also a Facebook link as well.
C
F
A
The idea came up again of having a Twitter account, because so much communication is done in terms of announcements of new releases and updates and bugs and whatever. This has been revisited a couple of times, once before I was in the group and once, I think, just after I joined the group. What do people think about registering a Twitter handle with okd-something in it? Because, obviously, I think okd itself is already taken; we had this discussion, but...
C
E
C
Yeah, so my thoughts are: it's a pain in the arse to manage, and if we have announcements, the openshift Twitter handle would reach a much wider audience for us. So if we wanted to start doing announcements of OKD releases, I'm sure I could get them added to the openshift one, and there is a person who manages that account, watches it, and responds to stuff on it.

I'm hesitating because I already have the OpenShift Commons handle, which I manage, and it is often quite silent, but both of those, OpenShift Commons and openshift, would work if we really wanted that, and we can talk about this, maybe in the docs-slash-communications working group. Yeah, and I'm wondering if I created that okd GitHub repo; I'll just have to check and see if it was me that opened it.
A
For people who are watching this and for whom that seems like a non sequitur: someone posted a link to github.com/okd.

Yeah, so, Mike, you've got your hand up; go for it.
H
I would expect, if we were going to have an OKD-specific communications channel, that we'd want to have a little process behind it before we just start, you know, blasting stuff out. So I'm kind of curious: does Fedora do something similar with the way that they roll out their announcements and their kind of official communication channels?
G
So, Fedora has the mainline Fedora Twitter handle, but for many of the product variants, or SIG variants or whatever, there are also specific Twitter handles. It is actually up to the working group or SIG to elect to have one. So, for example, CoreOS has one, Silverblue has one, and a number of other variants do as well, like KDE; Kinoite doesn't yet. They may, they may not, I don't know for sure;

we haven't decided yet, so it is up to them. And there is an informal policy that things coming from specific sub-team Twitter accounts get retweeted by the main one, to amplify that reach. But there is also some complexity in terms of making sure people are granted access to those Twitter handles while, at the same time, Fedora retains control of all of them.

So the main problem is delegating permissions, and I don't think we actually have a setup for that here within the OpenShift team.
H
A
H
I was just going to say thanks, Neil; I really appreciate that kind of guidance. I guess my preference would be to see some of that kind of governance and process set up first, before we start doing the other thing. But, you know, that's just my gut feeling.
A
Well, let me ask this, Diane, do you know: what if it's something that's not a release, like the meeting videos getting posted and stuff like that? Would they be willing to post stuff like that as well, or does it have to meet a certain threshold of coolness and...
C
...social media skills, which are so advanced. What I think would be easier for the openshift Twitter handle managers, and for myself as the OpenShift Commons handle manager, is to have an OKD one created, of some ilk, whether it's projectokd or okd.io or whatever we do, in order to have a findable OKD handle in Twitter land.

If we tweet from it and then ask for it to be retweeted, sort of what Neil is describing, that is a way of going through it, though I do think it needs to be owned, or managed by, someone. And I'll have to look into the remark that someone's making in the chat about OKD not being trademarked.

My gut says it's trademarked by Red Hat, but I think it's much easier, if there's a sub-handle, to get others to rebroadcast it for us, like the Red Hat and Red Hat community handles, and Red Hat in general, when we have releases and stuff. And yeah, the videos are kind of just specific to us;

they're not really big watch items for normal OpenShift folks. So I would say I'm not against creating a Twitter handle, if we can find and agree on one.
A
Okay, well, let's talk about it in the docs group. I just wanted to bring it up, because there is so much communication right now that happens via Twitter, in terms of Kubernetes stuff and tech stuff in general, that it seems like we're sort of missing out if we don't take advantage of the medium. Let's move on; I'll add that as an item to put on the agenda for the docs meeting. Let's now move on to...
F
A
The proposal is on the table. I'm making the proposal that we allow Brian to make the shift, and that Diane and Brian work on getting the DNS change to make the beta site go live ASAP. The motion is on the floor; does anyone want to second it? I'll second that. Okay, seconded by Bruce. Any further discussion?
A
Yes, we are following Robert's Rules here. Any further discussion?

Okay, and anyone opposed?

Okay, anyone abstaining? All right, so let the record show that everyone on the call voted with a plus one. Let's see, yeah, that's everybody, okay. So go forth and move it at your earliest convenience, and I'll send something out. I'm assuming that no one is going to override the 12 or whatever votes that we have here when I post to the mailing list, so I would say just go ahead and do it, and we will note that an official vote was taken on this issue, moving forward.
C
So, Brian, let's chat via email over this, and we'll sort out who follows up, because Will Gordon is off on paternity leave, and we'll get it switched over, hopefully in the next, you know, 48 or 72 hours or whatever it takes.
A
Yeah, I echo that; that was some awesome work, man, thank you. That unblocks things, yes, very much so. I want to move on to issues; there's one that is causing some weird problems in a couple of different places. If you look in the agenda, I've got a link to it: it's issue 873.

We've had a couple opened up that are duplicates of this overall issue: the changelog stuff in the nightlies from the CI is broken, and you're not able to actually see the changelogs; it ends up showing an error saying that it could not generate them. If you go to the repo, and this is Vadim's repo, where the CI is pulling from, all of the commits are gone after February 14th or something like that, and so these releases are all referencing commits that no longer exist. That's why it can't generate the changelog: it references commits that don't exist in the repo anymore, for some reason. Vadim is out until October, probably the first week, maybe the second week, so we don't really have a way of fixing this right now.
B
A
In the meantime, if folks could just be aware of it and of the different ways in which this impacts users. Ideally we'll find some solution where maybe a couple of folks have access to Vadim's repo, or, I don't know, we'll have to figure something out, because it seems like a really weak point in our process: if the repo that CI is based off of goes south and that one person happens to not be here, there's nothing that we can do about it. So we can have that discussion.
B
Yeah, if I can just add to that: we are working internally on pulling all that code back into the OpenShift code base and essentially unforking the machine-config-operator, upstreaming that into the master branch and no longer needing Vadim's fork there. Yes, for the time being we cannot fix this until he's back, but in the future we will definitely pull that back into the OpenShift org and make all those branches force-push protected, so these things cannot happen again.
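The failure mode being described, a history rewrite leaving previously referenced commits dangling, can be reproduced in a throwaway repo. This is only an illustration of why the changelog references break, not the actual OKD tooling:

```shell
# Reproduce, in a throwaway repo, why changelog references dangle after
# a force push that rewrites history.
set -e
repo=$(mktemp -d)
git -C "$repo" init -q
git -C "$repo" -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "original commit"
sha=$(git -C "$repo" rev-parse HEAD)

# Rewrite history (the local equivalent of force-pushing a new branch tip),
# then expire the reflog and prune so the old commit really goes away.
git -C "$repo" -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty --amend -m "rewritten commit"
git -C "$repo" reflog expire --expire=now --all
git -C "$repo" gc --prune=now --quiet

# Anything still holding the old SHA (like a changelog generator) now has a
# dangling reference:
status=$(git -C "$repo" cat-file -e "$sha" 2>/dev/null && echo present || echo missing)
echo "original commit is $status"
```

Making the branches force-push protected, as described above, prevents exactly this rewrite from happening on the repo the CI reads.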
A
Excellent. And in terms of other issues, are there any that people wanted to highlight out of the issues submitted in the repo?
E
I'm still investigating them. A few are probably related to the SELinux issues that I also reported to the Fedora SELinux policy package, and a few others I don't know; I just reported them because I have no clue what's going on there. It's kind of hard to understand where the installer gets stuck and why; it's not an easy task to find the right place.
A
Right. We're actually working on a document, which Vadim was helping with, that helps people troubleshoot installer issues, or at least know better where to look. That's a work in progress, so this might be a good use case, actually, to help build that documentation. So expect that once we can comb through what you submitted this morning, we can provide some feedback and then maybe use that to inform a document that can help folks with installer issues.

It brings up another point that I'll talk about a little bit later in the meeting: bare metal. We don't have a lot of bare metal testing, and we have a lot of people coming to us with bare metal issues when they're trying to actually use OKD out in the world.

We don't have bare metal testing, and that's always hard, because obviously there are 20 million configurations, but it might be worth looking at rounding up some folks willing to do bare metal testing, getting the community to help us, so that we have something where we can see what's going on and not just be reactive but proactive in terms of bare metal stuff.
B
So, yeah, I've actually been working with the bare metal team internally to get support for bare metal IPI, installer-provisioned infrastructure; so far we've only had support for user-provisioned infrastructure, UPI. But all the pieces are now in place; all the code should now be there, essentially the Ironic parts that are used to automate the installation for bare metal installs. What's still missing is actually building those images in CI and putting them into the OKD release payload.

I have a card to work on that this sprint, so this should also be happening very soon. Once those images are built, support should kind of just arrive with the next nightly build. For now this is on master, so it will probably only land in 4.10.

I don't see a good chance for us to backport that to 4.9, but, just as a heads-up, we should soon have bare metal IPI support on the 4.10 builds. You can check out the code in the ironic-image repository; there's also the ironic-ipa-downloader repository, but most of it is in the ironic-image repository, where there's an OKD-specific Dockerfile now. Once that is hooked up into CI and automatically built, we'll add it to the new OKD release.

This is completely separate from the UPI bare metal support that we already have; obviously, more testing on that is always welcome.
A
Yeah, so it might be worth it for us to get a communication out to the community saying, hey, if there are some folks who can test bare metal UPI for us, it would really help the OKD project, something like that. So communications and documentation can take that on. Anything else?
E
G
A
All right. For folks that don't know, there was a conversation between myself and Sandro and a few other folks about trying to make sure that we have a joint effort on this and stuff like that, so expect more info in the next couple of weeks on how we're all going to be united. That's going to be a subgroup, basically, of the OKD working group, so that we can share resources and whatnot, and some of these website changes, and possibly a new repo and stuff like that,
A
are all going to help with this, I think. Any other issues? Oh, there's one that I'll address, actually. We have had a handful of AWS IPI single-node issues submitted; I'm actually setting up an AWS IPI CI to test this regularly for 4.7 and 4.8, just because there were some mix-ups: it was supposed to work, wasn't working, etc.

So if anyone can do other providers, if you can offer up some resources to try quick builds of a single node on other providers, that would be awesome; just reach out to the group if you're interested. Anything else for issues?
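For anyone offering resources here, the single-node shape being tested boils down to an install-config with one control-plane replica and zero workers. A hedged sketch; the domain, cluster name, and region are placeholders, and a real install also needs the pullSecret and sshKey fields that are trimmed here:

```shell
# Hypothetical sketch of the single-node portion of an install-config.yaml
# for AWS IPI testing; all identifiers below are placeholders.
cd "$(mktemp -d)"
cat > install-config.yaml <<'EOF'
apiVersion: v1
baseDomain: example.com
metadata:
  name: okd-sno-test
controlPlane:
  name: master
  replicas: 1
compute:
- name: worker
  replicas: 0
platform:
  aws:
    region: us-east-1
EOF

# Sanity-check the two replica counts before handing this to the installer:
grep 'replicas:' install-config.yaml
```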
A
All right, moving now over to discussions. There was one discussion item; I'll have to put the link in the meeting notes, but 896 is an NVIDIA GPU question on OKD. This is like our third question in that regard, in particular about the operator that's available.

Has anyone had a chance to use the NVIDIA operator to do NVIDIA GPU stuff with OKD?
C
It is now being maintained by NVIDIA, as opposed to being something created out of Red Hat, so there are contact people there we could hook into. I could ask Diane Fedema to weigh in. Go ahead.
H
Oh, I didn't want to interrupt you. I've used it a ton on OCP as well; I've done a lot of testing around it. Diane's using it a lot for workloads and whatnot, while I've done a lot of work with it in terms of the autoscaler and cloud infrastructure stuff. But I haven't tested it on OKD yet. I would imagine it works; the only difficult part is that, on a full OCP cluster,

it needs build entitlements to do the driver build in there. I don't think there's an equivalent on OKD, so I'm not sure if it will pull the proper packages to do the building that it needs to do, because right now that operator looks to pull a couple of specific kernel header packages that are specific to the RHEL installation,

you know, the RHEL CoreOS installation it's on. So I don't know what it would do on an OKD cluster; it would probably try to pull a package that doesn't exist or something. You might be able to get up to the point where it's trying to build the driver, and then the driver build might fail there. That would be my suspicion, but I haven't tested it on OKD.
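A minimal sketch of the constraint being described: the driver build only works when the installed kernel headers match the running kernel exactly. The comparison function is ours, purely illustrative; on a node the two inputs would come from `uname -r` and the installed kernel-devel package:

```shell
# The GPU operator's driver build effectively requires that the
# kernel-headers/kernel-devel version equals the running kernel version.
headers_match() {
  running="$1"   # e.g. from: uname -r
  devel="$2"     # e.g. the version-release-arch of the kernel-devel package
  if [ "$running" = "$devel" ]; then
    echo "ok"
  else
    echo "mismatch"
  fi
}

headers_match "5.14.9-300.fc35.x86_64" "5.14.9-300.fc35.x86_64"    # → ok
headers_match "5.14.9-300.fc35.x86_64" "5.14.10-300.fc35.x86_64"   # → mismatch
```

On OKD the packages the operator tries to pull are RHEL-specific, so this check would fail even earlier, at package resolution.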
H
Yeah, sure, I can share what I know. Is there a link in the notes to that? Sorry.
A
Yeah, I just put it in the notes, yep. It's under the discussions section, and I put a link; it's discussion item 896. Yeah, okay.
C
If I recall, there have been a couple of OpenShift Commons briefings on it with people from NVIDIA, and there was a Red Hatter who was key to creating it; I can't think of his name off the top of my head. But yeah, it would be lovely to get it working, though I think it's pretty finicky, if I recall.
H
It is so finicky that it is not even funny. Supposedly, though, NVIDIA is going to be making this better in the future. Right now, the way that they're compiling this driver requires the kernel headers for the exact kernel version that is running on the node it's compiling for. Supposedly, in the future, NVIDIA is going to get better about creating dynamic kernel modules, so it will be able to install into a range of kernels, is what I understand, but that work...
E
G
They've got kmod tracking packages; however, the Red Hat team that created the operator didn't incorporate any of the work that NVIDIA already did to use kABI-tracking kmods. So the NVIDIA team doesn't know how to do this, and so they're stuck.

Yeah, so somebody between Red Hat and NVIDIA needs to get them working together to start using the stuff that NVIDIA already created for regular RHEL, for this RHEL CoreOS OpenShift stuff. It, however, will not work on OKD, and so for OKD we're going to need to be able to detect and do the right thing and stuff like that.
A
Well, I think, then, what we can do is find out what those things are, specifically, and push up some changes or submit some issues to get us there, right? All right. So let's have Mike respond, and then I'll put this on the agenda for the next meetings, when we'll see where we are with that.

Okay, Sandro, go ahead and take it away: the virtualization SIG.
E
I almost covered the whole section in the previous discussion, so there's not much left; just raising awareness of the existence of a kind of prototype of the SIG website, which should be included in the upcoming new OKD website when we have a place to squeeze it in, and of the Twitter and Reddit handles.
E
A
And maybe, for folks that are going to be watching this video, can you give a quick 30-second elevator explanation of this change that's coming and what you're testing, in terms of virtualization changes and virtualization on OpenShift, and therefore OKD?
E
G
Yeah, so my thing about this, when it comes to KubeVirt and, to a lesser extent, OpenShift Virtualization, is that, to be blunt, it's terrible to use. If you want virtualization, even in the OpenStack-style way, the Kubernetes API exposes too many details, and the UIs that are available for KubeVirt are all absolutely horrific. With the exception of Harvester from Rancher, which I think is somewhat promising, there doesn't seem to be anybody trying to make a KubeVirt front end

that is actually appealing for people who need virtualization to be able to use and manage it. I mean, if I'm being somewhat optimistic, hopeful, dreaming, or whatever, you know, I would love to see the oVirt UI pulled right out and layered on top of KubeVirt, because, well...
D
E
Yeah, just a quick tip about this: there is a KubeVirt provider that you can use for managing the OKD virtualization VMs from the oVirt web UI, so it already exists; it's already there. So yeah, that's another good point, and this kind of discussion about the user experience of the whole thing is one of the things that we'd like to see discussed in the SIG, so you're welcome to start the discussion there and we'll follow up.
A
Excellent, all right. So we have eight minutes left, and I want to make sure we get everything in in terms of new business. I struck out, crossed out, "location of main repo", because we sort of touched on that; it sounds like a discussion for maybe a month from now, once the website stuff has settled. CRC subgroup: Neil, you had originally voiced interest in sort of leading the charge on some of that.

We do need someone to really get the subgroup going, because Charo is still sort of doing the builds, which means it's sort of at his whim and time availability. Are you, and is it Dan that was interested, are...
G
Yeah, I think we are. We just haven't had any time at the moment to start that work up, but we've talked about how we want to approach this problem. We probably need to sync up with Charro at some point and just have a one-on-one conversation about it, and then proceed from there, because one of the goals, you know, that Dan and I agree on...
G
We want to ultimately automate this, and so what we want to do is make sure that this can just be straight up run from within Fedora infrastructure, on one of the CI/CD platforms that exists within Fedora infrastructure, and then be able to provide a way for people to easily take that automation and use it for their own internal deployments as well, if they need to have customized deployments for their own use. Because, like almost everything else around OpenShift, nobody really understands how to assemble anything.
G
And I'd really like to not make that worse with CRC. So, you know, demystifying just a little bit of it, and making it approachable and also automated, means that we don't really have to worry so much about whether a person is doing it or not.
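The goal described above — one automation path that both the Fedora-infra CI job and a downstream user with a customized internal deployment can run — boils down to a single parameterized entry point. A minimal sketch of that pattern; every name here (env vars, fields, defaults) is hypothetical and not the actual CRC build tooling:

```python
import os
from dataclasses import dataclass
from typing import Optional


@dataclass
class BundleConfig:
    """Knobs a downstream user might override for an internal rebuild.

    Field names are illustrative only, not real CRC build options.
    """
    okd_release: str = "stable"   # which OKD release payload to bundle
    output_dir: str = "out"       # where the finished bundle lands
    publish: bool = False         # CI publishes; local runs usually don't


def config_from_env(defaults: Optional[BundleConfig] = None) -> BundleConfig:
    """One code path for both the CI pipeline and a local user.

    Each caller simply sets environment variables to customize the build;
    anything unset falls back to the shared defaults.
    """
    d = defaults or BundleConfig()
    return BundleConfig(
        okd_release=os.environ.get("OKD_RELEASE", d.okd_release),
        output_dir=os.environ.get("OUTPUT_DIR", d.output_dir),
        publish=os.environ.get("PUBLISH", str(d.publish)).lower() == "true",
    )
```

The design point is that "person runs it" and "CI runs it" are the same invocation, which is what makes the process both automatable and approachable for custom deployments.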
C
So, do you want help arranging a time to meet with Charro, or do you want to just reach out to him via Slack and try to find something that works for all of you? Do you want to make an official first meeting?
G
If you can help arrange that first meeting, that'd be great, especially since I think Dan hasn't met Charro at all, so it'd probably just be good to get that initial introduction out of the way; then we can take it from there. It's Dan Axelrod.
G
I'll check real quick.
G
Because Dan's on PTO right now, and I don't actually remember what day he comes back. I think it's either the end of this week or the middle of next week, I don't know, but I'll sync up with you afterwards and let you know.
C
Yeah, I'll just create a little Slack with the four of us in it, or five of us now, and see if we can't come up with a time.
A
And some other folks voiced interest: I think Mike said he was interested in sort of following along with what's happening, and anyone else that's interested... Driti, in fact, also said that she was interested. Next up, I'm going to strike out "bare metal CI/testing group," because it sounds like we've got interest in that and we can pull people together. So next up is the office hours, which is Wednesday the 13th at 5:00 PM Eastern, during KubeCon. Diane?
C
So we're going to rinse and repeat the format that we used before, and I think an hour is how long we've done it. It again will be live streamed. I will send a note out; I've tapped a bunch of people already to come on to do that. It would be great if we had a few more external people; Jamie is, I think, our one external person on it at the moment.
C
So if someone else wants to... and I was thinking of you, John Fortin, since you're giving the talk there, and I'll send the time and that. And if we can get Christian to do a little spiel on what's going on with ARM, that would be great. But we have a slide deck that we use that needs a slight bit of tweaking, which I will tap Jamie to do and to lead, and I will try to keep my mouth shut as much as possible on it, so that everybody else can shine.
C
But it's a simple... it's a great way to do outreach to the Kubernetes community. And Timothy, thank you for agreeing to stay up late and answer the Fedora CoreOS questions. But yeah, there is a limitation: it's not the same as BlueJeans, where you can have as many people as you want; we're using the CNCF's platform for it.
C
So you have to get what's called a booth pass to KubeCon virtual and get in, so it's a little bit more complicated than usual. But if you are interested in sharing and being part of that, I think we're limited to five people in that window. So I'll have to count, but I think we might have hit that already. But I was hoping to get one more external person.
A
Well, let's see what we can do. So if anyone's interested, reach out to Diane or myself, and we will see what we can do and how we can arrange it. In the last minute or so, I wanted to point out that I'm actually starting a task list and putting in the tasks that we have assigned to people.
H
Oh yeah, Neil said something that kind of made me remember this, and I'll just share a little bit of, I guess, news from inside the hat or whatever that this community might find interesting. We're working on a series of documentation right now that we're going to make as a public git repo, and eventually it'll be rendered to a site, that's basically instructions for adding new infrastructure providers to the OpenShift platform.
H
You know, we had talked about doing this as like an OKD-forward kind of thing, but we're focusing on OCP right now, because we have providers who are interested in getting in. So hopefully, when it's done in, like, you know, maybe a couple of months or something, it'll be a way to show off more of how the sauce is made, or whatever, for this community.
A
Thank you. All right, we are at time, so let's call it here. Thanks, everyone, for your participation. Look forward to the video and the meeting notes coming up, and the task list, and talk to you all soon.