From YouTube: OKD Working Group Meeting 04-12-2022
Description
The OKD Working Group's purpose is to discuss, give guidance to, and enable collaboration on current development efforts for OKD, Kubernetes, and related CNCF projects. The OKD Working Group includes the discussion of shared community goals for OKD 4 and beyond. Additionally, the Working Group produces supporting materials and best practices for end-users and provides guidance and coordination for CNCF projects working within the SIG's scope.
https://okd.io
A
C
Folks, I may have to jump off in a hurry — I'm having some real-world production issues — but they haven't told me I have to join yet, so.
C
All right. There's a couple of things that I'm working on — I've been working on with Christian and Vadim a little bit — trying to get a couple of things fixed. That's pretty much it with the installer. I think the installer issue is a blocker for the next release, at least on vSphere, but I'll—
C
Let Christian indicate whether that's actually the issue or not. But other than that, I mean, I think we've gotten a lot of the bug squashing that we've been looking at done. I did send a pull request to Brian for some build information, so he can try to take a look and see about, you know, building a new release and stuff like that — which I said I was going to do, so I did. I think that's it for me. So, Christian, do you have anything on that?
B
Yeah, regarding that — well, I think it's all the DNS search domain, right? And it pops up in various places. So I opened a systemd RFE to make that behavior configurable; that was rejected, but there's now another way we could do it: by kind of having a kernel argument — a systemd kernel argument — that would let you define the DNS search domain at provisioning time, essentially passing it in directly. That would help, I think. The other one you mentioned — the other thing, the other PR—
B
The PR you opened is kind of dealing with the same problem, but it's doing it later: it sets the DNS search domain via a NetworkManager dispatcher script, which should also work. So I'm fine with doing it that way — we'll have to see what the installer folks say there — but I'm fine with that approach too. I think we should definitely also do the systemd thing. Actually, I've got my team to allow me to work on that directly, so I'm going to spend some time on systemd.
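Editor's note: for readers unfamiliar with the dispatcher approach mentioned above, a minimal sketch of such a script might look like the following. The file path, connection lookup, and search domain are illustrative assumptions, not the contents of the actual PR being discussed.

```shell
#!/bin/sh
# Illustrative NetworkManager dispatcher script (NOT the actual PR discussed above).
# Dispatcher scripts live in /etc/NetworkManager/dispatcher.d/ and are invoked
# with the interface name ($1) and the event ($2, e.g. "up").
IFACE="$1"
ACTION="$2"

# "example.internal" is a placeholder for the cluster's DNS search domain.
SEARCH_DOMAIN="example.internal"

if [ "$ACTION" = "up" ]; then
    # Look up the active connection for this interface and append the search domain.
    CONN="$(nmcli -g GENERAL.CONNECTION device show "$IFACE")"
    [ -n "$CONN" ] && nmcli connection modify "$CONN" ipv4.dns-search "$SEARCH_DOMAIN"
fi
```

The advantage over baking the domain into the image is that the value can differ per cluster without a rebuild, at the cost of the change happening after NetworkManager brings the interface up.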
B
You
know
I
haven't
done
that
too
much
too
much,
so
I'll
be
I'll.
Be
working
on
that,
it's
that
are
obviously
things
that
aren't
going
to
be
landing
immediately,
so
they'll
take
time,
and
I
think
in
the
time
being,
hopefully
I'm
not
sure
what
the
like
this
workaround
service
unit.
We
have
whether
that
now
works.
B
I
think
the
latest
iteration
isn't
in
the
current
payload.
I'm
not
sure
about
that,
though.
So
we
will
see.
I
am
planning
on
doing
a
an
okd
release
for
the
first
time
this
week,
vadim
is
currently
taking
a
break
from
okd,
so
that
is
kind
of
on
me.
We,
we
did
fix
the
the
issue
that
we
had
regarding
upgrades,
so
hopefully
that
is
all
on
black
and
we
can
cut
a
new
release,
but
I'm
not
sure
about
the
the
specific
issue.
I
do
hope
our
workaround
works
now
efficiently.
C
I think the installer issue for vSphere IPI — I think that's a blocker for a new release. So if we can — if we have a way to push that with the installer team... It doesn't affect OCP; it only affects OKD. So maybe we can use that as a push: it's a blocker.
B
There's
a
lot
of
process
involved,
so
we
will
obviously
push
that
into
master
first
and
then
we'll
need
to
back
part
it
to
4.10
branch,
and
that
definitely
requires
a
bugzilla
connected
to
it.
We
we
can
put
on
the
backzilla.
We
can
save
this
okay,
specific
and
it'll
it'll
pass,
but
we
still
need
the
engineers
that
own
the
installer
to
yeah
to
kind
of
say
this
looks
good
to
me.
I
I'll
be
following
through
with
that,
though
so
hopefully
yeah.
B
If,
if
that
really
ends
up
blocking
us,
I
doubt
there
will
be
a
release
this
week,
but
if
it
doesn't,
we
will
hopefully
have
a
release
by
end
of
the
day
after
and
yeah.
So
I'll
definitely
follow
up
with
you
on
that
and.
A
Can
one
of
you
pick
up
the
link?
Can
one
of
you
put
the
link
to
the
issues
links
to
the
two
issues
in
the
meeting
minutes
under
the
okd
release
updates
section.
B
Yes,
john,
if
you
could
paste
the
link
to
your
pr
there,
and
I
will
put
one
in
for
the
system
yeah
that
was
rejected.
B
Yeah
not
a
lot
there
yeah
in
terms
of
release
news
really
just
we
have
fixed.
I
think
the
blocker
that
we
had
so
a
new
release
should
be
possible.
Now
I
haven't
been
like.
I
have
never
done
any
release,
but
even
thankfully
wrote
up
this
this
operating
like
release
operating
standard
operating
procedure
that
I'll
be
following,
and
I
will
tackle
that
first
thing
tomorrow.
B
Hopefully,
by
end
of
that
tomorrow,
there's
gonna
be
another
release,
but
you
know
we
will
see
by
then
that's.
A
B
And then — this isn't really specific to this next release — there has been some talk internally about how we could make OKD fit in better with the whole OpenShift development model. I don't really want to talk too much about it, because it was internal and there haven't been any decisions made as of today, but there will probably be some changes coming, and hopefully that will make OKD fit in much better with how OpenShift as a whole is being developed, and will make—
B
Let's say, the way from the OKD operating system to RHEL CoreOS — it will make that feedback loop much shorter and just much better, and it will enable folks external to Red Hat to contribute in a meaningful manner, much more than all of you have been able to, because I know it's been hard to contribute, or even just to rebuild the things — and I think that will be easier in the future. But I can't really tell you any details — yeah, not yet. There will be more on this soon.
B
A
All right, great. Up next is Fedora CoreOS news.
E
So, I've written most of the details and the links in the notes. Essentially, on the Fedora CoreOS side, we are moving to Fedora 36 — so, well in advance:
E
Okay — OKD has not moved to 35 yet, and won't be moving to 36 really soon, but still, we don't stand still, and we're looking at the Fedora 36 release, which is now in beta and will be released in a couple of weeks. So we are having a test week — we were having a test week, a test day, last week — but if you're interested, it's still possible to take part in the testing phase: testing the Fedora CoreOS "next" stream, which is based on 36. So, yeah, do feel free to chime in; the timeline for the rebase is in the third link.
E
Apart
from
that
we've,
we
are
planning
to
remove
the
liberalic
utils
from
federacress.
Essentially,
if
you
don't
know
what
that
is,
then
probably
you
don't
need
to
care.
E
Unless
you
are
explicitly
using
that,
then
you
probably
won't
mind
us
removing
it.
So
just
a
heads
up,
it
was
used
in
about
by
podman,
but
it's
not
used
anymore,
and
then
we
have
a
small
change
that
is
coming
to
the
way
we
specify
the
format,
the
the
the
version
compatibility
that
we
specify
for
vmware
images,
so
the
vmware
platforms
themselves
are
going
end
of
life.
E
Some
point
later,
I
think
it's
this
year.
I
don't
remember
the
exact
dates,
but
some
versions
are
soon
to
be
end
of
life
and
those
versions
were
preventing
us
from
increasing
the
hardware
version
in
our
vmware
images,
and
so
once
those
platforms
with
the
end
of
life,
we
will
date
version,
and
so
this
will,
like
the
default
messages
have
become,
will
require
you
to
use
the
vmware
version
higher.
It's
not
completely
blocking
if
you're
still
using
an
older
version,
you
can
still
use
those
images.
E
Just
have
to
do
a
small
manual
change
with
both
images.
Everything
is
linked
in
the
documentation.
It's
not
the
end
of
the
world,
it's
just
by
default.
It
might
not
work
on
all
the
version
now
well,
not
now,
but
soon
and
finally,
we
have
virtualbox
images
coming
soon,
so
they
are
not
fully
displayed
yet
into
all
interface.
E
Probably
they
will
be
in
the
next
release
or
or
the
one
right
after,
but
you
can
have
a
look
and
give
them
a
try
if
you
want
to
try
a
visual
box
images
of
a
rocker
s
and
that's
about
it
for
me
for
this
week,.
A
G
Okay, so we had the docs meeting last week. So, the first item is to do with the community repo. We have a community repo in the openshift GitHub org, which is just called "community", and we've moved some of the pieces across. The proposal is that we do not continue the membership list.
G
So
currently
there
is
an
a
reasonably
out
of
date,
membership
list
of
the
working
group
in
that
community
repo.
So
we
thought
well
at
the
minute
we're
not
to
a
stage
here.
We
actually
have
official
membership
of
the
working
group,
but
so
we
should
drop
the
membership
list
as
it
stands,
and
we
do
need
to
actually
make
sure
that
we
list
the
offices
for
each
working
group,
because
that
is
a
requirement
in
the
charter.
G
A
And just for a little more context — yeah, Timothy, you should be on one of the lists. Yeah, we'll — we're going to—
A
Yeah
so
we'll
we'll
take
care
of
that.
The
other
thing
is
that
in
terms
of
and
we'll
talk
about
this
later
in
terms
of
the
officers,
shares
of
the
subgroups
will
be
denoted
and
we
have
to
vote
on
that
we're
going
to
take
a
vote.
Ultimately
it's
up
to
the
chairs
of
the
main
group,
but
we'll
take
a
vote
from
amongst
the
chairs
and
get
the
feedback
from
the
greater
group
at
large
about
who
should
be
chairs
of
these
subgroups.
It's
pretty
straightforward.
A
I
think,
and
we'll
talk
about
that
a
little
bit
later.
One
of
the
other
things
that
came
out
of
the
docs
meeting
was
that
the
meeting
minutes
are
going
to
go
into
the
website,
so
I
ran
it
by
brian.
You
know
what's
the
best
place
and
it
seems
like
breaking
it
off
of
the
hack
md
for
each
meeting
and
creating
individual
pages
going
into
the
site,
for
that
is
good,
because
then
you
can
point
people
directly
into
a
particular
meeting
and
not
make
it
a
really
long
page.
A
I'm
looking
to
automate
this,
so
I
actually
started
messing
around
a
little
bit
with
pulling
from
the
hackmd.
You
know
having
delimiters
polling
and
then
putting
in
merge
requests
and
so
I'll
be
fixing
that
automation
a
little
bit
better
and
hopefully
it'll
be
functional,
probably
within
the
next
week
or
two.
So
we'll
have
an
automated
way
of
our
meeting
minutes
getting
up
to
the
to
the
website.
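Editor's note: the split-on-delimiters step being described could be sketched as below. The delimiter format, file names, and output layout are assumptions for illustration — in the real flow the source would be fetched from HackMD (e.g. `curl -fsSL https://hackmd.io/<doc-id>/download`) and the result committed via a merge request; here a sample file stands in.

```shell
#!/bin/sh
# Sketch: split a combined meeting-minutes markdown file into one page per meeting.
set -e

# Stand-in for the HackMD download; the delimiter comment format is an assumption.
cat > minutes.md <<'EOF'
<!-- meeting: 2022-04-05 -->
Docs meeting notes.
<!-- meeting: 2022-04-12 -->
Working group notes.
EOF

mkdir -p site/minutes
# Each "<!-- meeting: DATE -->" line starts a new output file named after the date.
awk '/^<!-- meeting: /{ gsub(/<!-- meeting: | -->/, ""); out = "site/minutes/" $0 ".md"; next }
     out { print > out }' minutes.md

ls site/minutes
```

After this, each per-meeting page can be committed and linked individually, which is what makes deep-linking to a single meeting possible.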
G
Okay, so Brandon has been looking at the styling; I'm waiting on a pull request to actually update that. There is a link in the documentation HackMD — I'll post it in chat — but if you go to the HackMD for the documentation working group, you will actually see it there. So that is the prototype of the new styling, and the dark theme hasn't changed that much.
G
The
light
has
had
bigger
changes,
look
and
feels
pretty
much
the
same,
but
it's
a
lot
easier
to
read
a
lot
more
accessibility
verified
and
I
think
it
looks
quite
nice.
So
I'm
waiting
for
that
to
go
live.
G
Okay,
so
then
we've
actually
started
pulling
some
technical
documentation
together.
Really
following
on
from
the
discussion
we
had
two
weeks
ago
in
this
meeting,
I'm
actually
doing
it
on
a
fork
in
my
repo
just
to
actually
get
stuff
going,
and
that's
that's.
Where
john's
put
the
pull
request
again,
the
link
is
in
the
docs
meeting
groups.
G
If
we
put
it
in
put
it
on
our
site,
we
need
to
obviously
a
way
to
keep
that
up
to
date.
So
the
version
of
go
go
along
changes
or
any
updates
to
okd's
base
and
just
really
need
to
work
out
how
we
keep
this
up
to
date.
I'm
nico
you
want
to
come
in.
D
Yeah, I just wanted to respond a little bit about the images thing. You know, our team — the cloud team — if you look at, say, the machine-api-operator, we've tried to maintain two sets of Dockerfiles. We have one called Dockerfile.rhel that we use for the actual release stuff, and then our regular Dockerfile is the public one — anyone in the community can build it. We've tried to share that pattern with other teams internally, but I think, you know — John...
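Editor's note: the two-Dockerfile pattern described above looks roughly like the following. The image references and build steps are illustrative sketches, not copied from the actual machine-api-operator repo.

```dockerfile
# Dockerfile — community-buildable variant: only publicly pullable base images,
# so anyone outside Red Hat can rebuild the component.
# (Illustrative references and paths, not the real files.)
FROM quay.io/centos/centos:stream8 AS builder
WORKDIR /src
COPY . .
RUN go build -o machine-api-operator ./cmd/machine-api-operator

FROM quay.io/centos/centos:stream8
COPY --from=builder /src/machine-api-operator /usr/bin/

# Dockerfile.rhel — release variant (sketch): internal RHEL-based builder/base
# images that CI rewrites on the fly and that are not publicly pullable:
#   FROM <internal-registry>/ocp/builder:rhel-8-golang-... AS builder
#   ...
```

The maintenance problem raised later in the discussion follows directly from this shape: every version bump has to be made twice, once in each file, or the two drift apart.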
D
You
opened
up
an
issue
I
think
on
the
mco
repo
and
that
that
actually
caused
the
discussion
internally
because
kirsten
the
person
who
was
looking
at
that
on
the
mco
repo
was
like.
Are
we
even
supposed
to
be
doing
this?
So
there
is
like
a
question
internally
about
like?
Should
we
be
creating
these
rocker
files
and
like,
although
our
team
has
taken
it
upon
themselves,
to
do
this?
This
is
not
guidance
across
the
board.
D
G
But if you can't build it, because all that source isn't there — then do we have a route to actually raise that sort of strategy with Red Hat?
B
Christian
go
ahead,
hey
christian!
I
I
think
that
that's
a
great
idea
and
and
to
add
what
mike
just
said.
So
these
docker
files-
they
aren't
actually
the
from
directives
in
the
dockerfiles,
the
actual
image
references.
They
aren't
the
canonical
place
to
store
them.
They
are
actually
and
there's
those
are
the
the
references
ci
uses.
B
So
whenever
ci
changes
the
newer
version,
some
ci
bot
will
open
a
pr
to
update
that
that
image
reference,
so
they
aren't
actually
and
they
are
kind
of
being
replaced
on
the
fly
by
ci
anyway,
they're,
not
the
canonical
images
that
are
always
used
if
they're
in
the
file,
so
being
being
the
non-canonical
they're,
just
kind
of
kept
in
sync
on
a
best
effort
basis.
I'd
say
being
that
I
would
think
we
have
a
good
argument
to
say:
look.
We
don't
want
these
weird
internal
names
to
pop
up
there.
B
But if you could also — and I think that's a great idea — kind of approach that from the outside and say: look, we already have an engineer internally who says these aren't canonical image names, they're being replaced, and it would just be so much easier for everybody on the outside if they were publicly pullable image references — so could we just make the default those public images, instead of the internal ones?
B
G
D
Right, right — that's what I wanted to get back to: your original question, which is, how do we make that communication happen? And I don't think there's actually a good place to do that right now, and I think that's something that probably Christian and myself — and, you know — should raise internally, to say: okay, we've got this great meeting, we've got this great community that's growing and growing and growing — how do we make that connection? I see Neal's recommending the Matrix room.
D
You
know
like
yeah
like
if
there
are
red
hatters
who
want
to
hang
out
there
and
be
part
of
that.
I
think
we
probably
need
like
an
official
forum
or
something
where
it's
like
yeah
like.
If
you
want
to
make
a
request,
it's
there
it's
out
in
public.
Everyone
can
go,
look
at
it
vote
on
it
do
whatever,
and
I
just
don't.
I
don't
think
we
have
that
currently.
G
C
D
Well, right, yeah. So the other side of this is: you could go through and just open up pull requests and create a Dockerfile.okd in every repo. But the problem is, that's not really going to solve the issue, because what Christian's talking about is using the public images and allowing CI to automatically update them — so they're always fresh, it's not a maintenance burden, and then we can all just use the same images to build from there.
D
C
D
Right, right — and like I said, our team maintains two sets of Dockerfiles, but we've had issues in the past where we rev a version and someone goes and forgets to update the community Dockerfile. Now it's got an old version of golang on it, or whatever, you know? We don't want to create that burden for ourselves, right.
B
Yep
and
and
those
kinds
of
community
specific
doctor
faults,
they
already
exist
in
a
couple
of
places
like
I
think,
you're,
the
the
only
team
that
just
made
them,
but
there's
other
other
teams
that
I
pushed
them.
You
know
I,
I
pushed
a
community
dockerfile
onto
them
because
we
needed
an
ironic
okd
specific
file
and
they
are
mostly
in
like
in
some
cases,
obviously
things
but
really
what
we
are
using
in
that
place
is
just
the
centos
stream
base
and
that
that
should
really
just
work
for
our
ci
as
well.
B
So
I
think
eventually,
we
really
want
to
improve
our
ci
as
well
to
just
use
the
same
images,
and
then
we
have
this
internal
second
build
system
for
for
building
the
actual
release,
payloads
of
ocp
of
openshift
anyways,
while
we
in
in
okd
land
just
use
the
builds
from
our
ci
system.
B
So
really,
I
think,
moving
that
these
image
places
due
to
just
the
central
stream
base
would
would
maybe
even
work
even
for
for
the
for
ci
that
we
already
have
without
because
right
now
these
images
can't
be
publicly
distributed
because
they
they
aren't
just
the
ubi
base.
That
is
publicly
available.
They
are
also.
They
also
contain
some
some
rpms
that
come
from
from
a
rail
gun
repository,
so
we
can't
just
make
them
publicly
available
without
a
subscription,
because
everybody
loves
subscription,
though.
B
Everybody
loves,
subscription
manager
and
subscriptions;
no
don't
don't
we
all
so,
but
we
we
just
have
to
make
the
same
default
of
of
being
open.
B
I
think
here
and
I
I
think,
that's
a
valid
and
good
argument,
and
I
think
our
management
will
also
understand
that,
because
it's
much
much
easier
for
external
people
also
to
help
us
with
our
work,
if
they
can
actually
build
the
stuff
themselves
without
having
to
figure
out
okay,
this
this
image
reference,
I
can't
pull
what
what
else
do
I
use
that
actually
has
the
very
similar
if
not
the
same
contents,.
G
I think — I think that's the problem, Christian: we can't even see what the spec of that image is, so there's no way for a community member to work it out. We can probably get it working, but we might be building on totally different versions of the underlying language libraries, which means any pull requests we do may then fail and behave differently when you do an official build. I think that's the biggest issue of just trying to—
B
Absolutely,
and
and
that's
why
I
would
argue
this
should
be
central
stream
for
everything,
and
then
we
can.
We
can
always
relate
our
internal.
Rail
builds
to
a
set,
a
specific
central
stream
thing,
or
you
know
just
compare
the
versions
and
on
the
outside,
you
would
just
be
able
to
to
build
it
with
with
the
centos
stream.
H
E
H
You know, that makes sense to me — like, let's not hurt ourselves by trying to use RHEL UBI, unless the RHEL container group has decided to make it not hurt to use RHEL UBI. Like, you know, if they decide to go through with it — you know, I know—
H
Scott
mccarty
has
mentioned
it
a
bunch
of
times
that
he
that
we
may
see
like
a
large
expansion
of
the
content
available
in
in
the
rel
ubi
thing
to
cover
like
full
user
space
and
just
not
include
things
like
kernel
boot,
loaders
other
things
that
make
it
useful
for
it
to
run
as
a
as
a
as
a
real
operating
system.
H
If
that
actually
happens,
then
like
full
steam
ahead,
rel
ubi
all
the
things,
but
otherwise
I
think
it's
super
reasonable
for
us
to
just
do
centaur
stream,
and
it
also
provides
us
an
avenue
to
do
something
that
we
don't
currently
do
right
now.
We
don't
currently
make
it
so
that
those
containers
that
are
built,
we
can't
pre-qualify.
H
D
Yeah, I mean, all of this makes sense to me. I think the real issue, again, is that there's a communication that needs to happen internally. Like, when I talk to developers inside Red Hat, you know, there's kind of a bifurcation, right? Some developers are totally on board — when you talk about open source, it's like, "oh yeah, yeah, we're building this open-source stuff" — but then the community can't build it, and they're like—
D
Oh,
that's,
a
big
problem
and
others
just
aren't
even
aware
that
this
is
happening
so
like
there
needs
to
be
a
shift
in
mentality
on
the
way,
the
development
teams
internal
to
red,
hat
kind
of
look
at
this,
and
they
need
to
accept
the
kind
of
notion
of
okay,
we're
going
to
build
these
things
in
a
community-centric
way,
so
that
anyone
can
build
them
and
get
out
of
this
notion
of
like
okay.
What
is
subscription
based?
G
And
anybody
else
wanna
on
this
one
before
we
move
on
okay
and
then
just
just
the
last
thing
on
the
technical
document,
we
also
discussed
creating
automated
test
scripts
and
frameworks,
and
one
of
the
big
challenges
that
we've
ever
since
I've
joined
the
community
and
we've
been
trying
to
get
the
community
to
do
more
testing.
G
I'll
put
that
in
the
chat,
it's
again,
if
you
want
it,
it's
in
the
hackmd
for
the
documentation
working
group
on
the
last
meeting,
which
was
dated
the
fifth
of
april.
So
again
I
think
that's
that's
something
else.
We
want
to
work
on
to
actually
document
how
you
can
test
releases
and
then
hopefully,
more
of
the
community
will
be
able
to
help
us
out
there.
G
There
was
a
there's
been
a
request
for
especially
bare
metal,
I
think
obviously
vmware
vert
as
well
community
members
have
access
to
to
resources
there.
So-
and
that
was
the
last
piece
for
that-
and
I
think
that's
all
we
covered
at
the
meeting.
A
And just to riff on what Brian said: one of the things that I've been thinking about lately is upgrades. A lot of folks had issues with upgrades from 4.8 to 4.9 because of that change in the cloud operator, right? So there, it shifted from being — yeah, Bruce, I think you had an issue with this, right? It went from being the — what is it, cloud controller operator?
A
It
went
from
being
cloud
controller
to
cloud
controller
operator
namespace
or
something
like
that,
and
there
were
some
other
issues
with
the
replica
set
that
I
had
not
actually
spinning
up
the
pods
correctly,
and
that
got
me
to
thinking
is
we
don't
actually
have
any
documentation
anywhere
where,
if
someone's
performing
an
upgrade
and
it
starts
to
fail-
where
do
they
look?
They
don't
know
the
various
name,
spaces
and
operators
to
look
at
when
an
upgrade
fails.
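Editor's note: the kind of first-pass triage being asked for here usually starts with a handful of `oc` commands against the affected cluster. This is a sketch to illustrate the idea, not official troubleshooting guidance:

```shell
# First-pass triage for a stuck upgrade (requires cluster-admin access).
oc get clusterversion                       # overall upgrade state and progress message
oc get clusteroperators                     # which operator is Degraded / not Available
oc describe clusterversion version          # detailed conditions on the upgrade
oc -n openshift-cluster-version logs deploy/cluster-version-operator | tail -n 50
oc get nodes                                # nodes stuck mid-reboot or NotReady
oc get mcp                                  # machine config pools still rolling out
```

The output of the first two commands usually names the failing component, which then tells you which namespace and operator logs to dig into next.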
A
I
think
that
would
be
helpful
because
a
lot
of
times
we
get
tickets
coming
in
that
are
like
yeah.
I
did
this
upgrade
from
four
eight
to
four
nine
or
four
seven
to
four
nine
or
whatever,
and
I'm
stuck
it's
just
not
doing
this,
and
there
are
some
basic
troubleshooting
steps
that
we
could
share
with
people
and
showing
folks
how
to
pivot.
You
know
to
get
onto
a
new
release
and
stuff
like
that.
A
If
we
actually
documented
some
of
that
stuff,
I
think
it
would
lighten
the
burden
on
folks
in
the
chat
and
some
of
the
discussion
messages
that
we
get
and
might
build
up
more
people
willing
to
contribute
because
they
start
sort
of
getting
into
some
of
the
details
of
actually
how
okd
operates.
So,
just
a
thought
on
that
does
yeah
shri
go
ahead.
Just
say
it
yeah.
F
Yeah
yeah
sure
no
yeah.
I
think
that's
a
really
great
idea,
and
the
first
thing
I'm
thinking
of
is
just
like
the
okd
and
ocp.
Are
they
fairly
identical
in
that
regard?
So
ocp
ought
to
should
have
something
right,
I'm
sure
somebody's
written,
something
by
now
basic
troubleshooting
steps
and
like
a
kb
article
somewhere,
we
could
pull
that
first
step.
D
I
think
there
are
knowledge
base
articles,
it's
not
in
the
product
doc,
but
this
you
know
honestly
it's
interesting.
This
is
a
problem
that
I
think
affects
both
ocp
and
okd,
because
the
nature
of
the
upgrade
questions
that
I
see
come
across
the
okd
channel
in
some
ways
parallels
questions
that
we
get
from
customers
who
are
getting
stuck
in
the
same
upgrade
positions.
Now
to
your
question.
D
Unfortunately,
we
don't
have
like
a
great
piece
of
public
documentation
that
I've
seen
I've
just
seen
it
come
up
on
a
case-by-case
basis
where
they
recommend
you
know.
Kcs
articles,
like
the
one
that
comes
to
mind
for
me,
is
like
this
seems
to
happen
a
lot
people
change
vcenters
and
then
want
to
migrate
their
openshift
from
one
to
the
other
and,
like
all
the
vm
names
change
or
something
like
that,
and
it
becomes
a
massive.
D
You
know
pain
in
the
butt,
and
this
is
like
something
that
I
know
customers
have
dealt
with
and
also
you
know,
community
members,
but
I've
only
seen
like
a
kcs
article
about
it.
I
haven't
seen
like
an
official
like
here's,
a
massive
like
workbook,
for
update
issues.
So
I
think
like
having
something
like
that,
especially
if
it
came
from
the
community,
would
be
like
tremendous.
I
think
it'd
be
amazing.
A
Can we — maybe we should just create a doc, and the docs group can take this up: one that just starts to collate links to various places, and maybe small snippets.
G
We haven't really shifted, so the discussion forum is still currently openshift/okd. Okay, sure — so let's go there. At some point we do have to have a migration strategy, because we've got the new okd-project organization.
G
D
I know some of the project teams have begun work on creating troubleshooting docs inside the various component repositories — our team has been trying to do this, and I've seen a couple of others doing it. That's probably another area where I think PRs would be welcome — you know, if we have community members who have figured out how to troubleshoot something on, like, a various—
D
You
know,
whatever
like
a
networking
component
or
something
like
opening
a
pr
to
suggest
changes
to
the
troubleshooting
jock
or
even
to
like
start
a
troubleshooting
dock
like
I
think
that
that's
like
a
tremendous
value
that
could
be
added.
So
if
people
are
kind
of
figure
that
stuff
out
and
they
want
to
make
a
pr
but
they're,
not
sure
where
to
go.
You
know
I'm
certainly
happy
to
help
like
direct
people
or
if
we
need
guidance
on
how
to
put
that
together,
like
I'm
happy
to
get
involved
as
well.
So.
I also just want to point out that there's a lot of overlap between upgrade troubleshooting and initial-install troubleshooting, in that the cluster is in a state — how do I find out which component is failing, so that I can get to the next steps of troubleshooting that component? I suspect a lot of the docs will apply to both — but maybe I'm wrong about that. Like, basically, yes, maybe you look in a different initial log to figure out what's broken, but then everything after that is, "oh, I—"
B
I agree that we don't currently have great debugging documentation — like, "how do I debug thing X?" It's mostly, as I think Mike said, on a case-by-case basis: somebody will find the right knowledge-base article and link it out to the customer, or whoever requested that info. But it's never really "let's round up all the necessary info." It's even — even when you debug some CI failures, there's the must-gather: what do I look at first? There is a lot of documentation internally, but it's not like—
B
I
would
very
much
like
to
at
least
participate
in
in
creating
that
that
information-
and
I
know
that
mike,
has
been
absolutely
instrumental
to
creating
some
of
the
best
documentation
I've
seen
in
the
past,
which
is
the
provider
onboarding
docs,
which
kind
of
leads
into
this,
it's
more
from
the
development
side,
but
yeah,
and
we're
now
working
on
on
more
of
like
on
the
continuation
of
this
internally
so
yeah.
B
If
I
can
help
with
any
of
that,
I
I'm
very
happy
to
do
that,
and
I
will
try
to
raise
that
topic
specifically
of
like
debugging
document
documentation
for
debugging,
specifically
and
maybe
making
that
an
open
place
instead
of
like
the
knowledge
base
article,
which
is
always
behind
the
the
login
okay,
I
don't
think
it's
a
pay
wall,
it
might
be
in
some
cases.
I
I'm
not
sure
so.
This
is.
A
I think that is something that I've not seen anywhere externally, and I think it would benefit us, because we always ask people to provide one — so they're there, and multiple people could look at them — but we don't help people figure out what you would do with it if you wanted to help troubleshoot when someone posts their must-gather.
D
I
think
that's
a
really
poignant
point
there.
Jamie,
like
the
musk
gathers
like
it,
is
just
kind
of
a
collection
of
data
and
there's
kind
of
this
tribal
knowledge
about
what's
in
there
and
then
like,
depending
on
what
component
you're
looking
at
you
kind
of
know
where
to
go.
But
there
are
also
a
couple
tools
that
we
should
highlight.
D
There's
one
called,
oh,
must
gather,
there's
like
another
version
of
that
omas
gather
tool
and
I've
also
got
a
tool
that
I've
been
working
on
and
like,
oh,
must
gather,
gives
you
like
an
oc
interface
to
a
must
gather.
So,
like
you
unzip
you
untie
a
must
gather
and
you
point
this
tool
at
it
and
all
of
a
sudden
you
can
do
like
omg
get
pods
and
it'll
show
you
you
know,
so
you
can
interact
with
the
must
gather
as
if
it
were
a
cluster.
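Editor's note: a quick sketch of the omg workflow described here; the archive and directory names are placeholders, and exact subcommand support should be checked against the tool's own docs.

```shell
# omg (o-must-gather) gives an oc-like, read-only view over an unpacked must-gather.
pip install o-must-gather

tar xf must-gather.tar.gz                  # placeholder archive name
omg use ./must-gather.local.1234567890     # point omg at the unpacked directory
omg get clusterversion                     # same verbs you'd use against a live cluster
omg get pods -n openshift-image-registry
```

Because it reads only the files on disk, this is safe to run against someone else's posted must-gather without any cluster access.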
D
And
then
you
know
the
tool
that
I've
been
working
on
is
like
a
web
interface
so
like
it,
creates
a
static
web
page
for
you
from
a
must
gather
that
highlights
where
problem
areas
are
happening
and
like
so
you
can
just
directly
go
to
those
records
and
just
look
at
them
immediately.
So
I
think,
like
yeah,
sharing,
some
of
these
tools
is
probably
you
know,
probably
helpful
as
well,
because
those
are
the
main
ways
that
we
interact
with
these
things.
D
Yeah
well,
I
I
actually
had
been
working
to
get
it
into
our
ci
system
so
that
it
would
be
available
everywhere.
So
that
is
the
first
version
of
the
tool.
I
wrote
it's
a
python
application,
but
I'm
actually
rewriting
it
in
rust
now,
so
that
I
can
bundle
it
as
a
binary
that
will
get
included
in
the
ci
package
so
like,
if
you
want
to.
If
you
want
to
see
where
the
new
version
is
at
I'm,
and
I'm
almost
done
with
my
rewrite.
D
So
if
anybody
is
into
rust
and
wants
to
help
out,
that's
where
the
new
version
is
going
to
be
at
but
like,
but
yeah
so
like.
I
think
that
looking
at
musk
gathers
through
the
lens
of
the
tooling
that
we've
created
to
like
understand
them
is
actually
probably
like
the
biggest
step
up
to
figuring
out
like
what
do
you
want
out
of
a
must
gather.
You
know.
D
So
I'm
not
like
this
is
really
far
out
there,
but
like
if
you,
if
you're
getting
into
thinking
about
how
to
create
like
ci
infrastructures
or
how
to
use
the
current
red
hat
ci
infrastructure
to
replicate
tests.
I
think
this
is
an
interesting
tool
to
look
at
because
it
will
allow
you
to
take
the
release
repository
and
then
run
specific
tests
out
of
it
like
against
a
local
cluster,
so
you
can
build
like
almost
a
mini
version
of
your
own
ci
infrastructure.
D
It
doesn't
actually
run
prowl
and
all
those
other
things,
but
it
kind
of
shortcuts
some
of
that
process
for
you.
So
I
know
like
john,
I
know
you
you're
like
deep
into
some
of
this
stuff,
so
it
might
be.
You
know
this
might
be
something
that
would
be
interesting
with
the
work
yeah,
especially
as
people
start
to
think
about
putting
pr's
up
that
might
change
the
product
configurations
and
whatnot.
This
is
another
tool
that
just
gives
a
window
into
like
how
we're
doing
things
excellent.
This
is.
B
Great,
I
just
wanted
a
second
that
this
has
been
super
useful.
Richard
is
my
team
lead
we're
on
the
specialist
platform
team.
He's
the
team
lead
there,
and
so
we
we've
been
using
that
tool
from
time
to
time
and
it's
yeah,
it's
been
proven
very
useful
to
us
and
especially,
if
you
don't
want
to
deal
with
all
this
complexity
of
pro
you
just
want
to
consume.
Whatever
is
in
the
release.
B
Repository
prowess
is
essentially
a
runner
for
kubernetes
jobs,
and
this
will
just
create
the
job
for
you
that
you
want
and
it'll
run
it
locally
in
a
kind
cluster
and
it's
yeah
it's
it's
super
useful
and
yeah
great.
Bringing
that
up.
Thank
you
so
much
michael
for
that.
I
I
think,
because
our
rci
system
is
super
complex,
we
don't
have
to
lie
about
that.
It's
it's.
B
You
know
not
not
trivial,
and
this
makes
it
actually
very
easy
to
just
do
one
thing
and
focus
on
one
thing
and
then
you
can
upstream
that
into
the
proper
pro
config
after
testing
your
changes
locally.
So
that
really
yeah,
I
think,
for
even
trying
the
concept
of
pro
our
rci
system,
which
is
also
the
okd
build
system.
We
essentially
reuse
our
ci
system
for
pro
kd
as
a
build
system,
or
you
could
turn
it.
The
other
way
around
our
build
system
is
also
the
ci
system
for
our
product,
but
yeah.
A
Excellent,
well,
I
want
to
move
on
because
we've
got
about
nine
minutes
left
and
and
a
couple
more
things,
but
if
folks
have
any
more
comments
or
suggestions
for
stuff
brian's
going
to
create
the
discussion
thread
and
then
folks
can
chip
in
on
that
and
christian
will
make
sure
that
you
know
where
that
is
so,
you
can
add
any
additional
stuff.
So
moving
on,
we've
got
two
issues
that
folks
wanted
to
to
talk
about.
So
who
put
up
the
the
seth
rook
john
was
that
you.
F
Yeah, so John and I both ran into this issue. It looks to me to be an SELinux thing, but I can't say for sure; for whatever reason, with the release of Fedora CoreOS 35, which is underneath OKD 4.10, this latest release, CephFS mounts just do not work anymore. That seems to be impacting anyone who is running Ceph within their cluster or trying to mount CephFS into their cluster from an external place. Personally, I'm running Rook in my cluster, and all of my CephFS mounts are basically broken at this point, notably including the image registry, which is how I noticed, because my builds weren't working anymore, and that was a pain. Block mounts still work, which makes it really weird.
F
I'm not an SELinux expert; John very kindly figured it out before me and filed a Bugzilla with FCOS upstream, but I don't think anyone has looked at it. So I wanted to take advantage of being on the same call as Timothy, if he's still around, to bring that up, and also to raise awareness with everyone else, just in case they see issues like that.
C
Yeah, that's on my list to look at more closely next week, agreed. I think it probably is an SELinux thing, but that's been lower on my priority list than the build stoppers.
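Since the suspicion is SELinux, the usual first diagnostic is to look for AVC denials in the audit log. The snippet below is a sketch: the denial line is hand-written to show the shape of what `ausearch -m avc` would print on an affected node; the contexts such as `cephfs_t` are assumptions, not output captured from a real FCOS 35 host.

```shell
# Hypothetical AVC denial of the sort SELinux logs when a container is
# blocked from a CephFS mount (hand-written example, not real output).
avc_line='type=AVC msg=audit(1649770000.123:456): avc:  denied  { read } for  pid=1234 comm="registry" scontext=system_u:system_r:container_t:s0 tcontext=system_u:object_r:cephfs_t:s0 tclass=dir'

# Pull out the denied permission and the target SELinux type, which is
# what you would bring to a policy discussion or feed into audit2allow.
perm=$(printf '%s\n' "$avc_line" | sed -n 's/.*denied  { \([a-z_]*\) }.*/\1/p')
ttype=$(printf '%s\n' "$avc_line" | sed -n 's/.*tcontext=[^:]*:[^:]*:\([a-z_]*\):.*/\1/p')
echo "denied=$perm target_type=$ttype"

# On a real node you would instead run, roughly:
#   sudo ausearch -m avc -ts recent   # list recent denials
#   sudo setenforce 0                 # temporarily permissive, to confirm the theory
```

If the mounts start working in permissive mode, that would support the SELinux theory discussed above.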
A
So, while Timothy is looking at this, I want to quickly get to this one; we've got about seven minutes left. A user posted this in the chat and also as an issue, and I don't know that it was ever resolved: a deny-all network policy does not correctly restrict traffic to a pod when using NodePorts.
A
So, within that node, does anyone know if that's correct? Is that a known bug, or is that expected behavior? There was some discussion about this, about how a deny-all policy works.
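For reference, a deny-all ingress policy of the kind under discussion looks like the sketch below; the namespace is a placeholder, and this is a generic example rather than the user's actual manifest:

```yaml
# Generic deny-all ingress policy (illustrative). An empty podSelector
# selects every pod in the namespace, and listing Ingress as a policyType
# with no ingress rules denies all ingress to those pods.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: example-ns        # placeholder namespace
spec:
  podSelector: {}              # every pod in the namespace
  policyTypes:
  - Ingress                    # no ingress rules listed, so all ingress is denied
```

One commonly cited wrinkle, which may or may not be the cause here, is that NodePort traffic can be source-NATed to a node IP by the service proxy, and some network plugins treat node-originated traffic specially (for example, so kubelet health checks keep working), so a policy like this may not restrict NodePort traffic the way it restricts pod-to-pod traffic.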
F
One of those weird things I've noticed: OVN-Kubernetes has a couple of just weird corners you wouldn't expect. I ran into a thing a few months ago with external traffic policy; one of the policies just wasn't implemented, and I tracked it down to a bug, and I was just like, well, all right, I'll wait.
A
Is it the default now in OCP? I know in OKD it's the default now, but is OVN-Kubernetes the default in OCP now?
E
My assessment is that if it's not an issue in the Fedora CoreOS tracker, the Fedora CoreOS developers won't see it. So I would say the first step is to make an issue there.
C
Here's my, I'm going to call it a beef, but I mean that tongue-in-cheek: we've been told over and over that we're supposed to be using Bugzilla for reporting pretty much any type of bug, and I do most of the time for anything significant. So I opened a Bugzilla. Shouldn't the Fedora folks be getting stuff from Bugzilla in order to see this? Because otherwise you have to do two things: deal with Bugzilla and then go do something someplace else.
E
Yeah, so that's the thing that is difficult: we have a mix of products and community, and so there are different places to report different things, depending on which side of the contract you're on. Essentially, Fedora CoreOS is a community project, and unfortunately we do not use Bugzilla like the rest of Fedora; we use the GitHub issue tracker. So anything that is related to Fedora CoreOS itself is best reported in the Fedora CoreOS tracker.
E
No, if you have something that is OCP-based, OCP is a product, so essentially, if you want to have this thing fixed, you need to report it in Bugzilla, because that's where OCP bugs are. But don't worry, we are in the process of changing that too, so this is changing soon. I don't know how much of this is public, but it's not pretty.
E
I don't care, I'm just trying to explain to you what the state is; I'm not responsible for it. So Bugzilla is probably going away soon, at some point, and everything will be Jira-based, but yeah.
B
Which is a bit of a special process for us, but yeah. So this is, I think, entirely our fault internally in OKD, not the FCOS folks', and I think the Bugzilla is correctly assigned to Ceph in Fedora. But yes, that's still against the project release, so they might not be looking at it that quickly, and yeah, I think opening it on the Fedora tracker makes sense. The FCOS folks really do a great job of reminding people in their respective teams to look at things when they require those changes, much better than we do in OKD. Maybe we just have kind of the OpenShift arc as our focus, whereas everything else is just, put it on Fedora and they'll fix it.
A
So yeah, if the three of you could work together to get it to the right place, that would be awesome. All right, three last things. CRC: we've gotten a slew of OKD CRC questions over the past couple of days, so CRC is still somewhat viable, I think, until MicroShift covers more ground.
A
So if we can put out a call for folks to build CRC, and maybe play with CRC a little bit so we can improve the documentation, that would be helpful; the documentation group is going to take this up. If you know anyone who wants to build CRC, Charo left some great instructions on how to do it; it's not that hard, and we talked about automating it. On the survey: I've been reaching out to Driti and she has not responded, so our survey is still in limbo.
A
But I do think we should do the survey to get a sense of OKD usage. And then the last thing is, there will be an email sent out to vote for the subcommittee co-chair, so look for that. Sandro, I think, has thrown his name in for the OKD virtualization subgroup.
A
Awesome. Just a few minutes over; thanks, folks. Look for this meeting video to be up relatively soon and the notes to be up as a web page. Great meeting; talk to you.