From YouTube: OKD Working Group Meeting 05-10-2022
Description
The OKD Working Group's purpose is to discuss, give guidance to, and enable collaboration on current development efforts for OKD, Kubernetes, and related CNCF projects. The OKD Working Group includes the discussion of shared community goals for OKD 4 and beyond. Additionally, the Working Group produces supporting materials and best practices for end-users and provides guidance and coordination for CNCF projects working within the SIG's scope.
https://okd.io
A
And don't forget to put your name in the attendees list; that helps us know who's here and who we might need to reach out to if they weren't here for something.
C
No, no. Probably the biggest thing for that release is the etcd fix that is supposedly in there, that's supposed to fix the possible data issues. That looks to be the biggest thing, and the installer fix is there for VMware. Those are probably the two biggest things.
A
So, Christian, do we have a sense of, and I don't want to pin anyone to anything, but will it be you or Vadim, or a combination of you and Vadim, in the near future cutting releases? Who do we know who's going to be cutting the releases? Is there a way for us to know?
B
I think there is currently no way for you to know. Essentially, what we're trying to do is, well, we already know which team internally will pick up responsibility for doing that, and the workload will just be shared better. We don't want to have it all on Vadim's shoulders; we want to have an entire team that is responsible, and they will take turns doing the releases, which is also a great onboarding experience for new engineers that come into the OpenShift organization.
B
Obviously we will use that as something they can gather experience with and, as a side benefit, there will be less work on the core maintainers for the actual releases. We're currently in the process of working out the details there, so I can't name anybody, it would be too early for that, but there is going to be an entire team that will take on this responsibility.

D
Excellent, fantastic.
A
Excellent. Any other questions or comments in terms of OKD releases? Anything for Christian?
B
Maybe just a quick heads-up, and I mentioned that earlier: we will introduce some changes to the way we build OKD. One of the things I'm actually going to do, or trying to do soon, is moving the OKD builds back into our Prow system from the external Cirrus CI that we have. We want to move that back into Prow; we'd have just more ease of maintenance, especially now that we want to give the responsibility for cutting releases to another team. We want to try and pull that back in, and that'll also enable us to do more things internally than just that one build that we currently do. Just as a quick heads-up.
A
Right, moving on now to the Fedora CoreOS updates with Timothy.
E
Can you hear me? Thank you. All right. So fortunately, the main thing on the Fedora CoreOS side is that we are moving the testing release to Fedora 36 today, which has been released today too.

E
So yeah, we're moving forward to Fedora 36, and this will be stable in two weeks. I don't remember exactly which version is shipped in OKD right now, but I think it's 34-something, and hopefully, yeah, we're going to stop using 35 really soon, in two weeks basically. Apart from that, I've also put a link into the agenda: we've started something, and I don't know if we have mentioned it here before, but we're sending out these monthly reports.
E
It's some sort of summary that you can find on the Fedora discussion forum, and it helps you get a little bit of an overview of what's happening in Fedora CoreOS across the months and the changes along the way. So do check them out; we try to publish them regularly every month. And yeah, that's about it for me.
B
Yeah, just, Timothy, could you paste the link to that latest monthly summary? Awesome, thank you very much.
F
Okay, so we had our meeting last week; a few things came out of it. We've published the style updates that Brandon did, so you should see, particularly in the light rendering, that it's easier to read, with better contrast. Not so much in the dark, if you like dark mode, but in light mode there have been a number of changes there.
We've got the community repo archived, and we're beginning to try and tidy up the repos, ready for moving to the new org, and coming up with the process of actually moving into the okd-project org, out of the openshift one.
F
We are planning to send the community survey out fairly soon, so if anybody wants to look at it or update it before we finally send it out, please do. And then I did move one issue into a discussion, and that is the good old topic of communication channels.
F
There
was
a
request
to
actually
sort
of
publicize
the
matrix
channel,
but
no
one
seems
to
be
using
it.
So
just
a
a
really
discussion
as
to
is
that
something
that
we
want
to
actively
promote,
and
if
we
do
are
people
actually
going
to
use
it
and
because,
if
we
invite
people
to
using
a
channel,
then
some
of
the
community
do
need
to
be
there
and
respond
to
questions
and
comments.
B
I'd be open to using the Matrix channel, but yeah, I haven't seen a lot of engagement. I'm trying to check whether I'm already on the channel; Matrix is a bit slow for me today. But yeah, in general I do like Matrix, and I think it's a good alternative to Slack, but we should probably agree on one avenue and not have both simultaneously.
A
Vadim chimed in on the discussion that he thought it would be helpful to have multiple places, particularly if they could be bridged. I think, you know, Brian makes a good point: if we're going to have places, they have to be staffed, so to speak, in the sense that there have to be people there in the community that are actively engaging and helping and whatnot.
C
This is John. From my point of view, honestly, I mean, I'm busy enough watching Slack and the OKD discussion groups and issues; I'm not sure if I necessarily want to look at yet another place where, you know, content could be coming in. I mean, in my mind, if we're gonna move, then let's move and not use Slack at all. But like I said in the chat, the advantage we get with Slack currently is that we also get the OCP stuff, you know, when somebody has questions about OCP.

D
Yeah, I'm trying to let everybody else weigh in on this. I mean, I have an opinion, but, you know, I'm along the lines of John: I'm watching enough channels, and every time we ask, I haven't even been able to log into Matrix. Not because there's anything wrong with it; it's just that I haven't even found the time to go and do that and retest it, because I tested it ages ago, I got in, and I'm sure I forgot it.
D
So I'm for, I guess, not opening yet another one, but yeah, we're missing a few people who were very pro-Matrix on this call right now.
F
Yeah, I think Neal was the driving force behind it, so I'm sure he's gonna come into the conversation at some point. Yeah, I agree.
F
If we're opening Matrix, then the question is: do we want to keep Slack going? Or, is it possible with Matrix to actually consolidate? Chris, Vadim seemed to say that you could use one channel to consolidate them all.
B
It would also post that on Matrix. I do think, we don't have to use it as our main channel, but we could, because it's already there and a few folks are in there, and especially the Fedora community is very active on Matrix, because it's essentially the IRC replacement there, and Matrix also bridges through to IRC, so yeah.
B
I would suggest we just keep it, and maybe in the room description we just add a link to the Slack channel. And maybe we can also have something like team tags on both Slack and Matrix, where we could point people towards, you know, "tag this team", and then we'll have individual contributors getting notified and hopefully answering whatever the inquiry is.
F
I don't think anyone's actually found it or used it yet. So yeah, I mean, if somebody is a Matrix guru and wants to work out if it's possible to do that cross-posting and integration, that would mean that we can still just keep watching on one channel.
G
Yeah, Brian, like I sort of... well, I think John had an excellent point, and I totally agree that fewer is better. But what you're saying is that basically Red Hat itself is, you know, half using Slack, half using Matrix, and so if you're only using one, you're going to lose half, and both Fedora and OCP are big feeders into OKD.
B
And
a
lot
of
the
openshift
developers
aren't
on
the
on
the
upstream
kubernetes
slack,
because
we
have
a
separate
instance,
an
internal
one
and
while
the
matrix
channel
really
is
open
to
everybody,
of
course,
and
well,
so
is
the
the
slack
channel
that
we
have.
But
you
don't
yeah,
there's
many
developers
who
aren't
who
aren't
on
the
kubernetes
slack.
D
I think we volunteer Neal, and then I'll try and reach him next week while we're at KubeCon, when we're in the right time zones, and see if we can't get it working. So I'll volunteer, and I'll reach out to Neal, because he seems to know the most about it. And then I'm buying dinner for Christian at some time, so we can all test it together over a glass of sangria or something.
A
All right, Brian, you got anything else in terms of documentation stuff? I think that's it. We did open the discussion: there is a discussion item on the transition for the repos. That's where we're going to be discussing the plan to transition to the new org, so expect conversations starting there shortly. Brian and I have both been busy, respectively, but I think we'll start chiming in on that and hammer out a plan pretty quickly to move to the new repo.
A
All right, moving on. Next up is... I'm discombobulated today. We talked about the survey, talked about the documentation. The Rook Ceph status: is that taken care of now? John, do you know?
G
Yeah, John, yeah. I was looking at that whole chain of things, and it looks like it's actually a Ceph fix in the kernel; looking at the patch that was put in, well, it was a change for the Ceph side.
G
Right, although, as I added, the Bugzilla report that contains the comment with the patch is still open, so I would assume that they close that when it makes it in. And then what wasn't clear to me is whether or not that patch actually does us any short-term good, because if you're on 4.9...
G
The only way you can get 4.10 at the moment is by getting to the earliest 4.10, which is the only upgrade path, and presumably that wouldn't have the patch in it. Which would mean that your Ceph would be turfed, and then you'd have to sort of patch it by hand and recreate all of the PVs, which doesn't sound like a fun experience, really.
B
Yeah, it might land in a nightly, but I'm definitely not gonna work on backporting that there, because we focus on 4.10 now. Now, it's not out of the question that you could upgrade directly, but obviously we haven't tested it. So maybe we can test it before the next release; kind of test the direct upgrade strategy where we just skip a release.
B
There isn't an obvious reason that it shouldn't work, but obviously there might be, there might be some issues. I just...
B
So if it lands in Fedora 36, we should be getting it in 4.10, yeah. I think, because, well, we talked about this a couple of times: do we do major upgrades of Fedora within one minor OKD version?
B
Yeah, we've seen in the past that that does create issues for us, but sometimes it doesn't, so we will definitely test it. If it upgrades fine, great; if it doesn't, we'll have to see whether that kernel gets backported to Fedora 35, which we're currently on, and that's a possibility as well. Fedora 35 is going to be maintained for another six months or so, not Fedora CoreOS itself, but the Fedora packages themselves: the RPMs, the kernel, everything.
B
So if we can't make it work with 36, we can definitely open a Bugzilla so it gets backported to the Fedora 35 kernel, if that isn't happening already.
C
I don't know how I did that, but anyway, so there's a link to the kernel patch, but I'm not sure, you know, who can track that to see where it is in the process.
B
To know, yeah, we will have to ask the Fedora kernel maintainers.
C
So there's a kernel patch that is to fix a Ceph bug. If you look in the meeting notes, there's a link to it, and we're just curious when, you know, that might hit Fedora. Is it going to hit Fedora at 36? Is it going to be backported to 35? Because it's a significant bug for multiple people; you know, Bruce can't update to 4.10 because of the bug.
B
Yeah, I guess we can follow that on the Bugzilla; we'll make sure to ask there for a backport if they don't already plan on doing it.
G
Yeah, and I guess it would be useful to know sort of what the outcome is. Because if there's basically no way to upgrade to a working version of 4.10 from 4.9, then I might as well, you know, just do the update, throw away all my PVs (after, of course, backing them up), and then recreate everything from scratch on the Ceph side. Which is definitely...
B
Worth trying to upgrade; there isn't really a reason it shouldn't work. It's just, we don't recommend it because we don't test it. We don't have the capacity to test more than one upgrade path at the moment, but it's certainly theoretically possible that you could upgrade from 4.9 directly to the newest 4.10 that includes that fix, once it comes out.
B
Right, yeah, you'd have to, exactly, you'd have to use the force flag, because it's not in our upgrade graph, so it'll complain. And we can test that too; I can make a note for the release that includes this one that we test whether upgrades from 4.9 directly might not succeed, and then we might have to think of a contingency plan.
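For reference, the forced direct jump being described would look roughly like the sketch below. The release image pullspec is a placeholder, not a real OKD digest, and the flags are the standard ones `oc` provides for stepping outside the published upgrade graph; treat it as the untested, unrecommended path it is described as here.

```shell
# Sketch of a forced direct 4.9 -> 4.10 upgrade. The pullspec below is a
# placeholder -- substitute the real 4.10 release image you are targeting.
RELEASE_IMAGE="registry.ci.openshift.org/origin/release:4.10"  # placeholder

# --allow-explicit-upgrade: required when targeting an image by pullspec;
# --force: overrides the missing 4.9 -> 4.10 edge in the upgrade graph,
#          so the cluster-version operator won't refuse the jump.
UPGRADE_CMD="oc adm upgrade --to-image ${RELEASE_IMAGE} --allow-explicit-upgrade --force"

# Printed rather than executed so the sketch is safe to run anywhere.
echo "${UPGRADE_CMD}"
```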
B
Yeah, and, yeah, unfortunately, obviously you could override the kernel manually on each node, but that is a lot of manual work, and we don't really want to recommend anybody doing that. So yeah, if you can wait, I think that...
C
Might be the best option. The other issue, too, that we think about is that right now we have a pinned kernel because of the issue with the kernel crapping out. So we have to make sure that that gets fixed before we can even, you know, look at updating to a kernel with the fix; so there's a variety of things that have to happen before that becomes live.
A
Diane, have you reached out to the Operate First folks, or do we even need to do that anymore, given the changes happening backstage?
D
I have reached out; I've had a couple of conversations with them. They are coming to KubeCon next week, so I was going to coerce them over paella and wine: sit down with Christian and Vadim, make them talk to each other, and see if it's even viable. It's the Boston University Mass Open Cloud that has some hosting resources, for people who aren't aware, and there are some resources inside of Red Hat on the Operate First side, and I was hoping to get them to do the code-ready...
A
Well, let me ask first, with the paisley elephant in the room, Christian: if the changes that you're talking about in the background happen, is there going to be a need for automated community build testing? Or do you think that the expanded testing in this new situation will cover a lot more territory than OKD's automated testing currently does?
B
I think both. So we shouldn't see the internal reorganization as solving what we want to solve with Operate First. I think we still want to have those additional resources and have community builds available, because what we're gonna change internally isn't gonna...
B
There are gonna be changes, but not all of them are really end-user facing. So the first thing, of just pulling the build back into Prow from Cirrus, isn't really going to change anything to the outside; it's just that we internally have a much more streamlined process that isn't shelling out to a third platform that we don't control ourselves.
B
So we still want to enable true community builds and rebuilds from folks outside of Red Hat, who currently can't; nobody outside can access Prow and use Prow. So if we have this community build project on Operate First, that would still be a huge benefit. So I think we're just going to do both; and, for example, the Cirrus builds aren't going to be pulled into what I'm doing with Prow now, and that still has to be solved.
Exactly
it's
especially
on
the
okd
machine
os
side.
Currently
you
can't
you
can
create
a
pull
request,
but
the
dci
is
only
going
to
run
if
the
if
the
branch
is
from
the
same
repository
and
only
red
hat
folks
can
obviously
create
new
branches
on
that
repo,
and
that
is
going
to
change.
So
that
is
going
to
be
it's
much
easier.
It's
going
to
be
much
easier
for
for
community
members
to
open
a
pr
and
actually
have
it
tested
without
us.
First
moving
that
branch
into
the
roof.
B
That
is
one
of
the
main
reasons:
yeah,
that's
fantastic!
That
will
be
nice.
A
All right, let's see. I'm still working on gathering info; Charro is not here. Daniel: "I have not seen serial output for installs", this came up; someone gave me the context. Actually, Timothy's here and can probably answer this.
H
Yeah, I sort of figured it out, and I wrote it in the notes. During the bootstrap of the bare-metal IPI, I was interested in getting serial output from the machines that are getting booted, and the way I did it is, on the bootstrap node:
H
I go into this directory where the Ironic service generates the configuration files and then just add whatever I needed. I don't know if there's a process where you can do that without doing it manually like this, but at least I figured out something where I actually get serial output, which was quite handy.
B
I'm not sure there is a more streamlined way of doing it, but have you documented what you did? Could you share a link? Is it in the agenda? It's in the agenda; I just... there's a row.
B
All right, thank you very much. I will follow up on that; if there's a better way to do it, I'll find out.
C
I'm not sure if that patch is going to change that, but when I looked at the bare-metal installer, I mean, it looked like everything in it was referencing FCOS images. So I'm wondering if there's another piece, buried very deep, that has RHCOS images built into it for bare metal, and I'm not sure where to look for that. But I can't reproduce it, so I can't deep-dive into it.
H
The last problem I'm still having with the bare-metal installation is my networking setup, but I think I can actually take that up with OpenShift itself, because it doesn't work with the OpenShift installer either.
H
Oh yeah, so what I'm trying to do is: I'm not providing two separate NICs; I'm providing one NIC with two VLANs, the native one where the provisioning can happen and then a different VLAN, like 10 or something, that the bare metal can continue on. And it's not one NIC; it's a bond of multiple NICs. And the thing I read was that in, like, 4.10 you can actually specify an NMState...
H
Configuration, I think they call it, where you supply the configuration, so they actually show up with the full networking and everything. But during the ironic-python-agent part of it, they lose network, because it starts probing every single network interface over LLDP and figures out VLANs and stuff, and then, yeah, I lose the network completely. So that's where the serial console comes in.
C
So I know what I've done, and I've done it on VMware: I've rebooted a node and stopped it at single-user mode. You can go in there; there's a couple of console pieces that you can delete, and then on reboot it'll come up in single-user mode. But I'm not sure if that'll help in your case, because you want to see it while it's actually booting for real.
H
Yeah, well, yeah. Just editing the files where you get the serial output helps a lot. And then you can just pass another thing that allows the serial getty service to, whenever something makes a connection on the serial port, automatically log you in as root.
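A minimal sketch of that serial-console setup on Fedora CoreOS; the tty name and baud rate are assumptions for typical x86 hardware and should be matched to the actual machines.

```shell
# Kernel arguments that send boot output to the serial console as well as
# the VGA console. ttyS0 at 115200 baud is an assumption for x86 hardware.
SERIAL_KARGS="console=tty0 console=ttyS0,115200n8"

# On an already-provisioned FCOS node this could be applied with:
#   sudo rpm-ostree kargs --append="console=ttyS0,115200n8"   # then reboot
# At install time the same argument can be passed to coreos-installer via
# --append-karg so the machine comes up with it from the first boot.
echo "${SERIAL_KARGS}"
```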
A
Great, and Timothy left a link in the chat there for serial console config for FCOS; some helpful info there.
B
I think this is also essentially configuring it after the fact; what we really want is to provide that kernel argument up front and then have the machine come up with it immediately.
B
So, Ignition does have support for setting kernel arguments, so you might be able to just set it in Ignition. It's not going to be respected or understood by the machine config operator; it has a shim API in the MachineConfig object, and there is a Jira card open to move the MCO to the Ignition-native API, but that hasn't happened yet. But if you specify that in the Ignition that gets downloaded by the nodes at provisioning time, then that might already work.
B
I'm not sure if you then also have to create a MachineConfig object to reflect that. I don't think so, because nothing is gonna check if there's a difference. But if you only add the MachineConfig object, that is essentially a day-two operation, because the MCO doesn't do it through kernel args in Ignition; it has a separate process for setting those args later on.
A
Excellent. All right, next up is: find out about bare-metal IPI installing FCOS nodes; we talked about that, and that's about it. Is there anything else that folks want to bring to the table at this meeting?
G
So I've been wrestling for an interminable amount of time with an issue that happened in upgrading from, I guess, 4.7 to 4.8, when one of the operators that I had installed on 4.7 wasn't supported on 4.8. In the upgrade it turned out that some stuff was left over from the old operator, and then that turned out to cause an internal error with "oc get", which then prevented pods and packages and God knows what else from being deleted. And because, when you ran oc to try and find out what the problem was, you got an internal server error, that was sort of annoying. After a long, you know, comedy-of-errors path, I finally had time to track it down, and it turned out to be a relatively trivial fix.
But then I noticed, you know, like in the path of chasing it down, I uninstalled the Strimzi operator, and then I noticed that even after the Strimzi operator was uninstalled, I have all these Strimzi CRDs hanging around. And so then I started to wonder...
G
So, in my... I left sort of a discussion set of breadcrumbs on that, which you can look at in the OKD discussions.
But what I don't know is, philosophically, how much stuff should be hanging around if you uninstall an operator, or if you get rid of an operator when you upgrade.
G
You
know
because
the
like,
without
an
operator,
you
might
still
have
objects
that
are
still
functioning,
and
so
you
can't
necessarily
eliminate
all
the
crds
but
anyway,
when
thinking
about
it
for
about
five
seconds,
it
seemed
like
that
was
a
non-trivial
issue
that
you
couldn't
just
do
something
and
it
would
work
in
all
cases.
But
I
don't
know
if
people
have
thought
about
that
in
upgrades.
A
I have thought of it, and I've run into the issue as well when removing operators: yeah, there is some crud left around that will prevent things from updating. I got bit; I don't remember which operator it was, but yeah, I basically had to go around manually deleting stuff and then killing pods to let things refresh and whatnot. It'd be interesting to document those, Bruce.
A
Can you put links to your breadcrumbs, if you have any discussions or things? Put them in the meeting notes, and then that way we can bring them to the attention of the larger community.
C
Excellent. I mean, it might be part of just how the operators were built: either they don't clean up by design, or they didn't think about it. Because there are some operators that are designed to be removed but also to be able to be reinstalled and not lose your configurations, right? So it's probably really operator-dependent, on how well they clean up.
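The leftover-CRD check described above can be sketched as a simple filter over `oc get crd -o name` output; the Strimzi group string and the sample entries below are illustrative.

```shell
# Filter `oc get crd -o name` output down to CRDs from one API group,
# e.g. what's left behind after uninstalling the Strimzi operator.
leftover_crds() {
  grep -- "$1"   # $1 is the API-group substring, e.g. 'strimzi.io'
}

# Sample input standing in for `oc get crd -o name` on a real cluster:
printf '%s\n' \
  'customresourcedefinition.apiextensions.k8s.io/kafkas.kafka.strimzi.io' \
  'customresourcedefinition.apiextensions.k8s.io/prometheuses.monitoring.coreos.com' \
  | leftover_crds 'strimzi.io'
```

On a real cluster the input would come from `oc get crd -o name`, and anything that survives review could be removed with `oc delete`, remembering that deleting a CRD also deletes every remaining custom resource of that kind.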
G
That can be so. Another one that I ran into recently, in 4.9, and it's a shame that this isn't going to be fixed there, because it looks like it was fixed upstream, is that there's some issue with, basically, a lot of pods getting created, to the point where you can either run out of pods or run out of networking, with the collect-profiles cron job. The collect-profiles cron job ends up creating literally thousands of pods, and depending on your configuration, you're either gonna run out of IPs or you're gonna run out of pods.
G
And basically, every day I would go through, and strangely it would work from the console: so I would, every day, delete all the pods that succeeded and the pods that failed, and that would clean it up until the next day.
G
It's humorous, after you've banged your head up against the wall trying to figure out what it is: I've got this issue on 4.9 with, you know, this cron job creating all of these collect-profiles pods. And they did fix it.
G
There's probably a separate underlying issue that I ran into as well. I was following a knowledge-base article that I found from Red Hat on "the packages won't delete", although that didn't fix it, because of, you know, the internal server error. And funnily enough, even though I wasn't asking for Red Hat support, I did get a lecture from the guy that had the knowledge article saying that OKD wasn't supported: please contact the OKD community.
A
Yes, all right. Well, in the last few minutes, is there anything else that folks want to talk about before we end the meeting?
C
Well, I actually have a question about that, because one of the things that was said, you know, six, seven, eight months ago was that bugs that we found in OKD, you know, will be looked at by whatever team, and that this is, I mean, not a supported product per se, but, you know, we don't get the runaround saying that this is OKD versus OCP. Yeah, we can open bugs for OKD, and they will be fixed, because they'll probably exist in OCP also. So that seems like a weird response.
B
Yeah, that's probably more due to a lack of involvement or knowledge on that person's part. I do think, in general, especially if it's something that is also an issue, or a potential issue, in the product, the developers are supposed to look into that. We are trying to promote this effort more internally and raise awareness that we are part of OpenShift, essentially, and that each and everybody has to do their part. That's a process, unfortunately, and yeah, not the ideal response; I would have given a different one. But yeah, if it's a real issue, it'll be looked at eventually.
A
And don't be too surprised if people don't know. I've had conversations with Red Hat folks, sales folks, and other engineers who are doing sales support, who don't know what OKD is, and they're like, "well, here, let me tell you about OpenShift and all the great things it does." I'm like, "yeah, I'm co-chair of the OKD working group, but I get it." And they're like, "why?" And it's like, yeah, okay. So we need to do some work, and I think the Red Hat folks need to do some work internally, to help.
B
Engineers, yes. And I think that is also part of offloading the release-engineering work to this other team, because that will be used as onboarding for lots of new engineers, to get used to and get to know the whole ecosystem. Because a lot of engineers, they come into a team, and then they have a very specific focus, and they don't... and obviously that's enough to be effective on those teams.
B
They don't need to understand or know the entire ecosystem, but it is there, and awareness should also be there. So we are working on it.
A
Let's call it a day, and we'll see you next time, same bat channel, same bat time. And feel free to do some asynchronous work, because we can always use some asynchronous work on some of these discussion issues and stuff. And, Muhammad, I see you in there; we gotta talk security stuff soon. So, all right, folks, talk to you soon. Bye, take care, thanks, everybody.