From YouTube: OKD Working Group Meeting - 12-06-2022
Description
The OKD Working Group's purpose is to discuss, give guidance to, and enable collaboration on current development efforts for OKD, Kubernetes, and related CNCF projects. The OKD Working Group includes the discussion of shared community goals for OKD 4 and beyond. Additionally, the Working Group produces supporting materials and best practices for end-users and provides guidance and coordination for CNCF projects working within the SIG's scope.
https://okd.io
B
Thank you so much, folks, for coming to the OKD Working Group meeting for December 6th of 2022. We've got a relatively light agenda. Please take just a few seconds to poke at it and let me know if there's anything that you want to change or modify, and be sure to put your name in the attendees section so that we know you were here, or, if you weren't here, so we can get some info to you if it's important. So we'll wait a second for that.
B
Anything that folks want to change or modify? You good? No? Okay, then let's go ahead and get started with the OKD release and CI/CD updates with Christian and Luigi et al. Good.
C
We have two new releases, one for OKD on FCOS, of course, and another one for OKD on SCOS. I just came back today from vacation, so I haven't really been following things closely, but I know that for the SCOS releases we already have a couple of reports that installation on AWS is timing out, so we are going to look into that. And for the new 4.11 stable OKD-on-FCOS release that was just cut three days ago, I haven't really seen any feedback.
C
Yet, yeah; there is no feedback yet on the GitHub thread, so hopefully that is an improvement. To be fair, I do think that we still have the memory issue... yeah, I think it's the memory leakage in the kubelet; we still have that issue. I think we just have to wait till a new version of Kubernetes lands, but yeah. That's it for me. Very quickly, I will put in the link to the new SCOS release.
D
Yeah, so I tried 4.11. You know, Benjamin, Dwayne and Mustafa are working on the UPI installs, and so I tried the latest 4.11 on FCOS and installed it for single node, and it's working. So just some feedback, Christian: 4.11 is working with OKD, on FCOS that is. I haven't tried the SCOS yet. And then, yeah, the release.
D
So, what we did on the release pipeline: one of the guys on ShiftWeek has built us an automated Tekton pipeline task that does the full release, pushes to GitHub, pushes over artifacts, and gives you a nice message and everything. So for us as a team now, it's basically a pipeline run, a push button really, and we have the release out. In the time that it takes to build SCOS and do the release, we could do it in about four or five hours. So it's really cool.
D
Yes, I just wanted to say kudos to the guys that helped with the build pipeline.
B
Thank you. Excellent, excellent. I have a quick question; maybe one of you can answer. If not, we might have to get Vadim into it. We had several people over the weekend who installed the OKD release that came out (was it the 12-2 release? I think it was) and then they got offered a nightly.
B
Can you illuminate us a little bit about how that could happen? And is that something that we should be aware of, to communicate to folks that it's a known issue?
B
Yeah, let me link you to it right now. Actually, I just found the comment, the thread. Okay. So what Vadim says is: this is a long-standing release-controller bug, weird behavior. When a new stable is released, we mirror a nightly to Quay and make a GitHub release. The process is not yet instantaneous, and a new nightly may be scheduled at that time, starting a nightly-to-nightly upgrade test.
B
As a result, we may end up with a stable-to-nightly upgrade edge. So he removed the 12-3 nightly and it rebuilt the graph, but apparently there's an issue that's open, to filter channels using a release annotation name, to get around this. Yeah, so it's a pull request in the release controller that hasn't been accepted yet. It looks like... oh, it says closed, so maybe he did get it in. No.
B
Oh, closed, right; okay, I'll reopen it. All right, okay, cool. So this does have potential to happen again. Should our stock answer be "revert to the last official release" for the meantime? And maybe we should do an FAQ on that, I don't know. What do you think the turnaround time would be to get that merge request in there and to filter down?
C
All right. And you won't get an upgrade from that version either, because it's not going to be in the right graph. So right, yeah: reverting to the latest stable, if you have done that upgrade to the nightly, would be well advised here, I think. Yes, okay.
B
I will create an FAQ for it and submit that to the website. All right, so any other questions about this portion of our discussion, in terms of releases and whatnot?
D
Yes, I was chatting to Christian earlier on about, once we've got a successful release, wanting to basically just publish it on the okd.io Twitter handle. Do you perhaps have the credentials for that handle, the okd.io handle, so that we can post our release stuff on it? Yes.
B
Yes. Look, we can talk offline. What we want to do (and this actually ties into a lot of stuff): I just got confirmation that we are going to get some email aliases.
B
So, ideally, what we'll do is set some things up with the email aliases to the Twitter account, and also set up... I don't know, what password system do we want to use to start storing these? What is Red Hat using by default for storing credentials?
C
There's a couple of systems. So we could put it in a Vault somewhere, on some cluster, and that would give us some role-based access control there. We could also just share it manually, in an encrypted manner: share the password and have people store it in their own password managers locally. I mean, the group that should have access to these credentials is very small, right? It's really just the working group leadership, which really is just Jamie, Luigi and... I'm probably missing someone.
C
Well, maybe Vadim or myself, but it's really just the... Brian.
C
Yeah, yeah, right, until the end of the year at least, and I mean even going forward. She's not gonna... yeah, she's gonna stick around as a...
C
...at least. So yeah, I'm not sure. We don't really have a process for that internally. We'd prefer a Vault on a cluster, but I'm not sure that's necessary for this credential specifically, at least.
B
Well, we'll talk about it at the Community Development meeting; maybe not spend too much time on it here. But yeah, Luigi, I'll make sure you get the credentials somehow. I lean towards some sort of official password system instead, because the problem is, if folks start passing around credentials, and then the credentials need to be changed, everybody's like, "Oh, do you have the right ones? Here, let me share this Google doc, let me share this whatever."
B
You expose everything, right, exactly. So I don't know, but we'll talk about it at the Community Development meeting. Luigi, I'll hook you up so that you can post. I've been doing the stable-on-FCOS releases that Vadim's been doing; I've been pushing those out and then also tweeting, so I've been tweeting about those, and tweeting about the meeting videos coming up, the past couple weeks, yeah.
D
Well, you're welcome, you're welcome to do the SCOS ones as well, but we just wanted to make sure that the installer was working, as Christian was saying, on the AWS stuff. So once that's been verified, then we could actually just ping you and say, "Cool, here's the link," and you make the tweet. If that's okay.
C
Shireen has already researched essentially a Tekton task that will push messages to Slack and Matrix channels, which we could use as part of the release procedure to automatically announce that on our chats, and we might even find, or create, another task that then speaks to the Twitter API to do that, yeah.
C
There's lots of opportunities here for making releases less work and automating all the things, and I think, yeah, we're taking big steps towards that now. Excellent.
B
Jack, do you want to reference... you know, you mentioned something in the chat, and I think it might be helpful for the group to sort of hear what you mentioned, in terms of cgroups and sort of this ongoing... So.
F
Yeah, good evening, everyone, or good day, everyone, depending on where you are.
F
It's sort of related to what I put in the agenda at the bottom, so I hope it's okay that we kind of skip ahead. I'm sure that lots of people have noticed that on 4.11 there is this issue with the cgroups not being cleaned up. The issue has been very active on GitHub, and I feel like half of the comments are actually just people trying to figure out which cgroups version they are on, whether they are still on v1 or on v2. And in fact, also today...
F
We just noticed that we are not entirely sure ourselves, and depending on which cluster you look at, it's a different cgroup version, and that's really not helpful for debugging anything or deploying anything. So our, quote-unquote, "old" clusters, older than OKD 4.9, are still using cgroup v1, and everything that has been created afterwards is defaulting to v2 only. And, well, it's just super confusing. First of all, I don't think this was really publicized.
F
I understand that OKD is trying to, like, you know, test out all the new features and stuff like that, so it's totally fine, but at least make people properly aware of it. And then also this behavior, that only new clusters will get it by default but all existing clusters will just silently keep their current behavior, is, I think, also just extremely dangerous. Because it's not just confusing admins; also, now that we as a community get issues on GitHub, you never really know...
F
...if the person is running cgroup v1 or cgroup v2, so it just complicates everything massively. So I believe there was a task to eventually switch over all clusters to cgroup v2; at least that's what I understood from the discussion thread called "Road to OKD 4.9". But I don't think that ever happened, and I think this should be looked into and kind of straightened up, and maybe also, again, another FAQ entry saying, "Hey, this happened."
F
If you want v2, then please do this and that, but be aware of these and those side effects. And, well, also be aware that, by default, new clusters will have it, but your old clusters will not.
B
Yeah. Christian, can you shed some light on that? And if you can't, that's fine, yeah.
C
Not a lot. I think this is a question best asked to Vadim, because he's been kind of working on this. And I do agree that we certainly messed this up, both functionally and in communications as well. So yeah, ideally we can still add that task to switch any cluster over to cgroups v2...
C
If
it
isn't
already
going
forward
and
then
yeah,
we
will
have
to
kind
of
dig
into
the
I
I'm
still
not
entirely
sure
what
what
the
the
cause
is
of
this
bug
and
I
think
vadima
has
been
saying
that
we
are
waiting
for
the
new
kubernetes
version
to
land
and
then
the
the
issue
at
hand
here
will
go
away,
but
we
haven't
really
been
able
to
verify
that
I
think
I
haven't
really
been
following.
It
super
closely
so
best
we
get
Vadim
in
a
meeting
somehow
and
talk
to
him
directly.
B
Okay. I actually just messaged him and asked him to clarify a little bit. My thought would be, if we could have a table in the FAQ that's like: under these circumstances, you've got v1; under these circumstances, v2; under these circumstances, you can go from one to two. Or whatever. It says here "go ahead", so go ahead. Yeah.
A
Yeah, I think it's a little bit more complicated than that, because originally, when we went to v2, it was required to be an upgrade if you wanted to change. But I've noticed that I never upgraded to cgroups v2, yet my cluster is running both v1 and v2.
A
Yeah, probably, but that was not there originally. Like, when I saw my cluster it was v1, and because there were some versions of the Java JDK that didn't work on v2 and weren't planning on being upgraded to v2 (those are like AdoptOpenJDK or whatever open-source versions)...
A
...I was staying on v1. And I was a little bit surprised when all of this came up, and I checked and found out: yes, indeed, I do have both going there, which was not something that I did consciously. So maybe that aspect should go in the FAQ as well. And, you know, from following this big long thread, it looks like a lot of people are in the same situation; they're running in mixed mode, so they're doing both.
B
Well, that was the thing: there was also mixed communication about how to determine that. There's one document that folks were pointing to, that was a Kubernetes doc, but for some reason folks were also using another method to figure out what they were on, and different people were getting different results. And I don't have the threads in front of me for it, but I think that should be included in the FAQ as well: how can you actually check, to be sure, which one you're running? Yeah.
F
You can kind of look at the machine configs that are being deployed, but then also, probably, you should just actually look at what's happening on the nodes: what is actually mounted there, and the kernel parameters that are actually being used there. But you see, the thing is, why I'm bringing it up: so, okay, we are aware of this cgroups garbage-collection issue, and okay, we understand that's an upstream issue, and things like that happen, and that's fine; we can deal with that.
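As an aside, the node-level check described here can be scripted. The sketch below is an illustration (not from the meeting) that parses `/proc/mounts` on Linux, which is the same information `mount` shows once you get onto a node, for example via `oc debug node/<name>`:

```python
def cgroup_version(mounts_path: str = "/proc/mounts") -> str:
    """Report which cgroup hierarchy is mounted under /sys/fs/cgroup.

    Returns "v2" for a pure unified hierarchy, "v1" for legacy-only,
    "hybrid" when both cgroup and cgroup2 filesystems are mounted,
    and "unknown" when nothing can be determined (e.g. non-Linux).
    """
    fstypes = set()
    try:
        with open(mounts_path) as f:
            for line in f:
                fields = line.split()
                # /proc/mounts fields: device, mountpoint, fstype, options, ...
                if len(fields) >= 3 and fields[1].startswith("/sys/fs/cgroup"):
                    fstypes.add(fields[2])
    except FileNotFoundError:
        return "unknown"
    if "cgroup2" in fstypes and "cgroup" in fstypes:
        return "hybrid"
    if fstypes == {"cgroup2"}:
        return "v2"
    if "cgroup" in fstypes:
        return "v1"
    return "unknown"

if __name__ == "__main__":
    print(cgroup_version())
```

On a pure cgroups v2 host, `stat -fc %T /sys/fs/cgroup/` printing `cgroup2fs` is an equivalent one-liner.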
F
But it's already difficult enough to troubleshoot; it's already difficult enough to exactly pinpoint, when you're getting these reports, who is using which version. "Okay, I'm using OKD 4.11 such-and-such zero-five," and, "Okay, I'm using 22, but on my cluster it works, but on this other cluster, which is running the same version, it doesn't work."
F
This should be communicated a lot more clearly. And, in fact, it might be useful to have something like (I think we were talking about something like this last time) more blog posts when a new OKD version comes out, like 4.9, 4.10, 4.11, where we say, "Hey, please, these are the..."
B
I think that's good. I think we need to leverage the community for that, because obviously, you know, we've only got so many people. So, Jack, if you want to contribute to that, or if there's anyone else that's interested... I'm happy to contribute a little bit too. But yeah, it would be great to actually have in our release docs like known-issues-type stuff, or possible-issues-type stuff, so that folks are aware of this, because if we want to be proactive, right, we have to provide this information.
F
Yeah, but then the question is: why has OKD opted to use cgroups v2 by default if the minimum requirements are not met, right? But I think, as an actionable first step, maybe we could just start with a changelog document in the okd repo. Blog posts and everything are nice, but you know, we don't want to put too much, how to say, barrier into it; too high of a threshold.
F
So
maybe
we
can
just
start
with
a
change
log,
as
is
he
getting
started
and
maybe
from
there
we
can
grow
it
then.
A
Well, the releases do come with a changelog, I guess, with each artifact, and what problem reports were fixed. So the question would be: what do we want besides the changelog that's already there? It's at a very detailed level.
C
Absolutely, I agree with that. I think another change that we probably have to announce, or stress, is when we rebase Fedora versions, which is another thing that we don't have in OCP, obviously. Yeah, I do think we need to find kind of, like, highlighted changelog highlights, you know, things that people have to be aware of.
C
We did the change to cgroups v2 because we were excited about it, I think, but we probably rushed it a little bit, which is now biting us.
C
I do hope that people that are still on cgroups v1 don't hit that problem, right? So one thing that might work is to switch back to cgroups v1 on newer clusters; clusters that are still on the older version haven't upgraded anyway. Because that needs a manual edit of the machine config that is setting the kernel arguments, because those aren't changed during upgrades.
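For reference, the manual edit described here is usually carried as a MachineConfig with kernel arguments. A minimal sketch of pinning a worker pool to cgroups v1 might look like the following; the name and the exact argument are illustrative, so check the rendered machine configs on a working cluster before applying anything:

```yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  name: 99-worker-cgroups-v1        # illustrative name
  labels:
    machineconfiguration.openshift.io/role: worker
spec:
  kernelArguments:
    - systemd.unified_cgroup_hierarchy=0   # 1 selects the unified (v2) hierarchy
```

Because kernel arguments set this way persist across upgrades, this is also why existing clusters silently keep whatever hierarchy they were installed with.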
F
Which is fine; we should just, like, call that out in the changelog: "Hey, this has changed. If you want the new behavior everywhere, please manually copy the machine config." Nothing else we need than that. One thing that was especially confusing, actually: if you dig somewhere in the OKD docs about all of this, like kernel arguments and cgroups v2, you actually find one of the pages (that I assume has been copied from the OpenShift documentation) that actually says cgroups...
F
...v2 is only a Technology Preview feature, and you can enable it in this and that way, which kind of would indicate, right, that it's not enabled by default. And this is in the OKD docs, but in fact it is enabled by default in OKD. So this is kind of just where all of this confusion comes from, and then, of course, at the end of the day, everyone, including ourselves, is super confused about what you're actually running, yeah.
F
I'm also happy to submit a PR for it. I just need to understand if it's not gonna get automatically overwritten by something, or, you know...
C
Yeah, we do have the ability to distinguish between the two releases for things that aren't the same. We just, yeah, we're still catching these things in the docs. There's...
A
So if you go to the link that I provided, there's a little bit of description of how you can target it directly to Michael, our OKD docs person, and so that's the way to do it: open it up on the docs, and then follow the process that's outlined in the link that I've provided, and then that will make sure that it gets addressed. Excellent.
B
Yeah, Christian responded that he's going to summarize that thread and what the situation is with cgroups, and that it's actually multiple issues. So as soon as he gets that summarized, we can turn it into an FAQ, I think.
B
Oh, one thing, actually. Timothy, can you talk a little bit about how the Fedora CoreOS folks handle upcoming changes in Fedora? The process you have... and I participated in it a few times, you know, when I was involved a little bit. But this process of, "oh, there are these Fedora changes coming, this is how it's going to affect Fedora CoreOS," and you sort of go through a cycle and create a list and stuff.
G
Sure. So the idea is that when we want to make a change in Fedora CoreOS, usually when it's a big change that impacts more than just Fedora CoreOS but generally impacts Fedora, we do that via the change process. So in Fedora there is a change process where you submit a change and then it gets approved.
G
It has a whole set of steps, and you're essentially publishing what you're going to do to the community, and we do that for Fedora CoreOS. So part of our changes are published this way, and we also track the rest of the changes that happen as part of Fedora. For every new release of Fedora, which happens every six months...
G
...there's this bunch of changes that happen, made either by us or by other folks inside the Fedora distribution, and we track them inside the Fedora CoreOS tracker to make sure that we are not impacted in a bad way by them, or whether we need to do things related to that. So let me find you a link as an example of how we do that.
C
We should probably align Vadim's "Road to 4.12" issues, that he has for tracking new releases, with that process the Fedora CoreOS folks have, because, yeah, that is much more professional than what we have, I think, and much more... yeah, you just see more things there. So yeah, thank you for pointing that out again, Timothy. I think, yeah, Fedora CoreOS has an awesome process in terms of how you document upgrades, and we kind of... yeah, under the hood...
C
...we are using FCOS, but we don't have that clarity when we do upgrades, especially major versions. Introducing cgroups v2 was one thing that kind of came with a major Fedora upgrade, and we just took it in as well. So in the future, I think we'll probably include the Fedora CoreOS notes when we also rebase.
C
Just to compare, this is what the "Road to 4.12" issue we have currently looks like.
G
Just a few notes: we also have a major-changes page in Fedora CoreOS, which I'm linking again into the notes, that has a little bit more detail, and usually we announce changes in Fedora CoreOS via the main mailing list, the coreos-status mailing list. But yeah, we follow the lifecycle slightly differently from OKD in Fedora CoreOS itself, so not all changes apply at the same time, etc., etc. That's, like, the trick here.
B
All right. Well, we can have a discussion, I think, in the Community Development group about how to sort of message this out to folks. You know, because we actually don't have, like, an OKD users' mailing list. We've got the working group Google Group mailing list, but we don't actually have, like, a single place to communicate things. We've got the social media, but nothing like an email or anything like that.
B
So maybe a discussion is in order, to see if we want, like, a mailing list just for announcements or something like that. I know there's probably one more question for Christian that's brewing; I know myself and Brian, and maybe a few other folks, are curious about operators and operator catalogs. Christian, or Luigi for that matter, what can you tell us about where we are on our road to a catalog?
C
So I'll actually refer that question to Luigi, because I haven't heard any updates recently, but I know that the CFE team, which is Luigi's team, has been continuing to work on this in the background, yeah.
D
Yeah, so, to be honest, we've left it at a very generic build pipeline for Tekton. Obviously, and I've mentioned this before, it's a very naive, very opinionated build of an operator to be included in a catalog. So we need as many people as possible to play with it and break it and create PRs and tell us what we've done wrong.
D
But what it's meant to do is really help us to build an operator and create all the necessary bundles, the image index, and then include it in the catalog and so on, and then be able to include it in a community project. I know that Brian did start a thread where, you know, there are some issues about just putting any operator up there, and obviously there are more important operators that we want to look at, and we haven't actually got to it.
D
It's been one of our tasks, looking at the Tekton operator and obviously the Argo CD GitOps stuff that we want to look at, but honestly, we've just been slammed as a team. We've got some other issues with IBM that we're looking at, and, you know, IBM does own Red Hat, so they have become more important.
B
Well, thank you for the update on that. I know folks are anxious, and I think what we can do to help, it sounds like, is just to file bugs and issues on things that we find in that process, yeah.
C
Yeah. And we have agreed on a repository to use for our catalog, because if we have a repository that we own in our org, we can essentially start filling it with operators. We still need maintainers for each of these operators; it's like rebuilding an RPM or something, right? Somebody still needs to trigger that pipeline and do the new build, and we obviously want to have as many community members as possible included in this process.
C
So I think we might kind of get away with creating a new repository that is kind of a blank catalog, and then adding that catalog to OKD by default, which, again, only new installs will get, because that's another resource, which you can then add manually to your existing cluster.
C
It's
catalog,
Source
I
think
is
the
mpcrd,
and
if,
when
we
have
kind
of
that
that
infrastructure
set
up,
we
would
be
able
to
to
accept,
builds
from
anybody,
though
it'll
be
just
a
PR.
Anybody
could
open
it
and
yeah,
so
we
could
kind
of
do
both
things
there.
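For context, the OLM resource being referred to is `CatalogSource`, and adding a community catalog to an existing cluster is roughly one manifest. Every name and the image below are placeholders for illustration, not a published catalog:

```yaml
apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  name: okd-community-operators            # placeholder name
  namespace: openshift-marketplace
spec:
  sourceType: grpc
  image: quay.io/example/okd-catalog:latest   # placeholder index image
  displayName: OKD Community Operators
  updateStrategy:
    registryPoll:
      interval: 30m                        # re-pull the index periodically
```

This is also why only new installs would pick up a catalog shipped by default: existing clusters would need this resource applied by hand.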
The community can already do a lot, and if you're able to build an operator using the pipeline, that operator should really qualify for inclusion in our catalog. So, I know there was...
C
There was the idea of an OKD-specific catalog, and that, I think, was in the openshift-ecosystem (something) org on GitHub, but they have kind of, yeah, they haven't prioritized it. It's still an open issue there, and I think the person who was working on it, Camila, has unfortunately left Red Hat in the meantime, so nobody's working on that right now.
E
Do we actually need to have some sort of brainstorming to work this out? Because we need to do the build of the operator in the OKD community, so do we either have a Tekton pipeline per operator, a GitHub repo, or some way of controlling the builds? So again, we don't want this to be manual; we want it to be open and automated from a source of truth, so everything needs to be in a git repo.
B
Yeah, I think defining process... I think you hit the nail on the head. I think defining the process for that, and also defining the process for the people part of it. Maybe it's following the Fedora RPM maintainer process, or something similar to that: people volunteer to be maintainers, and if something gets orphaned, it has X amount of time until we pull it, because it's been orphaned and someone isn't running those pipelines and making sure everything was up to date, and whatever. So...
B
If we could have both of those components defined, I think it would help people, because then, if you want people to volunteer, you have to kind of show them directly sort of how they can help and what easy steps they can take to help. And I think that if we outline this somehow, then we could do that.
C
So the way I see it is, we should use a very GitOps-oriented approach, and then there's really two parts to it. We have to have the pipeline definitions, or the PipelineRun really, that you then trigger for a new build; I think we have a more or less abstract pipeline, and then in the PipelineRun you put all the options, all the parameters, in there, and that makes it build a different repository with different variables or parameters. And then the second thing is...
C
We then have to update an index where that release is advertised. One problem is that the OpenShift operators don't get new releases on GitHub; if there's a new release of the OCP operator, that doesn't show up on GitHub. So, obviously, if we have, like, a pipeline-run template that we can use, we could automate that, like do a build each week or every two weeks, or we would have a maintainer trigger that build.
C
That maintainer would have to watch the OCP equivalent of the operator and then trigger a new build for OKD, but it could also be a time-based thing, where we just cut a new release every two weeks and we don't really follow the release cadence in OCP.
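The split described above (one shared, abstract Pipeline, with per-operator parameters supplied by each PipelineRun) could be sketched in Tekton roughly as follows. The pipeline name and parameter names here are made up for illustration, not the CFE team's actual definitions:

```yaml
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  generateName: build-operator-      # one run per operator build
spec:
  pipelineRef:
    name: operator-bundle-pipeline   # the shared, abstract pipeline
  params:
    - name: git-url                  # per-operator source repository
      value: https://github.com/example/some-operator
    - name: git-revision
      value: v1.2.3
    - name: bundle-image             # where the built bundle gets pushed
      value: quay.io/example/some-operator-bundle:v1.2.3
```

A maintainer (or a cron-style trigger, for the time-based variant) would submit one of these per build, and a follow-up task would update the catalog index.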
D
Brian, you brought up some good issues regarding icons. I mean, a small thing like that, it does impact, and so these are the things we need to talk about: how do we include them, what are the icons going to be, and how are we going to name the thing? Is it going to keep the same name or whatever? Those are the issues we need to talk about. Yep.
B
We still have the other things, so let's be mindful of time here. So let's schedule a separate brainstorming session on operators and get some basic docs done, just enough so that people know how to contribute. So, are folks up for scheduling that for January? Is that good? Yep? Does that sound reasonable? Okay, all right, yeah, I'll put something together. Cool.
B
Okay, let's see, where are we at... FCOS updates, Timothy?
B
Very good, thank you for your other contribution; that was very helpful. Okay, Community Development updates, Brian.
E
Okay, we had a meeting last week. So, we've already mentioned it, but Luigi and Dwayne are working on the single-node OKD install docs, and we have nothing that works at the minute, so that's something useful they've started working on.
E
So, looking forward to that getting to a point where we can start reviewing it, then, and using it. And Jamie's gonna put something around Tekton until we get the catalog there. Email hosting: as of last week we had nothing, but it sounds like we do have something now, so we'll pick that up at the next meeting. And that then fed in... again, we've had a conversation today about social media, so we don't need to really go around that, in terms of how we actually get the tweets out, the information out.
E
We sort of shut down some channels earlier in the year, so it's: do we actually want to go and start opening new channels? And so there was a whole conversation on there. I don't think we actually got to a resolution, so we need to pick that up again next week, and if anyone's got any thoughts...
E
...let us know. Because, again, we didn't really pick up the Fedora channel, and we looked at that, the Matrix one, and we didn't really pick that up either. And so there is a thought of: do we want to keep it to GitHub Discussions, and that is the way you contact us, and do we want to use other channels just for push, and say, if you want to get back to us, then point them at the Git channel? So there's a conversation to be had there. And feedback from the Zoom...
E
Everyone seemed to be quite happy that it's a system that works, and we're going to keep using that. We also then had an initial look at some of the feedback around the install documentation. Obviously, just to catch people up: there's a lot of tutorials, docs, how-tos, going right back from early 4.x releases up to date, and we're getting quite a few posts where people are obviously following those and installing an old version of OKD.
E
So we get posts around 4.5, 4.6, where people are obviously following an online tutorial and then asking questions. And that did lead to a question in the community around: should we support more than one active stream? Most projects do current and previous, and so there's a question around there.
E
So we do need to go and look at those installs and work out what our strategy is as a community: how do we sort of get people onto the most recent update, so they're not following out-of-date stuff? And so that's, again, something we need to follow up on. We do have quite a good collection; people are quite sort of diligent around finding those and going and actually listing those.
E
So
we
have
an
issue
where
they're
all
listed
in
and
then
the
last
thing
we
talked
about
is
we
have
things
like
red
hat,
Summit,
obviously,
with
Diane
exiting
stage
left
it's.
How
do
we
actually
get
participation
in
Red,
Hat
events
going
next
year
and
I
think
the
idea
is
that
we're
going
to
invite
Diane's
replacement
and
I
can't
remember
her
name
to
this
meeting
to
actually
talk
around
how
we
interact
and
obviously
Rock
Luigi,
acting
as
our
sort
of
liaison
into
the
red
hat
organization.
E
But it is: how do we actually make sure that we don't lose that link to Red Hat, and local Red Hat events where we can have OKD representation in them? I think that was the meeting.
B
I'll add a few other things, which is that Elmiko did volunteer to get some Mastodon info together, because there's the question of, well, if we were to go to Mastodon, what server would we go to, and stuff like that. And so he's going to bring some info to the next Community Development meeting, I think. One thing is that it is a Karina... Karine?
B
She is not... she's not a direct replacement for Diane, apparently; she's slightly different in the structure. So I don't know that we'll have the same time allotted to us, but we'll see what happens. We are going to do an invite, probably for this main meeting in January, so that'll help get the conversation going. Yeah. Any other questions for Brian, in terms of community development stuff?
E
Oh, there's one other point: if anybody wants to contribute to the documents, the documentation, and doesn't know how to go about it... we've mentioned people who've gone adding to the FAQ a number of times. I did a session with Dwayne, just letting him know how to use MkDocs, how to run the spell check and link check, and what you need to do to get it to publish. So if anybody else wants to go through anything like that, I'm happy to do that.
B
Excellent, fantastic. I think that's another example of something where, if we have a little bit of help, then folks are more likely to contribute, because they know where and how to contribute easily. Cool. Luigi, switching hats: do you have anything to share with us in terms of other-type things that Diane is handing off to you? Yeah.
D
She just mentioned that she'll give me an update on new events and events that are coming up.
D
That was about all that she really mentioned, and we're keeping a working doc. I've got a link to her private email, so if I need anything... put it this way: as a group, we still have contact with her if we needed anything urgently. So that's basically it, yeah.