From YouTube: OKD Community Development Meeting 06-13-2023
Description
The OKD Working Group's purpose is to discuss, give guidance to, and enable collaboration on current development efforts for OKD, Kubernetes, and related CNCF projects. The OKD Working Group includes the discussion of shared community goals for OKD 4 and beyond. Additionally, the Working Group produces supporting materials and best practices for end-users and provides guidance and coordination for CNCF projects working within the SIG's scope.
https://okd.io
B: So the first thing that I think we should do, and Brian, if you think otherwise, let me know, but I was thinking that we start with the basics.
B: So everyone is on the same page: have a discussion about sort of what the catalog is, and where the resources are to understand how catalogs are built. We've got a couple of folks on the call that are familiar with that; Christian and Kevin, I think you know a little bit about it, and so that seems like a good place to start. Brian, does that sound about right?
C: We see these as mission-critical operators that you can't currently get unless you get an activation key from Red Hat to add the Red Hat catalog, or you actually work out how to install the open source versions from the open source catalogs. So that's the goal here: that we will provide a catalog that will come in.
C: We create a CatalogSource in OKD, and we then provide some of these key catalog operators. So I think that's what we're trying to do. And yes, I think now is a good time to actually just go and do a level set on what an operator is, what an operator catalog is, and the resources we need to do that.
B: Yeah, so who wants to tackle operators? And this is also, I should add, not geared just towards the attendees at the meeting; think about the folks that watch the videos after the fact once they're posted. We are trying to really sort of provide a base-level understanding of this. So, who wants to chip in with a basic explanation of operators?
D: Well, we've got Kevin Rizza here, who is on the OLM team, and he's like the domain expert. So Kevin, if you don't mind kicking off and giving us the whole rundown on operators, if that's okay? Yeah.
A: Fair enough. So hi everybody, I'm Kevin; it's my first time on this call. I'm one of the developers on the operator framework project.
A: If I were to summarize at a very high level what an operator is, it's essentially a pattern used to define an application that runs in a cloud-native way on a Kubernetes cluster. It uses the same pattern that Kubernetes itself, the API server and how it defines objects, uses to extend the Kubernetes control plane, allowing you to define arbitrary resource types. So basically you can build an app that speaks Kube natively, and it does that, like I said, using the Kubernetes controller pattern and the CRD API.
A: Something I like to talk about frequently is that there are kind of two main parts to the operator framework. There's the Operator SDK, which is a project that allows you to bootstrap and scaffold and build operators; it's a wrapper around a community effort called controller-runtime. And then there's OLM, which is what I kind of like to refer to as a package manager for Kubernetes operators. So the catalogs that we're discussing here, really think of them as repositories for content that wants to be distributed in some way with the operator framework.
A: Often the metaphor we like to use is: if you're familiar with the concept of a yum repository, or repos in apt-get, you can consider the metadata for a specific operator, which we call an operator bundle, to be kind of like an RPM, and then the catalog is kind of like the yum repository. It's a collection of packages that you can present to a cluster, so they can be installed by users on the cluster.
A: Yeah, so there's actually the CatalogSource API, which is part of the Operator Lifecycle Manager. It allows you to define not just multiple catalogs, but also the scope of where a catalog is available. So generally, if you create a catalog in a specific namespace, then the content available for that catalog to install on the cluster is scoped specifically to that namespace.
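The namespace-scoped CatalogSource just described can be sketched as a manifest; the image, names, and namespace below are hypothetical placeholders, not anything shipped by OKD today.

```shell
# Sketch: register a custom catalog with OLM, scoped to one namespace.
cat <<'EOF' | oc apply -f -
apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  name: okd-example-catalog
  namespace: my-project        # content is only installable in this namespace
spec:
  sourceType: grpc             # the catalog pod serves its data over gRPC
  image: quay.io/example/okd-catalog:latest
  displayName: Example OKD Catalog
  publisher: OKD Working Group
EOF
```

Creating it in `openshift-marketplace` instead would make the content available cluster-wide.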
F: ...CatalogSource by default, so you can still re-enable it. That's actually the default build setting for that operator in OKD.
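The enabling and disabling of default catalogs being described here is exposed through the cluster-scoped OperatorHub resource; a sketch (the particular source names chosen are illustrative):

```shell
# Turn off all default catalog sources on the cluster...
oc patch operatorhub cluster --type merge \
  -p '{"spec":{"disableAllDefaultSources":true}}'

# ...then selectively re-enable just the community catalog.
oc patch operatorhub cluster --type merge \
  -p '{"spec":{"sources":[{"name":"community-operators","disabled":false}]}}'
```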
F: Have it enabled? Well, we disable it, and then instead, additionally, we enable the community operators catalog, which contains...
F: ...of other things. But I think the goal with the OKD-specific catalog is, in cases like this, and maybe we could even add our operators to the community operators catalog too, if they're okay with it. It's just... yeah, they're not going to be working on any kind of standard Kubernetes cluster, but really...
F: What the catalog does is provide the operators, and then they can, as I understand it, even conflict with each other on the same resource. But what we want to do with the OKD builds is have the OKD optional operators, which are OLM operators that have specific integrations with OKD and OpenShift tooling and machinery, for people who want these builds for OKD.
C: I think it's also worth mentioning, if I remember correctly, that the community catalog on OKD is actually the one built for OCP, so it actually has some links into the Red Hat catalog. So, for example, you can't put in the Che operator from the community, because it wants the Dev Spaces or Devfile operator from the Red Hat catalog. So that's not purely the OperatorHub community catalog; that is a community catalog that's been built for OCP, that's shipped with OKD.
F: There was something with that, yeah. We do have that special community thing, but those are really just operators that are meant for OpenShift but not officially supported by Red Hat. And ideally those would work with OKD too, but I guess in the community catalog, in that one, I don't think we have Tekton at all, for example; that has been moved from that community catalog into the Red Hat subscription catalog.
F: And then I think the group agrees that we should rebuild them ourselves, and yeah, hopefully we can. We...
F: ...the experts like Kevin on the call today: are there any best practices for creating a bundle and adding it to a catalog? How would...
B: Great. Any other additions to that sort of foundational information? Anything that folks want to add, or want to ask to get clarification on?
D: Maybe, from a development perspective, and I'm just asking: Kevin did mention the Operator SDK. Would we want to mention that you could develop an operator from scratch, develop it and use it within the OKD environment, and then build it and publish it in whatever we're going to use, this open source operator catalog? Are we also going to address that, or would that be a completely different topic?
B: Personally, I think that's a great sort of "in" for people to use OKD: to use it as a platform for developing operators and getting them into the catalog, so they can see the results of their work and make that work available to the community.
F: So I think, ideally, we could build some kind of pipeline that automates these builds for us. And there is actually, I just put a link in the chat, there is actually a new operator built by Red Hatters that is now in internal preview for Red Hatters, and that is meant to eventually replace the build system for the productized operators.
F: What it can build is operator bundles. Internally we have a system called CPaaS, which Luigi and Shireen have a lot of experience with, I think, and the goal for that Red Hat App Studio thing is to internally replace that old build system for the productized operator bundle releases. So...
F: Maybe that's a bit too far out, and that Red Hat App Studio thing isn't production-ready yet. We have an internal preview; it's usable internally now, and people are starting to use it. We have presentations about it internally that we can watch, get some testimonials, and see how it works, and to me it looks really promising. They don't have multi-arch image build support yet, but that is coming this year. So maybe that is something that is worth investigating, deploying this thing, because it...
F: ...all kinds of niceties like GitHub integration and bot integration with Tekton. So maybe, if we could stand up our own instance of that and use it to automate the builds of operators...
F: That might be nice, and that thing is an operator itself too, so that's where it closes the loop. But yeah, that was just announced internally to us, and as it's open source, you know, I was immediately interested in using...
F: ...system we have internally for building OpenShift payloads; this thing builds images.
F: We could maybe use it to build OKD payloads eventually, but...
F: ...interested in kind of how we set up a system and...
F: Hopefully, we'll have a cluster from the Mass Open Cloud to actually deploy this on.
C: I think it might just be worth, well, just talking about what gets created. So how is an operator delivered, and what does a catalog actually contain? Because when I started this, I think that was something that took a little bit of digging into the operator framework.
C
Just
to
understand
the
relationship
between
an
individual
operator
on
multiple
versions
of
an
individual
operator,
the
olm
and
how
it
sort
of
managed
the
operator
and
then
how
the
catalog
sat
on
top
of
everything
and
everything
is
sort
of
delivered
as
a
container
within
a
registry.
C: Because, at the end of the day, that's where we're going to get to: we have to create these tasks to take the source code for, say, a Red Hat operator, build the actual operator, and then work out how to add that specific operator version into the catalog. And how is that managed across multiple git repos? So I think that's where we need to get to, and try to understand how everything fits together.
A: This, and then there's a bunch of caveats I'm going to add after the fact. So, at a really high level, let's start from the top down. A catalog is a container image that contains two things. One is a binary that allows that catalog to actually run on the cluster; it's basically a server that exposes an API so that OLM can query it and find out what data the catalog actually has.
A: So that's the top-level thing. Below that, there is the concept of the bundle image, one for each version of every operator. The catalog itself is actually just a bunch of pointers: all it has is the metadata about how to install things. It doesn't have the, you know, Kube YAML inside of it to actually install the operators; that's what the bundle image is for. So for each version of an operator there's another image, and that image purely exists as a repository for two things. One is some metadata about that operator, like: what version is it?
A: None of that is the actual operator source code. So the third thing that you actually need, because it's a thing that runs on Kubernetes, is generally a container image: the actual image of the operator controller, which is the software. Usually it's written in Go, although there are some other options; anything that can speak Kube, you can write an operator with it. You need an image for that operator itself, and that thing has to be referenced by the bundle image.
A: So, in total, if you wanted to have a catalog with one operator in it, there would be a total of three images: the catalog image, the bundle image, and the actual operator image. That's, at a high level, how it should work. In reality there's some more complexity, and I'm going to throw this up there very quickly just so everyone's aware, which is that a lot of operators are actually software that manages the lifecycle of some piece of software that is not Kubernetes-native. You can imagine, say, a database operator.
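The three-image layering Kevin describes can be sketched with the standard tooling. All image names here are hypothetical, and `opm index add` is the older SQLite-based catalog flow; the file-based catalog approach comes up later in the discussion.

```shell
# 1. Operator image: the controller software itself (usually Go).
podman build -t quay.io/example/my-operator:v0.1.0 .
podman push quay.io/example/my-operator:v0.1.0

# 2. Bundle image: CSV + CRDs + annotations for one version; the CSV
#    inside it references the operator image above.
podman build -f bundle.Dockerfile -t quay.io/example/my-operator-bundle:v0.1.0 .
podman push quay.io/example/my-operator-bundle:v0.1.0

# 3. Catalog image: an index of bundles that OLM queries over gRPC.
opm index add \
  --bundles quay.io/example/my-operator-bundle:v0.1.0 \
  --tag quay.io/example/my-catalog:latest
```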
A: Usually the operator knows how to deploy one or some set of software onto the cluster as a container image. So, for a Postgres operator, as a trivial example, there's also most likely going to be another container image, or multiple other container images, that the operator itself knows how to deploy, and all of that is completely abstracted from OLM. So, if you've ever heard it, there's a phrase that references this, called related images; it's basically...
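The "related images" idea mentioned here refers to the relatedImages stanza of a ClusterServiceVersion; a minimal sketch, with hypothetical names:

```shell
# Fragment of a CSV, not a complete manifest. OLM tooling uses
# relatedImages to know every image the operator may deploy
# (useful for mirroring and disconnected installs).
cat <<'EOF' > csv-fragment.yaml
spec:
  relatedImages:
    - name: operator
      image: quay.io/example/my-operator:v0.1.0
    - name: operand-db
      image: quay.io/example/my-database:v14   # image the operator itself deploys
EOF
```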
A: In my opinion, and I think this is actually probably important, whatever we're going to build here, we probably want to separate the problem space of how do I build the operator source code and all the related images that exist as part of running an operator, from the metadata build. Because really, for that metadata, the fact that it's a container image is not really important. We're using container images because they're a convenient tool that scales pretty well to get lots of data from lots of different places.
A: You technically don't need to do that. There are some other bits that exist in OLM; at one point we built something called App Registry, which has been mostly deprecated since then, but was another store for that metadata that just didn't scale particularly well. So I do feel pretty strongly about that, whatever we're going to build, and there's actually an example of this.
A: If you've ever seen OperatorHub, there's this upstream open source project, aimed at plain vanilla Kubernetes clusters, that has a project that builds catalogs. That's all it does: it builds catalogs and bundle images. And in order for that to actually work, all of the other images related to the actual operator itself are built elsewhere, beforehand, before you ever interact with that project.
C: And I think the last piece, Kevin: if you can just talk about update channels and update paths. Because I think it's important to realize that once an operator gets into a catalog, we need to not strand somebody at a specific version, just like the actual platform itself; there should be paths forward, and I think OLM manages that as well, doesn't it?
A: Yeah, that's correct. So a core concept for OLM is that it's chiefly concerned with the lifecycle of operators and making sure that they upgrade successfully. That's a big reason why, you know, why use OLM instead of just writing a Helm chart: because of this concept of upgrades. So attached to the CSV API, which is basically the wrapper OLM uses to define a lot of the metadata for the operator itself, is graph metadata. So there's this concept of the replaces chain.
A: Basically, for each operator, OLM understands the context of an upgrade graph for that operator, so that you can jump from one version to the next, and it has a lot of features. So you can do things like skip over certain versions, or allow a user to upgrade directly to the next minor version; you can specify metadata that allows you to do that.
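In the file-based catalog format, the replaces chain and skip semantics Kevin describes are plain data in an olm.channel entry; a sketch with a hypothetical package:

```shell
# One channel of an upgrade graph, expressed as FBC YAML.
cat <<'EOF' > catalog/my-operator/channel.yaml
schema: olm.channel
package: my-operator
name: stable
entries:
  - name: my-operator.v0.2.0
    replaces: my-operator.v0.1.0      # direct upgrade edge in the graph
  - name: my-operator.v0.3.0
    replaces: my-operator.v0.2.0
    skips:
      - my-operator.v0.2.1            # versions OLM may skip over
    skipRange: '>=0.1.0 <0.3.0'       # allow direct upgrade from this range
EOF
opm validate catalog/   # opm can check the catalog for consistency
```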
B: It would be great if you could throw it in the channel, or just add it directly to the HackMD there, whatever works best for you. Thank you. Any other questions or comments on this foundational stuff?
B: All right then, let's sort of define, I think, next steps: what's the process we want to follow? We did a little bit of research over the past couple of months in terms of operators that exist out there in the world, like Tekton and whatnot, about seeing about getting those tweaked to work and built in such a way that they could be included in a separate catalog.
B: What would the process look like for us to go from start to finish to get our own catalog out there? How would we break this up into steps: basically into an epic, and then break that epic into tasks, or multiple epics, as the case may be? What are we thinking?
D: Could we start off with an existing community operator and then try to add to that? I mean, looking at adding Tekton and GitOps and that type of thing. Or, I don't know, do we want to start completely from scratch?
B: What do folks think? So, in terms of Tekton, they were making some changes to their repo for the operator, and I need to check in with them to see if those changes are in, maybe removing some things. But we could do Tekton, you know, if we wanted to. There were basically two different...
B: It was two different sections, basically, that went into what got deployed on OpenShift: those extra tasks, the OpenShift-specific tasks, and then the operator itself.
B: So we could do that, or we could do something from scratch, or we could pick a different operator. What are folks thinking?
C: So I think that might be one of the easier operators to go with as the first one. I actually managed to get a catalog installed in OKD with an operator, so I actually went through the process for one operator and got that going. I can share that, because I wrote it up as part of the repo. But yeah, I mean, to me...
C: I think we need to find a fairly simple, standard operator that doesn't have a lot of extra caveats, and just get that going, and then work out what a pipeline would look like. Because, as Kevin says, I think this is a multi-stage thing where we need to build the operator, and somehow we've got to manage the versions and how the operators sort of incrementally progress through them.
C: Shireen, I actually think the answer to that one is yes. I think it would be quite nice if we could take something that worked on OKD and just translated it, and it would then run on OCP.
G: Yeah, sure. So today, in the Red Hat OCP operator catalogs, we kind of have the concept of channels, which show us a little bit the upgrade graphs that an operator can have: from which version of an operator we can go to which other version. An operator usually can have one to several channels, making one or more possible upgrade graphs there. And my question was: when we are building our OKD catalogs...
C: I think yes, because say somebody is doing some work on OKD and then they want to move it over to an OCP cluster. If we're in line, it means that any automation they build, any Terraform, any scripts they build around the OKD platform, should work on the OCP platform; we're just going to switch from the OKD catalog to the OCP Red Hat catalog, and hopefully it should work. If we make things very different, it means that you need to redo that.
F: So there aren't that many versions to choose from, as in OCP, where you have all the release branches being supported with patch version releases essentially forever. We don't have that in OKD, so we don't necessarily need to maintain different builds for the operators. If we say, look, we only have this one current OKD release, then we only build one current operator release that works on that latest OKD release, maybe latest minus one, but something like that.
F: I don't think it makes sense to create... and I'm not sure what the branches actually are, or what the channels in the catalog are. Are those referencing branches, like the 4.12 or the 4.13 or the 4.14...
F: ...and so forth, or are they versioned for the operators themselves? I'd say we have one release channel for an operator, and that is "stable", and maybe another one, "next" or "preview", and that would work on the current OKD release, or the current stable OKD release.
F: Absolutely, the Red Hat operators do have a version dependency or requirement on the underlying OpenShift cluster. So once you upgrade that... there have even been cases where customers upgrade an OpenShift cluster and then the operators stop working, because there's no...
F: ...version for the old resources, so the resources don't work anymore. So, I mean, yeah, there's just more work involved with saying, look, we have... and in OpenShift it's mostly minor versions, so it's the 4.12, 4.13, 4.14 and so forth. The thing is, we don't maintain those same concurrent release streams in OKD; we only ever have one current, and we've recently moved to 4.13.
F: Obviously not everybody moves to 4.13 right away, but if we did a new operator build now, I'd say: let's make it work with our current stable release. And yes, if anybody would like to backport that and build that operator for OKD 4.12, it's an additional build, so who's going to do it? I think if we keep...
F: ...and kind of maintain the official builds, it shouldn't be that hard to actually trigger another build, if we have some Tekton task or something that does it for us, that builds the operator bundle for us. And that's another thing: how do you import an operator manually?
F: You can probably... you don't need to solve the cataloging problem; you can just install it, bypassing that. And I think that if you then require an operator with a newer version for your older OpenShift cluster or OKD cluster, you'll have to build it yourself, but I...
E: ...to tell me anything. Oh well, I actually mainly disagree with what was just said. So we just have these one or two streams, really, so we should just also offer that for the operators. And like, okay, if you didn't upgrade to the latest version of OKD yet, then you cannot upgrade to the latest version of that operator, and that's it; you're going to need to wait until you upgrade your base cluster before you can upgrade the operator.
E: We have these requirements there, but I wouldn't burden ourselves even more with having different streams or branches or whatever they are called exactly, because already now we have a very limited support policy, even for base OKD: we technically only support, in air quotes, the current version, and not the previous versions of OKD.
B: Yeah, I tend to agree with that. I think right now we have minimal folks involved, and the easier we make these processes, the more we might attract people and eventually get people doing separate builds and whatnot. But right now it seems like just keeping up with the current releases is a good idea. That's just my view. Luigi, did you want to jump in at all?
G: I'm not an expert on operator building, but I think I saw somewhere that when we build an operator bundle, it's based on the annotations that the maintainers of the operator put in the Dockerfile, which say what the channel names are going to be and, for example, what the default channel for that operator will be. So maybe this question is for Kevin.
G: Can we do what we're trying to achieve here, which is: we want to keep just a "stable" channel and a "next" channel, probably, for all operators? Or is that something that is hard to achieve, because it's the annotations back in the git repo that kind of tell us what we can make?
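For reference, the channel annotations being described live both as Dockerfile LABELs and in the bundle's metadata/annotations.yaml; a sketch, with a hypothetical package name:

```shell
# The bundle's annotations declare its package, channels, and default channel.
cat <<'EOF' > bundle/metadata/annotations.yaml
annotations:
  operators.operatorframework.io.bundle.package.v1: my-operator
  operators.operatorframework.io.bundle.channels.v1: stable,next
  operators.operatorframework.io.bundle.channel.default.v1: stable
EOF
```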
A: So I think there are two things here. One is: is it possible with the software? The answer is yes.
A: Whatever we build here for a pipeline is probably going to use the new interface that OLM has for catalogs, which is FBC, the file-based catalogs. So anything related to channel metadata, what channels operators are in, what versions of those there are, is just directly editable via the FBC.
A: So there's no reason why we couldn't just modify that data, from a technical perspective. That being said, I do think we want to be quite careful with making modifications to what these operators' release streams are.
A: ...if we're not very confident that those upgrades are actually valid. And I'd say that because it is very possible, in some cases for certain operators, that they've defined their upgrade hierarchy specifically to solve for problems like schema migration. And if we make some heuristic and automatically modify all of these upgrade graphs, I feel pretty strongly you're going to run into issues with certain operators and upgrades in certain contexts.
B: We're going to have to test these as well, so we're going to have to have a process for not just building them, but also verifying that what we've built is actually working at a base level, you know, in terms of these operators. So that's a whole other aspect to this that we're going to be taking on going this route. Just something to consider.
F: In terms of... yeah, are the channels labels on the container images?
F: I think what we could do, if they're just labels, is overwrite them. And in terms of what stream or branch we build for the OKD catalog: upstream OpenShift... unfortunately, we're still downstream builds. Ideally, we would switch that around too and provide a build system where the OpenShift teams can actually do their own testing eventually, and then, just like CentOS Stream, cut their own release and put that into OpenShift and the certified catalog, anyway, going up here.
F: If we want to then also switch the branch we build the operator from, or the channel source we build the operator from, I think we can probably solve that by adding those labels, or replacing those labels, overriding them in the build process. Whether we do that locally or in a build system, we've learned that we can build...
F: ...and just push them, and we only need the reference to the image in the bundle, yeah. So I wouldn't worry about that too much; I think we can work around these channels. Obviously there is what Jamie just mentioned, and I think that is probably what everybody is worried about: we do need to test those upgrades between versions, whatever they may be.
F: Cluster version A plus operator version A, to cluster version B plus operator version B. So I think we do have to think about the build system for this. Yes, we can get started with locally built image artifacts: push them to a registry, make a bundle release of that. But what I'd really like to see, and that's why the App Studio thing is really cool, is that it includes all these things, like SBOM generation...
F: ...the sources. It has Snyk checking for all kinds of issues, security-wise, and that is really cool in a pipeline. So I think, yes, we can get started with a catalog; I don't think it's that hard to get the catalog started. But really, how do we build and maintain builds of these operators? I think, yeah, I've been working and thinking about build systems a lot, and yeah...
F: It just becomes very clear to me that we, for operators, also need a build system that we can rely on and reproduce builds on if needed, and do all these things that we just need to do if we want to be serious about software delivery. That's my two cents. So, about the...
F: ...thing: I think what we've been doing, at least in the openshift organization, is we've been adding build flags, you know, to make the OCP build and the OKD build of an artifact. That's mostly within the payload, within core OKD, but I'm sure if we were to open a PR to add a build switch to make, you know, that work for us, that'll be accepted in the repositories and by the teams working on this within...
C: Kevin, a question for you. I think, if you look across the operators in the current Red Hat catalog, there isn't a standard build; there isn't a standard project format, that having been built over time as the operator framework has matured. So there are differences between operators, and I think that's something that we're going to have to get to grips with.
A: ...that you're actually going to need one, which is the images, which are hopefully built somewhere else, and then the actual metadata spec. The most trivial version of that, and what the operator framework project has been working on for a while to try to simplify, is a public interface which is basically just a big JSON blob of all that metadata. I'm guessing that the simplest thing to do would be to just commit that blob as a file that exists in git, and then wrap that in a Dockerfile.
A: What you could probably do is take the metadata from those images and then modify it so that it points to some public version of the image.
B: All right, we've got about five minutes left. I want to narrow down to our plausible next steps. What are the next steps that we want to take?
B: Okay, so maybe step one is getting the catalog repo created: create and populate a catalog repo, and we could do that based on the work that you did, Brian.
F: Well, what I'd really like is for operator builds to happen on a Kubernetes cluster. So if you could wrap your bash script, or whatever you have for building the operator, in a Tekton task, and run that on a kind cluster locally, and if that works, that's a degree of reproducibility that I am already pretty comfortable with, and...
F: It is online again; we were offline all of last week. They shut down the Operate First cluster, and our MOC cluster was also down for maintenance. It's back on now, and tomorrow Shireen and Alex are going to tackle deploying our current build pipelines for SCOS, the base OS, there. I'm not sure we are at a point yet where we can use it as a service for our community members; I'd really like to do that on the secondary test cluster.
F: We might actually be able to get you your own namespace to tinker with it. We do have some limitations in terms of resources; currently we only have, I think, three nodes, and there are going to be a couple of builds for SCOS each day. But you could deploy that, I hope starting next week, on that test cluster, and then later we'll move it to another build farm. Okay, so it's not far out, but yeah. Okay.
B: Christian, why don't you work that angle? I can give Brian a cluster to work on, an OKD cluster with Tekton, to work on in the next week or so.
C: What are the artifacts we need, in which git repo, and what is the process of actually adding a new bundle version into the operator metadata? Is that going to be automatic? Is that going to be a manual step? I just need to probably sit down and get my head around it and noodle on it.
C: What does an end-to-end pipeline look like? So say we've got the Tekton operator and the next version comes out. We need to build the operator, we need to build the operator bundle, and we then need to augment the catalog with that new version. Which of those steps are manual, and which of them are going to be automated? So we can then sort of work out what the pipelines need to look like.
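The end-to-end steps just listed map onto roughly this command sequence, where each numbered step is a candidate pipeline task; all image names here are hypothetical.

```shell
# 1. Build and push the operator for the new upstream version.
podman build -t quay.io/example/tektoncd-operator:v0.2.0 .
podman push quay.io/example/tektoncd-operator:v0.2.0

# 2. Build and push the bundle for that version.
podman build -f bundle.Dockerfile -t quay.io/example/tektoncd-operator-bundle:v0.2.0 .
podman push quay.io/example/tektoncd-operator-bundle:v0.2.0

# 3. Augment the file-based catalog with the new bundle and rebuild it.
#    (An olm.channel entry for the new upgrade edge also has to be added.)
opm render quay.io/example/tektoncd-operator-bundle:v0.2.0 -o yaml \
  >> catalog/tektoncd-operator/catalog.yaml
opm validate catalog/
podman build -f catalog.Dockerfile -t quay.io/example/okd-catalog:latest .
podman push quay.io/example/okd-catalog:latest
```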
F: Right, nothing is manual in the end, right? So we do have maybe examples or templates for how this could work. I think the pipeline should include everything that you would want to run on a release build: it pulls all the sources, it builds the operator, it pushes the resulting image artifact to a registry. And then in Tekton you can add the ability to not run a task, or to skip a task.
F: So you should configure your pipeline with parameters that you can pass to the pipeline run in order to do what you want: say, a local build that doesn't push the image, and where you maybe already have the sources locally checked out and don't need to pull them.
F: Things like that. So I think a kind of flexible pipeline that does that would be the goal, but the start would be just a task with the proper parameters to configure it, and then it can be put into whatever pipeline you want to build out of it.
B: Okay, I want to be mindful of time. Is there anything else that we think should be listed in next steps between now and, let's say, two weeks when we meet again? Anything else that we should have folks test?
C: I've got one last question for Kevin, in terms of when we get all this done. What sort of security should we be looking to put within a catalog, in terms of signatures or validation, to sort of say that this is an official OKD catalog? To give people some sort of assurance that it comes from us, and mechanisms to say it hasn't been tampered with?
B: Yeah, does anyone know of basically the "hello world" of operators? Like a real operator, in other words a functioning operator that has some use to OKD users, but is really straightforward in terms of its repo and its layout, that we could tweak and build just as an example. Does anyone know of a good operator that's sort of simple? Prometheus, okay; external DNS, okay.
B: Throw these in here; let's do a little bit of research. I want to be mindful of time. I'll say: look at operator examples. We...
B: ...do this async. I'll put something in the group discussion, the Google discussion, and we can go from there.
G: The DNS operator is really a very, very small one. It has basically two images: one is the controller, the main operator, and the second is a kind of an agent. So it needs to build, okay, two images, plus one bundle image. And the Makefile that it has already has what it needs to build that, and if you look at the README, I think it also has steps to show you how to create a small catalog.
G: Just for this one operator, for test purposes: the developers added these steps to show how to create a catalog, so that we can test it as if it's coming in from a real Red Hat operator catalog. But it might help you to see what the steps are that are needed.
B: We can hammer out issues, and if you need a cluster I can provide it, or you can spin up your own. All right, folks, let's take this up again in two weeks, because we're finally getting momentum on this topic that's really important to a lot of folks. So, all right, thank you so much, Kevin, and all the other folks that are new attendees and that have chipped in; very important and very appreciated.