OpenShift Commons Operator Lifecycle Management with Evan Cordell Red Hat
Operator Lifecycle Management offers:
- app store-like experience for discovering and installing operators
- automated upgrades for operators
- framework for building rich, re-usable user interfaces
- package management and dependency resolution
Recorded as part of the August 17th, 2018 Operator Framework SIG
Awesome, thank you. I am Evan Cordell; I'm an engineer on the Operator Lifecycle Manager team, and so I wanted to go over some of the philosophy and architecture and implementation, and then do a demo, because we do get a lot of questions about it: what is it, why do I want it, how does it work? I don't think we've really put together anything for public consumption that lays it all out very well, so I'm going to start right at the basics, but I'll go through quickly, since I know we're in the Operator SIG.
So everyone has an idea of what an operator is already, but for us, for Operator Lifecycle Manager, we really think of an operator as a Kubernetes-native application, and by that I just mean that it's, fundamentally, just an application. Copyright issues there, maybe, but we all have this idea of an operator as controllers and control loops that watch CRDs, and that's all well and good, and you need those things. As far as we're concerned, though, that's all implementation detail of this concept of extending the Kubernetes API, and if Kubernetes comes out with better tools for doing that in the future, we'll keep up with those in the same exact way. So we know what an operator is. What is operator lifecycle?
Well, I'm going to propose this definition: the operator lifecycle is the definition or specification, the installation, the resolution (by which I mean dependency resolution), the upgrading and the automated upgrading, and then the removal of operators. These are all the responsibilities of the Operator Lifecycle Manager: defining and managing these different aspects of an operator over its lifetime in a cluster.
So what are some of the goals of the Operator Lifecycle Manager? This is what we're trying to provide with the piece of software called OLM. We want to give you an app-store-like experience for discovering and installing operators: what's available to you, and what will work when you run it on your cluster. We also want to simplify the installation of an operator down to just a single kubectl apply, or a single click in a user interface. The other aspect is automated upgrades.
This is some philosophy that we've carried over from CoreOS, from Container Linux and Tectonic. We know that there is value in updating things automatically, because we can ship out fixes to any problems as soon as we know about them. So we want to bring that philosophy into operators as well.
Ideally, when you're writing your own operator, you don't need to vendor in 15 other operators just to get yours up and running. We have lessons about how dependency resolution can work from other ecosystems, and we'll take those lessons and apply them to operators as well. So that's some of the philosophy: why we're doing this and what we're trying to get out of it. Let's go over the actual architecture at a high level. Right now, OLM is made up of two separate operators.
One of these operators manages the objects that we call catalog sources, which have virtual objects of packages, and then subscriptions and install plans as well. I'll go into exactly what each of these is, but I wanted to emphasize that we have a layered architecture, where one operator has very specific responsibilities, which are just the cluster service versions, and then another operator has a different set of responsibilities and overlays those on top of the other. So what is a cluster service version? This is our specification file for what an operator is and how it should run.
B
It
is
100%
analogous
to
something
like
a
D
package
definition
or
an
RPM.
If
you're
talking
about
operating
system,
dependency
management,
dependency
management,
it's
also
perfectly
analogous
to
set
up
two
pi.
If
you
have
a
Python
project
or
a
brew
formula
using
homebrew
for
Mac,
it's
the
definition
of
MIT
metadata
about
how
to
install
and
run
in
the
orchestrate
piece
of
software
on
the
underlying
system.
The spec contains an install strategy; this is literally defining the pods and deployments that will be running the operator software itself. Aside from that, the only information we really need to know is the CRDs, as the CRDs define the interface for the operator in Kubernetes. So we say that each operator can declare that it owns a set of CRDs.
B
This
is
essentially
an
operated,
declaring
that
to
the
OLM
framework
that
it
will
be
responsible
for
the
life
cycle
of
particular
C
IDs
and
then
there's
a
set
of
required
cia's,
and
this
is
a
list
of
series
not
that
the
clusters
service
version
will
own,
but
that
the
operator
will
interact
with
in
some
other
way.
The
rest
of
our
dependency
management
is
hamako,
these
new
concepts
of
owned
and
required
series,
because
here
these
are
the
interfaces
on
for
each
of
these
things.
We
have
what
we
call
descriptors,
which
are
essentially
indexes
into
the
C
IDs.
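As a rough sketch, the owned and required CRD declarations might look something like this in a ClusterServiceVersion (all names and versions below are made up for illustration, not taken from a real catalog):

```yaml
apiVersion: operators.coreos.com/v1alpha1
kind: ClusterServiceVersion
metadata:
  name: example-operator.v0.1.0
spec:
  install:
    strategy: deployment
    spec:
      deployments:
        - name: example-operator
          # an ordinary Deployment spec for the operator pod goes here
  customresourcedefinitions:
    owned:
      # CRDs whose lifecycle this operator is responsible for
      - name: examples.example.com
        version: v1alpha1
        kind: Example
        specDescriptors:
          - path: size            # index into the custom resource's spec
            displayName: Size
            description: Number of pods to run
    required:
      # CRDs this operator interacts with but does not own;
      # some other operator is expected to provide them
      - name: dependencies.example.com
        version: v1alpha1
        kind: Dependency
```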
B
It's
somewhat
similar
to
some
of
the
newer
features
into
like
the
with
a
cup
and
the
additional
printer
columns
and
CDs.
Where
you
you
go
into
an
object,
and
you
give
some
additional
metadata
about
that.
So
this
is
how
we
drive
some
of
our
UI.
This
is
how
we
drive
some
of
the
DLL
interaction
as
well,
so
the
OLM
operator
is
responsible
for
within
every
namespace,
managing
the
set
of
cluster
service
durations
that
exists.
B
So
for
the
most
part,
those
will
be
independent
operators,
but
I
wanted
to
call
out
what
happens
when
you
are
in
the
in
the
upgrading
state
for
an
operator
in
which
you
have
to
cluster
service
operations
that
define
the
same
operator
but
at
different
versions.
Some
of
the
metadata
that
we
collect
for
operators
is
which
one
that
replaces
so
this
gives
us
a
chain
back
to
an
older
version
of
an
operator
and
then
Olin
manages
the
lifecycle
of
those
two
operators
during
the
upgrade
process.
So that's essentially the only responsibility of the OLM operator. The other operator, which we call the catalog operator (which has nothing to do with Service Catalog), is responsible for these other resources. The primary one is the install plan. Similar to our analogy for cluster service versions, the install plan is analogous to using apt for orchestrating packages on an operating system, or pip if you're using Python with a setup.py file.
So an install plan, like I said, has an input that is essentially just the desired CSVs. You can have multiple install plans for a namespace, and you can have multiple CSVs defined in an install plan. This triggers the catalog operator to go and find whatever CSVs and CRDs it knows about, whatever definitions it has.
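A minimal sketch of that input, assuming illustrative names (the spec is essentially just a list of desired CSVs plus an approval mode):

```yaml
apiVersion: operators.coreos.com/v1alpha1
kind: InstallPlan
metadata:
  name: install-example
  namespace: my-namespace
spec:
  clusterServiceVersionNames:
    - example-operator.v0.1.0   # the desired CSVs to resolve and install
  approval: Automatic
```

The catalog operator resolves these names against its known catalogs and writes the resolved resources back into the install plan's status before applying them.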
B
So
where
does
an
install
plan
find
the
CFCs
in
series?
We
have
a
set
of
objects,
called
catalog
sources
that
define
what
CFCs
and
CDs
are
known
and
then
organizes
those
CFCs
and
series
into
packages
and
channels
which
is
just
an
organization
mechanism
for
saying,
for
example,
these
50
DSPs
all
are
installing
the
NCP
operator,
so
we'll
call
that
a
TD
package
and
then
within
that
chipset
of
50
here's
a
chain
of
20
of
them
that
are
the
alpha
Channel
yeah
or
the
beta
Channel.
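The exact on-disk catalog format has changed over time, but as an illustrative sketch, a package entry organizing CSVs into channels could look roughly like this (names and versions invented for the example):

```yaml
# one package in a catalog source, grouping CSVs into channels
packageName: example
channels:
  - name: alpha
    currentCSV: example-operator.v0.3.0   # head of the alpha upgrade chain
  - name: stable
    currentCSV: example-operator.v0.2.0   # head of the stable chain
```

Each channel's head CSV chains back to older versions via each CSV's `replaces` field, which is what gives OLM the upgrade path mentioned earlier.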
So this is just a way to collect and organize all of the definitions of things that are available to the user to install. Once we have those two things, we have an additional object called the subscription. Subscriptions signal to the catalog operator to poll a catalog source frequently for updates and then, whenever an update occurs, to create a new install plan for that object. This is our full graph of objects: you subscribe to a package, for example etcd, you specify a channel, and that points to a catalog source.
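Putting that together, a subscription tying a package and channel to a catalog source might look like this sketch (the package, catalog, and namespace names here are placeholders):

```yaml
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: example
  namespace: my-namespace
spec:
  name: example                # package name in the catalog
  channel: alpha               # which upgrade channel to follow
  source: my-catalog           # the CatalogSource to watch for updates
  sourceNamespace: my-namespace
  installPlanApproval: Automatic
```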
So our operator goes and looks at the catalog source for any updates. Whenever it finds an update, it creates an install plan with the new versions of the CSVs defined. We have another control loop looking for new install plans: it sees that there's a new install plan with the new desired CSVs, it resolves those from any of the catalog sources that it needs to talk to, and it writes those back into the status of the install plan.
B
Then
we
take
those
resolves
to
a
T's
in
Sadie's
and
actually
apply
it
into
the
cluster,
and
once
that
point
has
done
everything
else
is
handled
by
a
viola
operator.
So
there
will
be
a
giant
set
of
cluster
service
stations
in
a
namespace,
and
some
of
those
might
be
new.
Some
of
those
might
remove.
Some
of
them
might
be
upgrading
from
old
versions
and
the
OLM
operator
handles
all
those
migrations
for
you
and
also
creates
the
you
see
are
these
themselves.
B
So
you
end
up
with
a
state
in
your
namespace,
where
you
have
all
your
serie
needs
to
find.
These
are
all
of
the
new
API
interfaces
you
can
use
via
group
cuddle
or
the
CI.
So
now,
I'm
going
to
flip
over
to
a
live
demo
of
this
I
have
a
namespace.
This
is
an
open
ship.
311
cluster
I
have
two
I,
have
one
catalog
source
right
now
and
it
has
to
do
packages
defined.
We
happen
to
have
just
one
panel
at
the
moment
when
I
create
a
subscription.
In this case it had automatic approval, so the install plan was immediately applied. We also have manual approval, which requires a user with write access to go in and manually approve an install plan before it gets created. We also specified a specific cluster service version in that one, and in the status block it actually goes and puts the definitions that will eventually be applied.
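For the manual path just described, the relevant subscription fields might look like this (illustrative names; `startingCSV` pins the specific cluster service version to begin from):

```yaml
# Subscription fragment requesting manual approval of install plans
spec:
  name: example
  channel: alpha
  source: my-catalog
  sourceNamespace: my-namespace
  startingCSV: example-operator.v0.1.0   # specific CSV to start from
  installPlanApproval: Manual
```

With manual approval, the idea is that the catalog operator still generates the install plan, but a user with write access has to approve it before anything is resolved and applied.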
You can go and see that, in this case, this particular cluster already had these particular etcd API versions installed as CRDs, and the one thing that didn't exist when the namespace was created was the actual CSV, so that was created for us. If I go back to the operator page, I can see that this is representing the etcd CSV. I can go look at the YAML for that, and so the etcd CSV defines a set of owned custom resource definitions.
The etcd one does not require any others, so it doesn't depend on anything else, but it does describe the CRDs that it uses. So, for example, here is a spec descriptor. These are the descriptors that we use to drive the UI. This says that if you go into an etcd cluster object and you look at the size field of the spec, you can expect it to be of this particular type, and this particular type is, for us, a pod count. It has a description here, and I can go in and create an etcd cluster, and once I do that, I am driving the UI.
This is generated based on that descriptor. So this is saying that the size is a pod count, and the UI has a widget for that, which means I can edit a pod count for the cluster size. I can increase the size to 5 pods, and that gets translated back into the YAML on the backend.
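A descriptor like the one driving this widget might look roughly like this in the CSV (the `podCount` x-descriptor is the UI hint; `size` is the path into the etcd cluster's spec — the exact description text here is illustrative):

```yaml
specDescriptors:
  - path: size
    displayName: Size
    description: The desired number of member pods for the cluster.
    x-descriptors:
      - 'urn:alm:descriptor:com.tectonic.ui:podCount'
```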
But here we go: after a bit, it's adjusting to my cluster size of 5, and we can see that these are the objects that it's creating based on the etcd cluster. And if I go and delete the etcd cluster myself, it will eventually garbage-collect all of the related resources for me. That's what Shawn was talking about earlier; we lean heavily on that.
The ideal scenario is that there's a persona split between an administrator and a user. The administrator would go and set up namespaces with the operators installed for you, and then, as a user, you just go in, you'll have the RBAC access to create an etcd cluster, and you just create one. That's all the interaction you have to do. So it's up to the administrator to set up things like subscription policy: whether or not a particular namespace should be updating constantly or not.