Description
OpenShift Commons Briefing
All Things Operators
State of Operators with Daniel Messer and Diane Mueller
Red Hat OpenShift
A
All right, and welcome again to another OpenShift Commons briefing. Today we're going to talk about the state of operators. We're having a little difficulty with the mute button, so if you're not a speaker, could you please mute, and I will try to keep everyone muted until the Q&A at the end. Today we have Daniel Messer, who is a product manager with the OpenShift team focused on operators, and he's going to take us through all things operators. I'm going to let Daniel introduce himself and his topic. You can ask questions in the chat, because we have a lot of the operator team here; they may be able to answer them while Daniel is talking, and then we'll have live Q&A at the end. All of this is going to be recorded and uploaded to YouTube by the end of day tomorrow, along with the slides. So take it away, Daniel.
B
Sure, thanks Diane, and thanks for having us on this OpenShift Commons briefing call. As Diane already told you, I'm Daniel Messer. I'm based out of Stuttgart, Germany, and I'm a product manager in the OpenShift business unit at Red Hat, specifically looking after what we call the Operator Framework. I'm also going to tell you today about a component that I'm looking after called OperatorHub. But first things first.
B
It's amazing how much traction and how much contribution there is in this ecosystem, as operators seem to be becoming the new way of running applications, specifically applications that are a little bit more complex than twelve-factor applications, on top of Kubernetes and OpenShift. To understand, or to reason about, why this is so popular, you don't have to go very far. The whole reason why we need operators is that they fill a specific gap that still exists in the modern days of Docker and Kubernetes.
B
So what Docker did for container adoption, specifically for developers, was very simple: rather than actually installing an application, you package it and make it available on various operating systems. It used the isolation that came with containers, cgroups and the various namespaced resources in the kernel, and made it very easy to approach, behind a single CLI command. And boom, you have an easy way to ship your application, and you can be pretty sure that it's running on your colleague's laptop.
B
So all you need to do is ensure that there are several copies of this application up, for high availability and load balancing, and Kubernetes almost does all of that for you: you hide it behind a ReplicaSet or a Deployment, you have traffic that gets load-balanced across a number of these application instances in production, and that's pretty much it.
B
But then there are applications where it's not as easy as running a number of copies of the application binary, containerized, in pods throughout your cluster. You actually have more things that need to happen before these applications make it onto the cluster, and while they are running on the cluster they typically need to be aware of each other. So you have, these days, distributed applications that use clustering mechanisms, for instance etcd, which uses Raft to maintain cluster state. So it's not enough that there are multiple copies of the application out there.
B
They also need to be aware of each other and coordinate with each other in order to, you know, deliver useful functionality. Some of these application types could be addressed with things like StatefulSets, where you were at least able to guarantee a certain identity per pod, and then you could have an additional piece of logic running inside your pod, as a sidecar or init container.
B
That logic would make use of the stable identity of the pod and would always be able to say: OK, I'm the master, I'm going to start a cluster, and I'm going to wait for members to join before I actually offer service. But this is only half the story. What about things like actually reconfiguring the application, backing it up, recovering from partial loss of a typical clustered application? How do you restore from a backup? How do you expand a clustered application in a predictable way, so that the cluster maintains its state?
B
These are things that you can't really do with the existing controllers, as we call them, in Kubernetes and OpenShift. They are simply too simple and too generic, and what it really comes down to is that we need a way, at some point, to store application-specific orchestration and operational logic somewhere. We will never have a generic controller that treats a clustered database like Postgres equally as well as a clustered, distributed key-value store like etcd.
B
It needs to be able to provide an interface for the user to express desired configuration and desired state, as we are doing today with things like Deployments and Services, and it needs to implement what needs to happen with the application in a consistent and reliable way. And the answer to where we put this operational logic is: operators. So we decided that this piece of application logic, be it a script, an Ansible playbook, or some custom Go code, isn't something that runs somewhere outside of the cluster.
B
It's not something that should make you dependent on a managed service from a cloud provider. You want this stuff to run inside Kubernetes, and we wanted it to integrate with Kubernetes and OpenShift, so that an application is treated almost the very same way as you treat Kubernetes' native primitives today, like a Pod, a ConfigMap, or a PersistentVolume. We want our applications to feel like an encapsulated object like this, in order to integrate well with everything else.
B
That goes together with all the rest that we have learned and gotten to love in the past years with Kubernetes, and that is exactly what custom resources and custom controllers give us, in the form of operators. When it comes down to what an operator actually is: an operator is custom operational logic for a specific application, running inside Kubernetes as a workload, integrating with Kubernetes via the API. So what these really are is custom resource controllers: custom controllers that come with custom resources.
B
The user creates these objects just as they create Deployments, ReplicaSets and whatnot on Kubernetes and OpenShift, but instead of a Kubernetes controller taking action on these primitive things, we have an application-specific controller, which is called the operator, that reacts to these application definitions and the desired user configuration and does the right thing. And this "doing the right thing" is application-specific code. So what you end up with is a pod running in your OpenShift cluster that runs the operator.
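To make that concrete, here is roughly what such an application-level object looks like for the etcd operator mentioned later in the talk (field names follow the etcd operator's v1beta2 API; treat the exact values as illustrative):

```yaml
# An application-level custom resource: the user declares the desired
# etcd cluster, and the operator pod reacts to it and creates the pods.
apiVersion: etcd.database.coreos.com/v1beta2
kind: EtcdCluster
metadata:
  name: example-etcd-cluster
spec:
  size: 3            # desired number of etcd members
  version: "3.2.13"  # desired etcd version
```

The user never describes pods or images directly; the operator translates this desired state into the underlying primitives.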
B
And that's just the start: there's the very beginning of developing these operators, as well as moving them onto a cluster, installing them, keeping them updated over time, and using them to collect application-specific metrics as well. This is exactly what the Operator Framework does. The Operator Framework is a community upstream project that basically has three goals. The first goal is to make it easy to write operators.
B
If you want to write an operator, or your own custom controller in general, on Kubernetes, you need to be very fluent in Go, as well as in the Kubernetes API and the API machinery that gives you access to the Kubernetes cluster. Not all people, I think it's fair to say, not all people have that, but there are many more people who want to write operators. So there's a component called the Operator SDK that allows you to easily write an operator without necessarily being a client-go expert or a controller-runtime expert.
B
OK, now you've written your operator; now you need a way to offer it to users. What we don't want to end up with is shipping a bunch of YAML files that people need to kubectl apply, and then things automatically appear in their current namespace, wherever they are. This is neither reproducible, nor does it give us a consistent way of updating operators, which is very important, because when we fully buy into this concept of Kubernetes-native applications, the entry point to deploy and update your application becomes the operator.
B
That means whenever there is a new version of your application, there's a new version of the operator that can manage this application, and this is how we are going to introduce what was known in the CoreOS days as over-the-air updates. For applications running on OpenShift and Kubernetes, we are going to introduce frequent updates to operators, which in turn update the running application version in your cluster. So we need to find a way to make this content of updates something that is handled well, in a Kubernetes- and OpenShift-native way.
B
And then the third goal is: since we now have programmatic control over how an application gets instantiated and how the application behaves over its lifecycle in the cluster, we can use that opportunity to start gathering metrics about these applications, which we can then expose in reports and dashboards, or react on certain metrics to automatically do something with the application, like scaling the cluster's capacity to accommodate more load. So these are all capabilities that we now have, because we have full control of the application from inside of Kubernetes.
B
So let's see how this works in reality. It all starts with you wanting to write an operator: there's a developer who has the intent to put operational logic into an operator pattern and deploy it on Kubernetes. With the Operator SDK, they can actually get started right away, based on the fact that the SDK generates a very comprehensive scaffold that cuts out, let's say, 80% of the boilerplate code that you would otherwise need to write and maintain every time you create a custom controller.
B
When you write custom controllers, there are certain things that you need to do all the time, like registering watches with the Kube API for certain resource events, and so on and so on. The Operator SDK is able to scaffold a lot of that and make sure the code that you use for it follows the best practices and guidelines. So you can actually get started implementing the application logic almost right away, and you don't have to deal with all the intricate details and intricacies of the Kubernetes API or Kubernetes controllers.
B
The result is basically an application that is able to integrate with the Kubernetes API, packaged inside a container that will typically run in a pod inside the Kubernetes cluster where the application should land as well. So that's the end result, and the SDK actually gives you multiple options on how to make an operator. Now, in terms of the maturity of what an operator can actually do with an application, we came up with a model of operator maturity, the operator maturity model. On the very left-hand side:
B
You have the very basic capability of installing your application. So every operator will be able to install, otherwise there's not a lot of value. There are cases where operators actually don't install anything; they rather watch existing things that are already there on the cluster and start implementing some logic on top of those. An example of this could be a descheduler that aims to consolidate as many pods onto a certain set of nodes as possible, so that it can shut down the remaining nodes.
B
That could, I don't know, save money or power if you're running your own data center. But normally operators install applications, and that's the very basic level. Ideally, an operator is also capable of upgrading an application, and this is, especially in the world of stateful applications, usually not as simple as exchanging the container image that the pod is running. For a stateless application that might be enough, but for databases usually not, because you need to do things like replaying the database log and making sure the thing is consistent.
B
If any new features were added, you may need to regenerate certain indexes or regenerate certain things in order to make these new features available. So there might be additional steps, outside of just running a newer set of binaries, to make an application upgrade correctly. An operator with phase 2 capabilities would be able to do that for you, and do it in a way that the application stays in service. Now, there are more things than upgrades, obviously, especially with stateful applications.
B
We need to have some form of recovery ability at some point, because as soon as we have state, you need to be able to either regenerate that state or restore it, i.e. you need to have a backup at some point in order to get back into service after a catastrophic failure of the entire cluster or an entire site. So a phase 3 operator would be able to do that. And then you can start to do advanced things, like actually looking at metrics and logs inside the applications to gain deep insight.
B
What's going on in the application? How much traffic is it processing? How much work is done in a certain period of time? And you can use this insight to make smart decisions about how the application should behave in a production environment. So, for instance, next to a regular horizontal pod autoscaler that looks at things like, maybe, CPU utilization:
B
You could have an operator that actually looks at latency indicators, or response-time indicators, or some other application-specific metrics, to introduce scaling events, to do automatic tuning, or to detect abnormalities in your runtime. So this is kind of what we envision: five different stages of maturity for operators to be in. Now, I think it's kind of obvious that operators that are written in Go are usually the ones that can address all of these things.
B
Why Go? Because Kubernetes is written in Go, so it's kind of natural to go with this language if you're developing against the API of the platform, which in this case is Kubernetes and OpenShift. Now, I think there's a broader set of people who would like to get the advantages of operators and write an operator than people who actually know how to write Go code properly.
B
For these people we have another option, which is Ansible. The SDK actually offers a way to create what we call an Ansible-based operator. That is, in the operator you define a set of custom resources that you watch, and if anything changes on those resources, or if new instances of these resources pop up, you can execute an Ansible playbook or an Ansible role as a result of that. So this custom application logic doesn't need to be expressed in Go code anymore.
B
It can actually be an Ansible playbook. And the Ansible operator, which is part of the SDK, manages this for you: as you scaffold an Ansible-based operator, it basically wires your Ansible playbooks and your Ansible roles together with events that are generated from the Kubernetes and OpenShift API.
B
So all you need to do as an author of an operator is write what needs to happen on the cluster in an Ansible playbook or an Ansible role. And as you probably have seen, Ansible has pretty sophisticated support for Kubernetes and OpenShift these days, so you can basically create objects in Kubernetes and OpenShift just by using native Ansible modules like k8s.
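As a sketch of how that wiring works: the Ansible operator reads a watches file that maps a watched custom resource type to the role or playbook to run (the group and kind names below are made up for illustration; the file layout follows the Operator SDK's Ansible operator conventions):

```yaml
# watches.yaml: maps a watched custom resource (group/version/kind)
# to the Ansible role the operator executes whenever such a resource
# is created or changed.
- group: cache.example.com
  version: v1alpha1
  kind: Memcached
  role: /opt/ansible/roles/memcached
```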
B
That way we tremendously lower the bar for getting started with an operator, because virtually everybody can learn and write Ansible, and you can get pretty sophisticated in the amount of things that an Ansible-based operator can do and react to, expressed in the sequentially processed logic of an Ansible playbook. So this is the second type of operator, and the third type is an operator which is based on an existing Helm chart.
B
Helm is a little bit more limited in what it can do over the span of an application's lifecycle; it's typically responsible for installing an application. But there's a very healthy and very vibrant community around Helm charts, because they have made installing applications on Kubernetes and OpenShift so simple that it comes down to a single command.
B
An application may need additional things like ConfigMaps, Secrets, PVCs and so on; you can basically package all of this up, together with an existing Helm chart, and make an operator out of that. So we have gotten the SDK to a stage where you can basically create a fully fledged, ready-to-run operator with a single command, where you just need to tell the SDK what the name of the operator is, that it is a Helm operator, and where the Helm chart for it resides.
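With the SDK releases current around the time of this talk, that single command looked roughly like `operator-sdk new cockroachdb-operator --type=helm --helm-chart=stable/cockroachdb` (exact flags vary between SDK versions; treat this as an assumption, not the speaker's literal slide). Among the files it generates is a watches file tying a new custom resource kind to the bundled chart:

```yaml
# Generated watches.yaml (illustrative): the Helm operator reconciles
# every Cockroachdb custom resource by rendering the packaged chart
# with the values taken from the resource's spec.
- group: charts.helm.k8s.io
  version: v1alpha1
  kind: Cockroachdb
  chart: helm-charts/cockroachdb
```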
B
So this command, for instance, would generate a ready-to-run CockroachDB operator that does the same thing the Helm chart does, but in a much more Kubernetes-native way, and without the requirement of running Tiller in your cluster or even having the helm binary installed on your machine; all you need is your access to a Kubernetes cluster. What this command does is basically create a custom resource based on the definitions in the Helm chart, and it makes this custom resource configurable with all the tuning knobs that the values.yaml of the Helm chart exposes.
B
People familiar with Helm will know what that means: all of these become tunable and accessible via the custom resource. So when I execute this and create the CockroachDB operator on my cluster, I now have a new custom resource definition called CockroachDB that has all the settings I could apply to the Helm chart. I can basically treat this in a very similar way to what I used to do when I was installing CockroachDB via the Helm chart, but I don't need Tiller in my cluster.
B
So this is a way to actually create an operator without writing a single line of code, which is very, very exciting, and we are actually working on a script right now that converts the top 100 most popular Helm charts into operators. So you can run them right away in your cluster, and it feels very, very familiar in terms of how you get these applications started.
B
OK, so that was how we actually support creating operators, and how we hopefully make it easy enough for many, many people to write operators, be it for popular applications like databases or distributed systems, or for a homegrown application that you run inside your organization. Now, the next challenge we need to address is: how do we make these available to users on the cluster? As I said in the beginning, we don't want to kubectl apply a bunch of YAML files every time I want to run an operator.
B
I want to have some control over that, and I actually also want to give people a chance to view what's actually getting installed when I instantiate this operator. So that's why we are asking developers to provide a little bit more metadata next to their operators, in a YAML construct which we call the ClusterServiceVersion. A ClusterServiceVersion is basically the YAML file that accompanies your operator image.
B
In it, we basically give a little bit of information about what this operator can do: which custom resource definitions it owns, what kind of RBAC roles it needs, which version it has. There's also a text description of what this operator can do. And if you have multiple versions of this operator, which you will have as soon as there's an update to the application and new capabilities are added to the operator, resulting in a new version of the operator, there's also information in the metadata about which version replaces another.
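A heavily trimmed sketch of such a ClusterServiceVersion (real CSVs carry much more, such as the install strategy, full RBAC rules, and the description text; the names below follow the community etcd operator and are illustrative):

```yaml
apiVersion: operators.coreos.com/v1alpha1
kind: ClusterServiceVersion
metadata:
  name: etcdoperator.v0.9.2
spec:
  displayName: etcd
  version: 0.9.2
  replaces: etcdoperator.v0.9.0   # drives the over-the-air update ordering
  customresourcedefinitions:
    owned:                        # CRDs this operator is responsible for
    - name: etcdclusters.etcd.database.coreos.com
      kind: EtcdCluster
      version: v1beta2
```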
B
So we can use this metadata to create a package for this operator, we give this package to the cluster administrator, and the cluster administrator can make this package part of a catalog that is available to a component called the Operator Lifecycle Manager. You can think of the Operator Lifecycle Manager as what yum repositories are in the RPM world: we don't want to install single RPMs, resolve all the dependencies ourselves, and do all the automation that's necessary to do this.
B
We want to have a utility that works off the concept of a repository, or a catalog, that does this for us. And yum is the perfect example: in the days before yum, I had to do rpm -i on a package, and then I would find out, oh, it needs these other packages as dependencies, so I need to install those first, and then these dependencies require other dependencies, and all of a sudden I've spent like 30 minutes just finding out what this RPM needs in order to get it onto my system.
B
The Operator Lifecycle Manager automates this kind of dependency resolution and this kind of housekeeping, in order to track which operator packages are installed, in which versions, and where. It's something that the administrator configures, to have certain repositories, or catalogs as we call them, available, and the users in the system can query these repositories or catalogs and list what operators are available for them to install. And now comes something that is very familiar to anyone who has ever worked on a RHEL system.
B
We basically create an intent to install an operator by creating what's called a Subscription: we subscribe to a channel that is part of the operator definition. So we give maintainers the ability to specify multiple distribution channels for their software, for instance stable versus beta versus nightly.
B
So the Operator Framework has the concept of channels built in, and as such, you as the user are able to ask the Operator Lifecycle Manager what packages exist and in which channels a package is available. And when you, for instance, say "I want to have etcd from the preview channel," I can create a Subscription object, and then the Operator Lifecycle Manager will instantiate the operator instance and set up all the things that are needed for this operator to run correctly. It will define all the custom resource definitions.
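Expressed as an object, such a subscription might look like this (catalog source names and namespaces vary per cluster; this is a sketch following the OLM API, not the exact demo manifest):

```yaml
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: etcd
  namespace: operators
spec:
  name: etcd                      # package name in the catalog
  channel: alpha                  # distribution channel to follow
  source: community-operators     # catalog the package comes from
  sourceNamespace: openshift-marketplace
  installPlanApproval: Automatic  # apply updates without manual approval
```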
B
It will make sure that any custom resource definitions that the operator depends on, but is not responsible for itself, will also be present, so you can essentially model dependencies between operators. It will also set up the required role-based access control that this operator needs, because an operator is basically creating things like Pods, Services, PVCs on your behalf, so it needs to have the permission to do that. OLM is responsible for granting that wherever an operator is made available, and then you can use the operator.
B
That, in a nutshell, is what the Operator Framework aims to do. So not only do we want to make it easy to adopt the operator pattern and get your mind wrapped around treating applications this way, using custom controllers on Kubernetes; we also want to make that something that can run in production over an extended period of time. Because, let's face it, if operators are successful, these will be long-lived services in your cluster. They are very much comparable to the managed services you'll find in a public cloud, like Amazon RDS.
B
So that, in a nutshell, is what the Operator Framework does. Now, this seems a little bit convoluted, so we aim to make this experience, especially in OpenShift, very straightforward, by implementing a marketplace, or App Store-like experience, that we call OperatorHub, and I can actually give you a little bit of a preview of how that is going to look in production. So I have here somewhere my OpenShift 4 cluster; this is the OpenShift 4 beta version that's currently out.
B
You can actually try this yourself at try.openshift.com. It's deployed, and hopefully I'm still logged in; yes, looks like it. Now, in the Catalog section there is a component called OperatorHub. This is our, if you will, App Store for operators. This is where I, the user, see what operators the administrator has made available to me; remember how the package was delivered to the admin, and they made it part of a catalog the Operator Lifecycle Manager is aware of.
B
This is what this will look like in OpenShift. So from here I could say: oh, the application I'm currently running or developing needs a persistent, distributed key-value store; what better to use than etcd? And thankfully there is an operator for etcd available that relieves me from knowing anything about how to run an etcd cluster in production.
B
So I can simply go here, read about what this operator does for me, what etcd is in the first place, and what the operator can do, like automated updates, for instance, and high availability, and, most importantly, I can directly install it from here. During installation, as I mentioned, I create a subscription to a specific channel; in this beta version there's only the alpha channel available. I can also set the target namespace here; this will look a little bit different in production OpenShift.
B
It will look a little bit different once we're at 4.0 GA, but basically there's a setting here called the target namespace, and the concept of an OperatorGroup controls in which namespaces this operator will listen for its CRDs. So you can configure an operator to only watch one particular namespace for custom resource definitions, like EtcdCluster, EtcdBackup, EtcdRestore, to be created. Now, in this beta, I only have global, all-namespaces operators available.
B
That means the etcd operator will be available and listening in all namespaces throughout the cluster. In the GA version of OpenShift 4 you will be able to pick particular namespaces here; normally you would pick the namespace that you're currently in. OK, we select the channel, and then there's a thing called the approval strategy, which has to do with how we actually give you updates.
B
So you have a choice there; I'm selecting automatic at the moment, and I press install. Now, there's a little bit of a glitch in the UI, since it's a beta version, so you don't really see what happened, but if you click on show community operators again and scroll down, you'll see that now the etcd operator is installed. So how do I use this? As a user, I go to installed operators; this is what a normal user would see.
B
This is also stuff that goes into this metadata file that I told you about earlier, the ClusterServiceVersion. So if I were to say I want to create an etcd cluster, I could say create new here, and I'm greeted with example YAML of how I would basically create an etcd cluster. And this is really what makes this feel native to OpenShift and Kubernetes, because I create it in the very same way I create Deployment objects.
B
It's not generic; it's not "which image do you want to use, which ports are you going to expose." It's specific to the application, and all I actually need to know in order to make an etcd cluster appear is how large it's going to be and what version I want it to be running. So let's say I'm fine with the defaults: the size was three, and the version is 3.2.13 (sorry, outdated, actually). I can just go ahead and say create, and then I'm thrown back to the list view.
B
It shows me all the instances of this particular new resource type, EtcdCluster, that are available in my cluster. If I click it, I can see the definition it's currently creating, so I can see the resources; in a couple of seconds, hopefully, there will be pods spawning up here, and I will be able to see how an etcd cluster is actually made up.
B
So, if the demo gods are with me; maybe there's a bug in my setup, given that it's beta, so this may not be working right now, but you kind of get the concept. You basically create application objects instead of primitive things like Pods or Services, and have the operator take care of all the additional steps that you need to do in order to run a distributed application, which consists of more than one instance and type of an application.
B
Okay, I don't want to take up too much time, but hopefully you got the gist of what the experience will be like when OpenShift 4 releases. It will be fairly UI-driven and very straightforward, even if you have never used that application before. That's what we want: we want you to be able to use the application without being the expert in running and maintaining it; that's the job of the operator.
A
All right, well, there are a few questions, I think, in the chat, if you want to pop over there and take a look. Julian is asking, and Peter Larsen has been doing a pretty good job of answering them, but I think maybe we might want to take a look at a couple of things just to clear them up verbally for the recording. Julian is asking: is the OperatorHub admin-only access?
B
It is; but you say it is only for developer users; well, I guess that's autocorrect. We want developers to basically pick and choose operators from this internal app store, if you will. Now, what you've seen in addition is the operator management UI, which is there because I'm also an administrator in this beta cluster; a normal user won't see that. So that would really be the only entry point to install an operator. Now, I'm lying a little bit here, because there's actually a second entry point, in the developer catalog.
B
In order to keep consistency for, you know, users that have come from OpenShift 3.x versions, we also surface the operators in the Service Catalog, in the developer catalog, where you usually have all the templates and the other images available. This is just for visual consistency; OperatorHub is kind of the user-facing portion of that, and the operator management section is actually the UI of the Operator Lifecycle Manager.
B
Okay, the other question is, and it's actually very interesting: how are CRD kind names resolved when more than one operator uses the same name? For example, two operators which both use EtcdCluster: can they share the CRD? It's a very interesting question, and we have actually implemented logic for this in the Operator Lifecycle Manager.
B
So that's another benefit of using that component: the Operator Lifecycle Manager will look at which API, specifically which group, version, kind, is offered by a particular operator in a particular namespace, and it will deny another operator from being installed in, or watching, that namespace if it watches or owns the same group, version, kind. So we explicitly catch that case and disallow making that operator available in this particular namespace, because there is already an operator which owns, as we call it, these particular types of custom resource definitions.
B
All right, the next question is: when will OperatorHub launch? Will downstream operators initially be included, and if not, what is the target for downstream operators to be included? Okay, so I'm guessing you're referring to a community version of OperatorHub. Right now OperatorHub is kind of a thing that's only inside of OpenShift, and you only see it when you have the OpenShift 4 beta running. We plan to have a community version, a central place, and I have a mention of that in my presentation, for community content as well.
A
So please stay tuned and look for that. We're also going to be hosting a very operator-centric OpenShift Commons gathering on March 11th in Santa Clara, in Silicon Valley, and if you're interested in coming to that, let me know, ping me, and I'll make sure that you have a pass, so you can come and join us. All of that content will also be uploaded and added to the YouTube channel, so if you can't come to Silicon Valley, you'll be able to watch it online as well.
B
Yes, Peter's right: operators, and specifically the embedded OperatorHub, were already available in 3.11 as a beta version, and they will still be available. So we have all the community operators there as well, so theoretically people can use them, and so far I haven't seen operators that depend on things that we do specifically in 4.x, so 3.x users can still benefit from the operators that we have.
B
The visual experience in the console on 3.11 may differ a little bit from what we have in 4.x, because we have a brand-new console implementation in 4.x, but effectively you keep both options in both versions: you can use operators in 3.11, you can use templates in 3.11, and you can do the same thing on 4.
B
Yes, and the mention of templates brings up an interesting point, because I think I've been asked this a couple of times now as well, specifically in the context of Helm. So far we have been advocating the use of OpenShift templates versus Helm charts, but people have gotten to love Helm charts, and there's a wide variety of Helm charts out there. They are now usable with the SDK, which is obviously also something that works on 3.11.
B
Basically, let me just wrap up with the last slide, giving you a little bit of an outlook on what we plan in terms of roadmap for the SDK and, with this, specifically OLM. We are looking to give developers using the SDK way more guidelines and guardrails when it comes to testing operators, because our customers are trusting operators to run their critical applications in production reliably. So we will have many more facilities inside the SDK to aid with end-to-end testing and also to validate maturity.
B
We are specifically creating a utility called scorecard that is able to analyze your operator and test your operator in a black-box-like session, in order to ensure a certain quality and certain aspects that we deem best practices.
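A rough sketch of the kind of black-box checks such a tool might run (a hypothetical model; the real scorecard runs your operator against a live cluster, and the check names and the `App` resource below are made up): create a custom resource, then score whether the operator behaved as expected, for example whether it wrote a status block back.

```python
def check_spec_present(cr: dict) -> bool:
    """A CR without a spec gives the operator nothing to reconcile."""
    return bool(cr.get("spec"))

def check_status_set(cr: dict) -> bool:
    """Best practice: the operator should write a status block back."""
    return bool(cr.get("status"))

def score(cr: dict) -> float:
    """Fraction of black-box checks the custom resource passes."""
    checks = [check_spec_present, check_status_set]
    passed = sum(1 for check in checks if check(cr))
    return passed / len(checks)

cr = {
    "apiVersion": "example.com/v1",   # hypothetical CRD
    "kind": "App",
    "spec": {"replicas": 3},
    "status": {"readyReplicas": 3},   # written back by the operator
}
print(f"score: {score(cr):.0%}")  # both checks pass
```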
So far we also always had challenges with making operators run outside of OpenShift clusters, or even on OKD clusters, because they are based on RHEL images. Now there's a new image coming up that is free from any of these restrictions.
B
We want to basically be able to stage every application update and every operator upgrade, and since these will happen quite frequently, quite regularly, we need to make sure this is bulletproof. Hence we are giving people that are developing operators a hand in making sure this is the case, with templates for Jenkins pipelines or similar CI pipelines, for instance. Then, last but not least, I wanted to mention that we are collecting community operators that are packaged for use with the Operator Framework
B
in a repository that you see here on the screen. Here we are basically collecting existing operators: they don't have to be written with the SDK, they can be, but it's not a must. We basically started to collect operators that have been packaged with the metadata I was describing in the beginning, so they can be installed and updated through the Operator Lifecycle Manager. This is a community process where you can basically get your operator in by means of a single pull request. We will review that.
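Such a packaged submission boils down to a directory of metadata files; here is a minimal sketch of the kind of sanity check a reviewer or CI might apply (the directory layout and file names are illustrative assumptions, not the repository's actual rules):

```python
import pathlib
import tempfile

def validate_package(pkg_dir: pathlib.Path) -> list[str]:
    """Very rough sanity check: a package manifest plus at least one CSV."""
    problems = []
    if not list(pkg_dir.glob("*.package.yaml")):
        problems.append("missing package manifest (*.package.yaml)")
    if not list(pkg_dir.glob("*.clusterserviceversion.yaml")):
        problems.append("missing ClusterServiceVersion")
    return problems

# Build a toy submission in a temp dir (names are hypothetical).
root = pathlib.Path(tempfile.mkdtemp())
pkg = root / "my-operator"
pkg.mkdir()
(pkg / "my-operator.package.yaml").write_text("packageName: my-operator\n")
(pkg / "my-operator.v0.1.0.clusterserviceversion.yaml").write_text(
    "kind: ClusterServiceVersion\n"
)

print(validate_package(pkg))  # → []
```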
B
With this slide, here are some of the resources that help you get started if you haven't done so already. If you want to talk to us, we're on the Kubernetes Slack as well, in the kubernetes-operators channel, and there's also a Google Group where you can ask questions about the Operator Framework, or about using the Operator Framework to create or maintain an operator. With that, I'll hand it over to you, Diane. Thank you. Alright.
A
Well, stay tuned for next week. Next week, if you're an Ansible person, we'll have operators for Ansible people on deck in an OpenShift Commons briefing; I believe it's next Thursday, same time. And as I said, we'll continuously be updating you on what's going on in the operator space as well. I highly encourage you to use that last link on the slide there, the groups.google.com one; that's where most of the conversations are going on, as well as in the Kubernetes Slack channel.
A
So please do take your questions there and post them, especially since if we can answer them once, then everybody can read them; if you're having a question, I'm sure somebody else is too. But thanks again, Daniel, for taking the time, and everybody for joining us today. We look forward to hearing from you and showcasing your operators, if you have them.