From YouTube: London OpenShift Commons Gathering 2019 State of Operator Framework Daniel Messer Red Hat
Description
London OpenShift Commons Gathering 2019 State of Operator Framework
Daniel Messer - Product Manager, Operator Framework
Matthew Dorn - Senior Software Engineer, Red Hat
Michael Hrivnak - Principal Software Engineer, Red Hat
A
So I wanted to quickly introduce our developer representatives here, because Matt is also going to drive us through a quick demo, and after this session we have an ask-me-anything panel. Michael is also going to be present, so there's enough technical capacity on stage for you to ask difficult technical questions, right? So let's dive in. Operators, I think, are something that you have heard about already.
A
They are quite the buzz right now; they were quite the buzz at KubeCon at least. And we have accumulated a representative list of operators that are out there after CoreOS basically introduced this paradigm of application-specific controllers on Kubernetes two years ago on their blog, right? So this has really taken off quite well and is well adopted, and the reason for that is not hard to find.
A
So the reason why we want operators, and why we need operators, is basically to keep things simple in a world of increasing complexity. So why do we need these operators? Well, with the introduction of containers, we have certainly seen the life of the developer getting much easier, right? Now you don't need to be an expert anymore in order to get a Postgres database, like in this example, stood up.
A
You don't need to be familiar with the intricacies of a Postgres database setup and configuration to get this running, just to develop an application that uses the database backend. You can start it up really easily. So that's all well and fine, but what about operating this in a production environment? The day-two operations are really important when it comes to Kubernetes and OpenShift.
A
Even with systems like Kubernetes, we haven't quite reached a level yet that would translate into production systems with containerized applications. The challenge that we have is that now, with this abstraction layer around the application, we cannot reuse the existing concepts of backup, failover and restore anymore, right? You cannot just log into the container and install something there. We need to find another way, and some operational logic is already in Kubernetes.
A
It is, however, I would say, aimed more towards stateless applications, right? It's not hard to run a set of cloned application instances with a Deployment or ReplicaSet. What is really hard is when these applications start to have some state, state that needs to be maintained over the lifecycle of the application.
A
The application might be part of a cluster, where it needs to join the cluster first, exchange some state information and come to a consensus in this cluster before it's actually operational, right? And when you scale this cluster, that needs to happen again and again. When you update this cluster of applications, you need to do it in a graceful way, so your service stays operational.
A
So the minimal entry points for additional logic that StatefulSets and ReplicaSets give us are not enough for complex applications, and that's where operators come in, leading to the question: what really is an operator in the end? An operator is application-specific controller logic for Kubernetes. What we achieve with this is something we call a Kubernetes-native application, because an operator uses the Kubernetes APIs to create more primitive objects like Deployments, Services and Pods to make the application run, and it runs on Kubernetes itself, inside of containers.
A
So, at the end of the day, an operator is nothing more than, for instance, a Golang application in a pod on the OpenShift cluster that obviously has access to your OpenShift and Kubernetes APIs and is able to react to certain events and reconcile the state of those objects. What it really is, in the end, is a custom controller, and it works in the very same way the existing controllers on Kubernetes and OpenShift work.
A
You have certain objects that are present, like a Pod or a Deployment, and then you have controllers that look out for these objects appearing or being changed, right? They look at the state these objects desire, they look at the state of the system, and then they try to match the two with something that we call reconciliation.
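The reconciliation pattern described here can be sketched as a toy control loop. This is a minimal model of the idea only; it does not use the real client-go or controller-runtime APIs, and the state types are invented for illustration:

```go
package main

import "fmt"

// desiredState models what a user declared in an object's spec,
// e.g. the replica count on a Deployment.
type desiredState struct {
	Replicas int
}

// observedState models what actually exists in the cluster right now.
type observedState struct {
	Pods int
}

// reconcile compares desired and observed state and returns the actions
// a controller would take to converge the system: the core of the loop.
func reconcile(desired desiredState, observed observedState) []string {
	var actions []string
	for observed.Pods < desired.Replicas {
		actions = append(actions, "create pod")
		observed.Pods++
	}
	for observed.Pods > desired.Replicas {
		actions = append(actions, "delete pod")
		observed.Pods--
	}
	return actions
}

func main() {
	// The user asked for three replicas but only one pod exists,
	// so the controller must create two more.
	fmt.Println(reconcile(desiredState{Replicas: 3}, observedState{Pods: 1}))
}
```

A real controller is triggered by watch events and patches API objects rather than returning strings, but the desired-versus-observed comparison is the same shape.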
A
But it does this for a particular application, with the application-specific logic in mind that you need in order to put this application reliably into production and keep it in production over an extended period of time, because we are not going to just deploy these once and then throw them away again in ten minutes. These applications, especially stateful applications, will be there for much longer, right? They need to survive updates, they need to survive failures.
A
The Operator Framework aims to make it really easy and straightforward to create operators, make operators available in your cluster, maintain these over time, and provide insight into how the applications managed by operators are working and the amount of resources they are consuming. So the framework itself consists of three parts. The first part is the SDK.
A
This is something that a developer uses in order to create an operator. What's necessary to create an operator is basically a piece of software that talks to the Kubernetes and OpenShift APIs. Kubernetes itself is written in Go, so it's kind of natural that this software is also written in Go and uses the client-go library and the controller-runtime in order to talk to Kubernetes.
A
So you use these components to write a project that makes sense for your application. When you buy into this concept, you start introducing new objects in your Kubernetes cluster that resemble application instances. Now, next to Services, Pods and Deployments, I have an object that represents an etcd cluster. That is a new object in my system.
A
It is fairly typical that you will not only have one of these controllers but multiple, and you also need to be able to maintain these controllers over time, as they are updated with new features and with access to newer versions of the software they manage. So there needs to be a lifecycle component in there, and that's what the Operator Lifecycle Manager does. And last but not least, we also want to give customers some insight into the resource usage.
A
Now the operator defines that, and the operator also instructs the Kubernetes API to forward any events that come from these kinds of resources to the operator, so it can be notified and start the control loop. The control loop is the piece of code that you write that makes up the heart of the operator, and then you have what we call a Kubernetes operator, or Kubernetes-native application. So that is the first step, and you have basically three options to get to this step.
A
The first option I already named: writing it in Go. While that is certainly the most flexible option you have, it is also the one that comes with the steepest learning curve. Even though we simplify and abstract a lot of the, let's say, mundane tasks of writing a Go-based controller, there are still things you need to understand: the way you receive updates, when you receive events from the system, how you should write out the status, and so on. That's why the Operator SDK gives you additional ways to write an operator.
A
You can, for instance, write an operator with Ansible playbooks. That is, you can basically create an operator that defines certain objects, let's say MySQL, to implement a MySQL database, and upon events coming from these objects, upon these objects appearing, the operator would forward that event data to an Ansible role or an Ansible playbook. In that Ansible playbook you have access to all the data of this event and you can act upon it.
A
You can use Ansible modules to create new Kubernetes objects, new Pods, new Deployments, new StatefulSets, whatever you need. You do this in Ansible, which a lot of people are familiar with these days, and you can concentrate basically on the logic that runs inside this reconciliation loop. You have a certain state that is desired by the user, which is what you receive with the event, and you have a certain state that the system currently is in. The user says: I need one MySQL database; the system doesn't have a MySQL database yet.
A
So what do you do in an Ansible playbook? You create all the objects and all the things that are necessary for a production-grade MySQL database, and that's it. So with Ansible you are pretty flexible as well. You can model almost any scenario, be it the simple installation of a workload.
A
Or rolling updates to a new version, where additional custom steps beyond just changing the pod's image might be necessary, all the way to more complex scenarios like backing up the database or restoring it from a backup, or a managed failover or failback. So this is a very nice way to make the area of creating an operator approachable to mere mortals who are not breathing and living Golang every day, right?
A
Another, even easier variant is the Helm operator. Helm is a very healthy ecosystem of what they call charts to basically deploy software in a known way, right? So as a user who's not very familiar with, let's say, how to deploy an Apache Kafka cluster, I would be able to find a Helm chart for that and just install that chart, and in that chart various things happen that eventually lead to an installed Kafka cluster.
A
So I can basically abstract at that level and don't need to be an expert on how to set up Kafka. It comes with a little bit of a drawback, though, because Helm charts need a component running inside your cluster that will do all these actions, and this component, called Tiller, needs certain privileges in order to have the permissions to create all these objects. So a Helm-based operator is basically created from a Helm chart, and it's really easy.
A
It's basically a way of creating an operator without writing a single line of code. You run this command, which is part of a new version of the Operator SDK that we are about to release, and it will take any existing Helm chart and turn it into an operator. That doesn't give you things like backup or disaster recovery, but it gives you a fairly reproducible way of installing a sufficiently complex piece of software in your cluster without having Tiller present.
A
What you get is a container image that you deploy on your cluster. You basically make sure that the custom resource definitions are there as well, so the events from these custom resources can be forwarded to the operator, and then the operator basically runs the Helm chart. As of today, this requires the machine where this command is run, basically the developer's workstation, to have Helm present, but in a later release of the Operator SDK we will be able to scratch that requirement as well.
A
So you can create an operator out of an arbitrary set of Helm charts, and the way you then configure these applications is using the values.yaml file that the Helm chart gives you. A Helm chart comes with a set of configuration variables that you can influence in order to decide how your application is going to be deployed. You will be able to copy and paste that into the custom resource of that operator, in this case CockroachDB, and just use it there.
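That copy-and-paste step can be pictured like this: whatever the chart's values.yaml exposes becomes the spec of the operator's custom resource. A rough sketch, with invented value keys and a placeholder API group that are not taken from the real CockroachDB chart:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// customResource models the object a Helm-based operator watches; the
// operator hands everything under spec to the chart as its values.
type customResource struct {
	APIVersion string                 `json:"apiVersion"`
	Kind       string                 `json:"kind"`
	Spec       map[string]interface{} `json:"spec"`
}

// fromHelmValues wraps a chart's values in a custom resource of the
// given kind. The group name below is a placeholder, not a real group.
func fromHelmValues(kind string, values map[string]interface{}) customResource {
	return customResource{
		APIVersion: "charts.example.com/v1alpha1",
		Kind:       kind,
		Spec:       values,
	}
}

func main() {
	// Hypothetical settings copied out of a chart's values.yaml.
	values := map[string]interface{}{
		"replicas":    3,
		"storageSize": "10Gi",
	}
	out, _ := json.MarshalIndent(fromHelmValues("CockroachDB", values), "", "  ")
	fmt.Println(string(out))
}
```

The design point is that the user never learns a new configuration language: the knobs the chart already documents are the knobs the custom resource exposes.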
A
So this is a way of creating an operator which is able to run and update your application without writing a single line of code at all, which is pretty cool. Now we have hopefully lowered the bar low enough to get a lot of people interested in writing really high-quality operators. What happens now, at this stage? I have the operator itself, usually packaged into a container, that I would run as part of a Deployment or StatefulSet.
A
So this is a bunch of files, basically, and you could now go ahead and say: okay, I'm going to run oc apply -f on all of that, and that magically appears in the namespace that I'm currently in. That will deploy your operator in one particular namespace, and it would just work. The drawback is that this is one operator in one namespace. So how do the others get access, and what happens when this thing gets an update? Are you going to run this again? What happens if you change the deployment object of the operator in the meanwhile?
A
So we need some sort of lifecycle component to manage that, right? What you typically do is you take all this metadata. I've called it metadata here, but it's really YAML manifests in the end, describing custom resources, RBAC and so on, and you package this up in what we call a bundle. An administrator can then take this bundle and present it to the Operator Lifecycle Manager. The Operator Lifecycle Manager does the same thing that the yum package manager did for RPMs, right?
A
We could, for sure, go in and run rpm -i to install an arbitrary RPM, but that's not how we do these things today. There are things like upgrade handling and dependency resolution that we have become very accustomed to, so we want pretty much the same thing with an Operator Lifecycle Manager as well. We want a component that is able to detect that there might be a version of that operator already existing.
A
We might want to be able to detect that you basically have to pull in certain dependencies in order to make that operator run, and we want to be able to give users a front end to this, to see what operators are available. So we want to have something like a yum repository, right, or a list of RPMs that are available for install. That's what the OLM gives us. It gives the user a way to basically install packages.
A
So when a user lists packages, he can then subscribe to one, and it's actually called a subscription. A subscription is basically the intent of the user to say: I want to have an operator in my namespace. And that's not just a one-off thing; that's a continuous subscription, right? So at some point, when the administrator imports a new version of a particular operator, that subscription would reflect this fact and would be able to automatically update the operator as well.
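The continuous nature of a subscription can be modeled as a walk along an update channel: whenever the catalog gains a newer version, the subscription moves the installed operator forward. A simplified sketch; the channel and version names are illustrative, and OLM's real resolution is richer than this:

```go
package main

import "fmt"

// channel models an update channel in a catalog: an ordered chain of
// operator versions, oldest first.
type channel struct {
	Name     string
	Versions []string
}

// nextUpdate returns the version a subscription would upgrade to next,
// or "" if the installed version is already the head of the channel.
func nextUpdate(ch channel, installed string) string {
	for i, v := range ch.Versions {
		if v == installed && i+1 < len(ch.Versions) {
			return ch.Versions[i+1]
		}
	}
	return ""
}

func main() {
	stable := channel{
		Name:     "stable",
		Versions: []string{"etcdoperator.v0.9.0", "etcdoperator.v0.9.2", "etcdoperator.v0.9.4"},
	}
	// A subscription currently on v0.9.2 would be offered v0.9.4 next.
	fmt.Println(nextUpdate(stable, "etcdoperator.v0.9.2"))
}
```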
A
So upon creating a subscription in my namespace, I automatically get an operator instance. As a user, I'm not exposed to the intricacies or details of how an operator is actually deployed. I ask OLM to do that for me, be it resolving dependencies or updating to the newest version, and then I can talk to the operator and say: create an application. This application is represented by a new object in my Kubernetes and OpenShift cluster.
A
Any questions so far on this? It looks a bit complicated, and that's why we have, in OpenShift, made this a little bit simpler by introducing something we call the OperatorHub. Think of the OperatorHub as something similar to what the Docker Hub is for Docker containers, right? It's a catalog of community operators, of Red Hat-provided operators, and of third-party operators that Red Hat certified, that are available to install in your cluster.
A
OperatorHub is a component that will show up in the OpenShift 4 console, and you will see that in the demo very quickly. It will allow you to select, from an app-store-like experience, which operators you want to make present in your cluster. So you don't need to be that admin who picks up packages, puts them into OLM and then has users query OLM for things that are called packages, which at that point aren't really even called operators, to make these available. The OperatorHub console does that for you.
B
All right, so you're looking at an OpenShift 4.0 cluster. Has anyone here tried OpenShift 4.0 yet, at try.openshift.com? Yeah. If you haven't, go ahead and check it out. This is deployed straight from a binary downloaded there. What we're looking at here is the OperatorHub. Daniel just talked about the OperatorHub; this is the area where we actually have what are known as catalog sources. So we have our Red Hat certified operators here.
B
Then, if I click on here, I can show community operators, and you can see that we have a variety of community-based operators. In here is the etcd operator, which we're going to go ahead and deploy, and this, on the back end, is actually what's referred to as a catalog source, a collection of manifests. So we're going to go ahead and click this here and install the operator.
B
So it's the etcd operator, and it's the latest, greatest, cutting-edge one that the etcd operator author has published, and we're going to go ahead and automatically approve it. At this time, once I subscribe to this, it is going to go and install the deployment, which contains the operator image, the role, the role bindings, and the service account that is basically called etcd-operator, which is responsible for actually running that deployment. And let's see if we can see it running here.
B
So at this point the operator is actually running in the environment. It is running within this openshift-operators namespace. I'm then going to go and actually deploy the etcd cluster. The custom resource definitions were automatically deployed when I went and installed the operator, and now I'm actually going to create a kind: EtcdCluster. As you can see, it's referring to that custom resource, EtcdCluster.
B
It's going to be three nodes and it's going to be version 3.2.13, and what populated this was that, in the actual operator manifest, when we created the subscription, we decided that this would be the default template for this EtcdCluster. So this is all specified in something called a ClusterServiceVersion.
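The object being created in the demo boils down to a very small manifest. As a sketch, here is a simplified Go model of that custom resource; the real etcd operator's types carry more fields than shown, so treat the structs as illustrative:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// etcdClusterSpec mirrors the two fields shown in the demo: the cluster
// size and the etcd version. This is a simplified model, not the
// operator's actual type definitions.
type etcdClusterSpec struct {
	Size    int    `json:"size"`
	Version string `json:"version"`
}

type etcdCluster struct {
	APIVersion string          `json:"apiVersion"`
	Kind       string          `json:"kind"`
	Spec       etcdClusterSpec `json:"spec"`
}

func main() {
	// The "example" cluster from the demo: three members, etcd 3.2.13.
	cr := etcdCluster{
		APIVersion: "etcd.database.coreos.com/v1beta2",
		Kind:       "EtcdCluster",
		Spec:       etcdClusterSpec{Size: 3, Version: "3.2.13"},
	}
	out, _ := json.MarshalIndent(cr, "", "  ")
	fmt.Println(string(out))
}
```

Everything else in the demo (pods, services, peer configuration) is derived from this one object by the operator's reconcile loop.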
B
Let's go ahead and click on our cluster. It's called example, a very real-world example here, and we have one member of that etcd cluster currently up. And again, I just want to clarify here: this is for your apps that are running on Kubernetes to access, okay? This has nothing to do with the backend etcd that is actually powering our environment here. This is for an application running on top of Kubernetes that needs a key-value store like etcd.
B
Also, you'll notice a couple of things here: how we have the size, and it's showing us the member status. In that catalog source that Daniel was talking about, within the ClusterServiceVersion manifest, we actually have the ability to put in some hints, some UI hints, that the OpenShift UI can react to. That's what's able to show us the member status and the size here, so there's ways for us to do that. As well, if you notice in the back here, there's the actual image appearing.
B
That's just a base64-encoded image that's also contained in that ClusterServiceVersion manifest, okay? So we have three members up; the etcd cluster is up and running at this point. Let's see if we can find our pods here: desired number, see our resources. These are the resources that have been created, these are our three pods, and I could go in here, and there's my terminal environment, so I'm within the pod.
A
Simple enough for me, but I think the really cool thing here is that you didn't deploy a whole set of YAML files containing Deployment specs, Services, ConfigMaps, Secrets, PVs and whatnot to make a three-node etcd cluster appear. And even if you would have done that, that wouldn't have gotten you any coordination between the three instances, right? Because they are not aware of each other; at this point, with no configuration injected, it's very hard for them to find each other. So the etcd operator did all of that.
A
For us, all we had to do was express a simple object called EtcdCluster with spec.size equal to 3, and that's it. And you could customize it a little bit further. You could change the version, for instance, or you could change the storage class it's using to keep etcd's data on a PV. But that's it. That's the simplicity I was talking about in the beginning, that we need to achieve in order to keep up with this growing ecosystem of software that we see on Kubernetes.
A
So now, with an operator, it's like having a friendly administrator sitting at a desk next to you, whom you could just ask: hey, you know what, I have this application here, and I think it requires a key-value store that's pretty durable and production-ready. Could you just deploy me an etcd cluster with whatever best practices you have? Well, now you have that in an operator, and you can talk to this operator with basic Kubernetes means like YAML manifests, and on OpenShift with a pretty UI, right?
B
And I also want to add to that: as an OpenShift administrator, you can delegate the installation of operators to users you may not want to give full cluster-admin access to, as far as the whole cluster goes. So a simple developer, who perhaps only has access to one or two projects, can have the ability to go and deploy an operator with the associated CRDs, right? They don't have to have cluster-admin access, so pretty powerful stuff.
A
Obviously we are also going to invest in cross-platform support, so we will also support this on non-OpenShift clusters, and we will provide a mechanism where you have an image that is valid to run on both plain community Kubernetes as well as the enterprise OpenShift Container Platform. We will also make this accessible to partners. Our goal here is to create an ecosystem of operator vendors that provide really good day-two operations.
A
That means implementations of popular software like Postgres, for instance, or MySQL. One of those vendors is here today: Crunchy Data has built an operator to basically deploy a production-grade Postgres database with read replicas and failovers and whatnot. So we are also going to open up this tooling to them, and the console you have already seen a demo of, what's now going to be known as the OperatorHub. And for that, what we are also going to provide is a place where community operators are gathered together.
A
So we have this awesome-operators list in the CoreOS repository. That's just a list that people keep maintaining, but what we are also creating is an upstream catalog of operators, and we call this OperatorHub.io. It will have the community operators that you've just seen from the OpenShift console, and that is also a way for you to contribute to this ecosystem.
A
This is the GitHub repository where we explain the process around how you would contribute an operator that you have written, and it doesn't matter if it's been done in Helm or Ansible with the SDK, or in Go, or if it doesn't use the SDK at all. The only thing that you need to provide is a little bit of metadata to make these nice graphics appear, like we have seen in the console, and then we are able to put this up.
A
We put this into an application registry that we host on Quay, and that is going to be the source for all OpenShift clusters. So all you have to do is basically create your operator, start a pull request against the repository, and after some vetting on our side we will merge it, and it ends up being hosted on Quay. From there it finds its way to the user via the OperatorHub in OpenShift and OKD.
A
So hopefully this makes it much, much easier to get operators up and running. Here are some links on how to get into this topic of creating operators and to find our community operator repo. We also have a SIG on that, which is meeting regularly; the Kubernetes channel is extremely busy these days on operators, and we also have a Google Group forum address for communication.
A
So hopefully we could provide you with some insight into what's happening in the ecosystem of operators, how easy it is these days to actually start creating an operator, like that single command that turned the Helm chart into a fully functional operator, and how this comes into play in a production cluster and is maintained over time. Any questions? The experts are here. The AMA panel is starting soon. Until then, thank you very much and enjoy the evening.