From YouTube: State of Operators: Framework, SDK, and Hubs with Rob Szumski, OpenShift Commons Gathering 2019

Description: State of the Operators: Framework, SDKs, Hubs and beyond, with Rob Szumski (Red Hat) at the OpenShift Commons Gathering 2019, Red Hat Summit.
A: If you don't know what an operator is, this next talk is going to get you up to speed and hopefully showcase some of the work that our wonderful partners have done building operators and making their services available on OpenShift. We're really thrilled that this new technology is part and parcel of the OpenShift ecosystem now. It looks like most of you are coming in, so I'm going to stop talking and introduce one of my favorite PMs, Rob Szumski, and let him talk a little bit about Kubernetes operators. Thanks.
B: So we've covered a bit about operators and how we use them on the platform a little earlier this morning. I want to talk mostly about operators running applications. You got a little sneak peek of this at the very end with the AMQ Streams Kafka operator, but before we get into the specifics, I want to step back and talk about where operators come from.
When you talk about operators, it really maps back to how we've run applications on Kubernetes, and that roughly follows three main cycles. First we had our stateless apps. These were the scale-out nginx front ends and caching tiers and things like that, which were very easy to run using ReplicaSets and Deployments, and this was the first wave of applications to run successfully on Kubernetes. Very quickly after that we got to stateful applications, and we had to make some changes to Kubernetes to actually make that happen.
This is where we get StatefulSets and the Container Storage Interface, moving pod volumes around as pods get rescheduled from node to node, and this unlocked stateful services for the first time. So you could run a very simple Postgres database and other things that used local state, and this was great: now we had a second wave of applications coming onto Kubernetes and everything was great. Then we got to the complicated part.
This is running full distributed systems on top of Kubernetes, and we started to hit some boundaries just in terms of the objects that are in Kubernetes. We've been building off of these primitives that the community works so hard to build, but there's no Kubernetes object for data rebalancing, and there are no objects for backup and restore. We've got some primitives for autoscaling, but triggering those needs to hook into your application.
The same goes for doing a seamless upgrade: we can do rolling updates of containers, but that means something different in a complete distributed system, and so we have to think at a layer a little higher than Kubernetes. We can, of course, use all the great tools in Kubernetes to make this happen. So what is the thing that makes that happen? It's an operator, and at its core an operator is taking knowledge of how you run an application.
The knowledge that an ops team has when they're following a runbook or a wiki, or maybe some failover scripts for taking a standby database and making it into a primary: all that expertise in running your application is embedded into an operator by those experts, and then you've got this immutable artifact that can be deployed on a cluster. We like containers because they're immutable.
Now you have that for an operator, but it's a complete distributed system that that operator represents. Of course, the operator at the end of the day needs to make all these Kubernetes objects we just talked about, so it's going to stamp out StatefulSets and Deployments and make Secrets and ConfigMaps and all of that, and using those Kubernetes primitives is really powerful for a few different reasons. The first is that you can now have extremely flexible application architectures, like your developers are used to if they were stamping out raw VMs or whatever orchestration environment they came from. You can construct any of these applications: it doesn't have to be a three-tiered app, it doesn't have to be a certain type of workload, it can be anything. And because you're using all these Kubernetes primitives, you're not reinventing the wheel. Nobody here needs to reinvent secret storage or service discovery.
All these things are handled by Kubernetes for us, so we can use that toolkit we have in the ecosystem, which is always getting better and better, to build these applications. And because you're using Kubernetes, you also get a uniform debugging experience. If you've got an engineer on one team, and an engineer from another team happens to be on call, and they want to get streaming logs out of a pod, they know exactly how to do it.
If they need access to a certain namespace in a cluster, they know how to use role-based access control. All these things are really nice when you've got cross-pollination: you can have teams sharing Jenkins or CI pipelines, and maybe they don't share 100% of it, but they share 95% of it, which is really powerful. And the most powerful part of Kubernetes is that it's hybrid. You can run this on any infrastructure, on a number of different Kubernetes providers. The operator concept is not specific to OpenShift.
We think we have a really great experience inside of OpenShift, of course, but it allows you to really address that hybrid market. I think a lot of folks here are typically running infrastructure in-house, but if you are a software vendor and you actually need to ship software out to a bunch of customers, what better tools to have than Kubernetes APIs and containers to get consistent deploys? You can ship an operator to those customers and they can stand it up.
So do we want that? Does that sound great? Is that the nirvana we're going to get to? Let's talk about how we get there. We've got the Operator Framework; you heard this mentioned earlier, and it's really focused on two different groups of folks, which roughly maps to what I just laid out: people that are building and shipping software, in either commercial entities or open-source communities, and the people running it.
The Operator Framework helps you get started building an operator: we've got a set of CLI commands and different technologies that you can use to instantiate one. We're going to dig into those in a little more detail, and it's a really nice way to have a standard set of tools for a bunch of different teams inside of your organization. And then for the application consumers, the folks actually running this software: now they have a really consistent way to deploy these applications and keep them up to date.
As Clayton said at the very beginning, there are more and more security vulnerabilities, and the life we're going to live now is constant updates as these CVEs come out, especially as you have more and more components inside of a complex distributed system. Kubernetes has had CVEs in its DNS server and other components, and your application is no different. There are always going to be these bugs, so you want a way to do application lifecycle management, and the Operator Framework has a tool just for that. Then these teams can consume these applications in a correct manner, using all that operational expertise baked in from the experts at Redis and Postgres and whatever you're running, because they know how to run it and they know how to secure it. And so you get this point-and-click, very self-service manner of deploying these applications.
The framework is made up of three different pieces of software, all targeted at that entire application lifecycle I just talked about. We've got the Operator SDK to help you build operators; this is a bunch of code generation and scaffolding that helps you get started really quickly. Then you've got the Operator Lifecycle Manager; this is the part that works on any Kubernetes distribution.
B
This
is
pre
installed
by
default
on
openshift
4,
and
this
allows
you
to
run
a
set
of
operators
on
a
cluster
and
manage
their
life
cycle.
So
if
you
need
an
update,
an
operator
from
version
one
to
version
2
and
then
that
operator
manages
you
know
a
thousand
database
instances,
but
you
might
just
not
have
one
of
those
are
gonna,
have
a
bunch
of
namespaces
and
do
access
control,
roll
out
different
versions
of
the
operator
in
different
sections
at
the
cluster.
B
First
lifecycle
manager
helps
you
with
all
that
and
then
lastly,
is
operator
metering,
and
this
is
the
idea
that
you
need
to
do-
have
operational
metrics
at
scale
when
you're
running
these
types
of
systems,
so
you've
got
a
database
operator.
That's
running
a
thousand
instances
of
Postgres.
You
want
to
get
operational
statistics
about
how
that
operator
is
managing
all
of
those
clusters.
How
this
specific
clusters
are
doing,
if
there's
any
hotspots
and
then
be
able
to
meter
against
any
sort
of
dynamic
metric
that
it
keeps
inside
of
its
memory.
This is kind of table stakes at this point: to have that full application lifecycle you need to get to phase 2, basically, and we're pushing all of our partners to get there; the OpenShift operators go a little further past this. Ideally, we all get to phase 5 really quickly, which is picturing the most perfect cloud service you've ever used: it does vertical and horizontal autoscaling, and it might tune itself differently based on the number of requests it's handling.
It knows how to hook into the Kubernetes scheduler to make sure it's picking just the best nodes for your application to run on; it's doing auto failover and auto remediation. Just picture never having to interact with this cloud service. We want to build all operators to fulfill that experience in a truly hybrid way, so you're not tied to a single cloud provider: you're using, say, a database operator or a machine-learning operator to get that same experience, but on infrastructure that you own. On the bottom of this slide you'll see three little bars.
These are the SDK flavors that we're going to talk about, and the idea here is that no matter what kind of expertise you have, software development or infrastructure ops, you have an SDK to build operators against that matches your skill set. We're going to dig into all of these in detail. It's worth noting that the Helm SDK is mostly for stateless and very simplistic stateful workloads, so it can get you to install and upgrade.
We also have a verification side of this, a little utility called the operator scorecard. This is black-box testing that observes your operator on a real cluster and makes sure it's following some best practices, and we're going to make it much more powerful over time. These all end up on OperatorHub, which you heard mentioned earlier. This is a community listing of operators that are high quality, fulfill some of the tenets we talked about here, and get to phase one or phase two in the maturity model, and ideally much further.
First up is our Helm SDK, and this is the easiest way to get started: quote, "no code." (This is different from Kelsey Hightower's no-code GitHub repo; have you seen that? It's kind of funny. Here we've actually written all the code for you, which is the beauty of it.) So if you have an existing investment in Helm charts and you want to run those in a more secure and better way, you can take those charts and build them into an operator.
What that looks like is that you have a custom object that your operator is listening for. In this case we're going to make a Tomcat operator, so we'll automatically pick up that Tomcat name and that's going to be our new object. If you're familiar with Helm, you have a values.yaml file that holds all the tunables for your chart; the operator pairs those two, templates the chart out with your values, and then, boom, out pop all your Kubernetes objects.
You basically hook this up to that custom resource, and so the spec of that object just becomes that values.yaml.
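The spec-to-values handoff can be sketched as a simple overlay. This is an illustrative stand-in for what the Helm SDK does internally, not its actual code; the function name and fields are made up for the sketch:

```go
package main

import "fmt"

// mergeValues overlays the custom resource's spec (the user's
// tunables) onto the chart's default values, the way the Helm SDK
// treats a CR spec as the chart's values.yaml. Shallow merge only,
// to keep the sketch short.
func mergeValues(defaults, spec map[string]interface{}) map[string]interface{} {
	merged := make(map[string]interface{}, len(defaults))
	for k, v := range defaults {
		merged[k] = v
	}
	for k, v := range spec {
		merged[k] = v // a value set in the CR spec wins over the chart default
	}
	return merged
}

func main() {
	defaults := map[string]interface{}{"replicaCount": 1, "image": "tomcat:9"}
	spec := map[string]interface{}{"replicaCount": 3}
	merged := mergeValues(defaults, spec)
	fmt.Println(merged["replicaCount"], merged["image"]) // spec override plus untouched default
}
```

The real SDK does a deep merge over nested values, but the principle is the same: defaults come from the chart, overrides come from the object's spec.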
So you have this really nice self-service way to instantiate a Helm chart, without running Tiller and the security issues that come along with it, and you also have something that's actively running in the cluster. If you think about how you use Helm today, it's triggered either by a Jenkins job or a human on a laptop inputting commands.
B
You've
actually
got
this
long-running
piece
of
state
which
helps
you
out.
So
here's
what
that
looks.
Like
in
detail,
it's
basically
just
like
we
like
containers
for
immutability,
and
we
want
to
have
all
of
our
artifacts
bundled
in
the
SDK
build
process-
is
actually
bundling
that
chart
in
our
code
into
a
container.
That container is the operator, and then you version it. In the middle there you'll see our Tomcat object, and we've got two values that we can tune, so engineering teams can now self-tune their instances: in dev you might want to turn things down, and in prod you turn things up.
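What such a custom resource might look like: the group, version, and field names below are hypothetical, since the actual CRD from the demo isn't shown in the transcript.

```yaml
# A hypothetical Tomcat custom resource. The operator watches for
# objects of this kind; everything under spec is passed through as
# the chart's values.
apiVersion: apache.org/v1alpha1
kind: Tomcat
metadata:
  name: tomcat-dev
  namespace: dev
spec:
  replicaCount: 1      # turned down in dev...
  heapSizeMb: 512      # ...turned up in prod
```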
The nice thing about this is that if you handed it off to another team to run, say an internal app that was based on a Helm chart, they can use this operator at a specific version, put in the same values, and always get the exact same result. This is really great for CI pipelines, and for handing it off to a team and saying: hey, here's the latest version of our thing, go run it, go test against it.
And because this is an operator using the Kubernetes extension mechanisms, it fits right into oc or kubectl: you can get Tomcats in all namespaces, and you'll see our two namespaces with two different versions of our operator running there. That's really great; that's the experience you want. You get this common debugging across the cluster.
B
Our
our
ansible
sdk
works
much
the
same
as
this,
but
it
allows
you
to
get
a
little
bit
further
across
that
maturity
model
and
so
once
again,
you're
taking
existing
investment
in
any
ansible
playbooks
that
you
have-
and
this
is
really
great
for
infrastructure
teams
that
might
not
be
traditional
software
engineers
and
they're
used
to
dealing
with
ansible
and
playbooks.
You
basically
map
some
playbooks
to
some
cluster
actions
and
then
the
operator
execute
those
playbooks
when
it
needs
to,
and
then
from
that
on
the
build
looks
the
exact
same.
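That playbook-to-cluster-action mapping lives in the SDK's watches file. A minimal sketch, reusing the hypothetical Tomcat kind from earlier; the group and paths are illustrative:

```yaml
# watches.yaml: tells the Ansible operator which playbook to run
# when an object of the watched kind is created, changed, or deleted.
- group: apache.org
  version: v1alpha1
  kind: Tomcat
  playbook: playbooks/tomcat.yml
  reconcilePeriod: 1m   # also re-run periodically, not just on events
```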
B
That's
all
that's
put
into
a
container.
It's
versioned,
you
get
the
exact
same
experience,
the
object
and
the
output.
It
actually
had
not
changed.
You
can
produce
the
exact
same
operator
with
both
of
these
SDKs.
Now,
of
course,
ansible
has
a
really
rich
ecosystem
of
different
play,
books
and
other
integrations,
and
so
this
is
really
great
if
you
won't
have
an
operator
that
might
go
out
and
mess
with
like
hardware
load
balancers
or
go,
create
DNS
records
or
anything
like
that
to
really
stitch
your
entire
user
experience
together.
Here's an example, not using the SDK, but a really stubbed-out control loop. As I mentioned, you're the expert in your application, so when you're building an operator you need to take in all that knowledge and execute it in this control loop; but that's all you need to do, because we handle all the scaffolding for you. This is a good example of an initial deployment right there: I want to have at least one Tomcat, because I see a new object popped up.
B
Oh
I've
got
zero
right
now.
Oh
I
know
what
I
need
to
do.
This
is
an
initial
deployment
of
that
application
and
then
constantly
you're
checking
different
parameters
of
the
application
to
make
sure
that
they're
in
spec
in
this
case
we're
checking
that
a
size
parameter
is
actually
reflected
in
the
replica
count,
and
you
would
kind
of
do
this
for
every
single
thing
that
you
care
about
doing
upgrades
along
the
way.
If
you
know
how
to
go
from
version,
11.10
1.1.2,
you
would.
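The stubbed-out loop just described can be sketched in miniature. This is a hedged illustration of the observe-compare-act pattern, not the real SDK types; the Tomcat spec, the size field, and the fake cluster are all stand-ins:

```go
package main

import "fmt"

// TomcatSpec mirrors the custom resource's desired state; Size is
// the replica-count field from the talk's example.
type TomcatSpec struct {
	Size int
}

// Cluster is a stand-in for the Kubernetes API: it just tracks how
// many replicas of the app currently exist.
type Cluster struct {
	Replicas int
}

// Reconcile drives actual state toward desired state the way an
// operator's control loop does: observe, compare, act.
func Reconcile(spec TomcatSpec, c *Cluster) string {
	switch {
	case c.Replicas == 0:
		// No deployment yet: this is the initial rollout.
		c.Replicas = spec.Size
		return "created initial deployment"
	case c.Replicas != spec.Size:
		// The size drifted from spec: scale to match.
		c.Replicas = spec.Size
		return "scaled to match spec"
	default:
		return "in sync"
	}
}

func main() {
	c := &Cluster{}
	fmt.Println(Reconcile(TomcatSpec{Size: 3}, c)) // first pass: create
	fmt.Println(Reconcile(TomcatSpec{Size: 3}, c)) // steady state: nothing to do
	fmt.Println(Reconcile(TomcatSpec{Size: 5}, c)) // spec changed: scale
}
```

A real operator would run this on every watch event and on a timer, checking each parameter it cares about, not just one.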
If you need to execute different pieces of logic, you can actually do that in the operator. What's nice about this is that you don't have a way to do that with plain Kubernetes objects. If you have a Jenkins pipeline that just outputs 35 YAML objects, there's nothing there saying: oh, start this migration, then shut off this pod and move this data around, or change this ConfigMap to migrate to a different data format.
We've been really lucky to see this operator concept take off since we introduced it in 2016. It's proven to be really powerful, and we've kind of been one step ahead of where the Kubernetes community is going, such that these tools are ready for the complex distributed systems we're now running.
B
So
all
the
folks
that
you
saw
earlier
from
the
automated
vehicle
testing
and
some
of
the
NASA
pipelines
and
spark
and
Kafka
and
things
like
that-
all
really
excel
at
being
run
by
operators,
because
they're
a
ton
of
different
components
that
need
to
be
done.
So
here
you
can
see
a
few
of
them,
a
lot
more
listed
on
operator
hub
and
what
this
does
is.
It
lowers
the
barrier
to
entry
for
your
engineers,
picking
up
a
piece
of
software,
you
know
prototype
out
a
new
workflow.
For example: hey, I need a NoSQL datastore, and I've never used one of those before; let me go see if there's an operator. Pull off Couchbase, it's got all the operational expertise baked in, and you can get up and running really quickly, which is really nice. Then you can standardize that across your organization as well, so everybody is using the same set of tools, and you can debug them and apply the same security testing, that type of thing.
We already talked about this a little, but I want to break down exactly what you need to do if you were going to leave this session right now and go make an operator. It's kind of a different way of thinking: as the engineers that were on stage earlier talked about, our scrum teams had to go through the same process. It's a different way of thinking about how you architect your applications and then how you deploy them. Oh, I think that slide jumped ahead; well, we already talked about the reconciliation loop.
That loop is really what you care about, and you need to think about how to source all of your state from inside of the cluster. There must be no external state, because otherwise your operator can't see what the desired versus the actual state is and make those changes happen. This is really key, and it's kind of hard at first, because you need to shift your mind, and this is where the SDK comes into play.
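Sourcing all state from the cluster means the loop can re-derive its desired objects from the custom resource on every pass and diff them against what it actually observes. A minimal sketch of that idea, with object names as plain strings for brevity:

```go
package main

import (
	"fmt"
	"sort"
)

// diff computes which named objects must be created or deleted so
// that the observed set matches the desired set. Because "desired"
// is re-derived from the custom resource on every pass (no state
// kept outside the cluster), the loop converges regardless of what
// it missed while it was down.
func diff(desired, observed []string) (create, remove []string) {
	want := map[string]bool{}
	for _, d := range desired {
		want[d] = true
	}
	have := map[string]bool{}
	for _, o := range observed {
		have[o] = true
		if !want[o] {
			remove = append(remove, o) // present but no longer desired
		}
	}
	for _, d := range desired {
		if !have[d] {
			create = append(create, d) // desired but missing
		}
	}
	sort.Strings(create)
	sort.Strings(remove)
	return create, remove
}

func main() {
	create, remove := diff(
		[]string{"statefulset/db", "service/db", "secret/db-tls"},
		[]string{"statefulset/db", "secret/stale-cert"},
	)
	fmt.Println("create:", create, "remove:", remove)
}
```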
We've got a bunch of examples of building some of these operators. If you've ever used the etcd operator, for example, there's a Go implementation of it and an Ansible implementation of it, so you can compare and contrast them. A number of the operators on OperatorHub itself are open source too, including a bunch of the operators that make up OpenShift itself. So there's a plethora of existing knowledge here, and this is where the OpenShift Commons SIGs come into play as well.
B
We
have
an
operator
sig
that
likes
to
get
folks
up
talking
about
what
they've
made
problems
they've
solved
common
solutions
that
everybody
can
pick
up
and
use
now,
when
you're
exposing
out
these
operators
to
different
users
of
your
cluster.
The
important
thing
is
that
they
are
self-service.
So
we
mentioned
that
we
want
this
cloud
like
experience,
we
want
everybody
to
be
able
to
expand
and
manage
their
own
database
instances
and
machine
learning
pipelines,
and
so
this
is
the
experience
you
get
inside
of
OpenShift.
You can see all the operators that have been installed in your namespace for your use, and then you're interacting with objects that feel very Kubernetes-native. So here's a MongoDB replica set that's going to get you a production-ready MongoDB, and you don't need to be an expert in how the TLS certificates are generated or how these components discover each other, that type of thing. At the end of the day, what we're striving for here is simpler GitOps.
Think about running something through a CI/CD pipeline with folks reviewing it along the way. You don't want to be changing 35 different Kubernetes objects and not be exactly sure what happens, because you're an expert in maybe the front end of this thing, but you need to propagate a value all the way through to the back end and then test all of that. Instead of 35 objects, it might just be two. Here we've got a custom object, which is just a custom operator you can write inside of your organization, and then you pull off and depend on the MongoDB operator for your stateful storage. Now, when you're making configuration changes from staging to production, for example, you can imagine a PR review is just a few lines changed.
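A sketch of what that two-object change might contain. The groups, kinds, and fields below are hypothetical (check your operator's actual CRD); the point is the small surface area of the diff:

```yaml
# Instead of 35 raw Kubernetes objects, the reviewed change is a
# couple of high-level custom resources.
apiVersion: example.com/v1alpha1
kind: StoreFront             # your in-house custom operator's object
metadata:
  name: storefront
spec:
  replicas: 6                # staging -> production: bump this line
  logLevel: warn
---
apiVersion: mongodb.com/v1   # illustrative MongoDB operator resource
kind: MongoDB
metadata:
  name: storefront-db
spec:
  members: 3
  version: "4.0.0"
```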
And then, of course, security is a really important part of this, so admins need to have full control and visibility into what's going on. If you log in with an admin persona inside of OpenShift, you can see all the subscriptions to all the operators that you have. As we talked about earlier, a subscription attaches to different channels, and you can have either manual or automatic approvals to help you keep these operators up to date; that's the general flow.
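The object behind that flow is an Operator Lifecycle Manager Subscription, which looks roughly like this; the package, channel, and catalog-source names vary by operator, so treat the specific values as placeholders:

```yaml
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: amq-streams
  namespace: openshift-operators
spec:
  name: amq-streams               # package name in the catalog
  channel: stable                 # update channel to follow
  source: redhat-operators        # catalog source providing the package
  sourceNamespace: openshift-marketplace
  installPlanApproval: Manual     # or Automatic, for hands-off updates
```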
I want to invite you to try these out. We've got a getting-started guide that walks you through using all the components of the Operator Framework I mentioned. We have our special interest group, which is the second link here; it meets on the third Friday of every month at 9:00 a.m. Pacific, and we've got a lot of folks coming and talking about their experiences, and we'd love to have you there. We've also got mailing lists and other asynchronous forms of communication, and then there's OperatorHub; please check it out.
All right, so first let's take a look at OperatorHub inside of the cluster. You saw a little bit of this earlier, and my internet connection is decent. Here you can see there are a bunch of different services, and the nice thing is that this catalog gets added to every single day. The partners are in charge of their listings, so they're constantly revving them as new versions come out with new capabilities. And if you're thinking: oh, why don't I just run this myself, why do I need something to manage the lifecycle? Here's the power behind this.
Every time you install one of these, you're actually wiring up service accounts that have the exact minimal set of permissions needed to run it. If an operator doesn't need to create secrets, then it doesn't get access to any of your secrets inside of that namespace, for example. And if you want to isolate an operator into just one namespace, or run it across all namespaces, you have the flexibility to do that too. So it's really powerful.
Here's the admin side of it. On this cluster you can see I've got AMQ, CockroachDB, CodeReady Workspaces, and a few others installed. Actually, let's talk about this Metering one for a second. It was mentioned in one of the talks, and what's great about it is that it's actually a cluster extension powered by an operator. We're teasing apart some of the components of OpenShift, so the logging stack, for example, is now kind of optional.
If you want to install it, you can use an operator and it all gets set up and wired for you; same with Metering, same with some of the other projects we have, like the descheduler. This is so that you have full control over the experience of your cluster. If you want a slimmer install by default, that's what you get out of the box, and then you can add on all these other components, which is really exciting, and all of that happens through OperatorHub.
So let's take our admin hat off for a second and put on our developer hat. We're going to jump over to our installed operators. I have a sample production API back-end project that has just a few of these objects and operators installed in it, and here you can see I've got self-serve access to make a CockroachDB, for example. You can explore some of its capabilities, and then you've got these templates that have all the defaults inside, sourced from that developer.
This one is a really good example of how complex these things can be, and not necessarily in a bad way. It means that you have the tunables if you need them, but they're all smart defaults managed by the operator if you don't. So if you wanted to bump up some of the cache sizes or something like that, you have self-service access to do it. Even if you just wanted to experiment with this, you could create a new namespace and install the operator.
B
You
know
go
mess
with
one
of
these
instances
without
having
to
you
know,
mess
up
any
of
your
infrastructure,
you're,
not
experimenting
and
development,
or
anything
like
that
and,
as
you
saw
earlier,
this
is
all
integrated
into
our
new
developer
catalog,
and
so
this
blends
together
any
servers
broker
backed
instances.
You
have
operators
that
are
installed
as
well
as
some
of
the
other
things
because
they
come
out
of
the
box
like
our
image
streams,
and
things
like
that.
So
here
we
looked
at
a
kafka
one
earlier
and
I.
B
Think
it's
worth
revisiting
that
just
because
of
how
powerful
it
is
I think I've got a Kafka running in here, and it's really important to note how many objects this thing is managing. These applications can get extremely complex: if you look at the secrets at the top, there's a CA and a bunch of certificates that are generated and stored as secrets, which can be mounted into some of these pods. If you want to rotate these secrets, it's a whole thing; you've got to restart different components.
The operator does all of this for you. It's really powerful to get a production-ready Kafka cluster in 35 seconds; that's not something you can typically do on most infrastructure. And remember, this runs anywhere: anywhere you can run OpenShift, anywhere you can run Kubernetes, you can get this operator, so you're not tied in to one specific cloud provider. This can be virtualized on your laptop, so if you want to do local development against an operator, you can do that as well.
I think I'm about out of time. So hopefully that was a good overview of what an operator is, how you can build operators or take them off the shelf, the bunch of tools we've got ready for software engineers inside of an organization who want to build an operator, and a live look at some of it in OpenShift 4, which I want to encourage you to test out as well. Thank y'all so much.