From YouTube: State of Operator Framework | Daniel Messer, Jason Dobies (Red Hat) | OpenShift Commons Briefing
A: The first part is the Operator SDK, which addresses operator developers, allowing them to build and package operators with ease, without writing a lot of the boilerplate code that is usually associated with writing operators against Kubernetes. Then we have the Operator Lifecycle Manager (OLM), which is a central component on the cluster that you can use to centrally deploy and update operators. And then we have the third component, OperatorHub.io, which is a community catalog to which you can publish the operator you've written and make it installable and updatable from the web. Below is also the link to the upstream source code; I suggest you follow and star this repository. So let's take a look at what you can do today with the Operator Framework.
A: The first thing is writing an operator, right? And we really have the ability to address virtually any kind of programming background. People in administrative roles with an ops background are usually very proficient in Ansible, an automation tool that is also very popular upstream. You can use Ansible playbooks to write operators with the Operator SDK, so there is no further knowledge required beyond how to express the automation and lifecycle management of your application in Ansible tasks, playbooks, and roles.
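As a sketch of what that looks like in practice: an Ansible-based operator maps a custom resource to a role, and a reconciliation task in that role might apply a Deployment sized by a field of the custom resource. The role layout, variable names, and the `size` field below are illustrative, not taken from the talk:

```shell
# Hypothetical reconciliation task for an Ansible-based operator. The k8s
# module applies the rendered Deployment each time the custom resource is
# reconciled. The CR's metadata is exposed to the role as a variable
# ("meta" in SDK versions of that era; newer versions use
# "ansible_operator_meta"), and "size" is an assumed spec field.
mkdir -p roles/app/tasks
cat > roles/app/tasks/main.yml <<'EOF'
- name: Ensure the application Deployment exists
  k8s:
    definition:
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: "{{ meta.name }}-app"
        namespace: "{{ meta.namespace }}"
      spec:
        replicas: "{{ size }}"
EOF
```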
A: The same is possible with the Go SDK, where you write operators in the Go programming language and use the SDK to scaffold a lot of the boilerplate code that you would normally need in order to leverage client-go. The Operator SDK also comes with packaging, so you can readily publish and ship your operator to something like OperatorHub.io, and it comes with a test harness which allows you to test and validate the operator that you just built. Here are a couple of things that we recently added to the Operator SDK.

If you are in the business of writing a Helm-based operator: we recently transitioned to Helm v3, as that project made its debut with its new major release. So with the SDK you can write operators using Helm v2 and v3 charts, and they are going to get applied using the Helm v3 libraries. We also have conversion support, so if you have previously deployed Helm-based operators with the SDK that were written against Helm v2, there is an auto-conversion happening in the background.
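For context, a Helm-based operator is driven by a `watches.yaml` file that maps a custom resource group/version/kind to a chart; the SDK's reconciler then renders and applies that chart whenever the resource changes. The group, kind, and chart path below are placeholders:

```shell
# Hypothetical watches.yaml for a Helm-based operator: each entry ties a
# custom resource type to the Helm chart that should be rendered for it.
cat > watches.yaml <<'EOF'
- group: charts.example.com
  version: v1alpha1
  kind: App
  chart: helm-charts/app
EOF
```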
A: We have been able to work essentially in lockstep with the upstream Kubernetes releases, so that you can always use the newest features that have been released at the Kubernetes API level. Since recently, you are also able to create metrics endpoints for your operators, and the SDK is going to help you with that as well, so you can really provide insight into your operator and its ability to handle the application on the cluster. And last but not least, there is a testing utility that we have shipped in the SDK for some time now, called scorecard.

This utility helps you do some basic validation and behavioural analysis of your operator and check whether it conforms to known best practices. The new version of scorecard has a new output format that makes it very easy to reuse scorecard in a release pipeline, through which you can determine whether or not the operator has actually passed certain tests.
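As a hedged sketch of that pipeline use (the exact flags and the JSON schema vary by SDK version, and the jq filter is only illustrative):

```shell
# Illustrative: run scorecard against a bundle directory and emit JSON so a
# release pipeline can gate on the results. Assumes operator-sdk is
# installed and a cluster is reachable; flags vary by SDK version.
operator-sdk scorecard ./bundle --output json > scorecard-results.json
# Hypothetical CI gate; the real JSON field names depend on the scorecard
# version in use, so check your own output before relying on this filter.
jq -e '[.items[].status.results[] | select(.state != "pass")] | length == 0' \
  scorecard-results.json
```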
A: When it comes to actually running operators, the Operator Lifecycle Manager is really the central point on the cluster to do that.

It supports the operator developer in taking the packaging that comes out of the SDK and putting it into a catalog. A catalog can be instantiated on the cluster and is then used by an administrator, who can pick and choose from it to install an operator. An installation is not just deploying the pod that has the operator as a container image inside; it also does dependency resolution, should your operator require other operators.
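An install through OLM is typically expressed as a Subscription object; a minimal sketch, with placeholder names for the operator package, catalog, and namespaces, might look like this:

```shell
# Minimal Subscription sketch (names are placeholders). OLM resolves the
# package from the named catalog, works out any operator dependencies, and
# installs the resolved bundle into the cluster.
cat > subscription.yaml <<'EOF'
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: my-operator
  namespace: operators
spec:
  channel: stable
  name: my-operator
  source: my-catalog
  sourceNamespace: olm
  installPlanApproval: Automatic
EOF
# Apply against a cluster that has OLM installed:
# kubectl apply -f subscription.yaml
```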
A: OLM will automatically resolve those dependencies if available, and it will keep them in sync during the lifecycle of the operators, which includes updates. Of course, one thing that should never happen on a cluster is that two operators are installed that own the same resource they are managing, right? Thanks to the built-in collision detection inside OLM, you and your cluster are prevented from that.

So throughout updates, OLM will make sure that dependencies are always resolved and that collisions are avoided. Updates can be automatic or manually triggered, and they become available as soon as the catalog refreshes. The user-facing side of OLM on the cluster is where tenants in your Kubernetes cluster actually use the operator and its services. OLM helps them discover which operators are available to them, and just to them, without cluster-wide privileges, such as the privilege to read custom resource definitions. So at a very granular level, you can give certain tenants on your cluster insight into which operators are available and which are installed, which is really useful in a multi-tenant environment, and when they use these operators, provide them with rich UI controls to interact with their services. In the packaging metadata of operators managed through OLM, there are annotations that can be used by graphical consoles, as you will see later in a demo, that really make interacting with an operator feel like a cloud-native experience, wherever your cluster runs.
A: Some notable features that we have added to OLM over the past couple of months: the ability to configure an operator and have this configuration persist across updates. We have also invested quite a lot in exposing more health data about operators and providing better error messages, as well as in supporting operators that run behind a proxy or in disconnected environments, which is very important for customers that run in a commercial setting but do not want to expose their Kubernetes cluster to the Internet.

The ability to install operators only for certain tenants, and to make them visible only to certain tenants, is a key component of OLM, so we do support operator tenancy. Authors of operators now also have the ability to define the update path through which OLM will update the installed operators.

This is usually done by pointing back to the most recent previous version of the operator that you want to update from, but you can now also extend this to a version range, for instance, so that you can say that the v3 release is able to update from v2 and also from all the releases in between.
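In CSV terms, this update path is expressed with the `replaces` field and, for ranges, the `olm.skipRange` annotation; the package name and version numbers below are placeholders:

```shell
# Sketch of the update-path fields in a ClusterServiceVersion (CSV). The
# "replaces" field names the single predecessor version, while the
# "olm.skipRange" annotation lets this version update from a whole range.
cat > csv-update-path.yaml <<'EOF'
apiVersion: operators.coreos.com/v1alpha1
kind: ClusterServiceVersion
metadata:
  name: my-operator.v3.0.0
  annotations:
    olm.skipRange: '>=2.0.0 <3.0.0'
spec:
  replaces: my-operator.v2.0.0
EOF
```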
A: Last but not least, we are also working on making operator catalogs something that you can store in a regular container registry.

We already have the ability to have these catalogs sit inside databases that are packaged inside a container image and fronted by a gRPC API, which is something that OLM can use as a catalog on the cluster. And we do have OperatorHub.io, which made great progress in the last year, now hosting over 100 operators that are frequently updated from a central source and easily discoverable via a web interface. So where are we going with all of this? Here are a couple of things that are going to happen in the near future.
A
First,
we
are
going
to
introduce
a
new
way
to
actually
bundle
the
operator
metadata
and
ship
this
around
today.
This
is
mostly
happening
in
the
form
of
tar
balls
that
are
sitting
on
this
and
are
getting
pushed
to
backends
like
Cueto
dial
in
the
future.
You
will
be
able
to
take
all
of
your
operate
on
metadata,
which
includes
its
custom,
resource
definitions,
its
package
format,
401
MD
cluster
service
version
and
put
this
into
a
folder
from
which
we
build
a
container
image.
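The bundle layout that format uses is a `manifests/` directory plus a `metadata/annotations.yaml` describing the package; a minimal sketch with a placeholder package name:

```shell
# Sketch of an operator bundle directory. The annotations tell tooling and
# OLM where the manifests live and which package/channels this bundle
# belongs to (the package name is a placeholder).
mkdir -p bundle/manifests bundle/metadata
cat > bundle/metadata/annotations.yaml <<'EOF'
annotations:
  operators.operatorframework.io.bundle.mediatype.v1: registry+v1
  operators.operatorframework.io.bundle.manifests.v1: manifests/
  operators.operatorframework.io.bundle.metadata.v1: metadata/
  operators.operatorframework.io.bundle.package.v1: my-operator
  operators.operatorframework.io.bundle.channels.v1: stable
EOF
# The CSV and CRDs go into bundle/manifests/, and the folder is then built
# into a (non-runnable) container image with standard container tooling.
```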
A: This container is not a runnable piece of software, but it contains all the required metadata, plus some metadata on the image itself, so that you can ship it to any container registry of your choice and have it be something that can be installed directly using OLM on a cluster. So you do not need to ship people a lot of YAML manifests that they need to apply.

The only thing you need to do is publish the container to a container registry and, on the cluster where OLM is installed, instantiate a new operator installation referencing that so-called bundle image, and that is really it. It is very similar to locally installing an RPM without making it part of a larger repository or catalog: something where you can quickly and easily install an operator, see if it works, and still have the complete experience.
A
This
works
a
standard
container
and
contain
an
image
tooling
and
as
available
to
all
M,
as
long
as
it
has
access
to
that
container
registry
operator
hop
today
and
in
the
future
will
start
to
accept
this
new
format
as
well,
while
maintaining
the
ability
to
submit
in
the
older
format
that
we
have
today
going
forward.
We
also
have
the
ability
to
form
catalogs
out
of
groups
of
these
bundle
images.
A
So
suppose
you
have
multiple
versions
of
these
are
actually
multiple
different
operators
packaged
as
these,
and
you
want
to
build
a
custom
catalog
that
you
want
to
expose
to
users
on
your
cluster,
because
it
contains
operators
that
you
have
tested
and
bedded
and
unknown
to
be
trustworthy.
You
can
do
so
in
the
future
quite
easily,
with
a
new
tool
called
OPM.
The
operator
package
manager.
So,
if
operator
bundles,
are
the
RPMs
for
operators,
OPM
is
essentially
the
young
or
DNF
equivalent
to
that
in
the
O&M
world.
A: Here you see how you actually start a new index with just a single package inside, and you can add to this index subsequently by repeatedly calling the index add operation. That will build a new container image, and as soon as you push it, you have effectively created and refreshed a catalog. You can push this with any container tooling of your choice to a container registry of your choice, and then reference it as a catalog for OLM, which you can then use on your cluster to give people a selection of installable operators.
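The commands on the slide are roughly of this shape; the image names are placeholders, and the exact flags depend on the opm version in use:

```shell
# Illustrative opm usage (assumes opm and container tooling are installed;
# all image references are placeholders). Start an index with one bundle:
opm index add \
  --bundles quay.io/example/my-operator-bundle:v1.0.0 \
  --tag quay.io/example/my-index:v1
# Add another bundle to the existing index, producing a new index image:
opm index add \
  --from-index quay.io/example/my-index:v1 \
  --bundles quay.io/example/my-operator-bundle:v1.1.0 \
  --tag quay.io/example/my-index:v2
# Push the result with your usual container tooling, e.g.:
# podman push quay.io/example/my-index:v2
```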
A: As you see here, this catalog is also able to refresh itself. In this particular example it has been set to refresh every 30 minutes, so this is how you achieve automatic updates and always up-to-date catalogs on your cluster.
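On the cluster, that polling catalog is a CatalogSource pointing at the index image; a sketch with placeholder names:

```shell
# CatalogSource sketch: OLM serves the index image as a gRPC catalog and
# re-pulls it every 30 minutes, so newly pushed bundles show up on the
# cluster automatically (image and names are placeholders).
cat > catalogsource.yaml <<'EOF'
apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  name: my-catalog
  namespace: olm
spec:
  sourceType: grpc
  image: quay.io/example/my-index:v2
  updateStrategy:
    registryPoll:
      interval: 30m
EOF
# kubectl apply -f catalogsource.yaml
```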
A: Now, what goes inside this package is essentially the same as today: a CSV and one or more custom resource definitions for your operator.

Here you see an operator and an operator installation, and on the right-hand side the new object that we simply call Operator, which is the primary point of contact for the administrator to install an operator, be it from a catalog or from a standalone bundle, and at the same time to configure the way it is updated, which namespaces it has access to, and all the other things that we used to do in four different objects.
A: We are now able to do all of that in one API, and that is also the place where you would start looking for information about how your operator is doing, how it is updating, whether it has updates available, and the like. For users, this becomes really natural on the command line: they can essentially just run kubectl get operators and see which operators are available in their namespace, so it is a very natural way to interact with the system and discover which operators have been installed by their admin.
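In practice, that discovery step looks like this (it requires a cluster with the new Operator API described above, and the operator name is a placeholder):

```shell
# List the operators visible from the current context, then inspect one
# for status such as its components and update information.
kubectl get operators
kubectl describe operator my-operator
```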
A: On the SDK side, we increased the test tooling capability significantly. We realized that it was probably too much overhead and toil to create custom tests for your operator, which is why we are teaming up with the upstream community around a tool called kuttl, which allows you to express tests for your operator using just YAML manifests. In this example you see basically two manifests that work in a before-and-after fashion, as part of a new custom test type that scorecard will support.
A: You then assert, just as you know it from Kubernetes, and check whether your operator actually did what it was supposed to do, which in this case is creating a Deployment that is supposed to have three replicas in ready state after the custom resource, called CockroachDB here, was reconciled by the operator. That is all it takes to test an operator in the future, so you do not need to express tests or complex flows in Golang or Ansible.
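A kuttl test case of that shape is a numbered pair of step and assert files; the directory layout follows kuttl's convention, while the CR group and names below are placeholders standing in for the CockroachDB example:

```shell
# Sketch of a kuttl test case: 00-install.yaml is applied first, then kuttl
# waits until the cluster state matches 00-assert.yaml (names illustrative).
mkdir -p tests/e2e/basic
cat > tests/e2e/basic/00-install.yaml <<'EOF'
apiVersion: example.com/v1alpha1
kind: CockroachDB
metadata:
  name: example
spec:
  replicas: 3
EOF
cat > tests/e2e/basic/00-assert.yaml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example
status:
  readyReplicas: 3
EOF
```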
A: Speaking of the SDK, we will transition to using Kubebuilder under the hood for scaffolding Golang operators. Today the Operator SDK maintains its own Golang software development kit, which also uses controller-runtime, a very popular upstream library for abstracting work with the Kubernetes API, and this exists next to the Ansible and Helm SDKs. That is something you can leverage to functionally test your operator, as you have seen before, and to use the packaging to create operator bundles.

Now we have worked with the upstream community around Kubebuilder and joined that particular project, in order to converge on a single implementation of how scaffolding for Go operators will work. So, with the release of Operator SDK 1.0, which will happen later this year, you will be able to use Kubebuilder transparently under the hood of the Operator SDK CLI, and that means you can import virtually any existing Kubebuilder project.
A: To summarize all of that: for developers, we make it easier to create operators and operator bundles. We let you use very familiar Kubernetes and container tooling to actually build bundles and catalogs. We will have integrated custom functional testing, which is extremely easy to use, and we allow you to write operators in the Kubebuilder style while not missing any of the SDK's features around functional testing and packaging.
A: We will also allow users in the future to actually pick the operator version that they want to install, so they are also able to install older versions of an operator, which is not something that we support directly today. And we will include tooling in the new operator package manager not only to create yum/DNF-style repositories of operators, but also to mirror those repositories to a potentially disconnected registry, should your cluster not be connected to the Internet. Last but not least, this will also be something very interesting for operator developers.
A: OLM will be able to support webhooks on behalf of the operator, so as an operator developer you no longer need to maintain your own logic and procedures to register your webhook certificates and have them rotated in order to stay current; that is something that, in the future, OLM will do for you. So, with all those new fancy features covered, now for some interesting demos brought to you by Jason. Take it away, Jason.
B: Awesome, thank you Daniel. You should all be seeing my screen now. A lot of what Daniel talked about covered the new improvements to the SDK, to writing and to testing, and as someone who has been doing this for close to 10 or 11 months now, that is all really, really exciting. For this demo, though, we are going to pivot a bit and talk a little more about usage. So, we did talk about OLM and how you can use it to catalog operators and how you can use it to install them.
B
So
we're
looking
at
right
now
is
an
open
shift.
4.3
cluster
I'm
in
a
demo
project
you
can
see
up
at
the
top
bar
and
we're
in
the
operator
hub.
This
is
giving
us
a
view
of
all
the
different
operators
we
can
install
into
our
cluster.
If
you
look
on
the
right
side,
I'm
not
completely
sure
to
keep
say
my
mouse
pointer.
But
if
you
look
on
the
right
side,
where
it
but
203
items
I
get
super
excited
because
every
time
I
do
this
demo.
B
That
number
increases
every
single
time
and
it's
really
awesome
to
see
we're
up
with
this
I
at
some
point,
so
I'm
going
to
filter
out
just
for
amq
we're
gonna.
Do
this
demo
on
the
amq
stream
just
operator
provided
by
Red
Hat
I
could
see
some
basic
information
here.
B: A lot of the same information is available on OperatorHub.io, which Daniel was talking about. We are going to install it into our demo project here. There is some extra information we did not cover much in the presentation: the update channels. It is possible for OLM to manage multiple channels, so in this case we have "stable", but if we had a nightly or a beta, or however you want to subdivide your quality and your release cadence, you have that capability in OLM. We are going to choose an automatic update approval strategy; for this demo that is not going to matter, because ultimately we are just going to install this single operator and go from there. You will also notice something Daniel mentioned; this was the first time I heard about it.
B: It is really cool that, in the future, we are going to have better support for saying "I want to install an older version". Right now we are just saying give me what is coming out of the stable channel, and it is going to go ahead and give us this 1.4.0. Now, while that is installing, I am going to hop over to our developer console. In the top left, if you did not see my cursor, we have this dropdown now (as of 4.2, I believe) to switch between the administrator and the developer view; they carry the same information. I am just using the developer view in this case to show that we have a deployment: at the end of the day, what OLM is doing right now is deploying this operator pod. We see that it has come up, and if I hop back to the administrator view and navigate back to our installed operators, we see that, yes, our AMQ Streams operator got successfully installed.
B: So let's take a look at what we can do with it. We click into it and we see a lot of things listed here. Each of the items you see in these boxes, under "Provided APIs", is a custom resource definition supported by this particular operator. For now we are going to focus on the Kafka one in particular, and it gives us the ability to create one. It displays YAML, like we are used to seeing in Kubernetes and OpenShift, and I want to highlight the second line here.

The kind is Kafka. One of the benefits of operators being built on the custom resource definition functionality is that it lets us express our APIs in something the user is going to understand: we have modeled our objects after what they actually represent in our application. We can talk to OpenShift and say "I want you to create me a Kafka". So we are going to accept the defaults and hit create.
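The manifest in the console is, in spirit, something like the following. AMQ Streams is based on the upstream Strimzi operator, and the exact apiVersion and defaults depend on the operator version, so treat this as a sketch rather than the console's literal output:

```shell
# Sketch of a Kafka custom resource as accepted by a Strimzi-based operator
# (cluster name and the three-of-each sizing mirror the demo defaults).
cat > kafka.yaml <<'EOF'
apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    replicas: 3
  zookeeper:
    replicas: 3
  entityOperator:
    topicOperator: {}
EOF
# kubectl apply -f kafka.yaml
```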
B: At the bottom here we have it listed that this is the new Kafka cluster we created, and we can click through. I am going to come back to this overview tab, but for now I want to go to resources, because it actually runs pretty quick. If you have never seen an operator install a complex application, which is to say something beyond a fairly simple hello-world demo, this might be a bit of a surprise. It is also a very cool surprise, because I asked OpenShift, "hey, deploy me a Kafka cluster", and that's it.

You all saw the single YAML definition. It has created over a dozen resources now, everything ranging from secrets to services to pods. It started by deploying the three ZooKeepers, which came up pretty much before I even clicked into this tab, and then the next step was to deploy the actual Kafka instances. The defaults were three of each, which is fine; that is exactly what we got, and we are going to tweak that a little bit later.
B: I can query this server for the number of topics, and no surprise, there are not actually any topics here. That is, again, not a surprise; we just applied this cluster. We will leave that running in a tab and come back to it. If I navigate back to the installed operator again, each of these represents a custom resource that I can interact with at the OpenShift API level: I can create them, like I did with the Kafka instance, and I can edit them, patch them, and delete them.
B
But
what's
really
cool
is
that
it
doesn't
have
to
be
something
tangible
on
the
server
I.
Don't
have
to
necessarily
have
it,
deploy
a
pod
or
create
a
service.
For
me,
in
this
case,
this
particular
operator
has
defined
a
custom
resource
called
a
casket
topic
that
represents
go
figure,
an
actual
casket
topic,
so
we've
renamed
it's
a
demo
for
the
purposes
of
the
demo
and
hit
create
and
it's
gone
and
created
the
custom
resource
for
us.
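A KafkaTopic manifest of the kind being created here looks roughly like this (again Strimzi-style, with names mirroring the demo; the label ties the topic to the cluster that should own it):

```shell
# Sketch of a KafkaTopic custom resource: no pods or services result from
# this object; the operator creates the topic inside the running brokers.
cat > topic.yaml <<'EOF'
apiVersion: kafka.strimzi.io/v1beta1
kind: KafkaTopic
metadata:
  name: demo-topic
  labels:
    strimzi.io/cluster: my-cluster
spec:
  partitions: 1
  replicas: 1
EOF
# kubectl apply -f topic.yaml
```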
B: So if we go back into our running cluster and we list all the topics, the demo topic has been created. Again, we created a resource in OpenShift that did not correspond to any of the existing resource definitions; there were no pods created, no services. But the operator understood that "when you tell me to create a KafkaTopic resource, I know what to do", and by that it means going into the actual underlying guts of the server, deploying a new topic, and calling it demo-topic.
B: Back on the overview, we are able to see some basic things that it is configured to deploy: three ZooKeepers, three Kafka brokers, and so on. We could, if we wanted to, use this UI to edit the number of brokers and scale it up. At the same time, it is still a resource in OpenShift, so we can always go to the YAML view to do our editing like we are used to. In this case, I am going to go down and set it to four replicas for the Kafka brokers.
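Outside the console, the same scale-up could be done declaratively, for example with a merge patch against the custom resource (illustrative, assuming the Strimzi-style CR sketched earlier):

```shell
# Bump the broker count on the Kafka custom resource; the operator notices
# the change and reconciles the cluster up to four brokers.
kubectl patch kafka my-cluster --type merge \
  -p '{"spec":{"kafka":{"replicas":4}}}'
```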
B: I hit save and quickly bump over to resources, and you can see it was very quick to stand up. If you look at the timestamp, this new one, kafka-3, was created just a few seconds ago. So the operator received an update to the existing Kafka cluster configuration that said "hey, we now expect to see four cluster instances", and the operator took the necessary steps to scale things up. And it did not just scale up a new pod.
B: If we list the topics, we see our demo topic listed, and this is where we are adding functionality on top of what is provided by OpenShift. It is not just me simply saying "I want to scale the pods up", like you have seen in a number of OpenShift demos; I am not using the little rocker switches or doing a scale command from the command line. What is happening here is that the operator understands what it means when a new broker pod is deployed: it needs to bring it into the system and make it accessible by the full-blown cluster. So all of this extra knowledge that takes effect, beyond simply scaling up a new pod and beyond deploying a simple container, is not being put on the user. The operator understands what it has to do to get the new broker connected to the underlying system and the underlying cluster.
B: There is a lot I could do beyond this, but we are going to stop there. The big takeaways I want you to remember are that inside of OpenShift we can get to this OperatorHub, which lists all of the available operators that can be installed. Once they are installed, they provide this very rich, very custom API that users can interact with, and it is not going to be difficult for users to understand it, because it is just like interacting with OpenShift, the same way you would create a pod.

So I am talking in terms of Kafka and in terms of topics, and again, I absolutely love doing that topic example in demos like this, because it breaks everyone away from the initial idea that "oh, my operator is just going to create pods and create deployments", and really shows you that these resources are modeled after the domain itself. The operator can understand, when a resource is created in OpenShift, that it has to take the necessary steps to make that happen. So with that, I am going to end it. Thank you.
A: Thank you, Jason; this was such an impressive demo and really highlighted the capabilities of operators that are specific to the workload they manage, right? And the interesting bit about the AMQ Streams Kafka operator is that it actually has not been written with the Operator SDK; it has been written with a homegrown Java SDK, but it still works just fine as a packaged operator for OLM, which shows you how pluggable this framework is: you pick and choose the tooling that you need in order to get your operator developed, or onto a cluster, or just published on OperatorHub.io.
A: I want to close this with a couple of links and follow-up information for you. You can reach us in the Kubernetes Slack, on the #kubernetes-operators channel. The Git repository of the Operator Framework is where it all starts if you want to contribute code, or to contribute your operator to OperatorHub.io; the link to OperatorHub.io is also on this slide. And last but not least, should you have questions about either how to write your operator, or regarding the packaging for OLM, or testing with scorecard?