Operators and Helm: It Takes Two to Tango - Devdatta Kulkarni, CEO, CloudARK
What operators are and what a Helm chart is, just to set the stage; then why building Helm charts for operators makes sense. Then I'll talk about some of the analysis that we have done of operators: amongst other things, we have looked at the Helm charts of existing operators. Then I'm going to share with you some guidelines: if you are developing operators, what kinds of things you should be doing when building Helm charts for those operators. And then I'll talk about a few other things around operators and Helm charts, and what kind of experience they enable.
So what is a Kubernetes operator? It has become popular in the last year or so. Essentially, a Kubernetes operator is a custom resource and a custom controller that one can add to a cluster to run essentially any kind of workflow. Typically these days, operators are being used to manage stateful services, like databases, on Kubernetes. One of the great things about operators and custom resource definitions is that they are Kubernetes-native, so you can use standard tools like kubectl to work with these custom resources. You can use Kubernetes-native tools like Helm, and they work with namespaces, service accounts, all the nice things that we get with base Kubernetes.
There are already 400-plus GitHub repositories of operators for various kinds of things, so you will find operators for relational and non-relational databases, application workloads, API gateways. As the picture shows, the main thing about these is that the custom resources an operator introduces are at the same level as the standard Kubernetes resources.
So just as we have Pod, Service, and Deployment, we can also have Spark, MySQL, and Kafka: extending the Kubernetes cluster, creating new abstractions by actually extending the cluster, and not wrapping things the way we used to in the old days before Kubernetes.
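To make that concrete, here is a minimal sketch of what such a custom resource looks like; the SparkCluster kind, API group, and fields are hypothetical stand-ins for whatever a given operator actually registers:

```yaml
# Hypothetical custom resource, assuming a Spark operator has
# already registered a SparkCluster CRD in the cluster.
apiVersion: sparkoperator.example.com/v1
kind: SparkCluster
metadata:
  name: my-spark
spec:
  workers: 3
```

Once the CRD is registered, the standard tooling works unchanged: `kubectl get sparkclusters` lists these instances just like `kubectl get deployments` lists Deployments.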
So that's the operator side. And Helm: this is the Helm picture, and I know all of us understand this, but at a high level, what Helm allows us to do is templatize and parameterize Kubernetes artifacts for different environments. Through values.yaml we can create different parameterizations for a Kubernetes artifact. The other thing Helm allows us to do is essentially order and orchestrate the actual artifacts: there are several hooks that Helm defines, and these can be used for that ordering and orchestration.
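As a reminder, a chart might parameterize an artifact roughly like this (names and values are illustrative):

```yaml
# values.yaml: the per-environment parameterization
replicaCount: 2

# templates/deployment.yaml: the templatized artifact (fragment)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}-app
  annotations:
    # a Helm hook annotation, e.g. "helm.sh/hook": pre-install,
    # is how the ordering and orchestration get expressed
spec:
  replicas: {{ .Values.replicaCount }}
```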
So why develop a Helm chart for an operator? If you think about it, an operator is no different from any other Kubernetes artifact, right? In order to run something as an operator, you need to create a pod for it, you need to deploy that pod, you need to grant it certain permissions and define certain service accounts. Helm allows us to distribute and package standard Kubernetes artifacts, and an operator just happens to be such an artifact, so it makes sense to use a standard package manager like Helm to distribute your operators.
The second reason is that you can actually use the parameterization and templatization features that Helm provides to customize your operator for different environments. For a prod environment you want an operator to behave in one manner, whereas while you are developing it, you want it to behave a little differently. You can do that straightforwardly using Helm.
The third reason to stick with Helm is that you don't need to learn any new spec format for app deployment versus control-plane deployment. If you think about it, operators can be considered control planes: you are adding new controllers to your cluster, alongside the actual Kubernetes artifacts of your applications. If you use Helm for both, your team doesn't need to learn any new artifact format, and as part of the Helm charts you can leverage Kubernetes constructs like labels, annotations, CRD manifests, etc., for both the control-plane deployment and the application deployment. So these are the reasons why one should develop Helm charts for operators.
So with that as background, we looked at existing GitHub repositories of operators. As I said, there are four hundred plus operators in the community. We selected any repository with 10-plus stars, and 102 turned out to be like that. This was, by the way, done in April of this year, so things might have changed since then. Out of these 102 repositories, we focused on the ones written in Go; one can write operators in any language, but the predominant language is still Go.
So what did we really see in these operator repositories? Helm charts for operators were defined by about half of these repositories; so not everybody out there is developing Helm charts for operators yet. For the second key takeaway: one of the things you need to do as part of registering a new API in Kubernetes, a custom resource API, is to define the custom resource definition. It's like a meta API that you register in your cluster, and after that you can start using your custom resources.
As part of the analysis, what we found was that not everyone was defining this CRD, the meta API, in YAML or in their Helm charts. There were quite a few repositories in which these things were getting done in the code itself. And the final point: in order to make sure that your custom resources are created in a manner in which they are rightly defined, and their properties make sense structurally and semantically, you can add validation rules when you register your custom resource definition (the meta API), and not everyone was doing that.
Okay, so those were, at a high level, the key takeaways of this analysis. Based on that, and having worked with operators, we have come up with this set of guidelines, and in the rest of the talk I'm going to go through them. At a high level, the guidelines are: register CRDs as part of the operator Helm chart; there is a special hook, crd-install, that we recommend everyone add on their CRDs; use values.yaml or ConfigMaps for configurability; define validation rules for CRDs; and then there are additional annotations that I'm going to introduce, which help you create your operators in such a way that they can be used seamlessly in an ensemble of multiple operators. We call these platform-as-code annotations, and I'll talk about them.
So the first guideline is to register CRDs in your operator Helm chart. What is a CRD? A CRD is the meta API, the custom resource definition YAML, that introduces the new custom resource in your cluster. Let's say you want to create a Spark cluster, with Spark as your custom resource: then you first need to define the CRD for Spark and register it in your cluster. So why should we do this in YAML, and not directly in the Go code of your operator?
The reason is that the CRD object itself has to be created using cluster-scoped permissions; it's a non-namespaced object, so you need to install it with permissions that have a broader surface area than the permission set you would need for deploying the operator pod itself. Separating these two makes sense, because the CRD installation can be done by someone who has more privileged access to the cluster, someone like the overall administrator, whereas the operator pod deployment can be done by a team in different namespaces.
So that is the main reason you should separate the two things out. And not only that: the other thing you should do is define the crd-install Helm hook as part of your Helm chart. Here's an example of that annotation, and, by the way, here is the custom resource definition, the meta API, that you should register. This is from one of the open source operators, the MySQL operator from Presslabs, in which they have defined this.
So in this case, MysqlBackup is the custom resource which you would have defined, let's say, as part of your chart. If you don't define this annotation, what is going to happen is that Helm may try to register and deploy the MysqlBackup custom resources first, but that will fail. You want to avoid that, and that's why defining this annotation is essential.
The third guideline is just calling out the fact that, because you are now developing a Helm chart, it's possible for you to actually customize your operator deployment for different environments. There are two ways you can do it. values.yaml is a very simple way to customize an operator, say by just changing the image: if you have a MySQL operator and you want to use different Docker images in different environments, values.yaml is a very good way to do that.
Where values.yaml is not sufficient, the other way you can still customize your operator is by using ConfigMaps, again a standard Kubernetes artifact. If you can define these ConfigMaps ahead of time, then you can package them as part of your Helm chart, and that way you can customize your operator.
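A sketch of the image customization described above; the field names are illustrative rather than any particular chart's actual values:

```yaml
# values.yaml of a hypothetical operator chart
operator:
  image: example/mysql-operator:0.3.0   # default image

# templates/operator-deployment.yaml then references it:
#   image: {{ .Values.operator.image }}
#
# and a dev environment overrides it at install time:
#   helm install --set operator.image=example/mysql-operator:dev ./chart
```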
The fourth guideline is to make sure that your custom resources are created in such a way that they are, semantically and structurally, the way you expect them to be, and part of that means defining CRD validation rules. This is now part of Kubernetes upstream; I believe from 1.15 onwards this is possible. What you need to do is add these rules as part of the custom resource definition itself.
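A sketch of such rules in a v1beta1 CRD; the property names are hypothetical:

```yaml
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: moodles.moodlecontroller.kubeplus
spec:
  group: moodlecontroller.kubeplus
  version: v1
  scope: Namespaced
  names:
    kind: Moodle
    plural: moodles
  validation:
    openAPIV3Schema:
      properties:
        spec:
          # reject instances that omit the required field
          # or give it the wrong type
          required: ["mySQLServiceName"]
          properties:
            mySQLServiceName:
              type: string
```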
So that was the fourth guideline. Now for the final guideline, and this is something new that we at CloudARK have been developing, and I want to spend the rest of the talk on it: we recommend that you add these new annotations. We call them platform-as-code annotations. There are two annotations that we have defined: one is the usage annotation and the other is the composition annotation.
We add them, similar to the crd-install hook, directly on the custom resource definition. What these annotations do is this: the usage annotation allows you, as an operator developer, to package any kind of information that is beyond the spec properties but which is needed by your users to consume that custom resource, and I will talk about an example of how we use this. The value of the usage annotation is just the name of a ConfigMap.
The second annotation is the composition annotation, and its values are all the underlying resources that your operator is going to create as part of instantiating a custom resource.
What I mean by that is: in this case, the custom resource that this operator manages is the Moodle custom resource, and as part of creating a Moodle instance, it's going to essentially create a Deployment, Service, PersistentVolumeClaim, Secret, Ingress, and so on. So any and all resources that your operator is going to create, you just list them out as part of the composition annotation.
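Putting the two annotations together on a CRD looks roughly like this; the annotation keys follow CloudARK's platform-as-code convention as I understand it, so treat the exact spellings as illustrative:

```yaml
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: moodles.moodlecontroller.kubeplus
  annotations:
    # usage: name of a ConfigMap holding man-page-style documentation
    platform-as-code/usage: moodle-operator-usage
    # composition: the underlying resources created per Moodle instance
    platform-as-code/composition: Deployment, Service, PersistentVolume, PersistentVolumeClaim, Secret, Ingress
```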
And how are these annotations useful? Before jumping into that, I would like to talk about application developers and how they use custom resources.
One of the things that is different with Kubernetes, compared to traditional cloud systems, is that you can extend the control plane of Kubernetes by adding new operators, and that makes the challenge of creating declarative application stacks a little bit different from traditional infrastructure-as-code systems like Terraform or CloudFormation. In those systems, the underlying API is never going to change; it's known ahead of time. With Kubernetes, that's not the case.
You have these operators and custom resources that can be installed at any time on any cluster. So that's one aspect of how Kubernetes is different. The second aspect: if you think about the custom resources that we create, they are essentially embedding some higher-level actions. Let's say I am using a custom resource for MySQL: there are only certain things that that custom resource is declaratively going to expose, things like taking a backup, or instantiating a new database in a certain manner. Compare this with traditional Platform-as-a-Service systems like Elastic Beanstalk.
They were providing a very constrained manner of deploying your applications. These custom resources are essentially defining that kind of platform experience in a declarative manner. So for the ensemble of custom and built-in resources that one now creates when operators are involved, the declarative application stacks, it's natural to think about these as your platform definitions as code.
So for application developers, when they are working with custom resources, the challenges that arise are: how do I discover what custom resources exist in my cluster? How do I use these custom resources? And so on. That's one aspect, the discovery challenge. The other challenge is how to bind these custom resources together. In Kubernetes there are things like names, labels, and label selectors, and those are enough to bind different built-in resources together; with custom resources, is that enough, or is something more needed?
If you look at values.yaml, or even Kustomize for that matter, what you have is values that are defined and which you can change, but these values are not going to change based on what exists in the cluster. Those are the gaps that need to be filled for creating these application stacks, these platform stacks, declaratively. For that we have been developing the KubePlus API add-on, which fills these gaps.
KubePlus is meant to complement existing tools, so it complements Helm and it complements Kustomize, and it consists of three independent components. There is an aggregated API server, which helps with discovery; there is a mutating webhook, which helps with binding resolution using runtime information; and there is a PlatformStack CRD as well, which ensures that the ordering between custom resources is maintained. For discovery, what KubePlus provides is new API endpoints, accessible directly through kubectl, for finding runtime and static information; and for binding, we provide name-based, runtime-value-based resolution, and I'll show you an example of that.
Okay. In order to explain this in more detail, this is the sample scenario I'm going to use. Assume that we have a Kubernetes cluster in which two operators are installed. One operator is the MysqlCluster operator from Presslabs, which I mentioned earlier.
The other operator is a Moodle operator, which allows you to deploy multiple Moodle instances. The way we are going to create a stack is essentially that, in the Moodle custom resource, I am going to use the MysqlCluster custom resource, and the way this association happens is that the name of the underlying Service that the MysqlCluster creates needs to be passed in as part of the Moodle custom resource instance. So here, this is the name that I'm passing in. The question is:
This is something that I've defined, but how does an application developer get to know that this is the name they need to pass in? And can that binding be resolved automatically? For the application developer, as I said, we introduced two endpoints. There is a man endpoint, which is essentially a way for you to get man-page-like information for the custom resource, and the other is the composition endpoint, which allows you to get the composition tree of a custom resource.
So here's an example of using the composition endpoint on the MysqlCluster instance, and this uses the composition annotation that I showed you earlier. From that annotation, our API server knows which component objects get created. Behind the scenes it is parsing the owner-reference tree and building out this particular tree. Based on this output, I can find out, as an application developer, that there are multiple Services.
One of the Services is named mysql-master, and based on the man-page information I can choose to put that name in my YAML. That is good, but I want this to happen automatically. So, instead of having me do this manually, we define these binding functions, and one of them is ImportValue. These are function definitions whose structure is inspired by CloudFormation. How do we use these ImportValue functions? It's like this.
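A sketch of that binding inside the Moodle custom resource; the ImportValue spelling follows the CloudFormation-inspired style described here, and both the function syntax and the field names should be treated as illustrative:

```yaml
apiVersion: moodlecontroller.kubeplus/v1
kind: Moodle
metadata:
  name: moodle1
spec:
  # instead of hard-coding the Service name (e.g. cluster1-mysql-master),
  # import it from the MysqlCluster instance; the mutating webhook
  # resolves this at runtime before the object is persisted
  mySQLServiceName: "Fn::ImportValue(cluster1.service)"
```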
Without changing the YAML spec, I will be able to deploy the same thing in another environment as well, and the resolution will happen at runtime, based on whatever MysqlCluster Service gets created in that particular cluster. So, in summary: create Helm charts for your operators and use the guidelines I talked about. The red lines here are a subset; there are additional guidelines, and I have a link to those.
Then there is this notion of declarative application management in Kubernetes, and our view is that with custom resources, declarative application management needs special attention, especially if you are using multiple operators in a cluster. Help your application developers streamline the discovery and binding of those custom resources by adding platform-as-code annotations. Those annotations will actually make sense only if you also have the KubePlus API add-on installed in your cluster. It's open source as well, so install it; you can use it alongside Helm, and it doesn't replace Helm or Kustomize.
[Audience question] Yeah, that's a great question. The question is: how involved is deploying KubePlus itself? For deploying KubePlus, we are in the process of building a Helm chart for doing that; it's not there yet. But it's not involved at all. You would need cluster-admin privileges to do it, because it involves installing the aggregated API server, but apart from that, it's like deploying any other Kubernetes artifact.
[Audience question] That's a great question. The question was: instead of having the hook defined like this, can you just have a separate Helm chart for installing only the custom resource definitions? You could do that, but you need to ensure that that chart gets deployed first, and then the question is whether you are going to ensure that ordering through some outside means. The hook helps you avoid all of that. But yes, you could do that.