Operator Framework SIG, May 17, 2019
IBM Serverless Operators Update
Kyle Schlosser, IBM
What we wanted to do is socialize some work that we've been doing at IBM on what we call serverless operators, and I'll describe more of that in a moment, but I want to start by just introducing myself. My name is Kyle Schlosser; I'm a software engineer at IBM, and I've been collaborating on this work along with Andrea, who I believe has presented to this group in the past.
So what do we mean by serverless operators? There are two aspects to that which we're focused on. The first is scale to zero, which is effectively shutting down resources that are not being utilized so that we can achieve greater resource efficiency. In the broader scale-to-zero space, scale to zero also includes scaling up, but in the controller space that's a secondary concern.
We're also talking about event-driven programming. This is the more traditional serverless, where you're trying to achieve different programming models, simplify the construction of applications, and react to events as they occur rather than having these long-running active server processes. I'll go into more detail on what we're doing in each of these, but I did want to say that we're focused heavily on scale to zero; this event-driven programming model work is really in its infancy.
An individual controller is using half a percent of the overall CPU of a single virtual machine, and ten megabytes of memory are being consumed. But if we start considering running controllers that are dedicated to individual namespaces, that are redundant, or that are scaled out across many different types of software, we can start to see that the aggregate effect has an impact on the overall capacity required to run all of these various controllers. And in terms of their access pattern, a lot of controllers don't really need to be running all the time.
So if we think about, say, our operators which bootstrap the platform: unless we are changing the platform after the fact, we're not making a lot of modifications that the controller needs to react to, so it could shut down in the meantime. I've mentioned this already, so I won't go into further detail, but basically that slide was just talking about the aggregate effect of having a lot of different controllers up and running.
So we have proposed two different technical solutions for how we can achieve this serverless operator, or serverless controller, concept. The first is based upon Knative, and as I was saying, one of our researchers has contributed to this: we developed a Knative event source for Kubernetes API resources.
Within the Knative architecture we have Knative Serving, which provides the scale-to-zero capability; it's sort of a substitute for Deployments that is able to scale to zero and scale up and down based upon incoming traffic. The other side of that is the generation of events, which is done through these event sources. So by creating a Kubernetes API event source, we were able to combine Knative Eventing and Serving together into a full solution which allows scale to zero of controllers.
It also enables you, if you wanted to, to complement an existing Knative application by incorporating Kubernetes API events into that application; you can definitely do that. But then there's the second technical solution we have, and this is sort of a complement to the Knative solution.
It just reacts to annotations that have been attached to the controller. So we'll spend a few minutes talking about the Knative approach, but I'm going to spend a bit more time on the controller zero-scaler, in part because that's what we've been working on more recently, and so I have a running demo that I'll show you in a few minutes.
Okay, so on the Knative side: I've talked through this a bit already, but I'll talk through it again. With the Knative event source there are two components. There's an event adapter, which you see on the left of the diagram, and then you have one or many receivers, which are the substitute for the traditional controller or controller-manager processes. On the right side, the controller manager has been reconfigured so that it accepts CloudEvents.
CloudEvents is the transport mechanism for events within Knative Eventing, and so we've had to slightly alter the controller process so that, rather than reaching out to create a watch subscription against the Kubernetes API server, it instead waits for an inbound event, which happens to come over HTTP, and then processes those events by parsing them and unmarshaling them into the proper Go data structures before sending them into the reconcilers.
In the lab we've prototyped this by adapting the controller-runtime framework to accept CloudEvents. Then on the sender side, on the left side of the diagram, we have the event source, which consists of a Kubernetes custom resource definition in the Knative Eventing source style. There's a controller watching for these resources to be created; the resource defines which API types we'll be watching for, and it then sends those events off to an event sink.
Using that Knative Eventing style, we've been able to successfully demonstrate controllers scaling to zero and accepting events, but we also found some drawbacks in the process. One of those is that we weren't able to quite achieve the scale-to-zero footprint optimizations we were hoping for, and this was a result of the event adapter.
On the left side of the diagram the event adapter is represented as an individual deployment, and due to the RBAC requirements of trying to provide independent RBAC isolation between different controllers, we ended up booting up an event adapter for every controller. So we didn't achieve a lower number of container processes being executed. That's an area that I think we'll look at in the future, in terms of how we can optimize that particular event source.
We also have this dependency upon Knative, including Knative specifics within the controller implementation. We needed to modify the controller implementation in order to accept these CloudEvents and to open up this additional HTTP transport, and so there were some additional requirements on the controller that we hope to eliminate. The source code for that event adapter has been published, and it's actually changed since we originally pushed the contribution: it's now found in the Knative Eventing repository, and it's called the API server source.
So if you were interested in programming to that event source, you can go out and grab the code, although the last I looked there weren't a lot of examples, so you might have to wade through some of that.
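For orientation, an ApiServerSource resource looks roughly like the following. This is a sketch following the shape of the upstream API; the exact apiVersion has changed across Knative releases, and the names and field values here are illustrative.

```yaml
apiVersion: sources.knative.dev/v1
kind: ApiServerSource
metadata:
  name: controller-event-source
spec:
  serviceAccountName: events-sa   # needs RBAC to watch the listed resources
  mode: Resource                  # deliver the full resource, not just a reference
  resources:                      # the API types this source watches for
    - apiVersion: v1
      kind: Pod
  sink:                           # where the CloudEvents are delivered
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: scaled-controller     # the scale-to-zero receiver (hypothetical name)
```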
Okay, so the zero-scaler is the second technical solution that we looked at in this space, and with the controller zero-scaler what we're trying to do is just shut down idle controllers.
We've tried to do this with minimal or no modifications to the controller, so that existing operators could take advantage of it with little or no modification. With the zero-scaler, the way it works is that the existing Deployments or StatefulSets are annotated with some information that is consumed by a secondary controller component, and that basically tells the zero-scaler which operators are interested in which events and which types from the Kubernetes API type system.
In addition, there's an attribute called the scale timeout, which determines how long the controller is idle before we scale it down to zero.
So a typical use case you might have is that, as a Kubernetes operator provider, you could put some of the annotations onto your StatefulSets or Deployments; those would just be ignored on systems where the zero-scaler is not present. You could also put the scale timeout on as well, or you could leave that to the cluster administrator.
That way you could sort of pick and choose, or leave it up to the actual end users of an operator, whether they would see that zero-scale capability or not. Okay, so I mentioned this is driven by annotations. The annotations are just added to the StatefulSet or Deployment that's representing the individual operator, and then in terms of the runtime we have this zero-scaler.
That's the component name, but it is itself a controller that's watching for these different things to happen, and it scales these deployments down to zero. If an individual operator controller was expecting not to get scaled down, say because it has background activities it's doing or it's starting its own internal timers, this technology approach wouldn't really be suitable. It works for those controllers that just respond to reconciliation events.
Eventually they process the active reconciliations, and when there are no more reconciliations that need to be processed, at that point they can basically just be stopped. It also relies on the fact that most controllers will, on startup, re-reconcile those events.
To show an actual example: this is what you might see as the typical annotations, so there are three annotations that we're watching for here. The first represented here is the owned kinds; those are the types of resources that this controller owns or is creating and generating. The second is the timeout, which is how long we are idle before the controller is scaled down.
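A sketch of what such an annotated StatefulSet might look like. The annotation keys and values here are hypothetical stand-ins, since the exact names used by the controller-zero-scaler are not shown in this talk:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: istio-operator
  annotations:
    # Hypothetical annotation keys, for illustration only.
    zeroscaler.ibm.com/owned-kinds: "Istio"    # resources this controller creates
    zeroscaler.ibm.com/watch-kinds: "Istio"    # changes that should wake it back up
    zeroscaler.ibm.com/scale-timeout: "30s"    # idle time before scaling to zero
spec:
  replicas: 1
  # (rest of the StatefulSet spec unchanged)
```

In Kubernetes style, the same annotations could equally be applied after the fact, for example with `kubectl annotate statefulset istio-operator zeroscaler.ibm.com/scale-timeout=10s` (again with hypothetical keys).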
What happens is, for the watch kinds or owned kinds, if there are no modifications to those, then the controller gets scaled down to zero. If a change does occur to one of those (in this case here, if the Istio resource is modified), then the controller will get scaled back up. Okay, and that scale-down and scale-up is orchestrated through what is itself a controller.
We call that the zero-scaler controller. Okay, so the annotations, of course in Kubernetes style, can either be incorporated into the application itself or they can be applied after the fact, and at the bottom of the screen image is an example of what that looks like. Okay, so I'll show you the demonstration here. Right now I have a brand-new Kubernetes cluster; it happens to be minikube, and I just started it up.
I haven't deployed anything onto it, and I've got two components that I'm going to deploy out. The first is the Istio operator, so this is just going to apply the various resources for that. The other component is the zero-scaler controller, which I'm just going to run actively rather than containerized; that will come up and should be up and running here. Okay, and then I'll start a watch, so you can see that the Istio controller is in fact up and running right now.
So at this point it's just the Istio controllers; these two controllers are up and running, but nothing is really different in this system from if you just went out and installed the Istio controller. Okay.
So the next thing I do is annotate that StatefulSet for the Istio controller. I'm going to use a ten-second timeout, which I think is probably shorter than what you'd typically use in a production case, but it works well for this demonstration, and then I'm just specifying which watch kinds are relevant to that controller.
That will then be the determination of when the controller actually gets scaled down. Okay, so we've now annotated that controller, and after ten seconds of idle we should see that it scales down, and you can see that's starting to happen now: the scale-down has been requested, but it hasn't been achieved yet. Once the full scale-down has been achieved, you'll see zero out of zero, and sometimes on a StatefulSet that takes longer than on a Deployment.
But now, since we're idle again for that ten-second window, we're getting scaled back down to zero again. And so if we have another modification, it will again just continue to repeat that process: recognize that something has in fact changed and scale it back up, or scale it down if it's been idle beyond the timeout.
Okay, so then just to wrap things up on the zero-scale solution: currently it's just been developed locally. It hasn't been open-sourced yet, but we expect to open-source it within the next couple of weeks under the IBM namespace as controller-zero-scaler, so that will be available shortly. And then, as I mentioned before, the Knative event source has already been open-sourced and is being worked on and matured in the community.
But then also on the Knative side, there's one thing the Knative event source controller would handle a bit better: you could specify parameters for how high you wanted to scale up. The zero-scaler controller doesn't handle that case right now; it's just scaling up to one. That's a feature that we would need to look at implementing as a way to handle that.
This is Michael. I just wanted to add that that presentation was super cool. I've been really interested in that topic over the last few months and just haven't been able to carve out any time myself to really take it seriously. So it's actually really great to see that you guys have dug so deep on this already and are going to keep going. I'm looking forward to seeing when you make that available for the rest of us to look at. Thanks.