From YouTube: Operator Framework SIG March 15 2019 Full Mtg Recording
Description
Operator Framework SIG March 15 2019 via OpenShift Commons
Full Mtg Recording
Update on OperatorHub.io – Rob Szumski, Red Hat
StorageOS operators – Simon Croome, StorageOS
Portworx Stork operator – Vick Kelkar & Dinesh Israni, Portworx
IBM, Kubernetes and Operators - Andre Tost, IBM
MLFlow – Zak Hassan (Red Hat)
Update on Javascript Operator - Marc Boorshtein, Tremolo Security
A: All right, everybody, so welcome again to another Operator Framework SIG meeting. Lots has been happening, and lots more will be happening over the coming weeks and months, so we're definitely interested in hearing your feedback on the operators that have been written and the status of the things that you're working on.
B: Okay, so, since we last met, we launched OperatorHub.io, and this is what that looks like: a number of really great operators that work with the Operator Lifecycle Manager and, you know, have been tested on various upstream Kubernetes providers and things like that. So we'd love everyone to go here, check out some of these operators, and list your own operators. There are some instructions here at the top, under the Contribute link, for submitting your operator, and what this does is, you know, render a bunch of useful information about your operator: its capability level, other versions if you've got multiple versions, things like that.
A: Let's see, the PlanetScale Vitess one was just recently out but got updated as well, so the update process seems to be working pretty well, and we've got a couple of other pull requests coming in, including some operators I had never heard of before. So it's gaining some nice traction, and people are giving us some good feedback. Definitely, if you have feedback on it, let us know, or make a pull request, especially against the docs. If there's something missing, let me know and we'll try and get that updated. Thanks.
B: Like you mentioned, we've got some PRs waiting in the wings, I think, for some folks that are on today. StorageOS has a PR open, and there's a Spark operator from Google that is in PR, and some things from Sysdig: their monitoring product, as well as Falco, the anomaly detection tool. So a lot of good stuff coming as well, and we'd love to see more, yeah.
A: Well, one just floated by this morning from micro rocks, so now I'm going to go look at their API and all that good stuff. So it's coming, it's happening, and we're really, really pleased that we're all participating, so thank you very much. So, let's see who we have here: I think we have StorageOS, then IBM and Andre Tost toward the end. Simon, if you take over the screen share and kick us off with an update on your StorageOS operator story?
C: Okay, great. I don't know how many people are familiar with StorageOS, so I'll just give one minute: we're a software-defined storage product, essentially designed for running applications like databases within containers. We're scale-out: we run as a container on every node in the cluster, and any storage capacity that our container can see we'll aggregate into virtual volumes. We do things like replication, encryption, and all the typical sorts of enterprise-type features, but importantly, we're block storage.
C: So it's designed with performance in mind. We've been working on operators for a while now, and we've definitely seen how operators can make it easy for users to not only install the product but also handle the day-to-day management. I guess with most operator demos you'll see there's probably not a lot to see, you know, because a lot of the good stuff in an operator is hidden in the knowledge we build into the operator.
C: So I have installed the StorageOS operator in this cluster, and typically you only ever run one instance of StorageOS across your cluster. We do have a few different CRs: one for the cluster itself, another for kicking off an upgrade, and so on. One of the big benefits of the operator is actually this upgrade process, which lets us hide all the complexity of a rolling upgrade from the end user and allows us to do a non-destructive upgrade, totally hands-free.
C: It was fully running across the cluster; that usually takes about a minute to come up. One of the things we'd like to do upcoming is add a few more options in the UI for configuring the cluster once it's up and running, so things like taking nodes offline, changing the log level, things like that. Once this cluster comes up it'll be fully usable, and there's not a lot more to it than that.
C: We've implemented, much like the etcd operator, a status when the cluster comes up: we'll show a phase for status, but I'm not sure if you want to sit around and wait for that. So I did write up some lessons learned. It wasn't me that wrote a lot of this; there's a guy called Sonny, who I think is on the call. But I've mentioned some of the reasons why we went with the operator, and a lot of it is hiding complexity.
C: We decided not to rely on that for the resources the cluster CR creates, especially when encryption is enabled. We allow users to store some of the encryption keys as secrets, so obviously they don't want to lose all of those; the ownership currently is one-way, and they'd go when you delete the namespace, so we take care of those ourselves.
C: Another thing, and this is more of a question to anybody who's more familiar with it: we would like to use priority classes, especially system-node-critical. Essentially, we never want the storage system to get evicted from a node, but that requires us installing in the kube-system namespace.
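The pattern being described, a storage daemon that should never be evicted, can be sketched as a DaemonSet carrying the highest node-level priority class. This is a minimal illustration under assumed names, not the actual StorageOS manifest; at the time of this talk, pods using the `system-*` priority classes had to live in `kube-system`, which is the constraint mentioned above.

```yaml
# Hypothetical DaemonSet sketch: pin a storage daemon to the highest
# scheduling priority so the kubelet will not evict it under pressure.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: storage-node          # illustrative name
  namespace: kube-system      # system-* priority classes required this namespace at the time
spec:
  selector:
    matchLabels:
      app: storage-node
  template:
    metadata:
      labels:
        app: storage-node
    spec:
      priorityClassName: system-node-critical
      containers:
        - name: storage-node
          image: registry.example.com/storage-node:1.0   # placeholder image
```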
C
And
I
guess
a
bit
on
where
we're
going,
we're
looking
at
implementing
a
node
controller,
just
to
give
the
better
visibility
today
to
the
kubernetes
cluster,
and
that
will
help
us
make
better
dates
and
placement
decisions.
And
it's
finally
one
thing
that
we
that
we
really
appreciate
it
was
the
testing
framework
that's
available
within
the
operator
SDK,
if
I
just
sort
of
flip
over
to
white
go
to
our,
so
our
operators
or
smart
github
if
I,
go,
for
example,
to
our
Travis.
G: So we deploy in the cluster and we check if the daemon sets and stateful sets are healthy, and, like, we have a node controller, so we test that when we update the node labels in Kubernetes, they're updated in our storage labels as well. Small tests, all written in Go, that kind of thing, yeah.
E: Another question that I had: usually when we see operators, the version of the managed application is handled within the custom resource, the primary custom resource. There's usually not an additional, separate CR that triggers an update. I wonder what was the compelling event that motivated you to separate that out, and not just have code executed based on changing a version field in the spec?
C: You know, we don't want Kubernetes to stop the StorageOS container at a bad time, so we want to control that upgrade process. Essentially, that lets us, in the background, before we've upgraded a node, create a new replica, migrate across, move things around, so then we can make sure that we don't lose anything.
G: I can answer that. So we have a really big process for upgrading: we have a separate controller for upgrade, which goes through different steps. It changes the replicas of all the applications, updates the daemon set with the new image, and checks the health. Once everything is up, it reverts all the changes it made, takes the whole system out of maintenance mode, and restores everything. So there is a lot happening which we cannot do in the same controller, hence the separate CRD for that.
D: Okay, I think it's my mic problem; hopefully this is a little bit better. Yes, so we have a couple of folks here from Portworx, and we just wanted to start off with a quick, you know, setting of the stage. We are in the same space as StorageOS, so it's great that they are also here today, and today we are going to quickly talk about Stork, which is our storage operator runtime for Kubernetes.
D: So we are a cloud-native data platform. We not only do the normal, you know, activities related to storage, virtualized and presented as different pools of storage for the applications, but we also work with different schedulers: Kubernetes, OpenShift and others. We are also application-aware, so it allows developers to take a look at not only the application lifecycle; the storage operator is kind of helping the developer do self-service. But that's a very high-level overview of the Portworx platform.
D: I do want to highlight that we have certified the container image of our product, and it's already in the OpenShift Red Hat registry, and we are working on getting our operator certified and pushed into OperatorHub. So we are actively working on that. I just wanted to set the stage, and I'm going to turn it over to Dinesh to now talk about the Stork operator.
H: Loud and clear? Okay, cool. So Stork basically stands for our storage operator runtime for Kubernetes, and the goal for writing this was basically to provide additional intelligence for storage solutions in Kubernetes, because we found that there are some things that Kubernetes can't do natively, like providing hyperconvergence, the ability to take snapshots, and some other app-consistent stuff that we wanted to provide. So Stork is an open-source project, and there's a GitHub link.
H: Apart from the overview, we've also written a blog with some more technical details on how some of the things work within Stork, so you guys can take a look at that after the presentation if you want. So let's go over the high-level details of the features that we have in Stork. We provide hyperconvergence by using a scheduler extender, so that it prioritizes the nodes where data is located. Then we also do additional health monitoring for pods, so that you can actually have higher uptime for your stateful applications.
H
Then
we
also
have
support
for
volume
snapshots
based
on
the
kubernetes
incubator
project,
but
from
that
we've
also
added
CR
DS.
For
basically
taking
group
volume
snapshots
and
also
taking
application
consistent
snapshots,
so
you
can
actually
run
pre
and
post
exact
commands
before
and
after
you
want
to
take
a
snapshot
so
that
you
can
either
choirs
or
flush
data
to
the
disk
before
doing
snapshots,
and
the
latest
operators
that
we've
added
in
stark
are
basically
around
pairing
clusters
and
migrating
data
between
two
kubernetes
clusters.
H
So
this
actually
migrates
your
data,
as
well
as
your
kubernetes
resources
between
clusters-
and
this
is
useful
for
cases
where
you
wanna
do
like
glue
billing
deployments
or
you
have
a
Devon
production
environment
where
you
were
basically
want
to
migrate,
your
workloads
from,
and
then
we
also
written
a
small
tool
called
stock
curl
to
basically
help
manage
the
resources
more
easily,
because
if
you
just
use
queue
cuddle
with
key
IDs,
they
don't
give
a
lot
of
information.
So
stocker
basically
helps
you
get
more
information
and
manage
them
better.
H: So I'm just going to walk through some of the CRDs that we've actually created. These are some of the first CRDs that we created: we basically have a Rule CRD that can be used to run commands when you're taking snapshots or group snapshots, and I've shown examples of how you would do this for MySQL.
H
In
this
rule
you
would
basically
log
the
MySQL
tables,
take
a
snapshot
and
then
basically
unlock
the
table,
but
Cassandra
you
can
do
stuff
like
you
can
make
sure
that
all
the
data
is
flush
to
the
disk
before
you
actually
take
a
snapshot.
So,
basically,
once
you
define
these
rules,
you
can
then
refer
to
these
rules
from
a
snapshot
or
a
group
snapshot
CRD.
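A sketch of what such a Rule might look like, based loosely on the Stork documentation of the era; the API group/version and exact field names are assumptions and should be checked against the project's repo:

```yaml
# Hypothetical Stork Rule: quiesce MySQL before a snapshot is taken.
apiVersion: stork.libopenstorage.org/v1alpha1
kind: Rule
metadata:
  name: mysql-presnap-rule
rules:
  - podSelector:
      app: mysql                       # pods this rule applies to
    actions:
      - type: command
        # Keep the read lock held in the background while the snapshot runs.
        background: true
        value: mysql -uroot -p"$MYSQL_ROOT_PASSWORD" -e "FLUSH TABLES WITH READ LOCK;"
```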
H
So
on
the
right
side,
if
you
see
I
have
just
defined
a
group,
snapshot
CID
and
I've
specified
that
in
the
only
in
the
Cassandra
namespace
I
want
to
basically
take
a
group
snapshot
of
all
PVCs
with
the
label
app
Cassandra
and
before
running
it.
I
just
want
to
run
the
Cassandra
Cassandra
rule
to
make
sure
that
all
data
is
flush
to
the
disk.
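Roughly, the spec being described might look like this; again a sketch with assumed field names rather than a verbatim copy of the slide:

```yaml
# Hypothetical GroupVolumeSnapshot: snapshot all Cassandra PVCs together,
# running a flush rule first so the snapshot is application-consistent.
apiVersion: stork.libopenstorage.org/v1alpha1
kind: GroupVolumeSnapshot
metadata:
  name: cassandra-group-snapshot
  namespace: cassandra          # only PVCs in this namespace are considered
spec:
  preExecRule: cassandra-rule   # e.g. flushes data to disk before snapshotting
  pvcSelector:
    matchLabels:
      app: cassandra
```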
H
So
some
of
the
ladies
see
are
these:
have
you've
added
around
customer
pairing
and
migration.
What
you
can
do
in
this
is
basically
you
can.
You
can
specify
how
you
want
to
connect
to
a
storage
system
on
another
cluster
as
well
as
kubernetes
on
another
cluster,
so
they're?
Basically,
two
parts
in
this
spec
the
options
part
basically
has
the
IP
token
and
the
port
for
the
storage
cluster.
H
Then
the
conflict
part
basically
has
what
you
would
specify
in
your
cube
config
on
how
to
connect
to
another
cluster,
so
I'll
be
going
through
a
demo
of
the
cluster
penning
and
migration
and
I
can
show
you
how
the
specs
would
actually
the
entire
speculate.
But
over
here
you
can
see.
This
is
just
an
example
of
what
my
remote
clusters
API
server,
would
look
like.
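Putting the two halves together, a ClusterPair might be sketched like this; the shape follows the description above (storage options plus kubeconfig-style config), with all names, addresses and fields illustrative:

```yaml
# Hypothetical ClusterPair: "options" pairs the storage clusters,
# "config" carries kubeconfig data for the remote Kubernetes API.
apiVersion: stork.libopenstorage.org/v1alpha1
kind: ClusterPair
metadata:
  name: remote-cluster
  namespace: mysql
spec:
  options:
    ip: "192.0.2.10"            # a storage node on the destination cluster
    port: "9001"                # port the storage API server listens on
    token: "<cluster-token>"    # generated on the destination storage cluster
  config:                       # same shape as a kubeconfig for the remote cluster
    clusters:
      - name: remote
        cluster:
          server: https://remote-cluster.example.com:6443
          certificate-authority-data: "<base64 CA data>"
    users:
      - name: remote-admin
        user:
          client-certificate-data: "<base64 cert>"
          client-key-data: "<base64 key>"
```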
H
You
would
then
have
your
you
would
have
your
different
credentials
and
certificate
data
in
this
in
this
ceoddi,
and
once
you
basically
specify
a
cluster
pair,
you
can
then
start
migrating
data
between
the
clusters.
So,
for
example,
this
is
the
migration
cid,
and
in
this
you
would
basically
I
would
have
have
a
reference
to
the
remote.
The
cluster
pair
that
you
created
in
the
left
side-
and
you
would
basically
say
that
you
want
to
there-
are
a
couple
of
options.
H
And
some
of
the
others
are
series
of
you
are
working
on
are
basically
around
scheduling
things
so
for
stuff,
like
snapshots
and
migrations,
you
basically
want
to
be
able
to
schedule
them
regularly
so
that
you
don't
have
to
trigger
them
manually
all
the
time.
So
again,
we've
defined
see
IDs
for
all
of
this,
so
you
can
specify
a
schedule
policy
to
say
when
things
should
be
triggered,
so
you
can
have
them
at
intervals.
H
Then
you
can
specify
you
want
to
trigger
them
daily,
weekly
or
monthly,
and
once
you
specify
these
share
your
policies,
you
can
basically
refer
to
them
from
different
types
of
schedules.
For
example,
you
can
have
a
snapshot
schedule
saying
that
you
want
to
basically
take
a
snapshot
every
minute
and
you
would
basically
specify
that
in
the
scheduled
policy,
Nemo
you're
in
the
snapshot
schedule,
you
would
basically
then
just
have
a
template
for
the
actual
snapshots
paid
for
yeah.
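As a rough illustration of that pairing, a policy plus a snapshot schedule that references it might look like this; field names follow the Stork conventions as best as I can reconstruct them and are assumptions:

```yaml
# Hypothetical SchedulePolicy plus a snapshot schedule that references it.
apiVersion: stork.libopenstorage.org/v1alpha1
kind: SchedulePolicy
metadata:
  name: every-minute
policy:
  interval:
    intervalMinutes: 1          # could instead be daily:, weekly:, or monthly:
---
apiVersion: stork.libopenstorage.org/v1alpha1
kind: VolumeSnapshotSchedule
metadata:
  name: mysql-snapshot-schedule
  namespace: mysql
spec:
  schedulePolicyName: every-minute
  template:
    spec:                       # template for each snapshot that gets created
      persistentVolumeClaimName: mysql-data
```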
H
So
this
is
basically
saying
I
want
to
take
a
snapshot
of
my
mind
of
the
my
SQL
data
PVC
every
depending
on
how
the
test
policy
has
been
configured
and
you
can
basically
use
the
same
thing
in
the
same
policy
in
a
migration
schedule
too.
So
again
over
your
this.
The
spec
is
basically
a
template
from
the
migration
spec
and
you
can
specify
the
same
options
over
here
as
well
as
the
schedule
policy
that
you
want
the
migration
to
be
triggered
on.
H
So
what
I'm
going
to
do
now
is
I'm,
go
to
my
terminal
and
show
you
a
demo
of
how
the
migration
CRE
works,
so
not
from
here.
A
H
H
H
H: While this is happening, let me just show you what the cluster pair spec looks like. So this is how you would have a cluster pair. There's a lot of information that needs to be transferred, basically some information about the kubeconfig that needs to be transferred from the destination to the source and then formatted. So we've actually provided storkctl commands to do this: you can run a command to basically generate a template for this, and if you do this, it actually spits it out.
H: Let me show you what the command was. So basically, you can run storkctl generate clusterpair, give it the namespace that the spec should be created in and the name of the spec, and this would spit out a template. Then you would just have to add the storage options into the spec, and then you can copy it to your source and apply it.
H
So
this
is
one
of
the
specs
one
of
the
templates
that
I've
generated
over
here
and,
as
you
can
see,
my
set
up
the
IP
of
the
storage
node
on
the
other
cluster,
just
one
of
the
nodes
and
a
token
that
I
generated
from
the
storage
cluster
and
the
port
that
our
API
server
is
running
on.
So
let
me
make
sure
that
this
is
up
now
that
this
is
up
I'm
just
going
to
apply
the
first
pair
that
we
saw
and
again
I'm,
going
to
use
stock
curl
to
look
at
the
status.
H
So,
as
you
can
see
over
your
storage
status
is
ready
and
the
scheduled
status
is
ready.
This
basically
means
that
we
were
able
to
pair
with
our
the
scheduler
on
the
other
side,
as
well
as
the
storage
on
the
other
side.
If
you
want,
we
can
look
at
the
ml
and
away
on
the
amyl.
It
actually
has
the
storage,
the
cluster
ID
for
the
remote
storage
ID,
and
it
actually
has
the
status
for
both
the
scheduler
and
the
stand
and
the
storage.
H
So
now,
once
we
have
the
clusters
paired
up,
we
can
start
migrating
applications
between
them.
So
it's
on
what
I
showed
you
as
an
example
this.
This
is
how
we
would
migrate,
MySQL,
so
I'm,
just
gonna
point
to
the
remote
cluster
that
we
that
we
just
paired
with
and
I'm
gonna
say:
I
want
to
migrate
volumes
as
well
as
the
resources
and
once
the
resources
are
migrated,
I
basically
want
to
start
them
up.
So
before
we
start
this.
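The Migration resource being applied here would look roughly like the following; this is a sketch of the spec described above (cluster pair reference, volumes, resources, start-applications flag), with illustrative names:

```yaml
# Hypothetical Migration: move the mysql namespace's volumes and resources
# to the previously paired cluster and start the applications there.
apiVersion: stork.libopenstorage.org/v1alpha1
kind: Migration
metadata:
  name: mysql-migration
  namespace: mysql
spec:
  clusterPair: remote-cluster   # the ClusterPair created earlier
  namespaces:
    - mysql
  includeVolumes: true          # migrate the data
  includeResources: true        # migrate the Kubernetes objects
  startApplications: true       # scale the apps up on the destination
```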
H: What I'm going to do is start a watch over here on the MySQL namespace, and, as you can see, there are no resources over here right now. What I'm going to do is just apply this. You can actually use kubectl or storkctl to start the migrations too; there are sub-commands to do that.
H: Now that we've applied it, I'm just going to use storkctl to watch, and, as you can see, what will happen is: first, it's going to start migrating the volumes that are used by the applications in the MySQL namespace, and then it's going to start migrating the resources. As you can see, right now it's in the process of migrating the volumes; once this is done...
H: So for future improvements, we are basically working on an operator for a storage cluster: we are planning to write a generic StorageCluster CRD that can be used to deploy any storage solution on Kubernetes. It will have the most common options that all storage solutions would be able to use, so you can just plug in and deploy that storage solution.
H
We
are
also
planning
to
add
an
operator
where
you
would
be
operator
for
objects
to
where
you
will
just
be
able
to
specify
how
like
what
kind
of
object
stores
you
want
and
we
are
trying
to
start
off
with
having
a
mini
object
store.
So
you
just
in
the
CID
all
you
would
need
to
specify
the
storage
class.
We
would
spin
up
deployment
services
and
basically
provide
you
with
all
the
credentials
for
the
object
stores
and
the
service
endpoints
and
stuff,
like
that.
H
We
are
also
planning
to
add
an
operator
to
basically
backup
all
these
resources
to
an
object
store.
What
I
showed
you
was
a
live
migration,
so
you
need
another
cluster
to
be
up
and
running
to
migrate,
your
data,
but
we
also
plan
to
have
an
operator
where
you
can
basically
pack
up
everything
to
an
object,
store,
and
so
a
lot
of
the
stuff
is
basically
written
is
abstracted
out.
I: Perfect, sharing my screen. Yeah, so, like I said, first of all, this is Andre. I've got a couple of my colleagues here in the room with me, and, as you said, we're new to all this. We don't really have an operator to show off at this point; maybe we can do that in a later meeting. So I guess I'm just going to spend a couple of minutes explaining who we are, what we do, what our role is and what we're working on, and see where that takes us.
I
So
we're
a
part
of
we're
a
team
with
an
IBM
we're
kind
of
a
central
team
that
is
tasked
to
help
all
of
our
software
products,
be
containerized
and
be
running
and
support
fully
supporting
kubernetes
and
we've
primarily
done
that
so
far
through
the
use
of
hound,
charts
and
so
we're
a
bit
of.
If
you
will
a
helm
center
of
competence
within
IBM,
we
provide
guidance
and
conventions
and
rules
for
all
of
our
development
teams
and
there's
many
of
them.
I
As
in
what
these
charts
look
like
in
the
criteria,
we
even
have
very
similar,
I
guess
to
what
you're
having
with
operators.
We
have
a
certification
program,
that's
internal
at
this
point
where,
before
they
can
ship,
they
need
to
go
through
our
process
to
get
certified
and
essentially
get
our
stamp
of
approval
so
again,
very,
very
helm.
Centric.
Now
it's
kind
of
and
and
I
keep
telling
everyone.
I: That's what we're going to do this year: we're going to do a transition, a shift from being Helm-based to being operator-based, which is why everything that's being discussed, and everything you guys showed, is extremely interesting to us. And this is kind of the only picture I want to show here.
I: But then you're kind of left on your own to maintain those resources and make changes and so forth, as opposed to the operator model, which is more of a desired-target-state management kind of model that's more autonomous and automatic. That will then also allow us to do more of what we call day-two operations, and many of the examples you guys showed earlier on this call were good examples of that: you know, what you want to move...
I: ...around, or change what they look like, or scale, or what have you: first-class kinds of objects that represent that, and then have implementations for those, which is something we don't have today. We're primarily worried about the initial install and configure, and, like I said, Helm has been serving us pretty well. So what we plan to do is, and we've done a lot of prototyping, so we've written a couple of operators and so forth, but we haven't created any yet that we would consider kind of commercial-grade.
I: So to speak, because keep in mind the things that we represent: I didn't mention, today we have a little over a hundred Helm charts that we essentially have as product offerings, fully supported, that sit on top of our commercial software offerings. And so we need to bring all of those hundred over to kind of the operator world, and we want to do that in a way that is what we call enterprise-grade. So what we plan to do is use the Operator SDK.
I: Can we develop kind of a common taxonomy there that would be consistent across all of our products and so forth? And there again, I love the CRD examples that you've shown here earlier, because, at least at a high level, they're all aligned with what we've been thinking in terms of what they need to look like, and the granularity of them. So that's all good news for us, I guess, and I have a feeling that some kind of best practice for CRD modeling...
I: ...if you will, is kind of emerging there. And we have it as a job, like I said, to work with those teams. We're not going to create the operators for them; they have to do that, but we want to provide help and assistance and guidance and skills, if you will. So one of the things we do is we write it all down in cookbooks and other things that they can then use, and we also definitely plan to do all of these operators kind of in the open community.
I: So we want to have them publicly available. Now, I don't know that you could call that a difference, but one thing to keep in mind, though, is that many or most of our stuff is commercial software that is licensed, that is not available for free download. So the images that sit behind these operators, that contain the actual software products, are going to need to live in...
I: ...you know, let's call it a protected registry, so to speak, because we need to have some entitlement checking in there and all that. That doesn't necessarily mean the operators can't be in the open, and we were just chatting earlier; I was saying that I would like for all of the operator code that we're building to be open.
There's
really
no
need
for
us
to
keep
that
internal,
because
we
want
to
share
that
in
and
basically
collaborate
with
the
community
on
how
to
bring
that
forward
on
another
interesting
point
may
be
to
this,
and
that's
going
to
be.
My
last
comment
is
one
thing
that
I
feel
is
a
bit
of
a
gap
that
we
have
in
our
kubernetes
platforms
right
now
and
we
have
products.
I: We have something called IBM Cloud Private, which, think of it as similar to what OpenShift is. We have a managed service in our cloud as well, and obviously the goal is to support all kinds of Kubernetes flavors, including the Amazon and Azure flavors and what have you. One of the problems, or one of the gaps, that I feel we have...
I: ...is that we're fairly good at, on the one hand, saying: here's a piece of software, we can bring this up and manage it through automation, like Helm today, maybe operators tomorrow.
I: We then have the other side of the coin, which is: now someone wants to go and create an application that runs on this platform, a cloud-native application using microservices and so forth. I think where we're having a bit of a gap to address is how you bring those two worlds together.
I: How is it now really easy for my application to plug into an operator-based piece of software, a service I then want to use? What we've explored there, and we've had some good successes...
I: ...is to use the Open Service Broker model, and that's what you see, if you will, in my slide here at the bottom left, so to speak. One of the things we want to have come out of the operator, if you will, is endpoint and credential information in the form of bindings, and OSB standardizes those, so we can then easily plug them into a consuming application so that it can connect. And there again, I think we've had a conversation with Rob about this a few weeks ago, and his comment at the time was, well, there's not a lot of traction for OSB.
I: At this point, I don't know that that's necessarily true. But in principle, conceptually, what that basically means is we need to have specific CRDs that reflect bindings that an application would then use, which we can then automate. Whether we stub those into a service binding or into something else, you know, that's a different question, but I think, to me, that's an important aspect as we're modeling these CRDs, and ultimately that's going to lead to, again...
I: ...maybe a certain taxonomy that we can define: that there's topology-centric CRDs that represent clusters of things that then run a service; then maybe day-two operational types of CRDs that do things like backup or migrate or upgrade; and then also application-facing CRDs, for lack of a better term, which is where I can consume things from this back end and have the bindings that go along with that. That, to me, is an important aspect of what we want to do.
I: We're really at the very starting point now, where we're sitting here every day saying: how are we going to roll this out to the masses within IBM? We're about to get rolling on that and hopefully have the first set of operators coming out real soon here. As far as where we stand within IBM overall, we have bits and pieces where people have created operators.
I: The best one is from the cloud database team; they're already using operators today as a first-class thing, and that's a really, really good reference point for us to build on top of. But they're really the only team that I'm aware of right now within our company that is fully embracing this model.
I: So there again, the task at hand for us goes with a bit of evangelizing and socializing as well, to kind of spread the word. I mean, we just had this exchange: my colleague Joey is sitting right next to me, and he posted a question about Helm hooks, for example, which is something we extensively use. There was a reply right away; I forgot who replied, but they said: try this out, and if it doesn't work, let me know, because we've got to fix it. And that's exactly the stuff that we're looking for.
B: I just got a comment that I think this is the group where it would be great, as you model your CRDs and you're coming up with kind of the rules that you're going to give to all of your teams, we'd love to kind of share those and craft them as a group. Thank you; everybody is going to tackle that same problem, basically, yeah.
J: So this tracking server is just basically going to be tracking your models. I have a little bit more of an interesting demo: the demo that I'm going to show you today is going to be running multiple machine-learning jobs, tracking different parameters, getting back different metrics and comparing them. Argo is another tool that I'm using to do this, but basically MLflow is going to be tracking all the information. So if I run Argo, it'll run; it'll go through some steps.
J: So this is the workflow; it's running some steps, and basically after this is done, we're going to have our model, the parameters that were used to train that model, and the metrics that came back from it.
J: So I had 18 experiments before; now, if I refresh, 527. So if I compare, you know, I can compare one, two and three side by side and say, okay, which machine-learning model do I want to go with in terms of hyperparameters? So why am I tracking these parameters?
J: This is hyperparameter tuning. This is a technique to optimize your machine-learning job, so you can pick the optimal parameters to train your model: you track the metrics, compare which run gives you the best metrics, and then you choose the model that you want to roll out based on getting back the best metrics.
J: Yeah, so the code is here; it's going to be moved to the MLflow community and contributed up there. There are just some slight things that I'm thinking about: I was asked to remove the vendor directory, so instead of contributing this whole big repo, I would have to delete the vendor directory, and I can regenerate that.
J: Okay, so yeah, basically, as long as I can delete some of these things, it's good; if I can delete some of that, and the vendor directory, we don't have to check it in. Other than that: what the MLflow operator currently supports is it's able to store machine-learning models in S3-compatible storage through the S3 protocol, and the way that I do that is I actually use a secret.
J
So,
for
example,
this
is
one
of
my
secrets,
so
this
secret
needs
these
particular
parameters,
the
AWS
secret,
key,
the
secret
access
key
and
ID,
and
and-
and
it
also
needs
the
endpoint
if
you're,
not
using
AWS,
you
would
provide
this
particular
parameter
in
your
secret
file
and
then
once
once
you
create
this.
What
my
operator
does
is
it
will
go
into
let.
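A secret along the lines being described might be sketched as follows. The key names follow the standard AWS and MLflow environment variables; the secret name and endpoint URL are illustrative, not taken from the demo:

```yaml
# Hypothetical Secret feeding S3 credentials to an MLflow tracking server.
apiVersion: v1
kind: Secret
metadata:
  name: mlflow-s3-credentials
type: Opaque
stringData:
  AWS_ACCESS_KEY_ID: "<access-key-id>"
  AWS_SECRET_ACCESS_KEY: "<secret-access-key>"
  MLFLOW_S3_ENDPOINT_URL: "https://minio.example.com"   # only for non-AWS, S3-compatible stores
```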
J: It will go in and mount that secret. So if you specify the secret, it'll go in and mount the secret into your container, into that tracking server, into that environment, and then it'll pick it up and make sure that the tracking server can connect and store the models there, along with all the experiment metadata.
J: It's already happening, and we're engaged in discussing some of those things that I mentioned earlier, like deleting the vendor directory and any extra files that don't need to be there. There are some files that they had questions about: there are these generated files, so if I can delete some of those extra files, then it'll make the process a little bit quicker.
A
Rob,
maybe
we
need
to
take
a
look
at
that
again
and
give
them
some
coaching
on
that.
K
...different types of infrastructure, all the way up to public-safety applications. You know, one of the largest suburban jurisdictions in America uses us to register and authenticate their taxpayers and folks using their court system. So we're both infrastructure and application, and we're also integration, so we need to be very, very flexible in the way we approach our operators. And so we kind of started off with: how do we go ahead and automate our deployment?
K
Our application, when all is said and done, is a Java web app. It does lots of different things, but it's a Java web app. So on its face it would seem pretty easy to containerize, which it was; but building a system that let us make it easy to deploy was astoundingly difficult because of how flexible we had to be in how many situations.
K
So when we started our automation journey about a year ago, a member of our community had put together a quick start for Kubernetes, integrating it with SAML, and the documentation was like 30 pages long. It wasn't incredibly difficult, but it was: run this keytool command, then that keytool command; get this cert signed; at that cert, pull this out of the metadata. So we realized we needed to make that easier. Our first attempt was with Ansible playbooks.
K
You'd give it a set of parameters and we would figure out everything else, and we ran into problems with that. We've got a lot of customers that use Ansible; you know, when we're deploying onto VMs, often our first thing is "here's a playbook that we use," and that works really well. But the biggest issues we really ran into were gaps in the modules: lack of PKCS#12 support outside of OpenSSL was a big one, and OpenSSL only supports one key per bundle, and Java...
K
Anybody who's familiar with SAML metadata knows the pain we go through, and Ansible actually had different implementations for that depending on whether it was on Fedora or CentOS or something else. So it was a good first crack: it worked, but it wasn't really maintainable. So we took the lessons learned from that and built this idea of an installer. We didn't want to rely on Helm charts, because we had feedback from customers.
K
That said, you know: "we usually want to install the identity-management solution for our Kubernetes infrastructure before we do Helm; we want to track who has access to what." So we didn't want to make Helm a dependency. We built a deployer, and we started working with Java. We're a Java shop, and Java's JSON support is terrible; I mean, it's horrid, and there are words for it that aren't appropriate for public use. So we looked at it and said: well, JavaScript has native JSON support, and I can run JavaScript on Java.
K
Why
don't
we
create
this
in
JavaScript
right
on
Java?
We
built
the
installer
work
great
and
then
these
things
called
operators
come
along
and
some
of
the
tools
that
we
need
things
that
we
need
to
be
able
to
do,
or
things
like,
like
we
said,
generate
the
key
store,
get
keys,
sign
or
certificates
signed
by
a
CA,
whether
it's
the
certificate
authority
inside
a
Gerber
nail,
if
it's
available
or
customers
who
are
using
vault
or
easy
RSA,
which
a
lot
of
the
the
distros
will
do,
get
those
search
sign
there.
K
So we provide hooks for that. And then we were also doing things like interacting with the dashboard, because we provide SSO there. What's interesting about it is that in a lot of ways we're much simpler than a lot of the apps you're going to see in OperatorHub, because we are, in essence, a stateless Java application; but in other ways more complex, because I can't say "you have to use MySQL" or "you have to use PostgreSQL": we've got customers on SQL Server, we've got customers on Oracle, stuff that we can't distribute.
K
You
know
we're
not
go
pro
grammars,
nothing
against,
go
it's
just
not
what
we
do.
We've
built
a
large
infrastructure
around
Java,
most
of
our
customers
are
very
comfortable
with
Java
and
so
seeing.
Java
and
JavaScript
is
something
that
that
they're
familiar
with
as
well,
and
we
hooked
in
ansible.
We
thought
about
going
with
the
ansible
one,
because
we
are
very
comfortable
with
danceable,
but
that
would
have
required
pretty
hefty
investment
in
building
out
better
support
for
pkcs12,
and
we
could
have
done
that,
and
we
would
have
been
happy
to
donate
that.
K
But for us to be able to get to deployment and get customers our software, we saw this as the fastest way of doing that. The idea behind the framework is pretty simple: we don't do a lot of the stuff that the Operator SDK does. We don't do code generation, we don't do CSV generation, anything like that. It's really straightforward.
K
You implement JavaScript files, like you'll see here on my screen, and you provide an on-watch event, and we're just going to give you the raw data at that point. You then can do whatever you like with it. So for us, we need to go through and generate a secret, and generate a Java keystore based on that information. Some of the things we ran into were: how do we handle secrets? Do we even store them in the CR?
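The on-watch pattern the speaker describes could be sketched roughly like this; the handler name, the event shape, and the derived object are all assumptions for illustration, not the framework's actual API:

```javascript
// Hypothetical on-watch handler: the framework hands you the raw watch
// event and you decide what Kubernetes objects to derive from it.
function onWatch(event) {
  // React only to our custom resource being added or modified.
  if (event.type !== 'ADDED' && event.type !== 'MODIFIED') {
    return null;
  }
  const obj = event.object;
  // Derive an object to create, e.g. a Secret that will later back
  // a Java keystore (names here are invented).
  return {
    kind: 'Secret',
    metadata: { name: obj.metadata.name + '-keystore' },
    type: 'Opaque',
    data: {},
  };
}

// Example raw event, shaped like a Kubernetes watch notification.
const sample = {
  type: 'ADDED',
  object: { metadata: { name: 'openunison' } },
};

console.log(onWatch(sample).metadata.name); // -> openunison-keystore
```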
K
That would be a terrible idea, to be honest, from a security standpoint. Not that secrets are really handled that much better than any other data, but at least I'm hoping sometime in the future there will be better support for externalizing that out of the box, transparently. So we want to stick with the idea of making sure that you're using secrets.
K
For really sensitive information, you know, we're an identity-management system, so we might have administrative credentials for your Active Directory; that's not something you want to store in a CRD. So we wanted something that made it easy to interact with different types of objects, not just our CRD. Some of the other things we ran into were, like, you know: how do we handle...
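The separation being argued for, sensitive values referenced by secret name rather than written into the custom resource, could look something like this; the API group, kind, and every field name below are invented for illustration only:

```yaml
# Hypothetical custom resource: the Active Directory bind credential
# lives in a Secret, and the CR only references it by name.
apiVersion: example.tremolo.io/v1
kind: OpenUnison
metadata:
  name: orchestra
spec:
  active_directory:
    host: ad.example.com
    # Reference, not the credential itself:
    credential_secret: ad-bind-credentials
```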
K
How do we handle errors from the Kubernetes API at a low level? We ran into issues, like when we built out our custom resource: how do we not make it too crazy to reference different fields, so you don't have to write the same thing more than once? We actually ended up adding utilities for things like being able to embed small JavaScript snippets into the custom resource, and that made it a lot easier. So yeah, the operator itself is pretty straightforward.
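The embedded-snippet utility mentioned here might work something like this minimal sketch; the field names and the `resolve` helper are invented, not the project's actual code:

```javascript
// Hypothetical custom resource where one field holds a small JavaScript
// snippet referencing another field, so values aren't written twice.
const cr = {
  spec: {
    host: 'idp.example.com',
    // Snippet: builds the issuer URL from spec.host instead of repeating it.
    issuer: "'https://' + spec.host + '/auth/idp'",
  },
};

// Evaluate a snippet with the CR's spec in scope.
function resolve(snippet, spec) {
  return new Function('spec', 'return ' + snippet)(spec);
}

console.log(resolve(cr.spec.issuer, cr.spec));
// -> https://idp.example.com/auth/idp
```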
K
If you go to GitHub, here's the site for it. It's still really, really early; I would call this prototype stage. There's not a lot of error handling in there, but it gives you a basic idea, if you want to get started, of what to write. We have written a prototype operator for our software using it, so we're using that as kind of the example.
A
K
It's an internal tool; we built it mainly for us, for our operator, but we're an open-source company and we do just about everything open-source first. So really what we wanted to do was just put it out there: PRs are welcome, suggestions are welcome. This has been a bit of a journey for us, kind of learning the ins and outs of how these things are supposed to work, and so, you know, if other folks want to use it, or other folks want to make contributions, that's awesome. This is not a product.
K
We're not going to build a commercial version and sell this. This is primarily so that we can get a faster, more manageable path to market for our own particular situation, for our operator. So what will end up in OperatorHub is the OpenUnison operator that we're building on top of this.
A
Right, we'll look forward to that. We've run over time a little bit today, and I just want to respect everybody's time; a lot of people are still hanging out and hearing this, and I know Rob has to pop onto another call, so I'll let you go. If other people have operators that they want to get some airtime for, the next meeting is the third Friday in April. If you want to go longer and talk about, you know, maybe Portworx...
A
If you wanted to do something longer and talk about your actual product offering and projects, please let me know and I'll schedule a briefing. And again, there's a healthy conversation on the Google group, so I would strongly suggest that you definitely join the operator-framework Google group, or pop over into the Kubernetes Slack: the kubernetes-operators channel is pretty active too, if you have questions. I'm trying to think if there's anything else I should be adding.
B
Yeah, I think that about covers it. As I mentioned, the app-binding proposal that I had on the agenda we can punt until next time. I think that's, you know, the topic that the IBM folks are bringing up: how do all these operators work together? We want to have this great, rich binding experience. So maybe some homework for everybody is to go read that proposal and come back with your thoughts for next time.