Description
Red Hat/CoreOS Update: Operator Framework Explained with Brandon Philips, recorded at OpenShift Commons in Copenhagen at KubeCon
I've got pretty much the easiest job in the world, which is to entertain a bunch of people who have fresh beers and food in front of them, so this should go very, very smoothly. CoreOS was acquired by Red Hat about three and a half months ago, and what I wanted to do was just walk through some of the things that we're working on.
It's pretty easy for me to talk through some of this and give you a couple of live demos of it, because a lot of it is things that were inside of the Tectonic product that we are going to be bringing to OpenShift over time. So really, this is not a lot of brand new announcements, but really just familiarizing folks inside of the OpenShift community with some of the things that we had been doing inside of CoreOS and inside of Tectonic.
So the first thing: if you're not familiar with this, this is the Tectonic console, which is an administrative console on top of Kubernetes. One of the things that we spent a lot of time doing at CoreOS was rethinking the way that enterprise software was delivered, and ensuring that when people get enterprise software, it has a lot of the capabilities of a cloud service. Now, when we think about a cloud service, there are essentially two pieces. There's the hosting.
That is a very traditional business where you stick a server in a rack, give it an IP, and sell it to somebody. And then there's what we eventually termed automated operations, which is this idea that it's not just the server and the IP, but also services on top: databases, load balancers, etc. Those services are unique because the operations are automated. The upgrades are automated. Monitoring is automated. And so there's a lot that you get out of that by default.
And so we wanted to make sure that when we delivered software to people, and that started with the operating system and eventually with Kubernetes, you could also automate those operations, because as a software company we're not also going to sell you a server. So where automated operations ended up, and where it will begin again inside of OpenShift: we have this one-click update inside of Tectonic, where the software gets a little recursive; we're actually hosting all the components of Kubernetes on top of Kubernetes, and don't worry, we do it in a way that's safe.
This cluster is my personal cluster, and it's been up for probably eight or nine months. It always surprises me, because every time I log in I see all the tweaks and stuff that I saw mock-ups of months before, live on my cluster, and I never did anything, because the software is just constantly updating. And so you'll notice inside the system that all the components of Kubernetes, like the scheduler, are in here, and they're running as pods, which has a bunch of cloud-like properties.
And so these are the sorts of things that will start to pour into OpenShift; this automated operations stuff was part of the announcement during the acquisition. So that's some color around what we mean by automated operations. The other thing is the namesake of the company, CoreOS, an operating system which we eventually renamed to Container Linux, with some success of that rename; it's always challenging to rename a product. But the automation and the automated operations don't just go down to the Kubernetes layer.
They go all the way down to the foundation of the actual operating system. So this is a brief demo; if you keep looking down here, it's looping. What we had done inside of the operating system is that Kubernetes is actually in control of the exact version of software that's running on each node, and that status and that information get pushed back up to the Kubernetes control plane.
Reboots are controlled across the cluster in case of security updates, and you end up with a system where, when we release a version of Tectonic, you get not just Kubernetes at a set version; you get the operating system at a set version, you get the Docker version at a set version, and this entire stack of software is controlled together, all through the Kubernetes API. So you can control, monitor, and view what's actually happening in real time using kubectl.
So those are two big things that we plan to bring to OpenShift. The other thing is that we've open-sourced a few of the kind of secret-sauce pieces of Tectonic around monitoring, and they're now available on the OpenShift GitHub. We ended up building what we call the Prometheus Operator, and then a bunch of technology around monitoring inside of Tectonic, so that you get immediate insight not just across the application but, as you saw in the previous demo, into how the Kubernetes control plane is actually running.
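The Prometheus Operator drives that monitoring through custom resources rather than hand-edited scrape configs. As a rough sketch (the `ServiceMonitor` type is from the Prometheus Operator's API; the app name, labels, and port below are hypothetical), you declare which services to scrape and the operator generates the matching Prometheus configuration:

```yaml
# Illustrative ServiceMonitor: the Prometheus Operator watches these
# objects and produces the corresponding Prometheus scrape config.
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: example-app          # hypothetical name
  labels:
    team: frontend           # hypothetical label
spec:
  selector:
    matchLabels:
      app: example-app       # scrape Services carrying this label
  endpoints:
    - port: web              # named port on the Service
      interval: 30s          # scrape every 30 seconds
```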
So you can dig in, debug issues, and that sort of thing over time, whether they're host-level issues, pod-level issues, or individual components like services of the Kubernetes control plane. All right, so that's kind of a preview of a few things that we've started to do that are OpenShift-specific. And then the other thing is: we announced today a thing that we call the Operator Framework, and I'm gonna run through and give a quick overview of what that looks like and what we're trying to do here.
There's some joke in here about being acquired; I blew that one. So, operators: we introduced these two years ago. We introduced an operator for a database, etcd, and an operator for a monitoring system, Prometheus. The idea with operators is that they are these Kube-native applications that run in pods and are managed via Kube APIs. By running in pods, I mean you deploy the operator on your cluster and it's just a normal Kubernetes Deployment. And managed with Kubernetes APIs means that you deploy a resource.
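Concretely, an operator registers a CustomResourceDefinition and then reconciles instances of it. A rough sketch of that pair of manifests, modeled on the etcd operator's v1beta2 API (the instance name and field values here are illustrative):

```yaml
# The operator registers a CustomResourceDefinition at install time...
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: etcdclusters.etcd.database.coreos.com
spec:
  group: etcd.database.coreos.com
  version: v1beta2
  scope: Namespaced
  names:
    kind: EtcdCluster
    plural: etcdclusters
---
# ...and a user declares desired state with a custom resource.
# The operator's control loop creates and manages the member pods.
apiVersion: etcd.database.coreos.com/v1beta2
kind: EtcdCluster
metadata:
  name: example-etcd-cluster   # hypothetical name
spec:
  size: 3                      # desired number of etcd members
  version: "3.2.13"            # etcd version the operator should run
```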
By analogy, what we're trying to do with operators is something that's impossible to do on the public cloud, which is: I have my application, whatever it is; it might be some cool open-source project like Cassandra, or it might be something like an SAP integration that's specific to my organization, and I want to make that available on the public cloud so people can deploy copies of that application. You can't do that on the public cloud. Amazon or Azure or whoever, they're not gonna let you just introduce a new service.
You get more consumption of it, which is exactly how the clouds grow so quickly, and we're hoping that by taking advantage of that success of the cloud, but bringing it to Kubernetes, we can grow the overall base of Kubernetes software. So our goals here are to bring more operators into the ecosystem and get them in use by more people.
So the Operator Framework is this toolkit where we're making it easier for people to build these Kube-native apps, like we've done with etcd and like we've done with Prometheus, and to make them manageable across lots of different Kubernetes clusters, of course including OpenShift. You can check it out at github.com/operator-framework, and it has two components. There's an SDK, which is a bunch of tools for doing the hard parts of building one of these operators: tracking related Kube resources, test scaffolding, vendoring the correct libraries. And it looks like this.
Jokingly, one of the Google engineers called a similar project that he was working on "Kube on Rails". You create a new operator using the operator-sdk command line tool and describe it, and then scaffolding gets created for you. Phillip Wittrock has been working on a similar project, and we're looking to bring them together in a SIG inside of Kubernetes, which is up for proposal. The other piece is Operator Lifecycle Management. So you have these operators, but it's a little cumbersome.
So you can go in and say: these are the versions that are available to me; make it available to specific namespaces, so that the cluster admin has control over what people are deploying as their monitoring tool or their database; track those instances across namespaces, so that people like the folks at Ticketmaster are able to figure out how many instances exist; and then, of course, apply updates in case there's some problem in the piece of software, like the monitoring stack has a security issue. So it looks like this: we have these manifests.
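In Operator Lifecycle Manager terms, those manifests live in a catalog, and a namespace subscribes to an update channel so new versions can roll out automatically. A rough sketch of an OLM `Subscription` (the package is real, but the namespace and catalog names here are hypothetical):

```yaml
# Illustrative OLM Subscription: asks the Operator Lifecycle Manager
# to install the etcd operator from a catalog and keep it updated.
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: etcd
  namespace: my-apps              # hypothetical target namespace
spec:
  name: etcd                      # package name in the catalog
  channel: alpha                  # update channel to follow
  source: my-catalog              # hypothetical CatalogSource name
  sourceNamespace: olm
  installPlanApproval: Automatic  # apply updates without manual approval
```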
We put them in a catalog, and then you're able to deploy them across namespaces. And the OLM, the Operator Lifecycle Management, is really solving this: well, how do I deliver my app onto Kubernetes across these hybrid clouds? You can do this with things built with the Operator SDK, but you can also do this with Helm charts or the Kubernetes built-in types; there are docs on the repo if you're interested.
So, a quick recap: it's open source, it's up there; star the stuff, because that's how open-source software wins, lots of GitHub stars. And the next steps here: we want to make more operators, more easily, and bring more users to those. And the why is: we want to make Kubernetes the dominant API for cloud-native applications moving forward. We believe that at Red Hat, and I believe it as somebody who's been in this ecosystem for the last five years.
So this is our opportunity to make an actual compute, network, and storage infrastructure that can run anywhere, from somebody's laptop to somebody's data center to somebody's public cloud. If you want to find any of us, we've been working on this; these are the faces. Kelly's right there in particular; I don't know where Rob and Jimmy are, I think they're around somewhere. And that's all I got. Thank you very much for your attention.