Description
OpenShift Commons Operator Framework SIG Mtg Full Recording 2019 Feb 15
Co-chairs: Diane Mueller @openshiftcommon and Rob Szumski @openshift @redhatcloud
Agenda:
How to Add Your Operator to the Operator Framework Repo – Rob Szumski
Ansible, Kubernetes and Operators – James Cammarata
https://github.com/water-hole/galera-ansible-operator
integr8ly: Lessons Learned Building Operator SDK OpenShift Utilities – Leonardo Rossetti
https://github.com/integr8ly/operator-sdk-openshift-utils
Red Hat CoP: Lessons Learned Building an Image Scanning/Signing Service Operator – Andrew Block, Matt Bagnara
https://github.com/redhat-cop/image-scanning-signing-service
A
Hello, everybody. This is Diane Mueller; I'm the co-chair of the Operator Framework SIG, and we are going to give everybody a few more minutes to join us. The link to the HackMD and the notes that I'm taking: you do have to sign in to add to it, but if you can add your name and your affiliation to the attendees, that would be great. There could be a lot of Red Hatters on here, because it's a hot topic internally within Red Hat and with our customers.
A
So that's the nature of the beast right now; we'll give everybody a few more minutes and we'll start a little bit after the hour.
I'm still a bit off, since I'm on Pacific time, and I'm waiting for my other co-chair, Rob Szumski. I think that's how you say it; I'm always mangling it.
A
You say it much better than I do right now. Yes, well, you're not gonna get to see my shining face today, folks, suffice to say; I know, it's so disappointing. It's snowing in the background here. So, and Zack's joined us, good. I'm just waiting for another minute or two; otherwise I'm just going to pop on and get started here.
A
Today we have a couple of people that are new to the community, who have just been popping onto my radar, that I've asked to come on and sort of introduce themselves. The order on the agenda is really flexible, and we'll wait and see for a minute if Rob manages to join us; I know he's been crazy busy. You're here? Okay, great. So let me just share my screen for a minute and we'll get ourselves started here.
A
So Rob and I get the wonderful chore, and it's not really a chore, of chairing this SIG, and it's really basically a place for us to share the stories, best practices, and lessons learned that everybody has created or picked up during their endeavors to build operators, or to work on the Operator Framework itself: OLM, metering, and the SDK. Rob is here, and I'm hoping that, to kick this off, since Rob has been diligently working on some documentation, I was hoping
A
I could get him to talk a little bit this morning, first, on this topic of how to add your operator to the Operator Framework repo, and maybe give us an update on that. So, Rob, if you'd like to take the screen and share, that would be wonderful, and I'll stop sharing and take notes.
C
Alright, so I wanted to talk through how to add operators to some of our community repos here, which I'm looking at, but I wanted to first kind of set the context. So we have this awesome-operators repo, and some of you are, I think, in this list. What this represents is just kind of a collection of operators that exist out there, with no information about their quality or which versions of Kubernetes they work with.
C
You know, or how to get them installed, other than, hopefully, there's a readme. If you think about this and contrast it with what it would look like to install a binary from somebody's Git project, it's kind of like you're running make to get a binary, and you have it locally on your machine: really not great for actually sharing these out. So what we want to do, to further
C
that analogy, is have the operator version of RPMs: actually packaging this stuff up and versioning it strongly. And that is what we have as a new effort under this community-operators repo. This is a set of manifests of operators that have been bundled up to work with our Operator Lifecycle Manager, and this really means that you have a better experience for shipping these operators.
C
If you do get a new version of an operator, the cluster knows how to upgrade to it with a rolling deployment to the new version, so that, as folks are putting out bug fixes and new releases, you have a stable, sane way of getting them, just like you would with an RPM today on one of your servers. Documented in this repo is a set of early steps to accomplish this, and what it really comes down to is following this guide right here to make a ClusterServiceVersion file for your operator.
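As a rough illustration of the file Rob is walking through (the names and versions below are invented for the example, not taken from the recording), a minimal ClusterServiceVersion manifest looks something like this:

```yaml
apiVersion: operators.coreos.com/v1alpha1
kind: ClusterServiceVersion
metadata:
  name: example-operator.v0.1.0      # hypothetical operator name and version
  namespace: placeholder
spec:
  displayName: Example Operator
  version: 0.1.0
  replaces: example-operator.v0.0.9  # lets OLM chain releases for rolling upgrades
  install:
    strategy: deployment
    spec:
      deployments:
        - name: example-operator
          spec:
            replicas: 1
            # ...an ordinary Deployment spec for the operator pod...
  customresourcedefinitions:
    owned:
      - name: examples.app.example.com
        kind: Example
        version: v1alpha1
        displayName: Example App
```

The `replaces` field is what allows the catalog to chain versions together, so the cluster can roll from one release to the next the way Rob describes.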
C
Another section of this guide is about not the CRDs that you own, but the CRDs that you depend on; these are the required CRDs. So if you have a database operator that wants to cooperate with the Prometheus operator, for example, to set up ServiceMonitors to monitor the database that you're running, you can start expressing all of that inside of this file.
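That dependency lives in the same `customresourcedefinitions` section of the CSV, under `required`; for the Prometheus example it might look roughly like this (the description text is invented):

```yaml
customresourcedefinitions:
  required:
    - name: servicemonitors.monitoring.coreos.com
      kind: ServiceMonitor
      version: v1
      displayName: Service Monitor
      description: Needed so the database operator can register its pods for scraping
```

OLM will then only install the operator once something on the cluster provides the ServiceMonitor kind.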
C
Other things that are in here are a bunch of details about the APIs that you require, and then some metadata: the minimum version of Kubernetes that you work on, for example this minKubeVersion here; the maturity level of your operator, so you can kind of signal that out to folks; and then, you know, icons and that kind of stuff. What we want to do as a community is then have a list of all of these that are tested and known to work.
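Those metadata fields sit at the top level of the CSV's `spec`; a sketch, with placeholder values:

```yaml
spec:
  minKubeVersion: 1.11.0   # oldest Kubernetes release the operator is tested against
  maturity: alpha          # signals stability to users: e.g. alpha, beta, stable
  icon:
    - base64data: <base64-encoded PNG data>
      mediatype: image/png
```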
C
We want to make sure that everybody has a really great experience there. So I'll drop some of these links into the doc, but this is available on the Operator Framework GitHub page, under the community-operators section. We'd love to take any questions, if anybody has them now, and also to help folks with this process; we've got a number of folks that are experts in how to do this, and one of the benefits is that we can do that over Slack or whatever mechanism folks are using.
A
C
There's one more thing I wanted to cover, which is just kind of interesting: doing this for your operator kind of forces you to think through some architecture, and one of those things is the install modes. So if you've built an operator, on this call, you know that you have some different mechanisms for what events you're listening to: you're either
C
looking at cluster-wide events, or you might be looking at just a very specific namespace. And so what you can do is actually tell other users of your operator how it understands to listen for those events. So: it can run in its own namespace, basically saying this operator needs to run in and watch the same namespace; or this operator only knows how to listen to a single namespace, but it can run in a different namespace from the one it watches; or it understands how to watch multiple namespaces.
C
This last one is for if you had an operator that you want to watch, you know, a number of production namespaces, and you list those out as comma-separated values; and then there's watching all namespaces. So it's a bunch of interesting architecture decisions that this is making you think through, and I think that's just kind of an interesting bonus of doing this.
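In the CSV these choices show up as the four install modes; roughly like this (which modes are `supported` depends on the particular operator):

```yaml
spec:
  installModes:
    - type: OwnNamespace      # runs in and watches its own namespace
      supported: true
    - type: SingleNamespace   # watches one namespace, possibly not its own
      supported: true
    - type: MultiNamespace    # watches a comma-separated list of namespaces
      supported: false
    - type: AllNamespaces     # watches the entire cluster
      supported: true
```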
A
Any questions? Right, thanks Rob. Next, I wanted to introduce someone: we have a new community member who's not really a new person at all, James Cammarata, who's one of the leads around Ansible and an Ansible architect, and he is busy moving into his new apartment, or house, or something like that.
A
D
I've been the Ansible BDFL; for people who don't know what BDFL is, I'm kind of like the Linus of the Ansible project, since 2015, after Michael, the original creator, left the company. So last year I started getting asked to kind of help out with some OpenShift and Kubernetes things, and specifically the ansible-operator project. I don't know how many people are familiar with it, or have played with it or not, but basically it lets
D
you bundle up an operator that uses Ansible playbooks instead of having to write Go code or some other kind of language-specific code. So it really lets you, you know, create operators much more easily. To highlight this: last August we got together and met up and started kind of working on some things.
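For anyone who hasn't seen the ansible-operator, the wiring is a small watches file that maps a custom resource kind to a playbook or role; a sketch, with an invented group and kind:

```yaml
# watches.yaml: the ansible-operator runs the listed playbook whenever
# a matching custom resource is created, changed, or deleted.
- group: app.example.com        # hypothetical API group
  version: v1alpha1
  kind: Galera
  playbook: /opt/ansible/playbook.yml
```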
D
Writing operators with playbooks instead ended up being pretty easy, so we started kicking around ideas to tackle. And me, I love OpenShift, not just as a Red Hatter: I used OpenShift very early on, in the 2.0 series, before it was completely rewritten to be Kubernetes, so it's always kind of been my side project. It annoyed me that I couldn't scale up MySQL databases that easily inside there; if I clicked the up arrow on the pods
D
for, you know, MySQL, what I ended up getting was two separate MySQL instances running that didn't talk to each other. It was, you know, not very useful. So I thought doing some of the more complex database stuff in operators with Ansible would be a really cool way of showing how easy it would be to write some of these operators in Ansible. Galera was the first one I tackled, because, you know, MySQL, a lot of places use it, and with Galera it's really easy to scale up.
D
If you haven't seen it, basically you just spin up another node and tell it where the cluster is, and it handles it all, and it's all multi-master. There's no, you know, master tier versus read-only slave tiers, things like that. You can still do that with MySQL, but with Galera it's much easier. So I started working on a blog post showing how easy it is to, you know, create this Ansible operator to manage your Galera cluster. A little bit of what the blog post covers:
D
you know, creating the cluster, then deploying MySQL, the Galera flavor of it, onto it, and then starting to do some load tests against it using sysbench. And it just shows that when you scale up the number of pods your CRD defines, you start getting this really nice, linear scaling of, you know, query throughput under sysbench. So that should be going live probably in the next month; again, as Diane said, we've just moved, and with holidays and everything else.
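The scale-up he describes is just an edit to the custom resource; with the water-hole galera-ansible-operator the CR would look roughly like this (the group and field names are a best guess from the talk, not verified against the repo):

```yaml
apiVersion: galera.example.com/v1alpha1   # hypothetical group/version
kind: Galera
metadata:
  name: example-galera
spec:
  size: 3   # bumping this re-runs the playbook, which joins new nodes to the cluster
```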
D
A
I have a quick one for you: the galera-ansible-operator that you're creating, is it just an experimental thing, or is it something you're gonna maintain over time? I don't know, sorry.
D
The ultimate goal would be... this was kind of a proof of concept, so I was just writing and maintaining it, so it definitely doesn't handle all failure scenarios properly right now. You know, it starts up, scales down, does persistent storage, things like that, but compared with some of the more mature ones that are written in Go, it doesn't do things like backups, and if you totally kill the cluster, it may not start up right, depending on how you kill it.
D
If you go in there and kill pods directly, or, like, your entire k8s cluster or OpenShift cluster gets wiped out, it may not restart exactly the way you think it would. So yeah, ultimately I would love it if somebody took it over; we want to move it into the operator-sdk examples repository. Right now it's still in, well, that link that is on the HackMD is what we called our water-hole, which was kind of our work-in-progress repo for stuff we're doing with the ansible-operator. So yeah, let's see.
D
So, did that answer your question, Leonardo? Yes: you basically write the playbooks, and then, using the operator-sdk command line, you bundle that up into an operator that you then launch on your k8s cluster or OpenShift cluster. And some of the roles will be reusable; typically with the ansible-operator you're doing everything with the k8s module, whether it's k8s or OpenShift, that was introduced.
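The k8s module he mentions is the standard way an ansible-operator playbook manipulates cluster objects; a small illustrative task (the app name and image are invented):

```yaml
# Creates or updates a Deployment from inside the operator's playbook.
- name: Ensure the application deployment exists
  k8s:
    state: present
    definition:
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: example-app
        namespace: "{{ meta.namespace }}"   # namespace of the triggering CR
      spec:
        replicas: "{{ size }}"              # value surfaced from the CR's spec
        selector:
          matchLabels: {app: example-app}
        template:
          metadata:
            labels: {app: example-app}
          spec:
            containers:
              - name: app
                image: example/image:latest
```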
D
Yeah, and the one thing I forgot to mention, for those who may not be familiar with the ansible-operator: one of the things it makes super easy is dealing with external resources. In our case, you always have external resources to deal with: you deploy your application on your cluster, but then you have to go out and update your DNS entries. Well, with Ansible,
D
it's really kind of trivial to have the operator go out and hit, you know, Route 53 or whatever cloud DNS service you might be using, as part of the operator managing the lifecycle of your app. You don't have to write all of that code; you just use the modules that Ansible already comes with to handle those external resources. My other standard example for something like this is, say you want to do a database backup to S3, you know, while you're running your own cluster.
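That DNS step might be a single task in the same playbook, reusing Ansible's stock cloud modules (the zone and record values here are made up):

```yaml
# After the app's Route/Service is up, publish a DNS record pointing at it.
- name: Point DNS at the new application endpoint
  route53:
    state: present
    zone: example.com
    record: app.example.com
    type: CNAME
    ttl: 300
    value: "{{ app_route_hostname }}"   # hypothetical var looked up from the created Route
```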
D
A
Any other thoughts or questions people want to add here around the Ansible topic? I will definitely send out the note on the Operators for Ansible People talk, and next week there will be a few others; some more are coming in the chat right now. And... am I still okay here, Carol? I still have audio.
A
Perhaps you're trying to speak; maybe, Carol, you are muted. Carol, try saying something... you're still muted. Alright, anyways, so I have a couple of other folks that are willing to share their lessons learned, and the first one is Leonardo Rossetti, who's been working on building operator-sdk OpenShift utilities and a few other things. So if you'd like to share your screen, I'll stop sharing, and, Leonardo, I think I have you off mute now. Are you there? Okay, I can hear you, but I can't see your screen yet.
B
So my name is Leonardo. I work for Red Hat as well, on integr8ly, which basically tries to bundle things together, you know, on OpenShift. Our line of work is around integration, and we do use operators, but we did have a problem where we needed to reuse existing OpenShift templates in an operator to deploy things.
B
So, as I was saying, me and my team, we do have to work with operators a lot, but at the same time we heavily reuse a big set of templates from other projects. Because of that, we ended up creating this tiny operator; as you can see in the picture, it basically is a bunch of libraries.
B
It's just your usual OpenShift template, and you have a bunch of parameters; then there's a custom resource, basically a map, that contains your template and the parameters to be processed by the operator. So when I create this custom resource, the operator is going to deploy this particular thing: as soon as the resource is created, it processes the template, which basically contains:
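Reconstructing from the description (the actual CRD in the integr8ly repo may differ; every name below is hypothetical), the custom resource pairs a template with its parameter values:

```yaml
apiVersion: integreatly.org/v1alpha1   # assumed group/version
kind: TemplateDeployment               # assumed kind
metadata:
  name: example-app
spec:
  templateRef: example-openshift-template   # which OpenShift template to process
  parameters:                               # values fed into template processing
    APP_NAME: example
    REPLICAS: "2"
```

The operator then processes the referenced template with these parameters and creates the resulting objects, as he walks through next.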
B
It contains the fields I need: that's where I get the parameters from the custom resource and add them, and then I invoke this process function. Here we do it based on those local parameters; I get an instance of an object, and then I send this object to the template-processing endpoint, and it returns me the processed template, which I then basically load as a Kubernetes runtime object.
B
So I can read the runtime objects that result from processing the template: I read those objects, and they get stored in the structure itself to be used later. In the operator we basically have this GetObjects method, which returns an array of runtime objects. You can apply a filter here, and the filter is basically a function; in our case it doesn't do anything, we have
B
this default no-op filter function, but if you do return an error from the filter function for a particular object, that object won't be included in the list. And then here I just set the correct namespace for each runtime object, I set their owner reference, so the custom resource can own the whole lot, and then I just use the operator's client to create those objects. Nothing special here; I just check if the DeploymentConfig, or whatever it is, already exists.
B
But we also just expose these utilities directly, so you can easily add support for whatever you need. It's just an array that contains all the kinds and methods from the official OpenShift types; we don't need to do much, the library handles most of it. So the developer doesn't have to worry that much about OpenShift internals; we just have those basic steps, which are part of processing a template with the official OpenShift types, and that's the last consideration when using the OpenShift types.
B
A
B
E
A
If anyone has any questions or feedback for Leo, you can reach out: maybe open an issue on his repo, or reach him on the mailing list. And next we have... I'm going to stop sharing; you can take over the screen here for a minute.
A
We had yet another operator that I had been interested in for a while, out of a community of practice inside of Red Hat, though I'm gonna let Andrew explain what that is, and who he and Matt are, and what they've been doing. Share your screen and take it away, Andrew; that would be great. Absolutely.
G
F
All right, so we're going to spend a few moments talking about the image signing and image scanning service, which is itself an operator, and we can talk about how this project unfolded over the course of the last year or so. First of all, my name is Andrew Block, senior principal consultant at Red Hat; I'm also the co-manager of the container and PaaS community of practice within Red Hat.
F
The community of practice program within Red Hat is a way for those who are interested in a particular area or field of interest to get together, talk about it, develop new solutions, but then, most importantly, go out and share their learnings with the community, not only within Red Hat but also in the open-source community, very much like we're doing today.
E
F
Our first slide talks about some of the background of the image signing and image scanning problem space that we are trying to solve here. First of all, we wanted to leverage a lot of the existing OpenShift ecosystem. Obviously there are a number of third-party vendors that have image signing and image scanning solutions, but we wanted first to leverage the native tooling within OpenShift. And the most important thing we need to think about, especially in terms of security, is:
F
how do we guarantee that our images are safe, especially the ones that our developers who are developing on OpenShift are building? How do we guarantee that they're not building them from invalid or notorious upstream images, or putting dangerous libraries into their applications?
F
The goal here is obviously to increase security within the platform and the application, and also to mitigate vulnerabilities, as I mentioned previously. We wanted to utilize OpenShift's native ecosystem of tools, but this solution also needs to be highly integrated into the continuous integration and continuous delivery pipelines that a lot of our customers work with. Very much like Matt mentioned, I work with different customers of all shapes and sizes, going from small
F
you know, startups, up to top Fortune 100 companies, so we need to think about a wide range of customer use cases. This project has evolved quite a bit over time. It first started out as what we called the image signing and image scanning project, which was very much Python-based. We were leveraging a tool that was developed by the Red Hat community of practice that basically watched for events from the OpenShift event API.
F
This was way before the days of, you know, the Operator Lifecycle Manager, or operators in general, even before custom resource definitions. It reacted on changes from image and ImageStream events in the OpenShift API, and it did leverage a number of OpenShift-native tools that were available on the platform, everything from OpenShift's OpenSCAP scanning to the atomic scanning tools as well. Unfortunately, it did have a bit of a frail architecture, as certain events could be missed or duplicated, so we had to put some additional logic into, you know, our application, and also live
F
with the assumption that, yeah, we could end up with duplicate processing. Honestly, it really wasn't that great. So then custom resource definitions helped us get a little bit further down the road: we were able to get away from that Python-based library and eventing model, and leverage custom resource definitions and more of a native feel for how we were interacting with OpenShift. We were no longer chasing the events API; we were utilizing, you know, proper event triggers from that.
F
For interacting with the OpenShift API, however, there was a lot of time spent not only learning how shared informers and the Golang libraries work, but also on how we generate the client libraries for modeling our custom resource definition. I personally probably spent two or three weeks just learning that ecosystem.
E
F
Honestly, it was two things: it was a challenge, a pain, but also a learning experience. So we were able to create a brand-new repository built upon that solution, and it worked; it was great. It still required a number of pieces of custom logic that I, or rather myself and the team, built to talk to the OpenShift API and to monitor the different resources. And then what really changed the game for us was, you know, the unveiling of the Operator Framework.
F
The framework logic basically gave us the boilerplate that we needed, so I went ahead and eliminated about 90% of our code base, remodeled it around the Operator SDK and the Operator Framework, and, instead of worrying about whether I put the right logic in for interacting with the OpenShift API, I could focus on the business problem: focus on how I'm going to implement image signing and image scanning, instead of how am I going to talk to the OpenShift API, or how am I going to know when changes occur.
F
The framework provided that for me; all I needed to do was change my logic slightly to fit what the framework needed. It also gave our team the ability to interact with the burgeoning community that the Operator Framework had. We were able to talk with others that were having some of the same challenges interacting with the OpenShift API, share learnings, best practices, and lessons learned, and that's really what we started to do: interacting with the community let us harden the support and stability of our project.
F
So what exactly does the architecture of the image signing and image scanning service look like? For those of you familiar with OpenShift, you know that you can perform various build types on the platform: you can create images, push those to OpenShift's registry, and then make use of ImageStreams. So the first step is for a user to start a build and have the platform build that image and push it to OpenShift's integrated registry. Once
F
that happens, it goes ahead and creates a signing pod, and then that signing pod will go in and sign the image. It has a GPG key; that's what the atomic tooling leverages by default. A default key is automatically configured inside the cluster that can be leveraged; however, in addition, let's say a development team has their own key that they want to use for their image.
F
They can provide that as a secret within their namespace, and the signing pod and controller will configure that to be used by the signing process. And then, finally, once the signing process completes, it can push that updated image to the OpenShift registry for deployment time. In addition to signing, we also do OpenSCAP scanning as well.
F
E
E
G
E
We want to give the security folks in an organization a warm feeling about what they're using. So what we're able to utilize in this case is security scanning for runtime images. Like I said, in a production environment you have all these containers running; we want the developers to be able to self-provision their own scanning requests for these images that they're building. It utilizes atomic scan behind the scenes.
E
However, that's also pluggable: you can substitute different scanning mechanisms, and we know there are a number of tools out there, but this is primarily focused on the OpenShift suite of tools. Security folks also want to ensure container image provenance, so having this chain of custody in the form of digital signatures matters: for something that gets deployed into production, we want to be able to trace that signature back to when it was built, match those up, and say:
E
yes, this is the same artifact that was built in the dev environment that is now being deployed in production. And then, finally, for production deployment, security organizations can enforce these image scans: they can initiate an image scan, and upon successful completion they can sign that image. So, back to the previous example: a developer creates an image, it gets scanned in the development pipeline, and that successful scan triggers a signing mechanism, a sign object created using the operator's CRD.
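As a sketch of what that pipeline step creates (the group, version, and field names are reconstructed from memory of the redhat-cop project and may not match the current CRD exactly):

```yaml
apiVersion: cop.redhat.com/v1alpha2   # assumption; check the project's CRD definitions
kind: ImageSigningRequest
metadata:
  name: example-app-signing
spec:
  imageStreamTag: example-app:latest  # the freshly built image to be signed
```

The controller watches for these objects, launches the signing pod described above, and records the outcome in the request's status.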
F
I'm sure Diane's going to share this deck out afterwards, so we'll be able to share all of this with the entire community. We have our repositories and resources; we also have a demo that demonstrates this entire process for image signing and image scanning, plus some links to other tools that are being leveraged by our actual operator logic.
F
A
I have a quick question for you about ongoing support for this. I know this is in your repo right now, but is there any thought about maybe moving this over into OpenShift and asking the community there to work with it and maintain it? And what are your plans for, like, maybe moving this into the community operators?
E
F
With the move to a new version in the next few months, this is certainly an area where we want to make sure that we are as compatible as possible with the new ecosystem. If there are any changes, well, hopefully there shouldn't be, but that is a good opportunity to re-engage and look at the architecture moving forward.
E
Also, for customers who are looking to utilize this, it would be a good opportunity, because I know in my environment you have a lot of issues with indemnity, and whether or not, for these community products or projects, Red Hat or other companies are willing to absorb any legal risk associated with using them in a production environment.
A
F
Yeah, we can definitely go ahead and have some conversations. I know one area that we're interested in, that we investigated, is around the utilization of Docker: there's a heavy dependence on Docker right now, so that's one area of improvement and evolution that we're going to have in the 4.0 phase, yeah.
F
C
It's really awesome seeing what people are building; thank you for sharing some of your operators today. In the chat we were talking about the StorageOS cluster operator, and I'm so excited to see things like that coming to help folks better take advantage of some of the Kubernetes features around storage: persistent volumes, logging, all that good stuff. There's a really great set of operators underway to help with all that, so we'd love to get those added to our community repo so that everybody can benefit from them and try them out on their cluster.
A
Cool. And I don't know if Simon is still on, but our next meeting is going to be, again, on the third Friday of the month, which happens to be [inaudible]. Awesome.
A
Let us know, and I will try and coerce them into sharing their stories and telling you more about it. And again, it's in the calendar, but I'll send an announcement to the Google Group and post it on the Kubernetes Operators channel about Operators for Ansible People, which will be March 6th. So that's the other thing.
G
A
What we're really trying to see is what we can learn from each other here. So if you need advice or help, just jump on the Google Group; I think the link is down below on the page here. So do all that now, if you haven't joined the Google Group.
E
A
The Commons calendar, which is the OpenShift Commons community calendar (you should be seeing my screen), has all of the events for all of the other SIGs as well, whether it's machine learning, Telco, and others, all here, and there's an RSS feed that you can subscribe to right there at the bottom. If you're looking to keep track of everything that Diane is trying to keep track of, it's here.
A
E
A
There's also going to be some really interesting stuff, and I don't know where you're based, Simon, but the trainings haven't been listed here yet. There is going to be a whole lot of talk about operators in the morning at this OpenShift Commons Gathering, and I'm also going to run a parallel track; in the afternoon, from 1:00 to 4:00, Michael Hrivnak and Matt Dorn are going to repeat the hands-on Kubernetes Operator Framework workshop.
A
Yeah, that would be great. So there's lots going on. There will also be a hands-on workshop at Red Hat Summit, in parallel with the OpenShift Commons Gathering there in May as well, and wherever I do anything, I just keep trying to book an extra room so that we can also run the Operator Framework workshop.
A
So look for those announcements; I'll send those out on the mailing list. But if you haven't signed up: anyone who's on this call and has not signed up for the Google Group, please do so. It's really where all of the announcements go out. There you go. Anything else anyone wants to add? Because I'm two minutes away from having to go to another call, so I am going to let you go. Ask any questions in the Google Group or in the Kubernetes Operators Slack channel.