From YouTube: OKD Working Group - Ansible Collections for OKD - Fabian von Feilitzsch and Timothy Appel (Red Hat)
Description
OKD Working Group, 2021-02-02
Ansible Collections for OKD
Fabian von Feilitzsch and Timothy Appel (Red Hat)
A: So I'm Tim Appel, I'm a senior product manager on the Ansible team. I was actually with the original Ansible company, and I've been along for the ride to Red Hat and then to IBM, and I've been working on how Ansible can integrate with and help automate things happening in the container-native space. One of the things about working for Red Hat is that once we started this effort, we looked into how we could start to help automate what's happening in OKD and what's happening in OpenShift clusters.
A: What I'm presenting here is the 1.0 that we came up with initially and how it came together. I want to try to speed through this; there's a lot that went on here and we could go a whole lot deeper. Fabian put together a demo if we have time for it, and if not, we can provide the code. So that's my background.
B: Yeah, I'm Fabian von Feilitzsch, I'm a software engineer in the OpenShift org; I work on Operator Framework. The goal here is to have better full application-to-infrastructure integration, so that people who are using Ansible for their traditional IT work can more easily transition to the Kubernetes space without having to upend all their tooling, logging, monitoring, etc. And I'll cut it there, so we have time for the whole presentation.
A: All right, thanks, Fabi. The first thing I should mention, if it's not apparent from who's presenting to you, is that this was a joint effort between the Ansible team and the OpenShift team. Fabi was one of the engineers who came over and worked with us in developing this, and we had some of our own people from the Ansible team working together on it, so this is truly a joint effort. So, just diving in, I'm going to speak through this.
A: I know I'm not talking to an Ansible group here, so you might be wondering what a collection is. You might be familiar with Ansible, but in the last year or two we've had this huge effort going on to separate the core engine that Ansible's known for, that command-line tool, from what we call the content, so that they're separate and can move independently.
A: One of the problems we ran into with our batteries-included approach was that you had to wait for the next release of Ansible to come out to get new features for a cloud service or some other application or API change, and it was just getting way too bogged down.
A: So we came up with this thing called Ansible Content Collections, which we've been moving towards and are most of the way through; "collection" for short. It's a new format for organizing Ansible content so that it's independent of the engine and can be added, installed, and updated independently of what's happening in core.
A: So what we're talking about here is one of those collections, one that is specific to working with OKD and OpenShift. Like I said, just to review: this collection focuses on the unique capabilities of OKD and OpenShift systems. We also have another collection, which has now been renamed kubernetes.core, and that provides the baseline Kubernetes and Helm 3 automation capabilities.
A: So if you're working with OKD and OpenShift, you're probably going to use both of these collections together in your playbooks. For the baseline stuff you'd work with what's in kubernetes.core, and then when it comes to the things that are specific, that OKD adds on top of that, you'd pull from the community.okd collection. A couple of other side notes.
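[Editor's note: for reference, pulling in both collections together is typically done with a Galaxy requirements file; a minimal sketch, with versions omitted:]

```yaml
# requirements.yml: the baseline collection plus the OKD-specific one
collections:
  - name: kubernetes.core
  - name: community.okd
```

Installed with `ansible-galaxy collection install -r requirements.yml`.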
A
If
you
go
out
and
start
researching
this,
that
you
might
become
confused
or
or
wonder
about
is
camino
okd
is
the
upstream
collection
called
redhat.openshift
and
that
that
is
the
supported
offering
that
we
put
together
and
put
out
there
to
customers.
So
it's
one
in
the
same.
It's
just
one
once
the
the
downstream
and
once
the
upstream
of
that
content.
A
Another
quick
side
note
is
originally
our
kubernetes
content
started
off
as
a
community
effort
and
was
called
community.kubernetes
we're
going
through
the
process
of
changing
the
name
migrating
the
repo
things
like
that,
so
they're
they're,
essentially
the
same,
but
the
community.kubernetes
is
going
away
for
marketing
and
business
reasons
and
is
going
to
be
called
community
kubernetes
core
all
right.
So
that
was
just
a
little
background.
So
you
know
what
you're
looking
at
here.
A
So
let's
talk
about
what
is
in
this
collection,
so
so
what
we
did
when
we,
when
we
pulled
together
this
effort
last
summer,
to
make
something
that
was
supportable,
that
we
put
full-time
resources
on
we
worked
together
with
is
we
we
looked
at
the
what
was
in
that
community,
dot,
kubernetes
collection
and
said
all
right?
We
need
to
break
this
into
two
parts,
because
what
had
happened
is
it
was
just
done
through
community
contributions
coming
in
and
it
was.
A: It was mostly baseline Kubernetes, but some OpenShift-specific features had rolled in, and we were getting complaints from both sides: people who were trying to use OKD and OpenShift saying "hey, this is missing," and people in the baseline Kubernetes crowd coming to us and saying "hey, what is this stuff
A
That's
in
here
that
it's
operating
different
than
it
should
so
we
decided
the
best
thing
to
do-
was
then
to
to
split
this
stuff
out
into
their
own
collections,
so
that
they
could
both
move
and
focus
on
each
other's
communities
better,
rather
than
trying
to
find
this
like
middle
ground.
So
that
was
the
one
of
the
first
big
things
that
fabian
and
other
engineers
took
on.
A
The
other
thing
that
I
mean
it
was
very
very
helpful
in
was
getting
proper
ci
testing,
including
prow
integration
into
this,
so
that
all
of
what
we
were
doing
got
run
against
the
latest
builds
that
were
happening
there.
That
was
something
that,
unfortunately,
wasn't
happening
in
the
previous
collection
and
work,
so
we
migrated
a
whole
lot
of
community
community
content
over
that
was
open,
shift,
specific,
an
inventory,
plug-in
and
oc
connection
plug-in.
A
There
was
a
an
openshift
auth
module
that
was
called
case
off
at
the
time,
we've
renamed,
and
then
we
created
a
a
module
specifically
for
working
with
declarative
resources,
but
it
gave
it
the
added
logic
for
working
with
things
like
I'm,
trying
to
remember
some
of
them
deployment,
configs
and
projects
and
things
that
are
specific
to
openshift,
that
the
kubernetes
core
module
would
sort
of
trip
on
there.
A
So
one
of
the
things
that
was
a
little
interesting
that
we
went
through
is
ansible's
added
name
spaces
and
we
decided
to
make
use
of
that,
and
so
we
had
the
case
module
that,
like
I
said,
handled
the
baseline
kubernetes
declarative
apis.
So,
rather
than
create
a
a
totally
different
named
one,
we
decided
to
use
the
kate's
name
again,
because
there
you
don't
have
to
do
it
fully
qualified
like
like
I've
shown
here.
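[Editor's note: for reference, the fully qualified forms being contrasted might look like this; the manifest path is illustrative:]

```yaml
# Fully qualified module names: only the collection prefix differs
- name: Apply a manifest on plain Kubernetes
  kubernetes.core.k8s:
    state: present
    src: app.yml

- name: The same task against OpenShift, via the OKD-flavored k8s module
  community.okd.k8s:
    state: present
    src: app.yml
```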
A
It
would
make
it
a
lot
easier
for
people
to
move
or
or
port
their
playbooks
between
baseline
kubernetes
and
then
moving
to
open
shift
in
that
regard,
because
then
they
would
just
have
to
switch
what
name
space
they
were
pulling
that
module
from.
So
there's
a
little
side.
Note
more
advanced
thing,
and
then
we
created
a
few
modules.
So
this
is
an
area
that
we
were.
We
went
did
a
quick
survey,
and
so
what
are
the
most
common
things?
A
People
are
trying
to
automate
with
openshift
right
now
to
figure
out
what
is
in
the
1l
and
the
and
the
two
things
that
came
up
was
here
was
the
the
ability
to
expose
a
route
which
which
is
sort
of
like
the
exposed
in
kubernetes,
but
the
added
stuff
that
you
can
do
in
openshift
and
then
the
other
was
was
the
templates
that
came
up
the
ability
to
to
render
and
optionally
apply
those
to
what
you
were
doing
were
also
things
that
we
were
seeing.
A: Those were things a lot of people trying to use Ansible with OpenShift were struggling with, and we wanted to make them easier, so we created those two modules. I'm going to stop there; like I said, I sped through a lot of stuff. Do we want to take the time for a demo?
C: I can't see the chat on my end, unfortunately, but that's okay, and I think you might have answered it. James, you were asking: will playbooks written for community.okd work without changes when used with redhat.openshift?
A: Yes, there should be no issue there. You just have to put a little bit of care into how you're managing your namespaces. If you do it fully qualified, like I showed back here, you would have to do a search and replace. But you don't have to do it that way, and I would recommend not doing it that way
A: if what you want is the ability to go between the two easily. There's a way to declare a namespace search path at the beginning of your playbook, and then you don't have to use the fully qualified names in your plays and your roles.
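[Editor's note: for reference, the namespace search path being described is the play-level `collections` keyword; a minimal sketch, with an illustrative manifest path:]

```yaml
- hosts: localhost
  # Short module names below resolve against this search path,
  # so switching between clusters means editing only this list.
  collections:
    - community.okd
  tasks:
    - name: The short name `k8s` resolves to community.okd.k8s here
      k8s:
        state: present
        src: app.yml
```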
D: Is there a reason why the redhat.openshift modules wouldn't also provide the community.okd name, if they're going to be effectively identical?
A: Yeah, we've done it that way for awareness and also for clarity. When you're dropping in a single task to document something, you don't see all the other things you could have done at the command line or in the playbook declaration, and people may get confused over time as you have modules with the same name appearing in entirely different collections. It's just for clarity in that type of documentation.
B: So this is what the basic module invocation looks like for the k8s module. The community.okd.k8s module is very similar to the kubernetes.core k8s module, except that it has some special handling. For example, projects: if you don't have permission to create a Project, it'll instead issue a ProjectRequest and handle that whole API flow, which the core Kubernetes module cannot.
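[Editor's note: a sketch of the project invocation being demoed; the project name is illustrative:]

```yaml
# Without permission to create Projects directly, community.okd.k8s
# falls back to the ProjectRequest API flow described above.
- name: Ensure the test project exists
  community.okd.k8s:
    state: present
    definition:
      apiVersion: project.openshift.io/v1
      kind: Project
      metadata:
        name: test
```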
B: If we look, it returned a lot; a lot of that is because of this managedFields field that's returned, but this is what the API returned when we issued a create with this definition. You can see we have a project here, it's got the name "test", and that's pretty much it. So next, let's go ahead and create an image stream.
B
Let's
look
at
what
that
image
stream
looks
like
so
we
can
see
here
it's
just
a
regular
kubernetes,
manifest
instream.yaml
and
we're
going
to
be
pulling
in
the
python
docker
image,
and
you
can
see
here
you
can
just
reference
a
file
directly
much
as
you
could,
with
qctl
or
any
of
the
other.
B
You
know
common
utilities,
so
let's
go
ahead
and
create
that
image
stream,
which
should
import
that
python
image
all
right.
So
we
can
see
that
it
was
created.
We
see
the
spec,
that's
returned
here
and
the
status
which
gives
us
the
new,
the
location
of
the
container
in
the
openshift
image
registry.
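[Editor's note: referencing a manifest file directly, as just described, might look like this; the filename is illustrative:]

```yaml
- name: Create the ImageStream from a manifest file, kubectl-style
  community.okd.k8s:
    state: present
    src: imagestream.yaml   # a regular Kubernetes manifest on disk
```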
B: So next, let's create a deployment config that references this image stream we just made. We can look at what this looks like real quick: it's going to use that base Python image to spin up just a basic HTTP server, we have some environment variables set, and then it has this image change trigger.
B
That
will
basically
say
you
know
if,
if,
if
the,
if
the
image
stream
gets
a
new
tag,
then
automatically
update
our
deployment
to
use
it,
and
it
will
also
mean
that
in
this
spec,
open
shift
will
update
that
python.
That
image
that
it's
referencing
to
be
the
image
from
the
local
registry.
Instead,
so
let's
go
ahead
and
create
that
deployment
config.
B: And you can see here we have this wait and wait_condition. This will look in the conditions array on the status of an object, and it won't end the task until the condition is true. So we're creating this deployment config and we're just going to wait until it reports that it is available.
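[Editor's note: the wait behavior being described might be expressed like this; the filename is illustrative:]

```yaml
- name: Create the DeploymentConfig and block until it is available
  community.okd.k8s:
    state: present
    src: deploymentconfig.yaml
    wait: yes
    wait_condition:
      # Poll status.conditions until Available reports "True"
      type: Available
      status: "True"
```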
B: This spec does in fact have the image replaced by the image stream reference, and it's a specific SHA, as opposed to what we put into the spec initially, which was just "python".
B: So now we're going to run this exact task over again and recreate the deployment config. Because the deployment config is already there, the module will see that and, rather than issuing a create, which would obviously fail, it's just going to issue a patch instead, so that any difference between this deployment config definition and what is currently in the API server should result in it being redeployed. And note that the API changed this image field to point at the registry, whereas in our deployment config we just reference it as "python".
B: It issued a patch, but it issued the patch leaving that field to the trigger, because it knew that this image field is one managed by another controller, and there's no sense in wrestling with that controller for control of the field. This is especially relevant if you're writing an operator or something like that: if you're referencing these image streams from an operator, it would otherwise lead to infinite reconciliations, as the deployment continually emitted new events and your operator continually went and tried to change it back.
B: So we go ahead and create this deployment; it's using the same base image, so hopefully this will be pretty quick. While we wait for that to finish, you can see here we're issuing only a patch request, to replace only the fields that we specify. We only want to replace spec.template.spec.containers: we want to replace the container named hello-world with the python image again.
B: Okay, cool, so that's the end of the image stream portion. Next, let's look at what we added for routes. Routes are basically OpenShift's method for creating ingress. So here, let's go ahead and just create this simple deployment. It'll deploy a hello-openshift container, a Docker container that basically just outputs "Hello OpenShift" over HTTP. We'll create the service as well to expose that port, and then we will look here at the OpenShift route module, community.okd.openshift_route.
B
Openshift
route
is
approximately
equivalent
to
oc,
create
route.
It
can
expose
most
of
the
same
stuff.
So
here
we
can
see
we
reference
the
service
that
we
just
created
that
exposes
that
container,
that
we
just
spun
up
and
we're
going
to
create
every
all
of
the
stuff
in
the
default
namespace.
So
when
we
create
that
route
with
the
fewest
possible
arguments
just
giving
it
service
and
namespace,
we
can
see
that
it
returned
this
object,
which
includes
this
url.
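[Editor's note: a minimal invocation like the one demoed might look like this; the service name and the return-value path are assumptions:]

```yaml
- name: Expose the service with a Route, fewest possible arguments
  community.okd.openshift_route:
    service: hello-world      # name of the Service created earlier (assumed)
    namespace: default
  register: route

- name: Show the hostname assigned by the router
  debug:
    msg: "http://{{ route.result.spec.host }}"   # return path assumed
```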
B: You can do custom names, you can allow TLS or disallow TLS or make TLS redirect, all different kinds of things, all exposed through the module without needing to get in there and write this sort of definition by hand. All right, let's just skip through the rest of this route.
B
All
right,
the
third
thing
I
wanted
to
look
at
we've
added
the
ability
to
interact
with
the
openshift
oauth
server
directly
through
this
module
called
community.okd.openshift
off.
So
first,
let's
go
ahead
and
create
the
secret
which
will
contain
the
information
for
this
user,
a
username
of
test
and
a
password
of
testing123.
B
Let's
go
ahead
and
configure
the
ht
password
identity
provider
so
that
it
uses
the
secret
that
we
just
created
to
verify
users
and
we'll
go
ahead
and
create
the
test
user
and
we'll
mark
it
saying
that
it
uses
the
ht
password
provider
and
it's
the
user
test
there
and
we'll
create
a
cluster
role
binding
this
cluster
row
binding
will
give
our
new
user
cluster
reader
access.
B
So
next
we're
just
going
to
use
this
community.kubernetes.cates
cluster
info
module,
which
will
return
information
about
how
we're
connecting
to
the
server
what
the
host
is,
what
authentication
parameters
we're
using,
etc,
etc,
and
we're
just
going
to
store
that
in
this
cluster
info
variable.
So,
let's
go
ahead
and
get
that
api.
Url.
B
So
you
can
see
here
all
the
information
that
that
module
returns
returns.
All
the
information
about
the
connection
it
returns,
information
about
what
apis
are
supported
by
the
server
that
you're
connecting
to
and
a
bunch
of
stuff
like
that,
as
well
as
client
and
server
version.
B
So
next
we
come
to
the
actual
indication
of
the
openshift
off
module.
So
you
can
see
here
we're
just
saying:
login
is
the
user
test
with
the
password123
to
this
host
that
we
just
pulled
from
the
cube
config
that
we're
using
to
connect
right
now?
So
let's
go
ahead
and
obtain
that
access
token,
and
there
we
go
so
you
can
see
that
we
ran
that
and
it
returned
this
api
key
that
we
can
use
to
authenticate
and
now
that
we
have
that
information.
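[Editor's note: a sketch of the login flow just described; the exact return and fact paths are assumptions:]

```yaml
- name: Log in to the OpenShift OAuth server as the test user
  community.okd.openshift_auth:
    host: "{{ cluster_info.connection.host }}"   # path assumed from k8s_cluster_info
    username: test
    password: testing123
  register: auth

- name: Use the returned token to authenticate a follow-up call
  community.kubernetes.k8s_info:
    api_key: "{{ auth.openshift_auth.api_key }}"   # return path assumed
    kind: Project
```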
B: Next up is OpenShift templates, which are basically a way that you can, either locally or in the server, set up some basic templating without using Helm or the Ansible templating language or any of the other options that are out there right now. The nginx-example template is one that's included by default in an OpenShift installation, and it pretty much does what you'd expect, which is create an nginx deployment.
B
So
that
example
lives
in
the
openshift
namespace.
And
then
here
we
can
pass
the
parameters.
So
we
want
to
deploy
it
to
the
openshift
namespace
as
well,
and
we're
just
going
to
give
it
the
name
test123.
So,
oh,
and
also
we
are
putting
it
in
a
rendered
state,
which
means
that
we
don't
want
to
go
ahead
and
create
the
resources.
We
just
want
to
see
what
resources
would
be
created.
So
let's
go
ahead
and
render
that
template.
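[Editor's note: rendering without creating, as described, might look like this; the template's NAME parameter is an assumption about the nginx-example template:]

```yaml
- name: Render the nginx-example template without creating resources
  community.okd.openshift_process:
    name: nginx-example
    namespace: openshift          # where the template object lives
    namespace_target: openshift   # where the resources would go
    parameters:
      NAME: test123               # parameter name assumed
    state: rendered               # only render; do not create
  register: result
```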
B: It rendered what you'd expect: a route that would expose the ingress for it, a build config for building the image from a git repository, and the deployment config for actually deploying it and the pods, and it's all hooked up to the nginx example. I'm sorry, and it's all
B: Apologies, I took a brief left turn there. Yeah, and it's all hooked up with the image stream tag. So now that we have these resources, stored in this result variable, we can go ahead and create those rendered resources by looping through them, and the apply parameter means that we will be using basically the equivalent of `kubectl apply` in order to create them.
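[Editor's note: the loop-and-apply step just described might look like this, assuming the rendered list is returned under `resources`:]

```yaml
- name: Create each rendered resource with apply semantics
  community.okd.k8s:
    state: present
    apply: yes                    # roughly `kubectl apply`
    definition: "{{ item }}"
  loop: "{{ result.resources }}"  # return key assumed
```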
B: So let's go ahead and do that, and we can see that it made all of those resources that we had rendered before: the build config, et cetera, et cetera.
B: So let's not delete those resources; we'll give it some more time. You also have the option, rather than rendering and then creating manually, to process and create them directly in one step, and we should see pretty much nothing change here, because it is the same resources. All right.
B
And
finally,
the
information,
the
the
last
little
bit
that
I
wanted
to
highlight
was
the
openshift
inventory
plug-in,
which
gives
you
the
ability
to
use
openshift
as
a
dynamic
inventory.
And
basically,
this
means
that
the
the
plugin
will
go.
Look
at
the
cluster,
look
at
all
the
pods
in
the
cluster
and
add
those
pods
to
your
ansible
inventory
as
targetable
hosts.
So
you
can
see
here
this
second
play
targets
the
namespace
testing,
pods
group.
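[Editor's note: a minimal dynamic-inventory config like the one described; the file name and connection keys follow the plugin's conventions and are assumptions here:]

```yaml
# inventory.okd.yml: pods in the cluster become targetable hosts,
# grouped by namespace (e.g. a namespace_testing_pods group)
plugin: community.okd.openshift
connections:
  - namespaces:
      - testing
```

Listed with something like `ansible-inventory -i inventory.okd.yml --list`.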
B: One of those pods doesn't have Python installed, so this is not going to work on it, and it failed. But we can see that the test123 nginx-example, the hello-world DC, and the hello-world deployment pods that we spun up earlier were all found, setup was run, and we can verify that setup ran successfully
B
By
looking
at
the
value
of
the
test
environment
variable
because
if
you
remember
in
our
deployment
configs,
we
added
that
environment
variable
just
test
with
the
value
of
test,
and
you
can
see
here
it
output
in
the
hello
world
dc
that
the
value
of
test
is
test
and
in
the
hello
world
deployment.
The
value
of
test
is
test
and,
of
course,
in
the
nginx
example.
It
does
not
have
this
environment
variable
defined,
so
we
had
a
failure
there
and
then
last
you
can.
B
As
long
as
there
is
python
on
the
pod
that
you're
targeting
you
can
copy
files
to
and
from
the
host
from
from
ansible
making
it
so
that
it
is
as
long
as
there's
python
installed
in
the
pod,
basically
or
in
the
container
there's.
Basically,
you
can
do
anything
that
you
could
normally
do
with
ansible
there
all
right-
and
that
is
all
that
I
wanted
to
demo.
So
thank
you.
A: Yeah, thanks, Fabi, that was great. Hopefully it got across all the different things that you can do to automate and cut down on the command-line work, the manual and repetitive work you would otherwise do every time you deploy a cluster or do anything. The big question that we have for you, besides using it, trying it out, and seeing how we did in our 1.0,
A
Is
you
know
what
could
we
do
next?
What
do
you
want
to
see
in
this
next
there's
there's
a
lot
of
areas
that
we
didn't
touch
on
and
the
question
we
kept
asking
ourselves
was
well.
Would
that
be
useful,
well,
which
one
of
these
is
a
priority,
which
one
is
not
that's
the
type
of
feedback
that
we're
looking
for?
We
got
a
good
core
set
of
feedback,
mostly
from
from
red
hat
consultants
and
a
couple
other
people.
A
We
knew
in
the
community
that
were
doing
work
with
ansible
and
openshift,
and
they
gave
us
that
initial
batch
of
use
cases
and
we've
essentially
covered
them
all
right
now,
so
we're
we're.
Where
do
we
go
next?
So
that's
the
feedback
we'd
love
to
hear
for
those
of
you
who
are
interested
like.
I
will
give
this
deck
to
diane
to
to
send
around,
but
these
are
some
of
the
repos
of
the
content.
A
You
were
looking
at
starting
with
fabian's
the
demo
code
that
he
was
just
running
through
and
and
the
repos
where
we're
developing
the
collections
we've
been
talking
about
and
then
there's
a
bunch
of
blog
posts.
If
you
want
to
go
deeper
and
and
read
about
this
stuff
in
in
more
detail,
maybe
more
elegantly
than
I've
been
speaking
about
it.
So
that
is
all
that
we
had.
C: Well, that sounds good. I think we could use some feedback from the community on maybe some more useful examples. The example was good, Fabian, don't get me wrong, but just how people would use this in production. So, Joseph and other folks who are doing that, your feedback will be most welcome.
E: One thing I'd love to see, and maybe it's already part of the playbooks, is kind of standard operating procedures for admin tasks, like key rotation. I'm not sure, we probably don't have that yet, but something like snapshotting and backing up cluster state.
B
No,
so
the
content
there
was
mostly
focused
on
just
kind
of
like
providing
those
basic
building
blocks
kind
of
the
foundational
work
in
order
to
allow
us
to
build
more
things.
On
top
of
that,
but
collections
do
allow
you
to
currently
distribute
roles
and
I
believe
in
the
future,
it's
planned
to
allow
you
to
distribute
playbooks
as
well,
and
so
the
hope
is
that,
as
we've
sort
of
filled,
this
out
get
more
community
involvement,
get
some.
You
know
better.
Subject
matter
experts
I
deal
a
lot.
B
You
know
working
with
operators
with
kubernetes
api
and
low
level
components,
and
things
like
that.
I
have
less
experience
with
sort
of
higher
level
cluster
administration,
and
so
that's
definitely
like
we.
We
would
love
to
have
playbooks
and
roles
in
or
that
would
enable
users
to
like
very
easily
automate
those
tasks.
But
you
know
that's
sort
of
like
now
is
the
point
where
we
go
out
to
the
community
and
and,
like
you
know,
look
for
people
who
who
know
about
that.
B
Don't
necessarily
need
to
do
all
of
it,
but
if
they
have,
you
know,
requests
for
features
like
that
documentation.
Maybe
some
like
getting
started
places
like
that.
Those
would
all
be
very
useful
things
for
us
to
see
pop
up
in
that
repo
in
order
to
help
us
prioritize
and
also
in
order
to
help
us
understand
what
exactly
those
cluster
administration
tasks
are
and
how
we
can
help
automate
them.