A: Here's the quick agenda: just a regular Operator Framework update on what's going on. I pulled out a few interesting things to talk about. I want to talk about OPM a little bit; I know we've talked about this in the past and pointed to this enhancement proposal, and I just wanted to get some more attention on that. Then I wanted to give a quick overview of a tool called kuttl, which is a Kubernetes testing tool that we're going to be integrating into Scorecard; I think some of that initial work has just landed or is about to land, so we'll talk through that. And then a great group of UX researchers at Red Hat spent some time talking to operator authors about how they think about building operators: some of the issues that they're seeing, and issues they have just understanding where to go next. So I just want to talk through some of that research, which I think is interesting for the group to see.
A: And if anybody else wants to talk about anything in particular, we can do that as well, and if you have any questions, this is totally collaborative. These slides are a little formal, but I just wanted to collect some links and scroll through some stuff, so interrupt as much as you want. All right, let's move forward.
A: The first item I want to talk about is a bunch of work that just landed to add a new type of update graph. If you're familiar with the model that we have today: every version of a CSV (the manifest that has a bunch of metadata about your operator) says "I replace this version," and it can also say "you can skip these versions to get to me." That's roughly how it works, and that's an update graph.
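As a sketch of the replaces-style graph described here, a CSV names the version it upgrades from (operator name and versions below are illustrative, not from the meeting):

```yaml
# ClusterServiceVersion for v0.9.2, declaring the edge back to v0.9.0
apiVersion: operators.coreos.com/v1alpha1
kind: ClusterServiceVersion
metadata:
  name: my-operator.v0.9.2
spec:
  version: 0.9.2
  replaces: my-operator.v0.9.0   # explicit edge in the update graph
```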
A: I've got one screenshot of what this looks like in the docs; this is in the enhancement proposal, I believe, for semver-based updates. You can see here that you just have your etcd operator, and instead of having to put this replaces field right here and being very explicit about it, going from, you know, 0.9.0 to 0.9.2, you can just omit that and move forward.
A: So, pretty cool; it should be a very small win, but I think it's going to be very popular. And then following that is semver support in the skip range, so that you can skip between different versions within a semver range. This is for when you happen to release something that doesn't work the way you thought: if you get a bug report or something like that, you can get a new version out and start skipping the broken one. Always useful when you're doing anything related to updates.
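A hypothetical CSV fragment showing both mechanisms mentioned here: `skips` for routing updates around a known-bad release, and the semver-range `olm.skipRange` annotation (names and ranges are illustrative):

```yaml
apiVersion: operators.coreos.com/v1alpha1
kind: ClusterServiceVersion
metadata:
  name: my-operator.v0.9.3
  annotations:
    # any installed version in this semver range may update straight to v0.9.3
    olm.skipRange: '>=0.9.0 <0.9.3'
spec:
  version: 0.9.3
  replaces: my-operator.v0.9.0
  skips:
    - my-operator.v0.9.2   # the broken release; updates jump over it
```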
A: Some other updates: we've talked a few times on these calls about the new bundle format. This is basically a bit of a change-up of how that operator metadata is submitted, really teasing it apart from this one main ClusterServiceVersion file into separate files, so you can just ship each piece: say you use a Deployment to run your operator, and you want to include a set of RBAC rules, and you need to ship a Secret and a ConfigMap with it.
A: That's roughly what it is. What this then allows you to do is pack those inside of a container image, and that's how you host this thing. Instead of having to have any sort of other tooling, you just have an image registry, like we all do to use Kubernetes, and you can put these catalogs on there as well.
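For reference, a bundle in the new format is roughly a directory of plain manifests plus a small metadata file; the sketch below follows the operator-framework bundle conventions, with illustrative file names:

```yaml
# Bundle image contents:
#   manifests/my-operator.v0.9.2.clusterserviceversion.yaml
#   manifests/my-crd.crd.yaml
#   manifests/extra-rbac.yaml        # RBAC, Secrets, ConfigMaps ship alongside
#   metadata/annotations.yaml        # the file shown here
annotations:
  operators.operatorframework.io.bundle.mediatype.v1: registry+v1
  operators.operatorframework.io.bundle.manifests.v1: manifests/
  operators.operatorframework.io.bundle.metadata.v1: metadata/
  operators.operatorframework.io.bundle.package.v1: my-operator
  operators.operatorframework.io.bundle.channels.v1: stable
```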
A: What's exciting is that for community operators (this is the repo we have on GitHub that is the collection of all the community operators) you can submit in both formats, and submissions in the older format will be converted behind the scenes to the newer bundle format, so that newer versions of OLM can fetch them dynamically without knowing what happened. That's really great: it'll ease our transition period into this new format, and then we'll move over to it at some date in the future.
A: If you wanted to learn more about this OPM tool: it's a way to curate your own sets of these catalogs. The community-operators repo is just one giant catalog of all the operators and all of their versions, so you can upgrade between them; but you can also make your own catalogs, and this is a great way to do that. And there are some enhancements which I wanted to circulate.
A: This link is, I guess, a more curated view, even of a curated catalog, so you can say "I want these very specific versions only." What this can help you do is remove some of the bulk if you're mirroring into a disconnected environment: otherwise it might be gigs and gigs of containers, pulling versions of operators that you don't care about, so you can be selective and pick just a few. I think I actually have a screenshot, yeah.
A: This OPM tool lives inside the operator-registry repo, and I was just going to click over to the doc here that has some of the commands; I'll just talk through them a little, and this gives you a sense of how it works. You have this registry, which is the underlying database that holds all this information; you can add and remove things from that database and then serve it out.
A: That serves the gRPC calls that the cluster can use. But then you also have this index, which can make it easier to ship container images directly to clusters, and it's got the same types of commands. You can see that you're building certain bundles and then tagging them inside of a container image, and that is what you would push to your registry and then have added to the cluster. So, pretty cool, and a lot of nice commands.
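The commands being walked through look roughly like this; image names are placeholders, and the exact flags have shifted between operator-registry releases, so treat this as a sketch and check `opm --help` for the current syntax:

```shell
# Build a bundle image from a directory of manifests, then push it
opm alpha bundle build --directory ./manifests --tag quay.io/example/my-op-bundle:v0.9.2
docker push quay.io/example/my-op-bundle:v0.9.2

# registry commands: maintain the underlying database and serve it over gRPC
opm registry add --bundle-images quay.io/example/my-op-bundle:v0.9.2 --database index.db
opm registry serve --database index.db --port 50051

# index commands: bake the database into a catalog image you can ship
opm index add --bundles quay.io/example/my-op-bundle:v0.9.2 --tag quay.io/example/my-index:latest
docker push quay.io/example/my-index:latest
```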
A
You
could
easily
add
this
as
part
of
a
CI
run
or
something
like
that
as
part
of
your
release,
process
I,
don't
have
any
other
information
to
add.
There's
some
export
commands
as
well.
So
if
you
want
to
dump
these,
for
you
know
offline
mirroring,
if
you
do
need
to
burn
them
to
a
DVD
or
whatever
and
get
them
into
a
secure
environment,
you
can
do
that
as
well.
I
forget
what
the
fellows
name
is
that
has.
A: I don't know if any of the folks on the OLM team or SDK team want to comment on any of this, but it's all pretty exciting work. Some of the reasoning and justification for it is largely around easing the loading of operators into clusters for testing, as well as curating them: if you are an admin team and you want to ship the same catalog to five different clusters, or something like that. So, pretty cool. Let me get back to my presentation.
C: Can I add one new thing to this? Okay, yeah: in the past couple of weeks we've merged some changes, because a lot of those OPM commands currently, or I should say used to, shell out to Docker or Podman to deal with images. A certain set of them can now do this without shelling out to either Docker or Podman, so they don't require a privileged container daemon of any sort. That's export, and then the registry add and the index add commands, which all have this as an option. Okay.
A: Awesome, yeah. Because with the registry specs you can have manifest lists of all your architectures, I think it's more a question, for things like OperatorHub and Scorecard and some of the testing pipelines that we have, of how we best support these multi-arch operators in terms of testing. These environments, for things like mainframes or whatever, can be a little esoteric versus things like Arm; but we don't have those machines in our pipeline today anyway.
A: So it sounds like there is at least some interest, and maybe that'll be a future topic; maybe we can start a thread on the mailing list about how best to do this. I know that Red Hat internally, for our product teams, has some stronger guidelines for exactly how to do multi-arch that we can learn from as well. But we can tightly control those environments, so it's a little bit easier there to force folks to do the right thing.
A: It checks things like "none of the required CRDs are owned by other things," that type of stuff; but it doesn't have any assertion-based logic along the lines of "go deploy a database with my database operator," or "go start this machine-learning pipeline," or whatever it is, and assert that it actually worked the way you intended. That's what the kuttl tool brings you. It's in the kudobuilder org on GitHub, and I thought it would be interesting to walk through their "writing your first test" guide.
A
This
is
a
really
good
example
of
so
this
is
just
really
simple
in
nginx
deployment,
with
three
replicas
and
then
you're,
asserting
that
you
did
get
three
replicas
out
of
it
at
the
end.
That's
kind
of
you
know
as
easy
as
it
gets
and
you
can
run
through
a
bunch
of
different
test
cases
with
this
I
think
it'll
be
a
really
exciting
way
to.
We
wanted
to
allow
folks
to
package
these
up
and
send
them
to
our
pipeline
as
a
way
to
actually
you
know
validate
that
this
works
I.
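The example being described comes from kuttl's "writing your first test" guide: a test step applies a manifest, and a matching assert file declares the state kuttl should wait for. A sketch of the two files (shown together here; the directory and file names follow kuttl's step-numbering convention):

```yaml
# tests/e2e/example/00-install.yaml: apply a plain nginx Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.17
---
# tests/e2e/example/00-assert.yaml: kuttl polls until this state is observed
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
status:
  readyReplicas: 3
```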
A: I think the end goal, which would be awesome, is to have a bunch of different Kubernetes providers hooked into a test framework, where we can test against the current versions of some of the cloud offerings and some of the other offerings, and then even test pre-release versions of Kubernetes against these operators, and things like that. That would be pretty cool, so this is just laying some of the groundwork for that.
A: You can find more information at kuttl.dev, and you'll find this guide there. There are a few other tips and tricks and things like that that are interesting, but for the most part it's pretty straightforward, which is really cool. I'm curious: has anyone played around with this already, or is there a testing framework that you like and are using today?
E: I'll just add that there was a virtual conference put together, and a 30-minute video was created on getting started with this; I can make sure that people have access to that.
E: I guess the creation of kuttl, and the reason it exists, is consistent with what we were trying to do with KUDO, which was to have a declarative way to manage operators; it just made sense to also have a declarative way of testing. And the tests are actually consistent with what you would expect from your YAML, or your experience with Kubernetes: we just use them as either applies or updates, often with strategic merging so you can be brief, as well as asserts. It's still got some runway; there are still things that we'd like to add in and take advantage of, and I'd have to look at the other options out there. I wouldn't expect this to be an all-in-one solution, for sure. It's definitely best as more of an end-to-end test, certainly more so than a unit test, though there are some elements of it that work well for an integration test as well.
B: ...runner, and we'll have some custom test images, one of which will be a kuttl-based test image that basically executes the kuttl tests and then outputs the results in a way that Scorecard can consume. So be on the lookout for that in the next month or two, and if you're interested in helping us integrate that, that would be absolutely awesome.
A: All right, last topic of my slides. I just wanted to talk through some of the research that the Red Hat user experience design research team did around operators, and I want to get this information out far and wide; there's nothing proprietary here, these folks just happen to work for Red Hat. So, at the high level:
A: This was a pretty small survey, in the sense that it was seven internal operator authors and two external operator authors; we're going to do a follow-up study to try to get at least five more external operator authors to even out those numbers. But we talked them through how they built their first operators, any pain points, where they are today, that kind of thing. The interesting thing that jumped out at me is that even though everybody was familiar with Kubernetes as a user (creating Deployments, debugging Pods, all that kind of stuff), they still had to learn a lot of Kube fundamentals to build an operator, just because you're using the deeper parts of it.
A
If
you
scratch
the
surface
on
just
running
some
applications
and
talking
about
how
the
control
loops
work
and
the
caching
layers
and
some
of
the
things
like
that,
so
everybody
had
to
learn
something
which
was
really
interesting
and
that
kind
of
feeds
into
the
next
topic,
which
is
that
the
docs
and
some
of
the
learning
resources
we're
seeing
it
a
little
bit
as
scattered
and
especially
actual
examples
of
real
operators.
I
know
that
the
framework
is
guilty
of
having
some
of
its
first
things.
You
know,
building
your
sample
operators
that
are
not
real
applications.
A
You
know
they're
just
basically
standing
up
one
or
two
pods
and
that's
basically,
it
so
I
think
that's
some
good
advice
that
we
need
to
move
forward
with
folks
we're
using
the
SDK,
which
is
great
and
are
doing
some
testing,
but
want
better
in
to
end
test
support.
So
that's
good
to
hear
as
folks
wanna
you
know,
get
production
quality
operators
and
if
they
are,
you
know
going
to
be
pursuing
this
as
part
of
a
community
or
an
organization.
That's
going
to
buy
or
sell
this
operator,
you
know
they
need
to
be
bulletproof.
A
So
that's
awesome,
and
you
know
this
kind
of
very
much
lines
up
with
the
roadmap
that
we
have
planned
for
the
operator
framework,
some
of
the
the
upgrading
of
the
SDK
itself.
There
was
some
concern
for
breaking
changes
that
were
hard
to
find.
I
know
where
that,
where
you
know
we
just
adopted
some
of
the
lower
level
tools
from
some
of
the
sakes
with
the
controller
runtime
and
the
cube
builder
work,
so
I
think
that
kind
of
falls
under
this.
A
That
was
probably
not
a
hard
to
find
breaking
change
as
we
change
the
project
layout,
but
hopefully
going
forward.
We
won't
have
those
types
of
issues.
One
cool
thing
is
the
the
capability
model
is
used
by
all
folks.
Everyone
use
that,
as
a
reference
point
for
kind
of
how
to
chart
their
path
with
the
operator.
A: Some of the major conclusions here around docs were that it wasn't clear how everything fits together fully end to end: going from "I've never written any of this before" to downloading the SDK, to getting started, and then wanting to reference real code. This is something that I see a lot when I talk to folks; it's "well, show me the best operator out there," and the answer is more like, "okay, well, what type of application are you going for?"
A: All right, on the capability model: folks, I think, were very successful using it as a guiding star, which is exactly what it's designed for. The structure of it does make it a little bit difficult to figure out how it applies to you; it's kind of a critical-thinking exercise on the part of the user, but maybe we can strengthen it a little.
A: And then some of the, I guess, less major outcomes were around other types of resources. Are there more real-time things, like Slack? (I know we've got the Kubernetes operators channel.) Are there code examples, other than the ones that we have highlighted already, and books, and other things like that? One thing that we do have under way is the Operator Framework website.
A
This
is
moving
a
little
bit
slower
than
I
would
like,
but
we're
you
know
it's
a
lot
of
work
to
write
all
this
content,
and
so,
if
anybody
wants
to
help
get
involved
with
that,
I
think
those
they're.
Those
repos
are
currently
in
kind
of
the
person
who
was
messing
around
with
some
of
the
Hugo
stuff
in
his
personal
github,
but
will
be
moved
to
the
operator
framework
or
care
really
soon,
and
so
we'll
we'll
take
any
pull
request
that
anybody
wants
to
send
our
way
and
that'll.
E: What I'm seeing is a lot of people expressing that there's a lack of information, and I'm wondering who's responsible, right? I feel like, in order for the Operator SDK to be successful, that education has to be there. Do we take ownership of that and just drive it, or do we come together as a community and... you know, I don't know; that's really the question.
E: Okay, yeah, it sounds like we're on the same thought; maybe I'm just making some connections. Well, you made the comment that it's taking longer and there's a lot of effort to create this website. I believe that content creation is usually underestimated, and so: what can we do to drive it?
A: Absolutely. I think, I mean, just the kuttl website that we were looking at earlier is a really great example of this; I think we've got a bit more content to cover than that, but it's a really nicely put-together website that's right up front with "here's what this is, and here's how to get started." Something this website will, I think, do really well is provide a unified place for even the definition of an operator: here's what we see an operator doing, things like the capability model.
A: How do I know if these operators are good or not? And then expressing the differences between the different types of SDKs that you can use (or that you don't have to use an SDK at all), and where the Lifecycle Manager fits in; all of that stuff is wrapped up in there. So hopefully...
C: I think you made a really good point, which we're restating (I think that was Ken): operators are heavily aligned with, and dependent on, the Kubernetes APIs. I don't think much operator documentation calls out that you need to be very aware of those API conventions and patterns, and that, when you write an operator, people will be using your APIs and expect them to follow the same conventions as good Kubernetes APIs. There's a big gap there right now that we could do a lot to help flesh out.
E: Yeah, agreed; that was Ken, yeah. What we're going to add or change, or what's being suggested at this point, on the API element: the content that's available online today doesn't even include the concept of CRDs, and so CRDs will be added. I think we're aligned on that.
D: I have something to ask. The service mesh team is currently undergoing work to get disconnected mode working, right: not pulling stuff down from Quay, but from some internal repository. There were discussions last week about the whole relatedImages stuff and how that's not implemented yet. I guess the question is: is there a timeframe for when the relatedImages stuff is going to be implemented, whereby the SHAs can get replaced, or the SHAs can replace the tags, the whole annotation stuff?
C: I can probably help with this, but I am curious why this is important. Essentially (maybe I should make sure we're talking about the same thing): there's a relatedImages field in the CSV where you can list out the images that your operator might require at a particular version. So when you pull that manifest, you can read that list, and that's another set of images to mirror to make sure your operator works offline or disconnected.
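The field being discussed is a plain list under the CSV spec; a minimal sketch, with illustrative image names and truncated digests:

```yaml
apiVersion: operators.coreos.com/v1alpha1
kind: ClusterServiceVersion
metadata:
  name: my-operator.v0.9.2
spec:
  # every image this version needs at runtime, so mirroring tooling
  # can discover the full set to copy into a disconnected registry
  relatedImages:
    - name: operand
      image: quay.io/example/operand@sha256:0123abcd...
    - name: metrics-proxy
      image: quay.io/example/metrics-proxy@sha256:4567ef01...
```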
C
The
thing
that
hasn't
been
implemented
is
a
feature
that
Olin
would
do
at
runtime,
where
it
takes
those
values
in
the
related
images,
fields
and
projects
them
down
onto
the
template
annotations
of
your
deployment
back
so
that
when
we
stamp
out
a
deployment,
those
values
are
available
via
like
the
downer
API
at
runtime.
So
the
reason
that
that
has
not
been
a
priority
is
because
you
probably
already
have
to
you.
C
I
I
should
say
yes
and
no
Olin
doesn't
do
anything
with
the
related
images.
It's
it's
purely
metadata,
that's
used
by
tooling
for
building
offline
catalogs
right
now.
What
it
will
do
is,
if
you
put
a
tag,
it'll
mirror
to
a
tag
if
you're
using
the
offline
tooling
in
the
OSI
tool.
If
you
put
a
shot
at
a
mirror
to
a
Shaw
but
for
Lucy
P
disconnected
it's
a
requirement
that
all
mirrored
images
be
digests.
D: I was hoping that in our relatedImages we could just put what we've been using, tags, and then somehow some magic under the covers would say, "okay, we're going to mirror this tag to this SHA," and then that SHA would be placed, like you said, in the pod template of the Deployment, and we could then pass it down via environment variables for the operator to use. But if you're saying no, that's never going to be the case, then you have to put the SHA in the CSV.
I: How are you guys setting up and tearing down Kubernetes environments when you're doing testing? Basically, right now I'm using a proprietary tool at my company, but I'm looking for a standardized way to create Kubernetes clusters as setup and teardown for tests: on GKE, vanilla Kubernetes, kops, OpenShift, all of these. Sorry, it's a very broad topic, but...
I: Yeah, basically you have a test suite, right, of multiple tests, and you do a setup and a teardown before and after. So let's say the suite is done with kuttl (I'll have to look into that tool), but the teardown and setup are not, I think, fully addressed. I think we're all addressing a similar problem, right? Maybe because we all want our operators to work on as many environments as possible. Basically, yeah.
G: Yeah, I think for a pure Kubernetes answer, I would probably suggest that we look into something like kind; that seems to be gaining some traction. I've seen it upstream, with people using kind as the way to spin up and spin down Kubernetes clusters under test inside of their Prow config. So kind might be something to look into.
G
That
would
be
the
the
best
answer.
I
have
for
right
now
how
I
was
just
doing
it
as
there's
a
github
action
that
probably
spins
up
a
kindy
cluster
for
you
to
test
with,
and
then
you
know
it's
a
just
a
bunch
of
docker
container,
so
just
tear
it
down
and
you're
good
to
go.
Oh,
that
might
be
the
best
thing
to
look
into
for
that
side
of
the
test.
Today,
yeah
now.
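For the kind approach mentioned here, setup and teardown reduce to a couple of CLI calls (the same thing the GitHub Action does under the hood); a minimal sketch, assuming the kuttl kubectl plugin for the test run:

```shell
# Spin up a throwaway cluster; it's just Docker containers under the hood
kind create cluster --name op-test --wait 5m

# Run the suite against it, then tear everything down
kubectl cluster-info --context kind-op-test
kubectl kuttl test ./tests/e2e/
kind delete cluster --name op-test
```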