Description
Walkthrough of CEP-2
https://cwiki.apache.org/confluence/display/CASSANDRA/CEP-2+Kubernetes+Operator
A: All right, we are recording. So this is the second group of Apache Cassandra Kubernetes operator SIG meetings. The last ones we had, there were two: one West Coast friendly and APAC, and then the other one East Coast friendly and EU friendly. I think mostly it was just gathering everyone. It's like, yes, this is what we want to do, here are some basic outlines, and the guidance was: let's go create a CEP. So, magic as it is, Ben and I have just put the finishing touches on it.
A: I'm gonna go ahead and put the link for CEP number two in the chat window, and this is on the Cassandra cwiki. Granted, this is pretty light as it sits, but I think, Ben, do you want to walk through this? I mean, I'll give you a chance. You could put some notes in, or you want to share your screen, or how do you want to do this?
B: Yeah, I could share my screen if you want.
B: But fundamentally it's here to, you know, track major changes that potentially impact a number of things that go beyond the scope of what a Jira would. So me and Patrick spent kind of the last few days working on the CEP, and when I say the last few days working on this CEP, you'll be like, well, what have you been doing? Because it's not particularly a lot of words, but there's been a lot of discussion.
B: To give you a very quick overview, what I'll do is run through this kind of step by step and explain a little bit of the reasoning around why we included things and why we did it. And, you know, again, feel free to jump in and we can have the discussion there, and I might pause a little bit to kind of open it up. And, of course, Patrick, please jump in and correct me if I get anything wildly incorrect.
B: You know, we've all tackled this from various angles. Kubernetes is increasingly becoming a target that we need to run more and more capability on top of, whether that's on prem, in the cloud, however you want to skin that particular cat. We have a broader impetus within the organizations.
B: We're working on that, hey, Kubernetes is kind of what we've got as a target here. And what we've personally seen is, you know, that's great for a lot of applications, but dealing with stateful stuff is hard, mainly because Kubernetes as a community has ignored stateful capability for so long. You know, it's always really easy to build distributed systems when you don't have to deal with state, but we're now at a point where it's starting to mature, and it makes sense to do this.
B: So we've all got that driving impetus. The audience that we've kind of targeted in terms of who will consume the operator, but also, I believe, who has input into this, is pretty wide-ranging, to be honest. You know, the DevOps and ops crowd are gonna shoulder a lot of this responsibility, but on the flip side, we tend to see this being consumed by developers that need to be able to test and run a build against production environments, or production-like environments, you know, locally and stuff.
B: You know, some of these different levels don't fit the exact kind of order of the way you would do things, particularly in a distributed system, but I think it's a useful framework to start to engage and think about how we build something that, you know, sits also a little bit in that Kubernetes community as well as the Cassandra one. So we started off with the core goals. The first one, obviously: lower the impedance between Kubernetes and Cassandra operations. You know, that goes back to that core motivation.
B: You know, that core motivation of making Cassandra easier to run on Kubernetes. But in terms of, like, the first definition of what it is, what should be in it, what should we target first as the MVP, we really derived that from the operator maturity scale. So we kind of talked about: what do some operators do? What do the existing operators do? And it kind of looks like level three operator compliance makes sense for, I guess, what you'd call the MVP or the initial release of what we're trying to do here.
B: What you will see, however, is clearly there is a mismatch between the operator maturity scale and how certain distributed systems should behave. So, for example, at level five they've classified horizontal and vertical scaling kind of in the same category as, you know, a whole bunch of stuff that kind of sounds like AI hype to me, which doesn't really make a ton of sense. So we've kind of chosen level three operator compliance as the target: full app lifecycle, storage lifecycle, so backup and recovery, as well as horizontal scaling.
B: That makes the most sense with Apache Cassandra. We also identified providing a pathway to level four. So the operator itself might not necessarily do this stuff out of the box, but it shouldn't prevent people who are using the operator from, you know, plugging it into their metrics and alerting and logging infrastructure that's already built around Kubernetes. And then one of the other goals was to have this listed on OperatorHub, which is a nice end goal to plant a nice little flag in. So I'm gonna pull up.
D: Hey, this is [inaudible]. I did have a quick question. So with the target, I think level three plus, you know, the scaling stuff, makes a lot of sense. With regards to the Operator SDK, do you see that as being the tool that's used to ultimately implement this operator, or are you trying to be agnostic towards that?
A: This is David. A good question, at least from this high-level perspective. With Cassandra itself, patch and minor upgrades, and upgrades in general, can be tricky to do correctly, and there are many cases in which we want to halt and be in mixed mode for whatever the duration is as we work things out. What are your thoughts on this and the capabilities there? And then the second one I was gonna ask.
B: That's a really good question, I think. And this is where it kind of gets into more around implementation details and that kind of thing. I think, as a community, as people who have to run and operate Cassandra in production, if we feel internally that, in order for us to tick off level 2 correctness, which is, you know, patch and minor version upgrades are supported, and if we as a community believe that actually, you know, sometimes this stuff is hard and we've got to hit the pause button and go in and figure out what went wrong, or roll it back, or whatever.
B: If that's a requirement for us internally to meet what we consider to be level 2, well, then I think we've got to tackle that for sure. So, you know, I think this is more of a high-level set of asks, and it's up to us to kind of define what "done" looks like for each of those. And then in terms of repair.
A: Sorry, I just wanted to add something to this, this is David. You brought up some really specific things, and I think, you know, adding to this goal thing, it's like: what do level one, two and three mean in a Cassandra world? And let's be specific. I mean, I think that's kind of the goal here. We don't want to get too general. We want to start, yeah.
E: Cycle, but to piggyback on that, with even level one, just to zero in on configuration management: I think it'd be helpful to get more specific on that. For example, does that include encryption? You know, what is the approach you're gonna take for that? You know, et cetera.
A: We could say, oh yeah, we do all three of these, and no one's gonna go out and say, well, we looked, and that's not the way you do lifecycle management, full lifecycle, with Cassandra. Oh, we're busted. I think we have to decide on our own, and maybe there are things that we say, yeah, these are like the minimum for level 1, and then things to look forward to.
B: Yeah, I think, you know, one of the ways that will help us define what these various levels look like for us, well, it'll kind of become a little bit clearer as we get into some of the other sections. So, for example, what are some of the non-goals, right? And just to jump forward for a second, one of the non-goals that me and Patrick kind of identified is actually we don't want this to be like a service facade around Cassandra, right? So what does that mean?
B: We don't want it to be so abstract where you just, you know, define a Cassandra keyspace, or you define a table, and you just point at the table and off you go, right. It'll still have concepts like clusters and nodes and that kind of thing. It is still a Cassandra cluster. And so I think, as we progress the CEP and we talk about what we want it to be and what we don't want it to be, it'll help us build some answers to, you know.
B
What
does
level-one
look
like
what
is
level
to
look
like,
but
also
at
the
same
time,
some
of
this
stuff
won't?
Actually,
you
know
be
too
clear
until
we
get
down
and
implement
that
right
and
I
say
that
from
experience
in
building
our
operator
right,
we
we
toss
this
backwards
and
forwards
for
a
long
time
around.
How
do
how
do
we
do
configuration
management
like
in
in
the
cid?
Do
we
let
people
specify
different
configuration
options
right?
B
We
got
to
a
point
where
actually
we
decided
to
go
completely
the
other
way,
which
was
you
know,
we
allow
people
to
specify
a
Cassandra
yeah
mph
I'll
put
it
in
a
config
map.
We
did
some
stuff
around,
allowing
you
to
specify
you
know:
yeah
yeah,
Mille,
fragments
that
it
can
stitch
together
and
do
other
bits
and
pieces
there
I'm
not
saying
that's
the
way
to
do
it,
but
that's
kind
of
where
we
landed
on
that
side
of
the
right.
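The fragment-stitching idea described above can be sketched as a recursive merge: start from a base cassandra.yaml and fold user-supplied fragments over it, later fragments winning. This is an illustrative sketch, not the actual operator code, and the option names are just examples.

```python
# Sketch of "YAML fragment stitching": deep-merge user fragments over a base
# cassandra.yaml (shown here as already-parsed dicts).

def deep_merge(base: dict, override: dict) -> dict:
    """Recursively merge override into base, returning a new dict."""
    merged = dict(base)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = deep_merge(merged[key], value)
        else:
            merged[key] = value
    return merged

def stitch(base: dict, *fragments: dict) -> dict:
    """Apply each fragment in order; later fragments take precedence."""
    for fragment in fragments:
        base = deep_merge(base, fragment)
    return base

base_config = {
    "cluster_name": "Test Cluster",
    "num_tokens": 256,
    "client_encryption_options": {"enabled": False},
}
user_fragment = {
    "num_tokens": 16,
    "client_encryption_options": {"enabled": True, "optional": False},
}

final = stitch(base_config, user_fragment)
```

The merged result keeps untouched base keys (`cluster_name`) while the fragment overrides `num_tokens` and merges into the nested encryption options rather than replacing them wholesale.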
B
But
we
only
go
to
that
content
that
part
of
the
argument.
Once
we
sat
down
to
actually
build
it
and
figure
out
what
it
looked
like.
Hopefully,
we've
all
around
this
particular
table
got
enough
experience
between
us
to
have
some
stronger
opinions
before
we
get
into
the
meat
of
it,
but
yeah.
That's
that's
kind
of
my
two
cents
on
it
and
I.
Think
having
this
this
document
as
a
driving
or
as
a
sounding
board
to
drive.
That
would
is
a
good
thing.
A: I was gonna say, all good stuff, and everyone here can have access to this doc with full edit. This isn't Ben and Patrick gating information. Just real quick, I'll try to put this in a note too: when you go to the cwiki, you can create an account, and when you do, you can just let me or Ben know your username, and we can give you full edit rights on all of this, and it has version tracking.
E: Yeah, another comment, going back to the configuration management and configuration options, and I think another example of where I'm maybe driving: I was wondering if, just for the case of Cassandra or distributed systems, I agree with this diagram, it's a very familiar thing, but maybe it's overly simplistic. I think it comes down to more than just how we configure the YAML file.
E: Look, for example, yesterday I was doing some work writing Ansible playbooks for automating use of the even token distribution algorithm, and that's in 3.11, and that's awkward to say the least. So, you know, there's a lot more involved there than just turning some knobs in the YAML file. It's a matter of executing some queries, and I would say that's definitely a lot.
B: No, I agree, and I think there will be a point, you know, when we come to implementation details like that, of, okay, let's support all the configuration options, but hey, this one has dependencies on others, and that kind of thing. I think a lot of it will just come down to: what do we support from day one? What are we happy with if you can do it but it's awkward, but we've got a plan to improve that workflow? And then what do we just not support?
B: You know, the first cut of it was literally like, you can specify a keystore, and if it's got everything in it, it might work. And then we started to work with, well, how do we do certificate requests within Kubernetes, and all that kind of fun stuff. But I think there'll be a lot of these questions where the answer isn't black and white. It'll be, well, here's the journey that we expect the operator to go on as we build out this capability.
B: So the next stage: non-goals. So this is kind of some of the things that we thought maybe we don't want to try and attempt or tackle in this particular project. That's not to say they're not important questions, or they're not problems that the community should solve. It's just kind of, maybe this isn't the best place to do it, right.
B
So
some
of
these
are
also
pretty
easy.
So
like
one
of
the
known
goals,
which
is
like
remove
the
need
for
any
Cassandra
administration
right,
so
I
think
from
the
start
trying
to
tackle
that
like
level
5
self
healing
self
blah,
you
know
it's
probably
definitely
a
bit
of
a
known
goal,
at
least
for
you
know
the
first
good.
While
you
know
I,
think
someone
raised
up
like
Cassandra
no
replacements
right.
B
You
know
there
will
be
cases
where
it's
like
if
the
node,
if
the
nodes
literally
down
and
then
the
hypervisor,
you
know
the
worker
that
it's
on
is
marked,
has
failed,
will
say
yes,
but
there'll
be
a
whole
bunch
of
other.
You
know
kind
of
questions
around
that,
but
I
think
explicitly.
We
don't
want
to
you
know
we're
not
trying
to
boil
the
ocean
here
we're
not
trying
to
make
this
a
completely
serverless.
B
Self-Driving
kind
of
you
know:
capability,
I,
already
kind
of
covered
this
one.
So
not
trying
to
you
know,
build
a
server
list
rosado
over
to
Sandra
and
then
the
other
one,
and
this
kind
of
came
up
on
the
mailing
list.
Discussion
was
official,
docker
images,
so
one
of
the
things
we're
gonna
have
to
do.
As
you
know,
to
have
a
kubernetes
operator.
Is
it's
going
to
have
to
have
some
docker
images
that
it
goes
and
runs
in
these
pods
right?
B
The
the
question
of
having
an
official
docker
image
for
the
project,
it
kind
of
broadens
the
scope
a
little
bit,
and
this
is
just
you
know.
This
is
my
opinion
here
that
in
that,
if
we
try
to
build
as
well
the
official
you
know
Cassandra
docker
image,
you
broaden
the
scope
around.
It's
got
to
be
aware
of
other
control
planes
right,
whether
it's
docker
compose,
whether
that's
whatever
my
sauce,
is
doing
AWS
CCS.
B
You
know
just
running
it
locally
on
your
on
your
machine
and
I
think
when
the
community
decides
to
tackle
an
official
docker
image
you
know,
and
if
it
gets
too
that
is
nice
and
robust,
then
the
the
the
kubernetes
operated
can
certainly
adopt
that
and
I
think
that
should
be
the
direction
and
the
path
but
I
think,
as
you
know,
an
initial
goal
trying
to
build
an
official
docker
image.
You
know
it
doesn't
make
a
lot
of
sense
and
will
probably
end
up
blocking
any
sort
of
project
in
that
scope.
B: For Apache Cassandra specifically, there are upcoming discussions in the mailing list around the way that we do these kinds of things, especially with the DataStax announcement around drivers and where they live. Like I'm saying, I just really don't see this living in-tree, so I don't think that'll be hugely applicable to us, but it's probably something worth paying attention to as we progress. Next: new or changed public interfaces.
B
This
kind
of
the
main
ones
that
I've
kind
of
identified
here
and
again
you
know
we
as
a
team.
We
can
add
to
this
and
drive
that
discussion
as
we
kind
of
go.
The
main
ones
would
be
probably
changing
some
pluggable
components
like
the
seed
provider,
all
the
stitches
or
something
to
have
better
support
for
kubernetes
service
discovery
mechanisms.
B
I
know
that
in
our
operate
up
we
ended
up
modifying
I,
think
the
simple
C
provider
so
that
you
could
pass
in
a
DNS
address
and
it
would
resolve
all
the
IPS
behind
that
DNS
listing
and
then
it
could
take
a
subset
of
those
IP.
So
it
wasn't
just
one
IP
or
distal
DNS
resolution.
So
that's
just
one
way
of
doing
it,
but
whatever
happens,
we'll
need
to
have
some
probably
some
in
project
support
for
service
discovery
and
that
all
obviously
need
to
support
a
change
in
config.
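The DNS-based seed discovery described above can be sketched in a few lines: resolve every address behind a service DNS name, then take a stable subset as seeds. This is a hedged illustration, not the operator's modified SeedProvider; the function name and defaults are invented, and `localhost` stands in for what would be a Kubernetes headless service name (e.g. something like `cassandra-seeds.default.svc.cluster.local`) so the sketch is runnable anywhere.

```python
# Sketch of DNS-based seed discovery: resolve all A records behind a DNS
# name and return a deterministic subset to use as Cassandra seeds.
import socket

def resolve_seeds(service_dns: str, max_seeds: int = 3, port: int = 7000) -> list[str]:
    """Resolve every IPv4 address behind service_dns; return up to max_seeds."""
    infos = socket.getaddrinfo(service_dns, port,
                               family=socket.AF_INET,
                               type=socket.SOCK_STREAM)
    ips = sorted({info[4][0] for info in infos})  # dedupe, stable ordering
    return ips[:max_seeds]

seeds = resolve_seeds("localhost")
```

Sorting before truncating matters: every node that performs the same lookup picks the same subset, so the cluster agrees on its seed list even though DNS may return records in a different order each time.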
B: Something like that, right. There's a new seed provider in Cassandra, and we'd just have a dependency on that version, you know, having at least a minimum of that version. So definitely a few ways to go about that. But I think for the moment, if the Kubernetes operator at least defines its own Docker image, it can be responsible for whatever needs to go into it to make it work. So.
A: I have a question, kind of on the proposal, but it's kind of more like where things live on the fundamental side. Everyone basically has their own manager, and there's an attempt right now to try to make it so that there is a centralized manager that we can actually work with. Is the operator planning to delegate to the manager, and have the manager interact with it as well to find these things, or is the operator planning to replace some of the functionality of the managers?
A: What are your thoughts on this, sort of like the lifecycle manager, you know? Sorry, everyone calls them differently. So, like, there's the sidecar project, for example, which is trying to be people contributing their internal sidecars and making it so they can actually manage Cassandra. So I'm not sure if your thinking is that the operator is its own thing that does similar, or is it delegating to the sidecars, or what is the relationship? Is this a full-fledged project, or is it a tangent of the sidecar?
B: I have an opinion on it, and my kind of view is, I think the operator project needs to do what it needs to do in order to get off the ground and running, and deliver value quickly to build velocity, right. So, for example, if someone donated their operator, whoever it is, and it came with all of this, right, so it came with a sidecar, it came with a Docker image, it came with whatever. That is my view on it.
B
Yes,
that
means
more
work.
Yes,
potentially,
is
that
you
deduplication
of
different
things?
You
know
but
I
think
build.
You
know,
building
and
looking
with
the
sense
of
velocity,
so
we
can
start
delivering.
You
know
some
stuff
sooner
rather
than
say
having
a
dependency
on
the
sidecar
project,
for
example.
That
makes
more
sense
to
me,
but
when
it
makes
sense
to
switch,
then
we
definitely
switch
I
mean
when
do
that's
my
opinion.
I've
heard
other
arguments
where
it's
like
well
hey.
B: How does it relate to compatibility and deprecation and migration as it relates to the broader project, rather than how we go about actually implementing this stuff for the operator itself? So I've kept it very, very high level from that perspective, and really just, you know, target versions in 3.x. I don't think there's a ton of use targeting anything lower than that.
B
But
again,
that's
more
of
a
nod
to
you
know
the
level
two
core
capability
that
we
talked
about
there
and
then
in
terms
of
testing.
You
know
one
of
the
concepts
that
mean
have
you
kind
of
talked
about.
B
It
also
means
that
you
know
once
we
get
to
a
point
if
we
do
reach
the
Nirvana,
where
all
this
stuff
is
running
and
using
kubernetes
and
using
the
kubernetes
operator
or
whenever
there's
a
new,
you
know
release
of
Cassandra.
We
can
say
well
yeah
the
operators
verified
for
it.
You
know
we
ran
all
that
testing
on
it.
The
other
really
great
thing
about
it.
Is
it
experienced
the
the
surface
area
of
who's
working
on
it
right
because
it
means
that
the
broader
Cassandra
community
I
mean
he
is
invested
in
making
sure
her.
B: David, sorry, look, I would love to hear, David, your impression of using this operator and the CI/CD pipeline for Cassandra.

A: Sorry, I'm here. So one of the things that I'm doing, and it's kind of more hacks than anything else, but one of the issues with Python dtests, and you're calling this out a little bit here, I think, is that when everything runs locally, Cassandra actually acts differently than when everything is remote.
A: So we've been doing a lot of different testing where we actually deploy the clusters to hit different code paths that you cannot really hit without hacks in Python dtests. So if a Python dtest is able to trigger a cluster in Kubernetes, it doesn't, in my opinion, matter whose operator or version it is, because it's testing Cassandra itself. So it's more like, did you do this correctly?
A
If
Netflix
can
use
it
doesn't
matter
if
Apple
can
use,
it
doesn't
really
matter
it's
more
like
it
can
give
me
a
cluster
similar
to
how
Lacey's
idiom
gives
it.
Then
you
can
trigger
something.
Sir.
There
so
I
see.
There's
benefit
there,
the
one
thing
that
is
kind
of
hard:
it's,
where
are
the
resources
so
like?
We
have
I,
think
something
like
nine
or
twelve
Jenkins
boxes.
I,
don't
actually
remember
anymore.
No,
that
was
like
from
we
have
a
decent
number
of
Jenkins
boxes,
but
we
don't
have
kubernetes
boxes
and
a
SF.
A: I think Ben just dropped out. Yeah, I think you're right, that should be something that, I think, should be on DataStax, to make sure that whatever server version we have is compatible with it, which I don't think is gonna be that hard.
A: That's a good question. And then there are also, you know, different cloud implementations, etc., so probably reasonable to call that out. It may be that you still have to use a different operator in that case, but I would hope that it is derived from this operator.
A
But
another
way,
I
see
that
not
the
whole
like
DLC
thing,
but
many
different
contributors
of
the
project
all
have
their
own
internal
Forks,
so
being
able
to
support
your
own
fork
would
actually
be
beneficial.
So,
therefore,
even
if
you
have
your
own
thing,
you
have
your
own
yamo
definitions.
You
have
your
own,
whatever
it's
you
could
potentially
share
on
this
side,
like
you
do
with
the
Apache
Cassandra
codebase,
which
then
means
that
data
stacks
had
their
own
stuff.
A
But
if
everything
is
only
nuf,
fishel,
Apache
Cassandra
releases,
then
that
gets
a
little
harder
for
a
lot
of
the
contributors
to
actually
use.
Hey,
I.
Think
with
anything
where
you
see
a
fork,
you
want
to
make
sure
they
or
a
version
or
anything
like
that.
You
want
to
show
some
compatibility
with
a
lot
of
things
like
a
driver,
for
instance,
if
you're
not
compatible
with
a
certain
driver,
then
what
are
you
doing?
You
just
created
an
island
for
yourself.
B: You know, and as long as you give folks some degree of control over that, it will make it extensible. I think if you are running your own fork, if you are building whatever it is on top of it, as long as you've kind of got those extensibility points, then it should be pretty easy, yeah, from a compatibility perspective.
A: Well, I'll pose that to the Central European crowd tomorrow too, because they'll be later, but yeah. So just another quick thing, here's a call to action: this is available, it's out there, please. This is a good time to start adding notes, add your what-have-you. I'm gonna, of course, try to go in there and, well, you already did, Ben, put some structure around this idea.
A
What's
a
level
one,
what's
level
2
level
3
in
Casandra
terms,
but
it
seems
like
that's
what
we
need
to
be
really
debating
right
now
and
you
know
feel
free
to
post
questions
on
the
dev
list.
That's
it's
a
really
good
place
to
have
conversation
about
things.
I'll
try
to
post
a
summary
on
there.
If
you
want
to
use
that
as
a
start
for
the
thread,
if
you're,
if
you're
not
following
Deb,
Cassandra
dot,
Apache
org,
this
is
a
great
day
to
do
it.