From YouTube: 2020-06-04 Cassandra Kubernetes SIG
Description
This meeting was to walk through the CRD comparison created by John Sanda. A few different Cassandra operators were made available as examples on different areas of implementation. Notes for this meeting were taken in-line with the doc:
https://docs.google.com/document/d/1t5nYvdfUs7EQDoYCwKuA1OIVPfp3OEuEDK0KhbRd9QE/edit?usp=sharing
A: All right, everyone, welcome to yet another Cassandra Kubernetes SIG (special interest group) meeting. The point of this meeting today is actually doing some engineering work, and a huge thank you to John Sanda, who's here. He jumped in and did the hard work of going through a lot of different operators. If you were here for our last meeting, we were talking about: okay, let's start with the CRD, let's work through the parameters and kind of go top-down.
A: This is how Kubernetes projects are done: through the API, and the API is expressed in the CRD. So John did that work and we have a consolidated document. What I thought would be really useful is for John to walk through the document. John, I don't know if you want to do a screen share; that might be the easiest thing. John could walk through the document and we could talk about each section.
A: I saw we already had some great comments in there. Cyril, Alex, and Stefan (who was here earlier but isn't now) had some great comments, but I think this will be a good time. The goal is to start doing some level of merging here, or trying to consolidate if we can. So hopefully we can get a large portion of it to the point of, yeah:
A: This is all pretty common and we should do that, and then we can get down to debating the finer points. If we can, the next step will be, since John has been putting in a lot of the time to do the work, to take that input and actually turn it into something that can go on something like GitHub. You know, take it from Google Doc coding to actual code and see if we can keep the project going.
B: Okay, so thanks everyone for reviewing and commenting on the doc. I should have put some background information in the doc. The objective was to look for the least common denominator and use that as a starting point for comparison. For example, I think the CRDs for each of the operators provide ways of configuring resources (CPU and memory, for example) or configuring Cassandra itself, albeit differently. So there are things, features, that each operator has...
B: ...that others don't. For example, the Instaclustr operator and CassKop have backup and restore in progress, while Cass Operator doesn't have that. I intentionally left that out, because I wanted to just look at things that are in common but may be implemented differently, and this took more time than I thought, so there are things that I definitely missed. I appreciate the comments, and please feel free to jump in and interrupt.
B: The minimal example was from the Instaclustr operator. Oh, and Freddy and folks here from Sky UK: I intended to cover all of the major, mature operators; again, I just ran out of time. So I'm going to skip over the minimal example; that was just because I was least familiar with the Instaclustr operator and was poking around the code getting familiar with things. That was scratch work. So, on to the cluster topology.
B: You know, there may be some differences, but that seems to be pretty similar, as well as just defining the number of nodes in the cluster. I highlighted tolerations in the Instaclustr operator, because that was one thing that I think was unique to the Instaclustr operator.
B: I'd welcome somebody from Instaclustr and DataStax, or any of the teams, to chime in. So with CassKop, you can declare multiple data centers in the spec and then racks within each data center. With Cass Operator, if we scroll down to it, your top-level object is the data center; I left out the full spec just to try to cut down on the boilerplate. Similarly, with the Instaclustr operator, the top-level type is the data center.
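For reference, a rough sketch of the two shapes being compared. This is illustrative only, reconstructed from memory of the public examples; field names and values are approximate and should be checked against each project's documentation rather than copied verbatim.

```yaml
# CassKop-style: the cluster is the top-level object, with DCs and racks nested
apiVersion: db.orange.com/v1alpha1
kind: CassandraCluster
metadata:
  name: demo-cluster
spec:
  nodesPerRacks: 3
  topology:
    dc:
      - name: dc1
        rack:
          - name: rack1
          - name: rack2
---
# Cass Operator-style: the data center itself is the top-level object
apiVersion: cassandra.datastax.com/v1beta1
kind: CassandraDatacenter
metadata:
  name: dc1
spec:
  clusterName: demo-cluster
  size: 6
  racks:
    - name: rack1
    - name: rack2
```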
C: The label structure that you have here is pretty awesome. I've been talking with users, and assuming that the nodes are labeled with that specific failure domain label: it's changing between Kubernetes versions, and I expect it to change across the clouds as time goes on. Supporting arbitrary label key and value pairs is pretty interesting, and it looks like it could support those rules.
F: Yes, and it was difficult, because we had workloads that actually needed to be spread across specific, different nodes all over the place. So we had to make a very open system, able to match whatever was given to us by the ops team. The ops team agreed to label all the nodes with their geographical attributes, and then we could actually say: we want nodes based on this availability zone, which we simulate as being a row of machines.
A: So, just quickly while we're waiting for Jim: should we capture some of these things in a note inside of here? Just thinking out loud, there's some good discussion going on in here. Yeah, put a comment in there, like what you said, Chris; we don't want it to get lost. I mean, it's not lost, because I'm recording, but what if I have a hard disk failure?
B: So when I came across this, I started looking at the Instaclustr operator first, working through it, and then CassKop and Cass Operator, and I saw this stuff about tolerations. I was vaguely familiar with taints and tolerations, but I needed to do some reading, and it seems I'm still a little confused; I don't have a lot of experience using them directly.
B
But
to
me
it
seems
on
one
hand
it
seems
like
there's
just
some
redundancy
with
affinity
rules
but
based
on
some
reading
I
did
it
looked
like
it's
complimentary,
so,
which
is
part
of
the
reason
I've
you
know
highlighted
so,
for
example,
it's
the
the
one
example
I
saw,
which
is
I.
Think
really
good
illustration.
B: You have a node that has memory pressure, and you add a taint to the node saying it's low on memory, and only pods that tolerate that taint will be able to be scheduled to run there. All the other pods would potentially be evicted from that node and scheduled elsewhere. So I like seeing the ability to specify the toleration at some level.
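As a concrete illustration of that taint-and-toleration mechanic, here is a minimal sketch using standard Kubernetes fields (not taken from any of the operators' CRDs; the taint key and value are made up for the example):

```yaml
# Taint a node so that only pods which explicitly tolerate it are scheduled there,
# e.g.: kubectl taint nodes worker-3 dedicated=cassandra:NoSchedule
# (the kubelet also applies taints like node.kubernetes.io/memory-pressure automatically)
apiVersion: v1
kind: Pod
metadata:
  name: cassandra-0
spec:
  tolerations:
    - key: "dedicated"
      operator: "Equal"
      value: "cassandra"
      effect: "NoSchedule"
  containers:
    - name: cassandra
      image: cassandra:3.11
```

A toleration only allows scheduling onto the tainted node; it is affinity/anti-affinity or a node selector that actually attracts the pod there, which is why the two are complementary rather than redundant.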
B: Let's see. I didn't notice it, or may have missed it, with the Instaclustr operator, but both CassKop and Cass Operator basically have the ability to say whether or not you want to allow Cassandra pods to be co-located on the same node. I think that's great, because maybe for development and testing you want that, while for production maybe you don't, or maybe you're just more resource constrained and you want more density; so it's good to have that ability.
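Under the hood, that co-location switch typically maps to a pod anti-affinity rule. A minimal sketch in plain Kubernetes terms (the label keys are illustrative; each operator uses its own labels):

```yaml
# Pod template fragment: forbid two Cassandra pods of the same cluster on one worker node.
# Using preferredDuringScheduling instead of requiredDuringScheduling would allow
# co-location when resources are tight (e.g. dev/test density).
affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            app: cassandra
            cluster: demo-cluster
        topologyKey: kubernetes.io/hostname
```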
G: My laptop's internal mic is busted and my headphone mic is busted; the second laptop has a normal mic. Okay, so what I was saying before, the very first thing, was that we've been thinking that nodes per rack is a better system than what we have, where we just specify a size and then do a division and try to keep everything even. It's very confusing if you want to try to add a rack, like, what are you even doing? So we had all this logic in the webhook for it.
G: So anyway, I like the CassKop way, but I think we decided we did not want to just push down a size per rack, because that encourages the bad practice of really out-of-balance setups. Even if someone says, oh, but I needed that this one time, it's like, no, you should probably have better practices when you're starting from scratch in Kubernetes, right?
G: So that was one thing. The taints and tolerations, and things like a node selector, felt like a lot to us, and we avoided supporting them at first. I guess I'm glad we did, but you eventually do need to support all of it, to support the more heterogeneous workloads people will put in their Kubernetes clusters, where it's like: oh, this instance is dedicated to GPU stuff and I don't want Cassandra on there. So we added a node selector when someone asked for it on GitHub.
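The node selector case mentioned there is the simplest of the scheduling knobs; a minimal sketch (label key and value are made up for illustration):

```yaml
# Pod template fragment: only schedule Cassandra onto nodes labeled for it,
# keeping it off nodes reserved for other workloads (e.g. GPU instances).
nodeSelector:
  workload-type: cassandra
```

nodeSelector only requires a matching label; taints on the GPU nodes (with no toleration on the Cassandra pods) would be the complementary way to keep Cassandra and other pods off those nodes.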
G
You
might
need
to
rip
to
repelled
our
pods
from
other
things,
and
it
was
like
no
no
like
we
use
node
selector
everywhere,
so
it's
enough,
but
giving
people
that
it
yeah
it's
very
hard
and
my
last
thing
that
stupid
audio
problems
were
ruining
was
like,
as
far
as
like
the
the
labels
and
the
Eastham
rooms
and
rows
and
I
think
getting
that
feedback
is
really
important
because,
like
we're
we're
a
hundred
percent
in
the
clouds
of
knowing
how
people
you
know
people
expert
and
managing
down
the
center
doing
is
we
want
to
support
that
I
mean
as
long
as
there's
some
good
organization
like
we
should
make
sure
we
have
supposed
that
are.
F: Not everyone is in the cloud; you can be running Kubernetes not truly in the cloud but on your own premises, where you don't have an infinity of machines. That's why we did it that way: we had constraints, given to us by the Cassandra experts, about which nodes must not end up on the same hardware, and they really said that's the condition for Cassandra to go into Kubernetes, otherwise it's not going to work. Let's be clear, let's drive the thing, but without that it may not work.
G: We support the failure domain labels sort of out of the box, with a simple thing: just take this rack and label it with this availability zone, and it just works. If you're very new to this stuff it might look magical, but it's all just counting on existing plumbing. This is a matter of safety.
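For the out-of-the-box case, the plumbing being counted on is the well-known topology labels that cloud providers put on nodes. A minimal sketch (the zone value is an example; on Kubernetes versions of that era the legacy key failure-domain.beta.kubernetes.io/zone was still common, which is the version-drift concern raised earlier):

```yaml
# Pod template fragment: pin one logical Cassandra rack to one availability zone
# by matching the provider-populated topology label on the worker nodes.
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchExpressions:
            - key: topology.kubernetes.io/zone
              operator: In
              values:
                - us-east-1a
```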
A: Safe defaults, you know. I think Cassandra already has a lot of that, but that's the Cassandra world. If you're not being overly prescriptive in your YAML file, will you get a safe, or a sane (I hate to use the word sane) default? Safe defaults for the infrastructure.
G: I found it hard to make examples for this: not knowing what region or cloud anyone's in, I can't really put up examples that show using these features. You can, but then someone will copy-paste them and say it doesn't work, and they're really testing nothing except that everything else is working as it's supposed to. You have to kind of research these values before you slap them in.
D: A question for you, Tim, sorry, Frank: you said you didn't do the same thing regarding the parameters per rack, I think, or per DC, per data center maybe. You have only the notion of, I mean, either a Cassandra cluster or a data center, I don't remember; I think that's what John said before. But you said you don't want to have specific parameters on the data center or on the rack.
D: It's just a matter, I think, of nodes, of not having the same number of nodes per data center. So we can have a data center where we just run Hadoop jobs, where we don't have the same replication factor, so we have a smaller number of nodes. So we don't really need that notion of, you know, the same high number of racks, and maybe also the same number of nodes per rack, just so as not to waste space by having the same racks and the same numbers everywhere.
G
Yeah
yeah,
we
have
yeah,
we
support
multi
EDC
as
long
as
it's
in
one
kubernetes
cluster,
and
ideally
one
namespace
is
how
it
works
right
now,
which
lets
you
get
those
work
you
can
get
the
workload
separation
out
like
that,
but
we
don't
have.
Anyone
like
at
Astra
is
not
testing
that
very
hard
right
now.
So.
A: The difficulty with Kubernetes right now, as it sits, when we're talking about Kubernetes and Cassandra working together, is that there's a huge mismatch. Kubernetes does not do multi-data-center, even if you use a VPN or bridge it or anything like that; you nailed it there. The latency is too hard, and it doesn't have the ability to do things like handle partition events or anything like that. But Cassandra does just fine with that.
A
So
I
don't
think
we're
gonna
solve
it
with
this
CRD
multi,
datacenter,
multi,
rack,
I,
think
when
we
talk
about
it
in
these
terms,
assumes
single
IP
plane
that
doesn't
have
latency
may
be
inside
of
a
physical
data
center.
E: Hey guys, at the risk of confusing matters even further, I just want to point out that all this discussion sort of presupposes that Kubernetes itself is scheduling the pods. There is a notion of outsourcing that to something like a storage orchestrator, like Stork, which is actually the route that we've taken at DreamWorks. Which is to say, and maybe we'll get to this when we get down to the storage section, we let our storage decide where our actual persistent volumes live.
F
Yet
a
very
competitive
network
storage
that
is
available
and
fast
because
it's
available
but
fast
doesn't
work,
but
so
we
use
local
storage
with
the.
But
so
we
have
the
problem
of
locating
pods
in
nodes,
I
mean
if
we
have
EBS
like
storage
on
premise,
then
much
less
problem.
You
can
move
things
around
much
more
easily,
so
I
agree
that
it's,
but
should
we
should
we
start
from
the
hypothesis
that
people
have
a
network
storage
know
that
is
fast
enough.
I.
E
You
are
containers,
but
okay,
okay,
so
I
mean,
as
far
as
any
discussion
goes,
I,
don't
I,
don't
necessarily
think
it's
a
problem
having,
like
you
know,
specifying
its
apology
here.
If
there's
some
way
when
you
implement
your
controller
to
actually
push
this
down
through
to
your
storage
class
or
you
know
when
you're
actually
creating
your
storage
and
then
letting
that
drive
the
pods
scheduling
that
you
know
that
could
be
an
option
for
people
you
know
and
again
did
the
risk
is
trying
to
solve
every
problem.
B: So if you're using a storage class for local storage and then you declare a PVC for that storage, even with the defaults, and I'm definitely not very familiar with the algorithm of the Kubernetes scheduler, but wouldn't the scheduler then necessarily limit the selection of potential nodes for a pod to those nodes with the available storage?
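For reference, the mechanism usually involved here is the WaitForFirstConsumer binding mode on the StorageClass. A minimal sketch with the built-in local volume plumbing (names are illustrative):

```yaml
# StorageClass for node-local volumes: binding is delayed until a pod is scheduled,
# so the scheduler picks a node that both fits the pod and has a matching local PV.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
---
# PVC as it would appear in a StatefulSet's volumeClaimTemplates
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cassandra-data
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: local-storage
  resources:
    requests:
      storage: 100Gi
```

With delayed binding, once a local PV is bound to a claim, the pod is effectively pinned to the node where that PV lives on every subsequent restart, which is the behavior described in the next few comments.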
E
I,
it
probably
depends
on
how
things
are
labeled,
so
it
depends
if
you're,
creating,
if
you're,
creating
the
storage
first
and
then
attaching
pods
to
it
or
if
you're,
letting,
basically
you're
scheduling
the
pod
and
then
and
then
creating
local
storage.
Where
that
exists,
I,
guess
it
really
kind
of
has
to
do
with
how
the
selection
criteria
for
the
pods,
what
is
considered
in
the
list
of
you
know,
available
match.
Selection
I
guess
were
for
the
Kuban,
a
scheduler
right,
I,
think
I
think
it's
going
to
ultimately
come
down
to
how
those
things
are
labeled.
G
From
from
my
experience,
I
doesn't
sound
that
different
to
honestly
how
ABS
ends
up
working
with
kubernetes
or
G
CPUs
version,
the
persistent
disks,
so
if
I
say
like
I
need
a
pod
in
this
availability
zone
and
like
for
whatever
reason,
I
can't
get
a
volume
in
that
availability
zone.
Also
at
pod
is
left
pending
and
then,
let's
say,
I
fix.
G
Some
mother
I
fix
something
like
what's
on
screen
right
now
and
and
the
operator
targets
another
get
rid
of
that
pod
and
or
rewrites
that
staple
said
to
target
another
zone
and
I
can
get
volumes.
There
still
is
checking
like
my
CPU
and
RAM
requests
fit
on
the
node
guys
trying
to
get
the
pod
on.
It
has
a
storage
and,
if
I
like,
take
that
pot
away
because,
like
we
have
this
idea
of
like
stopping
to
do
like
a
cold
like
you
know,
take
take
all
the
compute
away.
G
The
storage
is
left
in
place
and,
like
you
can
start
it
up
again
later,
all
the
the
pods
have
an
identity,
so
they
go
back
to
the.
If,
if
the
volume
for
them
is
only
on
exactly
one,
worker,
node,
like
it'll,
have
to
target
that
one
and
some
other
jobs
have
come
in
eating
up
the
CPU
RAM
whatever,
and
they
can't
then
that
pod
will
get
left
pending
again.
G
So
I
think
it's
important
to
make
sure
these
scenarios
work
with
whatever,
whatever
we're
producing
and
putting
out
there
for
the
Cassandra
community
like
it
should
work
well
on
cloud,
and
it
should
work
well
on
Prem
and
I
mean
there
is.
You
can
use
a
bad
storage
provider,
though
that
doesn't
it
doesn't
kind
of
fulfill
its
other
half
of
the
expectations
and
then
yeah
like.
D
I'm
not
sure
how
it's
an
edge
case,
what
you're
talking
about,
except
if
you
need
to
use
a
storage
class,
a
different
storage
class
per
zone
or
something
like
that,
because
today
we
used
it,
we
use
the
same
storage
class
for
all
the
nodes.
So
as
long
as
in
each
region,
the
storage
class
provides
the
right
storage
that
you
expect
it
will
work.
I
need
some
Cuban
Aries
to
just
get
it
and
I
mean
the
drivers
that
you
use
and
everything,
but
other
than
that.
E
Well,
it's
not
gonna,
give
you
a
quick
example
like
and
and
just
because
so
the
Astra
cooks,
like
we've,
been
working
with
NetApp
on
some
of
this
stuff
too
and
I
know.
This
is,
if
it's
not
already
in
their
in
their
product,
it
should
be
soon,
but
so,
for
example,
we
use
we
use
port
works
for
our
to
create
our
persistent
volumes
right
and
a
port
works
volume
has
the
notion
of
replication
itself,
and
so
what
we
choose
to
do
is
in
the
in
the
storage.
E
Definition
itself
is
where
we,
where
we
define
our
affinities
and
anti
affinities,
so
we
can
say
for
a
given
port
works
volume,
for
example,
ensure
that
both
of
those
replicas
live
in
the
same
rack,
so
that
we
can
ensure
ensure
there's
no
latency
between
the
replication
visit
synchronous
replication.
At
the
same
time,
all
of
the
volumes
for
a
given
Cassandra
cluster-
let's
say
it's
a
three
node
Cassandra
cluster.
E: ...we want each of the volumes to have anti-affinities and to all live in separate availability zones. So we actually specify that, again, not at the pod level; we specify it for the storage volumes themselves, and then Portworx will actually go and create those volumes as need be, and that informs Kubernetes to go and schedule the pods local to wherever those volumes and replicas live.
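The Portworx-specific placement rules aren't reproduced here, but the vanilla Kubernetes analogue of letting the storage layer constrain where pods can land is a StorageClass with topology restrictions plus delayed binding. A minimal, illustrative sketch (provisioner and zone names are placeholders, not the actual Portworx configuration described above):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: zonal-fast
provisioner: example.com/hypothetical-csi-driver   # placeholder
volumeBindingMode: WaitForFirstConsumer
allowedTopologies:
  - matchLabelExpressions:
      - key: topology.kubernetes.io/zone
        values:
          - us-east-1a
          - us-east-1b
          - us-east-1c
```

Because binding waits for the first consumer, the volume is created in a zone compatible with both this list and the pod's own constraints, and the pod then follows the volume, which is the "storage drives scheduling" idea being described.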
E: Cool. And I guess just the last point I'd make is that Stork seems to be, I guess, the favored storage orchestrator; it's used by NetApp, used by Portworx and others. So probably, instead of trying to target everything, if things can work with Stork, I think it kind of solves 80% of these cases.
A: By the way, this is exactly the conversation I hoped would happen, where we could just go back and forth. By the way, I had a great conversation with him last week; he has been working with Kubernetes and Cassandra for a long time at DreamWorks, so I see him as a great example of someone also running this in production and having to deal with it, with a different point of view, and that's exactly what we need to see here.
B: And then I highlighted that there's the ability here to add labels and annotations for some of the underlying pieces, which I think is great, because I think you need that. However, my personal take is that while you need the flexibility, it would concern me that maybe you expose too much implementation detail; so I want the flexibility, but at a higher abstraction level.
B
Similarly,
here
you
see
that
options
for
cask
up
for
adding
the
annotations.
You
know
different
flavor,
something
unique
with
cask
up,
there's
a
debug
flag,
which
is
interesting
for
basically
putting
in
a
so
the
Cassander
doesn't
start
allowing
you
to
attach
shelf
to
poke
around,
which
is
certainly
very
to
be
very
useful
and
made
me
think.
I
was
reading
about
I
think
it's
somewhat
new.
What
is
it
the
it's
call
exactly.
First,
have
to
look
it
up,
there's
containers
designed
for
her
who's,
new
and
116
or
elsewhere,
116
for
exactly
this
kind
of
purpose,
ephemeral.
B: Okay, I'm going through that section quickly, because then you get to the more interesting section, Cassandra configuration. I think this is a really interesting area, because it's a very non-trivial area, and there are very interesting solutions across the operators, with a different approach from each operator.
G: Yeah, we're probably somewhat polar opposites here, but exactly the concern you raised is what we tackled, so I'll speak to that. We took part of a product, the cluster lifecycle manager that DataStax sold, and extracted out the configuration management part. It's a CLI, and that's what's running here; it understands that YAML, like that was its native language already.
G
So
it
was
like
easy
to
just
embed
here
so
Cassandra
I'm,
all
you
know
putting
tokens
here,
an
Authenticator,
so
it's
sort
of
we're
being
recorded,
I'm,
not
gonna,
say
I'm
thinking,
but
it
is
a
rather
heavyweight
solution
and
but
yeah
it'll
be
easy
to
support.
You
know:
config
changes
between
311
and
400.
It
looks
easy
to
support
and
that
just
turns
into
another
kind
of
big
project
to
maintain,
but
that
it's
open
sourced
as
well.
G: That being said, we mostly just wanted to assert some pretty good defaults, so there's a good defaulter and a lot encoded in what we're calling the cass-config-builder. In my examples, I think authentication being on by default is something we're going to build into the operator as a default; you can probably fill in those lines to disable it, but that's probably coming in the next release, along with better defaults.
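To make the Cass Operator / cass-config-builder approach concrete, here is a rough sketch of the kind of config block being discussed, where keys mirror cassandra.yaml and JVM options directly. This is reconstructed from memory for illustration; the exact keys and defaults should be taken from the operator's own examples:

```yaml
apiVersion: cassandra.datastax.com/v1beta1
kind: CassandraDatacenter
metadata:
  name: dc1
spec:
  clusterName: demo-cluster
  serverType: cassandra
  serverVersion: "3.11.6"
  size: 3
  config:
    cassandra-yaml:
      num_tokens: 256
      authenticator: PasswordAuthenticator
      authorizer: CassandraAuthorizer
    jvm-options:
      initial_heap_size: 4G
      max_heap_size: 4G
```

The self-documenting quality mentioned later in the discussion comes from the fact that each key here corresponds one-to-one with a setting a Cassandra operator already knows from cassandra.yaml or jvm.options.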
G
So
yeah,
it's
interesting,
I,
don't
know
like
I,
think
I
think
there's
merits
to
both
approaches
like
we
have
probably
a
little
too
heavyweight
one,
but
it
was.
It
was
already
complete
work,
so
it
was
easy
just
easy
enough
to
scoop
it
in
the
like,
run
a
script
to
change
your
config
and
it's
a
lot
of
sids
like
I.
G: Thinking about it now: I started with Go before I started working on the operator, and now that I'm even farther along with Go, there's some pretty good YAML tooling, and I might say the approach you guys used is pretty great; maybe just embrace the YAML data structure when you can, something like that. It's easy!
G
It's
easier
for
people,
look
cuz,
there's
a
there's,
a
pretty
good
yeah
mole
patch
spec
that
I've
used
for
some
chores
and
it'll
read
a
little
easier
but
yeah.
I
don't
know
I
this
part
I,
like
John
said,
is
so
interesting
and
complicated.
It's
like
some
of
the
stuff
I
noticed
in
the
previous
part.
You
skipped
over
is
like.
G: ...if you do put something illegal in there and the init container rejects it, the messaging could be improved, but you don't end up in a bad state; you're either definitely broken or you're good to go. All of this is pretty safe because the data structure is pre-checked: we parse all the Cassandra YAML, and it needs to make sense. You can put in arbitrary JVM parameters; obviously, if you put in something like -XX:MyRandomThing, I think Java won't start. So, pretty good user safety, I think, from what we have.
E: I'll make one point: the Cass Operator technique just seems kind of self-documenting. As an end user coming along and looking at just these key-value pairs that match up with a cassandra.yaml or JVM option, it might be obvious to a Cassandra operator what they mean. Like I said, it's sort of self-documenting.
B
Yeah
when,
when
I
was
going
through
this
yesterday,
with
Patrick
I
mentioned
that
yeah
with
with
Cass
operator
effect
yeah
it
just
Maps
Lea.
The
different
config
files
is
good
and
I
spend
time,
probably
too
much
time
trying
to
explain
to
people
convince
people
that
it's
a
good
idea
to
run,
Cassandra
and
kubernetes,
and
so,
if
they
make
that
leap
and
see
that
well,
okay,
I
need
to
change
some
settings
and
I've
got
to
do
a
bunch
of
extra
work
that
I
wouldn't
have
to
do
outside
of
kubernetes.
B: ...that's just going to be one more strike against the whole effort. So I think, even if there's a solution, whether the solution is what Jim is driving under the hood or using sed scripts under the hood, that's one thing; as long as what the user sees is relatively straightforward and easy, it's a big win in my book.
G: And it occurred to me earlier, I mean, it's kind of unfortunate that some of this is "fuss with environment variables, fuss with this or that", but it all kind of falls into three piles, I think: it looks like a JVM parameter, or it's the YAML, or it looks like an environment variable. And maybe I forgot something.
J: Kubernetes inherently does not give you a stable IP address, whereas Cassandra relies heavily on the IP address for some internal data structures. So I just want to know what you all think about this particular topic, and whether there are some areas of improvement in Cassandra that we can address, specifically the node's identity, because I've seen a few emails fly by, a couple of JIRAs fly by, about this particular topic. So is there some area of the document which captures the improvements we could make in Cassandra?
C: So I opened a Jira for this; it's 15823. There was a little bit of pushback there, and the suggestion that we make it a CEP. But yeah, the concept of keeping track of nodes by an identity instead of an address, specifically an IP address, when those are very fluid in an environment like Kubernetes, is important. That is one of the things where I would not be surprised to see a CEP in the near term.
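On the Kubernetes side, the usual workaround today is to give each Cassandra pod a stable DNS name via a StatefulSet plus a headless Service, even though the underlying pod IP still changes. A minimal sketch of that pattern (names illustrative), which is what makes the identity-versus-IP question on the Cassandra side matter:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: cassandra            # headless: no cluster IP, just per-pod DNS records
spec:
  clusterIP: None
  selector:
    app: cassandra
  ports:
    - name: cql
      port: 9042
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: cassandra
spec:
  serviceName: cassandra     # pods get stable names like cassandra-0.cassandra.<ns>.svc
  replicas: 3
  selector:
    matchLabels:
      app: cassandra
  template:
    metadata:
      labels:
        app: cassandra
    spec:
      containers:
        - name: cassandra
          image: cassandra:3.11
```

The DNS name survives pod rescheduling, but Cassandra's internal bookkeeping (gossip, system tables) is still keyed by IP, which is the gap the proposed ticket/CEP is about.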
J: I don't necessarily think we need a CEP; that's the implementation-detail part. The thought going forward is: what do you think we can do, and how would we design it, even if it is in the context of this particular ticket, given that going forward the identity of the node has to be decoupled from the IP address or something else, right? How do we decouple this?
A: ...if everyone has the time. I was thinking that this would probably take more than one meeting, but we could go over by just a few more minutes; I just want to make sure that everyone's time is respected, and I know it's late in Europe right now. What I could offer is that I schedule another meeting next week, basically the same time, so maybe John can take the input that's here now, roll it up as much as possible, and then we continue the conversation with what's rolled up.
F: I think it's good, it's quite pretty, but I don't know where we're going, actually; that's my only worry about the CRD. If we can, we want to say that we have some additional features in the Cassandra configuration: we have a bootstrap image that allows people to, I mean, our ops add loads of requests, so we had to make it as flexible as possible for them to add anything they wanted at the right time. So they can add...
F
We
have
a
system
that
copies
files
from
the
bootstrap
image
to
the
official
image,
but
we
want
it
to
rely
on
the
official
orchestra
image.
I
mean
we
know
the
official
is
not
official,
but
we
because
we
don't
know
who
maintains
it
but
anyway,
so
we
wanted
to
take
the
image
and
give
the
opportunity
to
the
to
the
user
to
add
whatever
he
wants
in
there.
So
basically
it
can
add
the
instructors
from
its
use
exporter.
Whatever
she
wants
and
you
can
change
the
start.
Start
file
start
command.
B
Yesterday,
all
three
operators
in
the
series
in
one
or
more
places
allow
you
to
specify
that
not
just
the
image
version,
but
the
image
itself,
whether
it's
for
a
sidecar,
init,
container
or
Cassander
itself,
are
there?
Is
it
as
simple
as
just
in
I
mean
this?
Is
the
general
question
is
simple?
You
can
bring
your
own
image
for
it.
You
can
expect
things
to
work.
B
This
came
up,
I
started.
Think
about
that
as
I
was
going
through.
This
you
know.
Can
I,
if
I
swap
out
whether
it's
an
image
itself
or
or
any
any
of
the
places
where
you
can
substitute
your
own
image?
I
know
in
one
case,
when
I
was
looking
at
like
for
the
eye
thing
with
the
boot
with
kescott:
there's
really
good
documentation.
D: I'll give you one last example, and this goes back to the question that was asked about what we could do to make Cassandra better when we use it with Kubernetes. For example, we had to stop using nodetool for liveness. I mean, we were using it to test that each node in the cluster was okay, but we were just stacking up nodetool calls because it was taking so much time.
D: So we had to replace it with simple curl scripts that call JMX through the local agent that we have, just to check if the node is alive. This is, for example, what we have in the default bootstrap image, and if users change it, they're going to have to provide scripts that do exactly the same thing. Maybe they'll just replace the scripts with calls to nodetool, but then they're going to run back into the issues that we have had.
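For illustration, the kind of probe being described might look like the sketch below. The endpoint is hypothetical (it stands in for whatever local JMX-over-HTTP agent the bootstrap image ships); the point is simply that an exec probe hitting a lightweight local agent avoids forking a JVM for nodetool on every check:

```yaml
# Container fragment: readiness check via a local HTTP agent instead of `nodetool status`
readinessProbe:
  exec:
    command:
      - sh
      - -c
      # the URL and path are placeholders for the local agent's health endpoint
      - "curl -sf http://localhost:8778/live"
  initialDelaySeconds: 60
  periodSeconds: 15
  timeoutSeconds: 5
```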
F
The
main
idea
is
to
provide
something
that
works
out
of
the
box.
I
mean
if
you
take
the
normal
cassandra
image
and
you
don't
touch
anything
it
should
work
now.
It
may
not
work
as
good
as
you
want
to
it's.
Not.
It
may
not
be
tuned
to
the
best
of
you
of
what
you
want,
but
it
needs
to
work.
So
that's
what
we
try
to
do,
something
that
actually
works,
but
then
I
will
observe
something
very
special,
so
they
need
to
take
this
in
there
and
there.
E
This
goes
back
to
something
I
brought
up
a
couple
of
weeks
ago,
which
is
the
interplay
between
the
operator
and
the
image
is
very
tightly
coupled
and
I.
Don't
think,
there's
gonna
be
any
way
to
get
around
that
I.
Think
as
part
of
this
and
I
understand
it.
Two
different
efforts
and
the
you
know
the
sort
of
the
official
Apache
projects
that
we're
not,
but
there
is
I,
don't
think
you're
gonna
be
able
to
get
away
around
the
idea
of
having
a
list
of
images
that
work
with
this
operator
or
vice
versa.
E: Yeah, we take the same tack at DreamWorks. As I mentioned before, we have something like 15 different databases that we support, and the contract of what an OCI image or Docker image must support actually extends across all of our databases. So we have a document that defines what a compliant image is across all databases. But yeah, I think there has to be some sort of definition of what is compliant with the operator.
F
And
the
last
thing
I
would
say
in
this
fact
that
Joe
is
showing
here
we
have
examples
of
things
that
people
can.
They
can
change
the
pre-rendered
a
sketch
found
to
do
some
specific
things
during
the
operation
of
the
first
choice.
You
want
to
do
replace
image,
replace
address
or
something
that's
what
you
say
here
they
could.
We
can
just
change
the
pre-rendered
Sh
and
it
does
a
comparison
with
the
node,
the
hostname
and
just
say
for
this
specific
node
I
can
change
the
IP
or
whatever
needs
to
be
done.
F
That
can
be
done,
they
just
trigger
a
running
restart
and
it
does
what
it's
needed.
That's
something
that
was
required
as
well,
and
we
need
to
have
the
that.
It's
not
really
nice
to
have
said
an
old
thing
like
that,
but
I
mean
oops
love,
says
I
mean
when
they
see
I
say:
oh
I
can
do
things,
that's
cool.
A
Okay,
so
go
ahead,
John!
Sorry,
oh!
No!
No,
no,
wait!
Well,
I
mean
I
notice.
People
are
dropping
off,
so
we're
losing
people.
So
maybe
this
is
a
good
time
to
stop,
but
I
will
schedule
same
time
next
week
again,
so
we
can
keep
going
Frank
I
think.
Another
thing
that
would
be
helpful
is:
if
we
engage,
we
have
the
slack
now
the
kubernetes
or
the
cassandra
kubernetes
slack
room
be
a
good
place
to
have
more
conversation
as
well,
but
yeah.
A
So
what
is
the
goal
is
personally
I
feel
like
what
I
see
is
we
have
real
potential
of
taking
everyone's
ideas,
potentially
putting
them
together
with
all
the
different
inputs
and
having
something
much
better
than
any
individual
operator
was
because
everyone
had
different
motivations.
You
know
and
I
think
that's
what's
important,
whether
or
not
that
it
can
happen
we're
gonna
find
out,
but
that's
that
would
be
great.