From YouTube: 2020-02-25 Cassandra Kubernetes SIG
Description
No description was provided for this meeting.
A: All right, so... Kubernetes SIG. Sorry, Cassandra on Kubernetes, February 25th. Hey Eric. All right, so there's a lot of stuff going on, and we figured, let's talk about releases, let's talk a little bit about the roadmap and things like that. Lots of news to share today, so I figured we'd start taking a bite out of it, but I'd also like to promote some discussion here and get feedback on some of those items.

So, first and foremost, we are cutting a release of cass-operator today. This includes the changes donated by Orange from CassKop, one of the specific features being the node affinity labels. Previously, when you specified a rack, there was a zone parameter: you put in your AZ name and it would line up with a particular label on nodes that's fairly common in Kubernetes. Those labels have been in flux since we started on this project, and I guess it's been over a year now, so it's nice to see the ability to support any freeform combination of labels. In non-cloud environments that's nice, because if you use any other labeling mechanism this can be really helpful. For example, I worked with a customer where they had data centers, and then rooms within the data centers, and then racks within the rooms within the data centers, and this new functionality allows you to just use whatever you want there.

That's something I've been talking about for a while now too, so it's nice to see that come in. So a big thank you to Cyril and Orange; maybe they'll be joining here in a little bit, maybe not, and that's okay. So 1.6.0 is that release.
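To make the rack change concrete, here is a sketch of the shape this takes in a CassandraDatacenter definition. The label keys and values are invented for the data-center/room/rack example described above, and the field layout approximates what the 1.6.0 CRD adds; treat it as illustrative, not authoritative.

```yaml
# Hypothetical sketch: rack placement by freeform node labels instead of
# a single cloud zone name. Label keys/values here are made up.
apiVersion: cassandra.datastax.com/v1beta1
kind: CassandraDatacenter
metadata:
  name: dc1
spec:
  clusterName: demo
  serverType: cassandra
  serverVersion: "3.11.10"
  size: 3
  racks:
    - name: rack1
      nodeAffinityLabels:
        example.com/datacenter: east
        example.com/room: room-2
        example.com/rack: r14
```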
A: It's also worth noting that it looks like the pieces are coming together for a release in the K8ssandra project. That includes, if I'm not mistaken (John, please correct me), the 1.6.0 release of the operator.

Almost everything is namespace-scoped at this point. Stargate is there as a coordination layer, and there are some interesting pieces to that which we're going to be exploring with content here in the coming weeks. Specifically, there's a lot of conversation around Stargate for HTTP APIs, but there's a separate conversation around Stargate for native protocol, so you can actually connect with your Cassandra driver, talk to that, and it takes all those coordinator tasks away from the nodes that actually store your data, which is kind of fascinating.

So that's another big feature. We have authentication enabled, non-root images, a whole big list. I expect there will be a blog post out about that tomorrow too, which leads me to another point: there will be a blog on k8ssandra.io starting with that release as well. So really big news there, and I'm really excited to see it happen.
C: Yeah, just a couple of questions. So cass-operator 1.0 is getting pushed out now; when you said the new K8ssandra release is coming, which version of cass-operator is going into that?

A: Right now, the same one, so the cass-operator version is 1.6.0.

C: That's what I missed, thank you. And then, in terms of the biggest feature, you would say it's the node affinity fix: basically not being limited to cloud AZs, but being able to have your own naming conventions that get used for the rack. Is that correct?
A: Yes. This lets you be a lot more explicit and freeform in those definitions, and that, I think, is where the power is. I don't have to run separate Kubernetes clusters.
D: Yeah, and, you know, cass-operator creates a PVC for /var/lib/cassandra, and with that change you can now, if you want to, have a separate PVC for your commit log, etc. Or for anything, really. Yes, that's a really nice enhancement, and I've opened up tickets in K8ssandra to leverage that functionality. It won't be in 1.0, but post-1.0, to "blockchain" K8ssandra.
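A minimal sketch of what that separate-PVC shape could look like in a CassandraDatacenter spec. The additionalVolumes field name, mount path, and storage class names below are assumptions made for illustration, not the confirmed API.

```yaml
# Hypothetical sketch: a dedicated PVC for the commit log alongside the
# default /var/lib/cassandra data volume.
spec:
  storageConfig:
    cassandraDataVolumeClaimSpec:
      storageClassName: standard
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 100Gi
    additionalVolumes:
      - name: commitlog
        mountPath: /var/lib/cassandra/commitlog
        pvcSpec:
          storageClassName: fast-ssd
          accessModes: ["ReadWriteOnce"]
          resources:
            requests:
              storage: 20Gi
```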
A
That's
a
good
catchphrase.
I
reviewed
the
pr
earlier
today
and
I
was
like
I
know,
I'm
missing
something
big
right
now,
it's
like
additional
config
or
something
like
that.
Thank
you.
That's
what
I'm
trying
to
say
right,
I'm
looking
at
the
list
of
of
things
that
are
landing
in
that
that
1.0
there's
a
move
to
cube
prometheus
stack
for
setting
up
the
monitoring
stack,
leveraging
native
kubernetes
ingress
resources
for
anything
that
speaks
http
is
that's
what
native
kubernetes
ingress
speaks
yeah.
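For the HTTP-speaking pieces (such as Stargate's REST and GraphQL endpoints), the Ingress approach mentioned here looks roughly like the following. The service name, host, and port are illustrative assumptions, not the chart's actual values.

```yaml
# Hypothetical sketch: routing HTTP traffic to a Stargate service through
# a native Kubernetes Ingress resource.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: stargate-http
spec:
  rules:
    - host: stargate.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: demo-dc1-stargate-service
                port:
                  number: 8082
```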
A
This
is
this
is
the
big
item,
so
we
also
started
making
dedicated
charts
for
subprojects
so
for
the
reaper
operator.
Meduse
operator
really
excited
to
see
that,
because
that
lets
you,
if
you
aren't
going
all
in
with
the
kate
sandra
platform,
you
can
still
leverage
those
projects
in
your
your
deployment.
There
are
plenty
of
enterprises
and
users
that
will
they
already
have
some
of
these
pieces
in
place
or
they
just
want
to.
A
C
A
For
not
everything
is
in
a
dedicated
sub
chart,
but
the
operators
are
already
in
dedicated
subtitles.
You
can
install
those
specifically
today,
okay,
great
great
question,
so
we're
talking
a
lot
about
today
and
and
the
release
which
is
currently
scheduled
for
tomorrow,
but
I
think
it's
worthless
talking
about
the
future
direction
of
cassandra
on
kubernetes
and
and
some
of
these
these
projects
and
what
what
we're
trying
to
see
happen
here
over
a
number
of
of
milestones,
or
I
don't
want
to
say
quarters,
because
that
doesn't
seem
right.
A
I
call
them
milestones
because
they
don't
want
to
tack
them
specifically
down
to
particular
dates
on
the
calendar,
but
just
a
a
general
idea
of
where
we
want
to
be
and
what
we
can
do
to
enable
people
that
want
to
be
running
on
kubernetes,
so
yeah.
I
think
that's
that's
worth
kind
of
exploring,
and
we
we
have
some
users
here-
that
I
think
it'd
be
valuable
to
get
some
feedback
on.
C: Yeah. One of the things I've been doing with a client is getting them socialized to cass-operator. They're very regulated, right, so they're either okay with using something that's already on their whitelist from a cloud vendor, or they will use whatever is possible on Kubernetes. And one of the things that came up is: yeah, it's great that with K8ssandra it's super easy to pull it down, run it, and have something up and running, and everything that's happening on K8ssandra is great. And I also know that DataStax is going to put information about Cassandra on Kubernetes into the new certification. But basically the ask is: is there a three-, four-, five-... sorry, a one-, two-, five-part tutorial on how to get really good with this? Like, you know how on the kubernetes.io site there are these four or five lessons you just go through, and it's on Katacoda and you can just play with it, or you can go to play-with-k8s.com and do stuff. That's what people have been asking for.
A: So there's something being worked on right now, and I'm making sure that it's out there. There are a number of Katacoda scenarios being developed, specifically for Cassandra on Kubernetes. It doesn't just look at K8ssandra; there's also running cass-operator. The logical progression is: let's get you started with just the bare basics. You have a Kubernetes cluster, you have cass-operator, let's talk about what that means, and then let's start layering in some of the other pieces that you would want to see in a production-like environment, and show you not just how to use the automation but walk you through all the individual components. Then, by the time you get to using the automation, you understand what's going on underneath, what all the machinery is. So if something does happen, you're not just saying "my automation is broken."

It's: my automation is broken, but maybe there's an issue here within just the operator, so let me go look at that custom resource. I know that that custom resource exists and how to interact with it. I know what pods and services I should be seeing, not just that the native protocol connection isn't being established. So let me take a look and see what's going on. If there's a link, I can send it to you this second.

I found the link; it is in chat right now. It's on datastax.dev, and it's using Katacoda as well. So check that out. And there are a number of events too; there's actually a workshop happening next week.
C: There's more stuff on it now since the last time I looked at it, but this is a great start. Thanks!

A: Oh yeah, it's continually being built out, so I would expect to see more scenarios coming online over time.
C
Also
curious,
you
know,
is
kate
sandra
gonna.
Try
to
you
know
position
itself,
I
mean
I
know
I
think,
probably
a
goal
but
like
to
join
the
the
cloud
native
foundation
as
like
a
like
a
you
know,
a
project
in
the
in
the
cncf.
E: I was like, hey Rahul... you know, of course the Linux Foundation and CNCF are very interested in this, but it's like anything else: we need to get it to some sort of a good place for a donation into the foundation. So the answer is yes; the details, who knows. But it'll probably start with building out the right competencies, just like an incubating project. I've done a lot of incubating projects before, and there are things you need to do.

You need to make sure that you have all the bells and whistles involved. Do we have a steering committee? Do we have good participation from multiple groups that aren't just a single vendor? So we've got some work to do there. I think, you know, Rahul, you're going to say yes when I ask, but the people that are on this call are exactly the type of people that we want to work with.
C: You know, what is the native data storage, the data option, for Kubernetes? And can Cassandra be that? Obviously cass-operator and K8ssandra make it easier. What are some other thoughts that you all have on how to get that into people's mindset?
C: I guess you'd like to get it to the point where it's a foregone conclusion, right? If you're going to set up a cluster for applications, you use Prometheus, you use Grafana, maybe you use Cortex or whatever, but then you're going to say: oh, and I need Cassandra for that.

I need Cassandra to be a part of the general reference architecture if we're going to make a scalable app. And, just thinking out loud here, beyond Stargate, what could be a killer reference app that people just want to try, to pull down and play with? That's what I've seen as successful from at least some projects: the hello-world project that people can use in their operations from day one, not having to go make their own. I'm talking ready to go, something to use that can stick around in an organization for a long time.
A
Definitely
and
there's
it's
interesting
because
there's
a
number
of
sample
apps
that
are
getting
pulled
together
and
and
that
would
work
without
any
issue
with
with
with
kid
sandra
and
that's
something
that
we're
very
cognizant
of.
So
that's.
A
I
don't
know
if
we're
gonna
be
talking
about
that
too
much
with
1.0,
but
there
are
a
number
of
sample
apps
in
flight
or
that
have
already
been
written
where
it's
just
a
matter
of
tweaking
the
connection,
location
and
and
spinning
it
up.
E: I think this is one of those perfect things for contribution. It's like: well, I don't know a lot about the internals of Kubernetes or the internals of Cassandra, but I have SRE knowledge, I deploy applications on Kubernetes all the time, I love Cassandra, and I'm going to try to put some work towards this.

And, since we're recording, this is a cattle call: if there were individuals out there that could say, here's a stack, here's a typical deployment that I would put on Kubernetes at scale, everything from ingress to database, and it deploys in a clean fashion so that you just say helm install, or use kubectl, and you get it all set up with your deploy: that would be amazing. And Chris, we haven't talked about this, but I was actually thinking about this as an option: putting this on the K8ssandra repo, having a place where we just have examples, where you post your deployment.
A
Yeah,
definitely,
I
I
think,
that's
something
that
we
really
want
to
see
happen.
That's
part
of
the
community
engagement
right
is
show
us
what
you've
built
right.
If
you
want
to
open
source
it
we'll
happily
put
links
out
to
your
repo
feature
it
I'd
love.
Personally,
I
would
love
to
do
an
interview
with
somebody
who
is
done
this
type
of
deployment
and
and
wants
to
talk
about
their
projects.
A
I
think
it's
fascinating
the
decisions
that
are
made
and
whether
whether
that's
the
language
being
used
the
framework
and
why
you
chose
that
particular
kubernetes
resource.
I
think
those
are
all
really
crunchy
topics
and
yeah.
I
think
there
should
be
a
a
a
call
for
for
anybody
that
wants
to
contribute
in
that
way.
That's
yeah
I'd
love
to
argue.
C
There's
a
good
example,
I
think,
on
the
the
zero
to
hero
for
jupiter
lab,
like
it's
really
comprehensive,
like
here's,
how
you
get
started
with
kubernetes
on
azure,
I
don't
know
ws,
it's
not
like
the
most
comprehensive,
getting
started
thing,
but
it's
like
it
gets.
You
started
on
azure
get
you
started
on
aws
or
whatever
there's
like
10,
different
clouds
or
on-premise,
and
then
it
says
okay.
Now
this
is
how
you
get
started
with
jupiter
lab,
and
this
is
how
you
customize
it
right.
C
So
I
know
that
kate
sanders
has
much
better
documentation
than
the
cast
operator
right
now,
but
that's
something
where
I
literally
have
sent
that
to
three
people
in
the
last
week
and
I'm
like
go
learn
this,
because
this
is
what
we're
gonna
be
doing.
You
know
we're
gonna,
be
doing
this
for
a
project
and
they're
just
going
one
by
one.
Probably
within
you
know
two
three
days,
I
don't.
C
I
don't
necessarily
think
they're
gonna
be
ninjas
at
it,
but
they
will
have
created
a
customized
jupiter
lab
environment
because
it's
so
cleanly
written
that
you
know
an
idiot
can
do
it.
It's
like
and
it's
useful
because
people
need
this
stuff,
so
the
the
other.
You
know
I
can
do
it.
Oh
I'm
so
excited
I
mean
I
can
do
it
patrick.
If
I
can
do
it,
anybody
can
do
it.
A: There are a couple of PRs landing right now for the K8ssandra documentation site to build out the getting-started section, and it's actually pretty fascinating. We start with the "every user" section, which is just: make sure you can talk to Kubernetes, let's make sure you have nodes with enough resources. But then it goes through actually doing the install and verifying your deployment. Are the right pods up? Are they actually running? Are they still pending?

Do you have the right services? Making sure everything's in a good spot. And then it goes into a choose-your-own-adventure kind of mode, where you can say: I'm a developer, how the heck do I talk to this thing? Or: I'm an SRE, how do I make sure that this thing is stable, how do I monitor it? And it goes to explore those interfaces and really gives you a more cohesive experience, because previously it's just been: oh hey, you're done, you ran helm install, good luck. And that's not very friendly. That's in review right now; some of it's already been published.
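That install-then-verify flow looks roughly like the following. The repo URL and release name are illustrative guesses rather than the published docs' exact values, and the commands assume a working cluster with Helm installed.

```shell
# Hypothetical quick-start sketch; run against a real cluster.
helm repo add k8ssandra https://helm.k8ssandra.io/stable
helm install demo k8ssandra/k8ssandra

# Verify the deployment: are the right pods up, Running rather than Pending?
kubectl get pods
kubectl get cassandradatacenters
kubectl get services
```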
C: Yeah, I mean, with a lot of the examples I do the play-with-k8s.com test, right? Can I go on there, add three nodes, and do it? And if I can do it there, that means I can do it for real, on a real, you know, production cluster or whatever in my cloud. The other thing I was thinking about: another major killer application is Spark, or Kafka, and I know there are tons of operators and Helm charts for those. That may be something I can put somebody on our team on: okay, get K8ssandra up, and then put Spark in there, put Kafka in there, and, whatever, grab crap from Twitter.
A
Process
it
that's.
The
thing
I
think
is
super
interesting
about
deploying
on
kubernetes
and
having
this.
This
kind
of
automation
is,
you
can
say,
okay
previously,
where
I
had
to
like
spin
up
instances
run
some
automation
to
to
install
cassandra,
wait
for
it
to
come
up
now.
It's
just
like
all
right
helm.
Do
your
thing,
while
you're
pulling
images
down,
I'm
going
to
go
and
find
this
other
helm
chart
for
spark,
and
you
can
focus
on
like
the
components
and
just
that
little
bit
of
plumbing
between
them.
Instead
of.
B
A
We
want
to
be
freeing
of
the
tdm
right,
so
I
think
that's
what
things
get
fascinating.
I
do
have
a
question
for
you,
raul
and
some
of
the
other
people
here.
I'm
curious
about
defaults
and
specifically
defaults
for
new
user
experience
right.
So
if
you
were
to
install
a
if
you
were
going
into
one
of
these
scenarios,
what
do
you
think
the
topology
looks
like
for
a
local
developer
installation
as
well
as
well
yeah?
Let's
talk
about
that
a
little
bit
so
just
local
developer!
A
Installation,
if
I
said
hey,
if
you
do
a
helm,
install
what
do
you
think
you
should
have
at
the
end
of
that
command
the
shape
of
that.
C
C
B
A
A
If
you
did
a
helm,
install
to
install
a
cassandra
stack
inside
of
your
kubernetes,
so
for
your
your
local
machine
right,
what
do
you
expect
that
to
look
like
when
it's
done
like
how
many
nodes
should
there
be
like
what
what
kind
of
traffic
do
you
think
it
should
support?
Should
we
be
talking
about
multiple
gigs
of
heap
or
or
fairways
small
and
tied
down,
but
more
nodes?
What
do
you
think
that
was
yeah?
I
mean
so.
F: I'm pretty opinionated when it comes to this stuff. Actually, if it was something that was going to be installed with Helm, I'd probably look the other way to begin with, but that's just me. So, we actually do this. At DreamWorks we run, I don't know, 400 to 500 database clusters on Kubernetes, a mix of Cassandra and Elasticsearch and Couchbase and everything. What we generally end up with are very small clusters, and we want them that way.

We basically took the notion of a microservice and adapted it to the database, and came up with this notion of a micro-cluster. Every micro-cluster, every microservice, basically gets its own deployment of a small database, whatever its back end needs happen to be. So for things that need Cassandra, we'll just spin up a small cluster, and the smallest we'll go is generally three nodes, because we want the HA, and we'll split those across availability zones.

That's traditionally the smallest we'll go, and basically, for us anyway, there are very few cases where we actually need to scale out beyond five. Maybe five nodes is the max, maybe seven nodes in a really heavily used application. But because we have these micro-clusters, each cluster contains just a small amount of data, and the load of that one microservice is actually pretty low in comparison to the holistic application, the service as a whole. So yeah, we generally tune these things really light: a little bit of heap.

Only recently are we starting to play with things like thread-per-core configs, where stuff has fallen over because we haven't given enough Kubernetes CPU resources to a particular pod, for example. But in the end, what we like to see is all that stuff configurable with environment variables. And generally speaking, the way we've done it, with our own custom operator, is that our Cassandra resource, or our DSE resource (we actually have two different custom resources), basically has a configuration dictionary in that kind, and we can go through and specify any of those parameters, and those will just get translated into environment variables. Then we have an entrypoint script that, when those processes start up, will go and dynamically generate the cassandra.yamls, or whatever they happen to be, inside the container, and then start that process up. So, anyway, that's a roundabout way of describing how I actually deploy these things.
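That environment-variable translation can be sketched as a tiny entrypoint fragment. The variable names and the overrides file are invented for illustration, not DreamWorks' actual entrypoint; a real image would merge the rendered values into cassandra.yaml and then exec the server process.

```shell
# Hypothetical entrypoint sketch: render config from environment variables,
# falling back to defaults when a variable is unset.
: "${CASSANDRA_CLUSTER_NAME:=demo}"
: "${CASSANDRA_NUM_TOKENS:=16}"

cat > /tmp/cassandra-overrides.yaml <<EOF
cluster_name: ${CASSANDRA_CLUSTER_NAME}
num_tokens: ${CASSANDRA_NUM_TOKENS}
EOF

# A real image would merge this into cassandra.yaml and then
# `exec cassandra -f`; here we just show the rendered overrides.
cat /tmp/cassandra-overrides.yaml
```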
A: No, that's great. For your developers, though, the people writing the microservices: are they always talking to a cluster inside your environment, or do they ever run it locally, so they have an even further paradigm? Or is it always: okay, we're going to provision a three-node cluster for development, and we'll just talk to that and not worry about running this locally?
F: It's on-prem, effectively. It'll go into one of our data centers, into one of our Kubernetes clusters, and spin up a three-node Cassandra cluster for them in its own private namespace, and then we'll just provide them the connection endpoints and they can connect to it. So there's no real need for them to spin anything up locally. And to be honest with you, we've found, and again this is our own experience, that developers tend not to want to do anything minikube- or Kubernetes-related at all.
A
Yeah,
I'm
that
makes
sense.
I'm
curious
when
you
have
in
that
model
of
deployment.
Are
you
using
any
sort
of
ingress?
How
are
you
getting,
how
can
their
sys
their
workstation
route
to
their
database
yeah.
F: So, a few things. The first thing to understand is that we basically use a flat network; there are no overlays going on at all within our entire studio. The second thing is that we basically use Calico as our CNI. What that amounts to is that any pod running in any Kubernetes cluster can be routed to from any workstation. So any developer can basically route to that pod.

We rely on DNS for discovery. And here's something interesting: in our operator, what we do is scale up the number of services to match the number of pods in that stateful set. So if we're spinning up a three-node Cassandra cluster, we will actually also spin up three services, and we front each of those Cassandra nodes with its own dedicated service, so that we have direct routability to each of those pods. And because it's a stateful set, those identities stay the same.
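The service-per-pod pattern described here can lean on the pod-name label that the StatefulSet controller stamps on each pod. A sketch, with illustrative names, for one of the three replicas:

```yaml
# Sketch: one Service fronting exactly one StatefulSet pod, giving that pod
# a stable cluster IP. Repeat per replica (cassandra-0, cassandra-1, ...).
apiVersion: v1
kind: Service
metadata:
  name: cassandra-0
spec:
  selector:
    statefulset.kubernetes.io/pod-name: cassandra-0
  ports:
    - name: cql
      port: 9042
      targetPort: 9042
```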
A: Fascinating. And if you have a flat network, you can run multi-DC pretty easily then, multi-region; it's just a flat network. That's great. Oh man.
C: Yeah, I think this was one of the topics, not last year, maybe the year before, maybe at Accelerate, I don't remember, but basically somebody had done similarly: they had exposed each node with its own service.

I think I talked to the speaker afterward. Basically, it also gives token awareness to the clients, right? You have access to all of the different nodes, which you won't get if everything's behind an LB.
F: Right, an LB then becomes an extra hop or something like that, where it does get a little tricky. And this might be sort of TMI, but in things like Azure, where everything comes in through a load balancer, creating that extra service for every pod means you actually have to create a LoadBalancer-type service, which then creates a load balancer rule in Azure, and then you actually run into hard limits on how many rules can be created in each load balancer. We've hit that.

Yes and no. We started down that route, and if we're limiting the discussion here to Cassandra, it kind of works. But our use case goes beyond Cassandra; we have like 15 different database types that we support. The trick is that the driver has to be smart enough to understand those topology changes, right?
F: So if Cassandra node C goes down, gets a new IP address, and comes back up, the client has to be able to reconnect, and like I said, Cassandra drivers are generally pretty smart, so it's not that big of a deal. But things like Elasticsearch we've had issues with, and that's why we want those static IPs of a service, so that those things don't change. Even with Cassandra, though, there is still a problematic scenario when all three of the nodes go down at the same time. Say we restart all three nodes and the whole thing goes out of whack; then we actually have to bounce the microservice in order to basically force it to do another DNS lookup and go get the correct endpoints.
A: Having a way to tell the various services: hey, you probably want to pull from DNS again, there are new IPs you should probably go talk to. Wherever the Kubernetes service points, which ultimately ends up being DNS or what have you. That's a fascinating point, though, yeah.
A: Yeah, so it sounds like one of the reasons you went with the service-per-pod is for those fixed IPs, because stateful sets don't give you that: they give you fixed names, but they don't give you fixed IPs. I've seen some people messing around with sidecars, doing things with Envoy, where they'd have an IP address that's fixed that Envoy knows about, and then it talks to the pod on the loopback, which was kind of fascinating.

It would be nice to start to develop a list of things that we could do to make Cassandra a bit more Kubernetes-native. Certainly, yeah.
F: I'll say, from my own experience, and sorry if I cut you off, that's an interesting one: the IP address. You can kind of get away with things by playing with bootstrap, you know, whether you put the node in a bootstrap mode or not when you bring it back up. But the problem that I have is with the way that token ranges are basically associated with IP addresses. Specifically, if I want to do a blue-green migration, where I take an entire Cassandra cluster and move it from one Kubernetes cluster to another, for a disaster-recovery scenario or whatever it happens to be: when those persistent volumes come back up and they read "hey, this token range is assigned to IP address 1.2.3.4," it causes a problem, unless I basically stream the data across using something like sstableloader.
C: Yeah, I mean, the whole token range thing, and how the data is partitioned. Look at, for example, how Couchbase manages their vBuckets: those vBuckets can be moved around, and the one vBucket that's getting hit a lot can scale, whereas another vBucket that's not getting hit as much can just sit on one node. That's the type of stuff that people want. I'm not saying it's there now, because when you tell people about this partitioning, and why you can't have wide partitions... that's a whole different ball game about how to make Cassandra more user-friendly.

But ultimately you're asking about the developer, right? Or the complainer. There are the complainers and then there are the committers, and the complainers are always like: why is this so hard? Excuse me, why is it so hard? And you're asking what a developer wants. So there are at least three scenarios I've seen. There's an analytics scenario, there's a transactional application scenario, and then there's one which is more like data science, but it's the same thing, analytics. Only the developers that are making an app, and really only a specific group of those developers, maybe in integration and testing, will need something where they say: I want to bring it all up in one Kubernetes on my minikube. And they're very specific: they want 64 gigs of RAM on their computer, and that's what they want to do.

But for most people, like Umer said, it's: give me a place to deploy code and have this work for me. That's what I want; I don't really care about the engine behind it, I just want it to work. And on the analytics side and the data science side, apart from people using, say, JupyterHub with Livy talking to Spark talking to whatever, even then, why would they set that up locally? So, ultimately, people are trying to make apps.

They want an AWS DynamoDB or an Azure Cosmos experience. If they're doing Kubernetes locally, that's a very small group of people: people like Umer or yourself, maybe myself, that really want to get into the nitty-gritty of making a platform, of designing and architecting a large-scale platform. The developer probably doesn't care about this stuff.
F: Yeah, and frankly, trying to run a database in minikube becomes even more difficult when you start talking about persistent volumes and having to think about storage classes and what those are going to map to in a minikube. Especially if I'm using minikube just to have a rapid application development environment.
C: Yeah, but mind you, I'm not talking about the smart startups. I'm talking about the guys that want to make everything themselves, that want to reinvent every wheel possible. Those are the people that will probably say: I'm going to do everything on my one computer and then I'm going to scale it out. If you're smart about it, honestly, you will put it on EKS and then you'll run the Helm chart. Done.
A
So
that's
that's
a
that's
a
great
point,
and
so
one
of
the
things
that
we're
looking
at
so
milestone,
one
in
that
roadmap
that
I'd
like
to
see
published
today,
is
about
getting
this
getting
our
one.oga.
The
milestone
two
is:
is
this
distribution
build
out,
which
is
very
much?
This
is
how
you
run
it
on
kubernetes
x,
whether
that
be
on
a
cloud
provider,
whether
it
be
on-prem
with
like
open
shift,
or
what
have
you
like
this
is.
This
is
step-by-step.
A
Maybe
there's
a
component
inside
of
that
particular
environment.
That
makes
sense,
maybe
they
have
their
own
ingress
right
or,
if
you're,
in
a
cloud
you're
talking
about
load
balancers
attached
to
services
instead
of
instead
of
there
are
many
ways
to
do
these
things
right.
If
you're
on
gcp,
you
probably
don't
want
to
use
s3
for
your
backups
right
exactly
so
that's
that's.
A
One
of
the
next
milestones
is
to
build
out
what
that
looks
like,
and
so
you
make
an
interesting
point
role,
and
I
was
actually
just
listening
to
the
gcp
podcast
yesterday,
where
they
were
talking
about
the
digital
ocean
platform
as
a
service
or
kubernetes.
So
that's
that's
an
interesting
data
point
to
take
into
account.
C: I'm bringing up the JupyterLab one; I was literally looking at it earlier, the zero-to-hero guide. Here, I'll send a link right now. This was well received, and they got really good traction. Basically, you go to "installing JupyterHub" and the first step is "set up Kubernetes," and it tells you how to set up Kubernetes. We don't necessarily need to teach how to set up Helm, but it has instructions for Google, Azure AKS, AWS, OpenShift, IBM, DigitalOcean, and OVH.

I don't know who those last guys are, but then it has it with autoscaling, and it says, literally, this is how you do it on Google. It's mostly the same content, but with very specific screenshots and copy-and-paste for that particular cloud provider. Then you go to "setting up JupyterHub" and it's again copy-and-paste. Now, what you just described is like two orders of magnitude better than that; but even that alone, just how to do it, is good enough for people.
C: It's just: how do you go from Docker Compose to Kubernetes? That's where people kind of get hung up. But generally I've seen people have a Docker Compose for local development, and then they have all of their Kubernetes configurations in the same repository, and that's what then gets pushed out via CI/CD.
A: Just to recap, since we've got a couple of minutes left: the cass-operator release looks like it's happening today; there's a couple of GitHub Actions things getting figured out right now that didn't run quite right on the push of the tag. K8ssandra 1.0, sneak peek: launching tomorrow is the goal. So really great milestones for these projects, where we're seeing them deployed in production with multiple users now, even in multi-region environments for cass-operator.

I believe these calls are every two weeks; I'll double-check. But thanks so much for attending, a really great feedback session. I think, as a follow-up, I'm going to start that list of things, of how Cassandra and K8ssandra can grow to be a little bit more cloud-native with regards to Kubernetes. I'd love to send that out; maybe I'll ping y'all on the ASF Slack as that list starts to take shape and solicit some feedback there, because I know that 4.0 is getting closer and closer to release.
C
Yeah
I
mean
I
don't.
I
don't
anticipate
auto
magic
partitioning
right
until
like
cassandra,
six,
maybe
cassata
five,
but
it's
it's
it's
good
to
hope
right.
It's
good
to
put
the
ideas
out
there.