From YouTube: Webinar | Leveraging Docker and CoreOS to provide always available Cassandra at Instaclustr
Description
With a growing customer base and Cassandra clusters running on-top of a number of the world’s largest cloud and bare-metal hosting providers, Instaclustr is at the forefront of always-on Cassandra hosting. Instaclustr leverages the power of Docker, a modern containerization solution for Linux, and CoreOS, a lightweight Linux distribution tailored to running software inside containers, to build a stable and adaptable Cassandra hosting platform.
So I'll just give you a quick overview of what we do. Instaclustr is a managed Cassandra and DataStax Enterprise in the cloud product. We basically offer a self-service dashboard that allows our customers to create, manage, and monitor their clusters.
We wanted to focus on our own product rather than having to manage Cassandra, and so we went out to the market and tried to find somebody who basically did what Instaclustr does, but we couldn't find anybody that really did it the way that we wanted. There was one service that existed, but they sat Cassandra behind an HTTP JSON API, which is very similar to AWS DynamoDB, and you don't get great performance out of that.
We started off on just Amazon Web Services EC2, and initially the way that we did it was we just ran a custom Ubuntu AMI. This was based on the stock Ubuntu AMI with a bunch of customizations: a cloud-init script that RAIDed the disks together and fetched the config from our servers, and stuff like that. Cassandra was installed the traditional apt-get way, either the Apache Cassandra packages or, for DataStax Enterprise deployments, the DSE packages.
Originally we just started on Amazon Web Services, so these problems kind of grew out of that environment, but it turns out they're actually applicable to most cloud providers out there. On AWS we use instance-storage-backed instances, and instance storage is basically an ephemeral disk that's attached to the instance.
We use it because it's insanely fast. These days it's all powered by solid-state drives, and you also get extremely low latency, because the disk itself is attached more or less directly to the hypervisor. I won't really go into the details about that, but you get extremely low latency. The problem is that it's volatile: if you terminate the machine, or anything like that, the data on that disk is gone. The alternative to instance storage is Elastic Block Storage, which is Amazon's version of a network-attached device or a SAN.
It's slower, it's higher latency, and you also end up sharing bandwidth: you've only got one NIC on your instance, so not only is Cassandra sending data to and from clients and the other nodes in the cluster, but you're also reading and writing to EBS over that same network interface. Your instance is basically competing with itself for bandwidth to read and write to disk. And the only way that you can change machine images once you've booted an instance is to start a new machine.
There's no way to restart a machine into a new AMI, so it wasn't really possible to use this concept of immutable images with persistent data on those instance storage drives: as soon as we wanted to do upgrades of Cassandra, or any of the other software on the machine, the whole immutability concept is thrown out the window. So really the only feasible solution on instance-storage-backed instances, which is what we were using, was to replace instances.
Obviously, replacing instances is a business-as-usual thing: if a node goes down or something like that, we have to handle instance replacement. But we didn't want to have to replace a machine every time we wanted to upgrade software, and it's not okay for upgrades, because node replacement takes forever.
You can only really do one node at a time, and so, if it takes forever, then if you wanted to upgrade all your clusters to a new version of Cassandra, or if you wanted to change machine images, it can take days to do a rolling restart. The end result is really painful and slow upgrades, so we went looking for a better solution, and that solution is CoreOS and Docker.
So what is CoreOS? CoreOS is one of the first "Docker operating systems", one of these operating systems basically designed for running containers. It doesn't have much of a userland; it's very small, about as minimal as they come. It doesn't even have man pages installed in the default userland, which has been a bit of a nightmare for some of our support people.
They've done all the testing to make sure Docker ships and works well, and it also comes with a bunch of other really useful software: etcd, fleet, and so on. As I said, we don't currently use these internally ourselves, but we do have plans in the future to use things like etcd. etcd is a distributed key-value store similar to ZooKeeper; it also supports things like distributed locking. And fleet is basically a distributed service-management framework that sits on top of systemd. CoreOS is well supported.
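To give a feel for what etcd offers, here's a minimal sketch using its etcdctl client (the key names are made up for illustration; as noted, we don't run this ourselves yet):

    # store and read a value from the cluster-wide key-value store
    etcdctl set /demo/cassandra/version 2.1.10
    etcdctl get /demo/cassandra/version

    # block until the key changes -- the primitive that coordination
    # and distributed-locking patterns are built on
    etcdctl watch /demo/cassandra/version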
It's in use by some of the big players in the market: Rackspace uses it, PlayStation uses it, we use it. The CoreOS guys recently received funding from Google Ventures, which just kind of starts to validate the fact that it's becoming one of those names: when you talk about Docker, people go, "oh, are you using CoreOS as the OS to run your containers?"
With Ubuntu there weren't ready-to-go images for our purposes; we were building them ourselves. CoreOS images, by contrast, are already published for the platforms we run on, so it means one less thing for us to have to manage ourselves, and that's one of the greatest things about CoreOS.
The CoreOS team themselves are responsible for building the images that we run: the AMIs for AWS, the GCE images, etc. That means there's one less step in our build process. And one of the great things about CoreOS is this concept of in-place updates and rollbacks on failure, and the way that works is that each machine has two system partitions.
One partition is active at any particular time and the other is inactive. When CoreOS wants to download and install updates, those updates are installed to the inactive partition; it then swaps the active flags around and restarts the machine, and you're now running an updated version of the OS. But if something goes wrong in that update process, you can roll back the update by just swapping those active flags back around, and you're back to the state you were on.
All your data itself is stored on a separate partition, and that data includes things like configuration, your Docker containers and images, all that sort of stuff. So that's all persistent, but the thing about this is it gets around the problem of doing immutable OS images: the CoreOS guys manage the in-place update procedure, so we don't have to worry about it, and the system updates, the kernel, all the core libraries come as part of that.
So Docker: what is Docker? Well, basically it's a container runtime, and the containers are kind of like lightweight virtual machines. It's also a standardized image distribution and hosting environment, and there's a whole ecosystem around all this stuff. Not only is there the core Docker crowd, but there are people like Quay.io, which we use; they do private image hosting.
This whole ecosystem has sort of grown around Docker, which makes it really great to get started, and there's a lot of community involvement, which is fantastic. When we bring new developers on board, there's a lot of documentation and material out there already that they can go off and read. It's well-used tech, it's starting to get a lot of traction, and you start to hear about people using it in production.
I gave a presentation at the beginning of the year, and at that point very few people owned up to using Docker and CoreOS in production, but we've now heard about a couple of other people that use them, so we're glad we're not the only ones using both of those things in anger. What Docker basically gives us is the immutable images component.
We build an image, we run it on a machine, and then when we want to upgrade to a new image, the old container is thrown away and a new container is created. This basically resets to that known good state; there's no cruft that builds up. The only things that we keep around are the Cassandra data and the configuration and stuff like that; all the libraries and all the components of Cassandra itself are uninstalled, or destroyed, or whatever you want to call it, and redeployed every time we upgrade Cassandra.
The Docker images that we deploy are built in our development environment. Those same dev images get moved over to our testing environment once development has been completed on a feature or something, and then, once testing is completed, those exact same images are pushed to our production environment. So we have this nice flow-through.
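As a rough sketch of what that promotion flow can look like with a private registry (we mentioned Quay.io; the repository names here are hypothetical, not our actual ones):

    # build once in dev and push to the registry
    docker build -t quay.io/example/cassandra:2.1.10 .
    docker push quay.io/example/cassandra:2.1.10

    # promote the very same image to test/production by re-tagging,
    # rather than rebuilding it per environment
    docker pull quay.io/example/cassandra:2.1.10
    docker tag quay.io/example/cassandra:2.1.10 quay.io/example/cassandra:production
    docker push quay.io/example/cassandra:production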
It also takes care of the packages we have to support: because containers are completely isolated from each other, we can have all these different library versions, different JVMs, etc., and they can all coexist on the one machine. And, not that we use it, but it even lets you run a different userland layout, so if you prefer CentOS versus Debian or something, the packages and things for those different distributions can be used inside containers.
So, combining the two together: Docker basically gives us the immutable images for our components without having to do instance replacement, and then CoreOS handles the rest, so all the kernel upgrades, OS-level libraries, everything, we leave to them. Docker is completely provider-agnostic; it runs on Linux and doesn't care about the provider. And CoreOS runs on every major cloud platform that we want to support, and also on bare metal.
I think it's in beta, but CoreOS has started supporting IBM SoftLayer, and SoftLayer does bare-metal deployments; CoreOS just runs there like any other operating system, which is fantastic. It basically means that Instaclustr managed Cassandra can run anywhere. If we want to onboard a new cloud provider because we've got demand from customers, it's quite easy for us to do so, and the combination of Docker and CoreOS makes this a lot easier.
So, the integration: how do we integrate Docker and CoreOS in our environment? The main thing, of course, is Cassandra itself. The Cassandra data and configuration are persisted on the host, which allows us to restart containers, upgrade to a new version, things like that. The way this is done is that the Cassandra data directories and configuration, like /etc/cassandra, are mounted from a directory on the host, so that data is then persistent.
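As a rough sketch (the image name and exact paths here are illustrative, not our exact setup), the run-and-upgrade cycle looks something like this:

    # run Cassandra with its data and config mounted from the host
    docker run -d --name cassandra \
        -v /var/lib/cassandra:/var/lib/cassandra \
        -v /etc/cassandra:/etc/cassandra \
        example/cassandra:2.0.10

    # upgrading replaces only the container; the mounted data stays put
    docker stop cassandra
    docker rm cassandra
    docker run -d --name cassandra \
        -v /var/lib/cassandra:/var/lib/cassandra \
        -v /etc/cassandra:/etc/cassandra \
        example/cassandra:2.0.11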
The way you can look at it is that by putting a layer in between, we can replace that in-between layer, the container, without having to restart the operating system, but the data is still persistent on the ephemeral drive. And because it works so well, we actually containerize everything, not just Cassandra: all our internal services, the stuff that runs on our central servers to do our console website,
all the node-management tools, all that stuff; it's all done through Docker containers. And because most of our applications, like our internal apps, are Java-based as well (Cassandra is Java, of course), this single, well-understood build process makes it easier for our devs. Again, if we want to bring somebody new on board, we can basically say to them: look, the way everything is built is exactly the same. You don't have to figure out how Cassandra's Docker image is built versus our internal tools; it's the same thing.
One of the nice things that Docker lets you do is image inheritance, which basically means one image can be declared to depend on another. That allows you to isolate and consolidate common components, tools and utilities and everything like that, that are used by a lot of images. You can see in this diagram a general layout of how we build our images; you'll see that we've based everything on Debian.
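As an illustrative sketch of that inheritance (the image names are hypothetical, not our actual hierarchy), the Dockerfile at the bottom of the chain only adds Cassandra; everything else comes from its parents:

    # cassandra/Dockerfile -- bottom of the chain:
    #   debian (base) <- example/common (tools) <- example/oracle-jdk <- this image
    FROM example/oracle-jdk
    # the Debian userland, common utilities and the JDK are inherited from
    # the parent images; this layer adds only the Cassandra package itself
    RUN apt-get update && apt-get install -y cassandra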
We also use Docker containers to containerize OpsCenter. We let our customers access OpsCenter through our dashboard; there's a button, "Launch OpsCenter". The way that this works is we deploy one instance of OpsCenter per cluster. We do that for a bunch of reasons, mostly security: OpsCenter itself doesn't really have tight security controls, especially the community version, which is what we deploy. It's also hosted independently of the cluster, on a separate machine, and that gives us
independence from the cluster itself: if the cluster undergoes a restart, OpsCenter keeps on running, and so on. One instance of OpsCenter is equivalent to one Docker container, so we've got a host that is running lots of Docker containers (we balance these over multiple hosts for redundancy), a bunch of Docker containers with one OpsCenter per container. The main reason we chose this method, rather than launching a separate EC2 instance for every OpsCenter instance, is that it's extremely cost-effective.
The reason we want to do all this is obviously upgrades, that's the main thing, but we also want to support multiple versions of Cassandra for our customers. Some of our customers have requirements that they run 2.0.x for stability; other people want to use 2.1 for the new hotness.
We also give customers the option to deploy the Apache version or the DataStax Enterprise version, and it allows us to roll back when a new version is deployed: even though we've tested it and people are using it, a bug might be discovered by somebody in the project, and so we can roll back to a previous version of Cassandra with ease, whereas trying to do a rollback through apt-get is actually quite painful.
If we had stayed with Ubuntu AMIs with each version of Cassandra baked into them: AWS has what looks like a bit of a crazy limitation, really, but it's just the way it works, where you have to create an AMI per region. While you can copy the AMI to each region, you've still got to register it with their API and all that stuff. And, as a result: AWS has nine regions.
We support all of them, and therefore for every version of Cassandra that we support you'd end up with nine images. We support two distributions, and at last count we had, in total, something like 13 versions (some customers haven't upgraded, things like that). Across the three different cloud providers we support, all up they've got like twenty-nine regions.
If you do the math, that turns out to be approximately 377 images (13 versions times 29 regions), and that number is just going to grow over time. Whereas with Docker, you end up with one image per version, and that works everywhere, which is fantastic. The database structure we previously had to have to track all the different region and version combinations was quite convoluted; now it's just: oh, you want to launch Cassandra 2.1.10? Yeah, launch it.
Another problem with the Debian package repositories is that they only keep the latest version of Cassandra around for each branch, 2.0, 2.1, and so on. That means that if you want to go back in time, if you want to run a previous version, you have to download it off their archive mirror, which isn't actually a Debian repository, and so you have to manually install the package, which has a whole bunch of pain points.
So that was just another thing where we went: we need to be able to keep these versions around ourselves, rather than relying on the remote package repository to contain all the versions we want to run. And the end result is that every cluster we're running has a known, identical configuration, every node is running the same version of Cassandra, and we can handle upgrades and all those sorts of processes elegantly.
So how do we roll out upgrades? The process itself is actually quite simple, and it's the same each time, and, unless something goes wrong, we can do it quite quickly. That allows us to be very reactive to the Apache project producing new versions of Cassandra, because they fix bugs quite frequently, which is nice. The procedure is: we build a new Docker image for the Cassandra version that's been released, and development and testing continue on that
to verify that the whole upgrade procedure works reliably. We then enable those images in our production environments for select customers, if they've asked for it, or for the internal support teams to do a bit of field testing, just to make sure. We try not to have differences between test and production, but something might have slipped through the cracks, so we just want to do a bit of testing in production before we say it's generally available; then we make it available to everybody.
This typically means that new clusters that are launched will run the new version of Cassandra by default. We typically allow people to choose the previous version as well, just in case they don't want to be running the bleeding edge, but it means that everybody's always launching new clusters with the latest version. Then, for our existing customers, we liaise with them to determine the best time and strategy to do a rolling upgrade across the cluster. This is pretty painless.
You restart each node one at a time, waiting for it to rejoin the ring: standard Cassandra procedure. But because we're basically just changing images, swapping one out for another, the process is actually quite quick; the node just has to download the new Docker image and restart, and you're back up and running. So yeah, it's quite painless and very, very simple.
Just to give you, I guess, a visual diagram of what we've managed to accomplish here: you can imagine a node, and on the left-hand side we've got Docker containers, and on the right-hand side we've got AMIs and apt-get installs and upgrades. You can see on the right-hand side what happens: we build an AMI for 2.0.10, somebody launches a cluster with that AMI, and then forever onwards
those nodes slowly diverge off the main line and are forever slightly different from everything else. Whereas in the Docker case, every image, you just replace it with the next one as you go along, so you get that nice clean state every time, there's no cruft built up, and we're almost certain that it's going to work reliably every single time we want to do this, which is just another thing that simplifies operations for us.
This is more of a textual description of the same thing. A case we had: we were trying to upgrade (I think the Cassandra version numbers here are wrong, but it's a real example) where we tried to install a newer version of Cassandra on a node, and that package changed a few things around how the config files were set up, and it didn't do it quite correctly. It would cleanly uninstall things from just the previous version, but if you're doing a larger upgrade,
so if you went from 2.0.10 to 2.0.14, in this case we ended up with a few things that weren't quite right, like Cassandra itself wouldn't start up, so you're then there on the node deleting files and editing files and figuring out why it's not working. Whereas in the Docker case, you just launch the new image, and because you're guaranteed that clean slate every single time, it just works, and then you're out partying afterwards rather than being all upset and sad.
And in the case of rollback, the same thing happens: if you install the new package and Cassandra doesn't want to start, you go run a bunch of commands (the first time, I didn't know how to do this and had to go off and Google it) and you hope that the apt-get remove and purge and lots of other stuff fixes the problem. And then you try to reinstall the old version and, as I said before, they don't keep the old packages on the repository.
systemd does everything we need it to do, so we're quite happy with using it. Importantly, it handles inter-service dependencies, as any service-management framework should. An example of that: we have a service that runs on every node that takes automatic snapshots periodically and uploads them to a backup service, and that snapshot service requires Cassandra, because Cassandra needs to be running in order for it to ask Cassandra to take a snapshot. So we've got this dependency.
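A minimal sketch of how such a dependency can be declared in a systemd unit (the unit and binary names here are hypothetical, not our actual ones):

    # /etc/systemd/system/snapshotter.service
    [Unit]
    Description=Periodic Cassandra snapshot uploader
    # don't start until the Cassandra unit is active, and stop with it
    Requires=cassandra.service
    After=cassandra.service

    [Service]
    ExecStart=/usr/bin/snapshotter
    # restart automatically if the process dies
    Restart=on-failure

    [Install]
    WantedBy=multi-user.target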
With Cassandra, there are a few cases where it can get itself into a bit of a bind. A lot of these have been fixed over time, but we've seen issues where there'll be, say, a stuck thread or something inside the application that's starving resources, and the only fix is to manually restart the application. systemd handles all of this automatic restarting for us.
The other component that we use in combination with systemd is D-Bus, which is pretty much remote procedure call between applications. It also does notifications between apps as well, so when things have changed in one application, another application gets notified. It's all socket-based, and one cool thing is that it has multiple language bindings.
Not only is there a C library, but there's a D-Bus library for Python and a D-Bus library for Java. So what we can basically do is control systemd via D-Bus through Java, so you don't have to fork and exec systemctl or something like that; you can just call D-Bus directly. And because it's all socket-based, you can actually mount the socket into a Docker container and then control the host's systemd from inside the container.
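A rough sketch of what that mount looks like (the image name is hypothetical; the socket path is the standard system-bus location on most Linux systems):

    # expose the host's D-Bus system bus inside the container, so a process
    # in the container can talk to the host's systemd over D-Bus
    docker run -d \
        -v /var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket \
        example/node-management-tool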
One of the things we've been working on is trying to more tightly integrate Cassandra with systemd and D-Bus. The reason is that systemd has this concept of a service status, whether it's starting or stopping, whether it's active or inactive, but the definition of that can vary. Does "active" just mean the process is running, that the PID exists, or does it actually mean something more?
In the Cassandra case, does it just mean that Java is running, or does it mean that Cassandra is actually accepting CQL connections? The reason we asked this question is that systemd will start dependent services when the units they require become active. Going back to the snapshot-service example: the snapshot service will start as soon as the Cassandra service becomes active. But a service that requires, say, CQL,
like a CQL client that has a dependency on Cassandra, shouldn't start until CQL becomes available on Cassandra. When Cassandra starts up, it doesn't actually start listening on the CQL port straight away; there's a whole bootstrap process that has to happen before that socket is opened, and if it's a node that's joining an existing cluster, it has to stream all the data from the existing nodes in the cluster first before CQL connections are accepted. That process can take hours.
So obviously you've got some options in your clients. Smaller clients that just want to do one thing could fail fast and just let systemd continuously restart them, but larger, more complex applications that are trying to do a lot of stuff will often require a reconnect loop: you're managing the Cassandra connection, can I connect, no, try again.
One option, and this is the simplest option, is that systemd allows a unit to notify it when the unit actually becomes active. The default out-of-the-box behavior is: the process that was started exists in the process table, therefore the unit is active. But you can change that default so the process has to explicitly notify systemd when it has started, and what we've managed to do is write a Java agent that does this.
We use a Java agent for this. A Java agent is kind of like a sidecar component you can load into the JVM; it's typically used for debugging purposes and stuff like that, but an agent actually has full access to everything that's running inside the JVM, so you can do things like open sockets, read state, lots of stuff. The reason we did it this way is that we run
both Apache Cassandra and DataStax Enterprise on our platform, and we needed a thing that works for both. Using a Java agent means that the same agent works for the Apache project and for DataStax Enterprise, and it doesn't require any code modifications to either. The only disadvantage is that the unit will stay in "activating" (I think that's the state) for however long Cassandra takes before it allows CQL connections, and that can take a long time.
We've seen it take many hours if a node is joining a ring, so you actually have to set — there's a timeout value in systemd, which I think defaults to 10 seconds or something, where if systemd doesn't receive the notification from the process within that window, it just nukes it. You have to set that to zero, which disables it; otherwise, if it takes hours to do this notification, systemd will just kill the process.
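A minimal sketch of the unit settings involved (assuming the agent sends systemd's standard READY=1 notification; the ExecStart is illustrative, not our exact unit):

    [Service]
    # the unit counts as active only once the process sends READY=1 over
    # $NOTIFY_SOCKET; our Java agent sends it when Cassandra is ready
    Type=notify
    # bootstrap/streaming can take hours, so disable the start timeout
    # (otherwise systemd would kill the process before it ever notified)
    TimeoutStartSec=0
    ExecStart=/opt/cassandra/bin/cassandra -f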
Coming back to the slide here: one thing about Cassandra, though, is that it's more than just CQL; there's Thrift connectivity, and JMX as well. JMX is quite useful for a lot of monitoring and maintenance operations, and Thrift — we don't use it, but of course it's the more classic connectivity mechanism. If we wanted to open-source this, we'd probably want a solution that works for all these kinds of connectivity options.
Unfortunately, the notify approach doesn't cover that, because it's just a notification to say that the unit has started or stopped; there's no way to tell systemd something intermediate like "now accepting connections on a particular socket". You need some kind of alternate mechanism for that. So the other thing that we've been playing around with (this actually hasn't been tested in depth, and we're not using it in production)
is that you have a simple systemd service, like in this example here, cassandra-cql, which is inserted into the dependency chain between the client application that wants to use CQL and Cassandra itself. The cassandra-cql service is just a wrapper around a very simple tool, it could be a Python script or something, that watches a port for connectivity.
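A minimal sketch of such a watcher (hypothetical, assuming Python 3, systemd's Type=notify, and the sdnotify helper package; this isn't the tool we run):

    #!/usr/bin/env python3
    # Block until Cassandra's CQL port accepts connections, then tell systemd
    # this unit is active, so units that require it may start.
    import socket
    import time

    import sdnotify  # pip install sdnotify; speaks the NOTIFY_SOCKET protocol

    def wait_for_port(host, port):
        while True:
            try:
                socket.create_connection((host, port), timeout=2).close()
                return
            except OSError:        # refused, timed out, etc.
                time.sleep(5)

    wait_for_port("127.0.0.1", 9042)              # 9042 is the default CQL port
    sdnotify.SystemdNotifier().notify("READY=1")  # unit becomes "active"

    while True:
        time.sleep(60)  # stay alive so the unit remains active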
Now's a good time to go ahead and open the questions, so I'll ask you one that's already been typed in: how do you deal with CoreOS reboots for updates, as in preventing the node from being restarted? If a node goes down for an update, it will, or should, be back within two minutes.
When CoreOS does a reboot for an update, it does that swap of the system partitions, but that doesn't affect any of the data on the node. The data itself stays there on what's called the data partition on CoreOS, and when the OS restarts into the new update, the Docker container for Cassandra will automatically restart and Cassandra will just handle the resynchronization process with the rest of the cluster. It's as if you had just turned off Cassandra on a single node for a couple of minutes.
That's basically the standard operating procedure for doing Cassandra restarts, as when we do a rolling restart around the cluster. And we don't restart all the nodes at once; we actually use distributed locking to prevent that, so our support people can't hit the restart button on every node at once.
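For reference, CoreOS itself also offers etcd-based reboot coordination through its update settings; a minimal sketch in the classic CoreOS cloud-config format (we use our own locking rather than this exact mechanism):

    #cloud-config
    coreos:
      update:
        # take an etcd lock before rebooting, so only one node in the
        # cluster reboots for an OS update at a time
        reboot-strategy: etcd-lock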
Yes, we actually build a Docker image with Cassandra embedded, or baked, into it. As part of our Docker build, one of the lines of the Dockerfile is an apt-get install line that embeds Cassandra inside the image, and all its dependencies, so Java and any utilities and libraries that it depends on, are installed as well. I'll just go back to the slide where I gave the sort of layout, so, this here:
you can see we have an image for Apache Cassandra down the bottom on the left there, and it depends on a common image, which depends on the Oracle JDK image, which depends on a base image, and so on. And because we're running a single application per container, when you start the Docker container, Cassandra automatically starts with it. We don't run SSH or any other components inside the Docker container; we use it in, I guess, the recommended approach.
That's a good question. We have our own config-management process, and actually I saw a couple of other people asking about config management here as well, so I guess I'll give a quick run-through. I don't have any diagrams or anything to explain this, but we let our customers customize a few cassandra.yaml parameters. The file lives, I think, in /etc/cassandra on the host, and that directory is then mounted into the Docker container under /etc/cassandra, and therefore the generated yaml file that we produce on our servers is available to Cassandra inside the container. If a customer wants to change any of their configuration parameters, currently it's a support request; we are working on that.
Why is that? Because when you run a Docker container with NAT-based networking, you end up with an internal IP address. If you then run that on AWS, a node ends up with sort of three IP addresses: the public IP address of the node, the private IP address of the AWS instance, and then the private IP address of the Docker container, and we had to get Cassandra to advertise the right addresses.
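The relevant knobs are Cassandra's standard address settings; a hedged sketch of a cassandra.yaml fragment (Cassandra 2.1-era settings, with illustrative addresses):

    # cassandra.yaml fragment
    listen_address: 10.0.1.5          # private address used for gossip on the host
    broadcast_address: 54.10.20.30    # public address advertised to other nodes
    rpc_address: 0.0.0.0              # bind the client (CQL/Thrift) port on all interfaces
    broadcast_rpc_address: 54.10.20.30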
The seed nodes are determined when we launch instances. Our instance-creation process asks Amazon to start a machine of a particular size, and the act of starting a machine allocates it a private address; we record those IP addresses, and then, when each node attempts to fetch its cassandra.yaml and all the other configuration, we return a select list of those IP addresses to each machine.
For client applications to connect to the nodes in the cluster, there's a bunch of connectivity options. One is you can connect via the public IP address across the internet; we support Cassandra's SSL encryption and password authentication to make all that quite secure, and we also do firewall whitelisting as well, to restrict it to the set of addresses for your application. But if you're also running on AWS, we support VPC peering, so everything's done via private networking, and it's all quite secure.
My understanding of it (the best person to ask internally is Ben Bromhead, the other co-founder; he's more our super-Cassandra expert, I'm more the Docker and CoreOS guy) is that going between major releases, so 2.0 to 2.1, is compatible, but you go from the last version of the minor-version line to the next version. So you can go from, like, 2.0.14, or whatever...