Description
Speaker: Ben Laplanche, Product Manager at Pivotal
Building a multi-tenant Cassandra for the Pivotal Cloud Foundry platform. An overview of the approach, challenges and thoughts on the road ahead to bringing big data products to the cloud.
A
Hi everyone, thanks for coming to our talk. It's really nice to be here. Before we start: my name is Tarek, and this is David. We both work for a company called OpenCredo, and basically we enjoy solving IT challenges and helping our customers deliver good software. We've been using Cassandra for a few years, and today we'd like to talk about a specific project we worked on, as opposed to jumping directly to the solution.
A
It's a high street retailer, so I can't mention who, but it's a large organization with high volumes of sales. The architecture can be described as a microservices architecture: basically, a number of services that work together but are decoupled. In terms of technologies, it's mostly Java-based, and we obviously have Cassandra.
A
This is why we're here today. We have Cloud Foundry, which is a PaaS, a platform as a service, and RabbitMQ for asynchronous messaging. Before we start: who's actively using Cassandra in production now? Okay. And who's considering using Cassandra, or basically experimenting with it? Okay, cool, so that's about fifty percent. This is more or less what the architecture looks like. Cloud Foundry is where the services live; the yellow boxes are the services, they communicate using RabbitMQ, and we have a number of supporting services we built, including the event service, which is our focus today.
A
The first question is: why do we need an event service? It's an event-driven architecture, and we think this works really well with a microservices architecture. So what can we do with an event service? There are a number of things. First of all, capturing platform events: there are a number of types of events that we notice on the platform, and the first typical use case of an event service is to capture these events. What do we do with them? Technical events, due to services communicating with each other, are useful for troubleshooting, for example.
A
We also have what we call business events, for example user registration, and these can be used to trigger asynchronous downstream processes. Obviously, when we're working on a microservices platform with a number of services, each time we add a new feature we cannot just break the whole architecture and change things, so we need a way that allows us to customize our processes. Okay, so again: on user registration, I want to send a specific notification.
A
An email for a specific type of product, say: I have a standard process, and I can use events to trigger the specific behaviour. But we also need one single source of truth in our system about the business transactions that happened. Again, we have different components, they do different things, and sometimes it's very hard to understand what happened for a specific customer interaction, and so on.
A
Okay. However, we live in the real world, and any project has a number of constraints. Although this one can be described as a greenfield project, there are a number of constraints related to the context. The first thing is ambiguous requirements. Why? Because when you build a modern platform based on microservices and events and so on, there's no product owner who is going to come and tell you: "Oh, I need this sort of event for this sort of transaction." You know more or less what you want, but sometimes your requirements can be ambiguous.
A
The whole paradigm and the technologies are cutting edge, so obviously there's the familiarity element to deal with, and we also needed to look at the whole platform. Okay, we also found that in our situation there were a number of assumptions around using new technologies that were not true. For example, when using a NoSQL database, the typical assumption is that disk is cheap and memory is cheap. Well, not when you're running a system in an accredited data center.
A
Suddenly these things become an issue, because they become really expensive. Plus, obviously, in a very complex context that is not fully greenfield, you don't always get ideal conditions to experiment and try different things out. And we are building a solution: building a useful solution is not just about dropping a technology in the middle of nowhere, it's about making things work and solving the whole problem, so we needed to do all these things.
A
The other thing is that we couldn't approach the problem purely from a performance perspective, because we might be optimizing for the wrong things, since we were not aware of the full requirements. So we needed to avoid accumulating technical debt by building things that no one wanted and then having to change them. This basically drove a number of the choices that we discuss later in the presentation. So, in this uncertain, ambiguous context, we wanted to adopt some design principles.
A
I know everyone says that, but we will hopefully give you examples of how we adopted these. Simplicity: yes, for real. If I ask who likes simplicity, everyone around would probably raise their hands; but then if I ask who has made some sort of trade-off between functionality and simplicity, people are a little bit more hesitant. So simplicity is a fundamental value in software development, but as a value it does compete against a number of other values, like complexity and how many features you provide, and so on.
A
So we made a number of choices where we simplified, to keep things flexible for us and to keep things more performant; we'll also be giving examples of these. Decoupling: so what sort of decoupling do we need when building an event service? The event service is very important, it's part of the technical core of the system, but at the same time it's being invoked by other services doing the real work.
A
Basically, we needed something very, very simple, where services don't need to provide a lot of code and so on to use it; so decoupling the services themselves from the event service, but also decoupling the implementation, by using a contract-first approach to solving our problems. And designing for a distributed system, because again, we have a system where anything can fail and where the volumes can be unpredictable. Thinking about these things is obviously part of the reason behind our choice of Cassandra for this context. Okay, fine.
A
So now we're going through the journey of understanding the requirements and then building the solution. The first thing you think is: oh yeah, an event service, it's obvious, I need to store events. So you go and look on the internet, and you look at best practices and patterns and so on. But the first question is different: what is an event? We found that we could categorize events. First there is what we call simple events; in this case, think of something where we have an opaque value that the system doesn't care about.
A
Whatever the value is, you just store it and you read it back. This is very typical in contexts such as meter readings: you just have one reading and you store it. Okay, on the other side of the spectrum we have structured events, and structured events can be of any level of complexity. So in an e-commerce platform, obviously, we'd have mostly structured events: user registration and so on.
A
But obviously then we had a number of challenges in modelling events in a way that is flexible and rich, but at the same time allows us to do everything we want to do. The next slides show some of the light-bulb moments we had while thinking this through. We tried to target somewhere in between; opaque events were a no-go for us.
A
We started by storing things as blobs and so on, but I'm not really a big fan of storing things in a way that the database doesn't understand, unstructured, and then having to unmarshal these things. Think also about the fact that the events can change all the time: if we start doing that, we need schemas and we need to decode the events and so on, which is not really efficient from our perspective. So the first thing is to simplify the data model.
A
What does that mean? We're going for something structured, and JSON in this case. Obviously, David will be talking about the input later. But basically, we are simplifying by only allowing one level of nesting. We didn't find a good use case for allowing an arbitrary level of nested JSON. You might think of this as a technical limitation; it's actually not only that. These simplifications also helped us to build a simpler and better solution as well, so we tried to marry the two sides each time.
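As an illustration, the one-level-of-nesting rule described here can be enforced with a few lines of code. This is a hypothetical sketch, not the project's actual validation logic; the field names are invented, and whether lists were allowed is an assumption (they are rejected here):

```python
def is_valid_event_payload(payload):
    """Accept JSON objects whose values are scalars or flat objects
    (one level of nesting); reject anything deeper, and lists."""
    if not isinstance(payload, dict):
        return False
    for value in payload.values():
        if isinstance(value, dict):
            # One nested object is allowed, but its values must be scalars.
            if any(isinstance(v, (dict, list)) for v in value.values()):
                return False
        elif isinstance(value, list):
            return False
    return True

# A flat event with one nested object is fine...
ok = is_valid_event_payload({"type": "user-registration",
                             "user": {"id": "42", "email": "a@b.c"}})
# ...but two levels of nesting are rejected.
bad = is_valid_event_payload({"user": {"address": {"city": "London"}}})
```

Rejecting deep nesting at the API boundary is what makes the Cassandra map representation mentioned later possible.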
A
Okay, the other choice we made, which might be a little bit odd: the events can change. The events can change all the time, and new events can arrive, so what do you do? We didn't want to get into the mess of managing schemas and versions and so on. So with each event (we use the Cassandra map types, and David will talk about that in a minute) we also store the type of each field along with it, which means the event can be interpreted at any moment in time.
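A minimal sketch of that self-describing idea: alongside each field's value, keep its type name in a second map, so an event can be decoded later without a registered schema. This is illustrative only; the talk does not show the actual column layout, and the helper name is invented:

```python
def to_typed_maps(payload):
    """Split a flat event payload into two maps suitable for Cassandra
    map<text, text> columns: one holding stringified values, one holding
    the type of each field, so the event describes itself."""
    values, types = {}, {}
    for field, value in payload.items():
        values[field] = str(value)
        types[field] = type(value).__name__  # e.g. 'int', 'str', 'bool'
    return values, types

values, types = to_typed_maps({"user_id": 42, "premium": True})
# values == {"user_id": "42", "premium": "True"}
# types  == {"user_id": "int", "premium": "bool"}
```

Because the type map travels with every event, old and new event shapes can coexist in the same table with no version bookkeeping.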
A
Okay, so that's fine, let's move on: what does the event store look like? As I just mentioned, it really needs to be simple: services just want to write an event, nothing more. We also observed that services that fire events are unlikely to read the events back; some other component will do something with the event, but not the service that triggered it.
A
So this is where the next step of our thinking started to change. Initially we were talking about something like an event store, and again you think: oh yeah, a Cassandra table, and you dump stuff in it, and it's a blob, and whatever. Actually, this is not enough. We started thinking of an event service, with a contract that clients of the service can use; and, as I said, we used resource-oriented design for that, so REST.
A
This is a good way of trying not to over-design things while at the same time designing things that are stable and that you can use. So we identified that in this case we have an obvious resource, which is the event, and then we can build a predictable, uniform interface around it that can be used from any client, from the most sophisticated to the simplest points in the system. Okay, now David will go through the details of the API and so on.
B
Hello! So what I'm going to do is switch gears a little bit, from the high-level principles and design guidelines that we came up with, to the actual implementation. Now, we didn't just implement the final solution on day one. It was evolutionary work, in the sense that we implemented a version of it, then added improvements and extended the service regularly. Our first version was very simple. We wanted, again, contract first: we wanted to store events and we wanted to read them back, and that was it; a very simple REST API can support that.
B
Obviously, an event has a type in our case, and some data and metadata associated with it, represented in a JSON format. This is the contract that we committed to as the interface of the event service: this is what you can POST to it, and this is what you can GET and read back. You can see the simple nesting: one-level-deep JSON is all we allow, and we represent this natively in Cassandra in maps. So what was the architecture to support these requirements?
B
Initially, it was very simple: you just needed a thin REST API that understood these JSON structures, decoded them into a Cassandra CQL representation, and stored them in a single events table, which is keyed by a TimeUUID and stores the data. Initially this started off with actually storing blobs, and then we moved to the map model gradually; and you read it back the same way. The simple events table is not very complicated either: again, just a simple primary key of the ID, and we store some metadata (type and timestamp), and the payload as a blob.
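From that description, the first-iteration table might have looked roughly like this. This is a reconstruction for illustration, not the actual DDL from the talk; the column names are guesses:

```python
# CQL for the simple events table as described: a TimeUUID primary key,
# some typed metadata, and the payload stored as a blob.
CREATE_EVENTS_TABLE = """
CREATE TABLE events (
    event_id   timeuuid PRIMARY KEY,  -- time-based ID, doubles as a timestamp
    event_type text,
    created_at timestamp,
    payload    blob                   -- later replaced by map<text, text> columns
);
"""
```

A TimeUUID key gives each event a globally unique identifier that also encodes its creation time, which the time-ordered index tables shown later rely on.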
B
Then we realized that yes, we need querying on these events, and we wanted to drive notifications and publish-subscribe mechanisms off the event service. That was a natural evolution of the idea, and obviously, when you are doing REST, the natural way of querying events becomes the query string of your REST endpoint. So you execute an HTTP GET against a URL and you supply a set of parameters, which can contain a number of fields at the moment, and this can evolve in the future.
B
We can add whatever we want later. There are various, but very few, limitations on how you can use these; generally they're very natural. The only thing that we actually require is that you specify a time range when you do a query. Otherwise, you are free to combine any parameters, use them or not use them: it will just work, and the service will take care of it, which is good.
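The rule described here (time range mandatory, everything else optional and freely combinable) can be sketched as follows. The parameter names are invented for illustration and are not the service's documented API:

```python
def parse_query(params):
    """Validate query-string parameters for an events endpoint.
    'from' and 'to' are mandatory; any other recognised filters are
    optional and may be combined freely."""
    if "from" not in params or "to" not in params:
        raise ValueError("a time range ('from' and 'to') is required")
    # Everything that isn't the time range is treated as an optional filter.
    filters = {k: v for k, v in params.items() if k not in ("from", "to")}
    return (params["from"], params["to"]), filters

time_range, filters = parse_query({"from": "2015-09-01T00:00",
                                   "to": "2015-09-02T00:00",
                                   "type": "user-registration"})
# time_range == ("2015-09-01T00:00", "2015-09-02T00:00")
# filters == {"type": "user-registration"}
```

Mandating the time range is what lets the service map every query onto a bounded set of time-bucketed partitions, as described later.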
B
If you support each query with its own fully denormalized table, you need to denormalize again and again, making copies of the data again and again and again, which is not very extensible. It consumes a lot of disk space if the events are reasonably big, which in our case is true: they are not just simple integers, they're bigger data structures, and, as Tarek already mentioned, disk space in our case is not that cheap.
B
Generally, it's not something that you can just ignore; in the real world there are costs associated with doing that. Additionally, write latency might be affected, which may or may not be a problem, but it is something to think about. And time-bucketed indexes can, in theory, create hotspots in your cluster, because at a given time the bucketing only writes to a certain set of nodes in the Cassandra cluster. That, again, may or may not be a concern for the particular use case, but it is something to think about.
A
Okay, so this is the new architecture, which is obviously much more complex than the first one. However, the first observation is that, if you remember the first one, which could have been implemented with SQLite or other lightweight databases, the service contract, the REST API, is exactly the same. This hasn't changed, okay, and this is a benefit of the decoupling. You can notice a couple of things in the background. First of all, we don't have the simple write going to the database directly.
A
What we have now is a writing pipeline that can be synchronous or asynchronous, and this provides us a nice mixture of guarantees versus functionality that can be done in the background; David will explain that in a second. And now, suddenly, we've stopped writing just to a single simple table: we have a number of supporting structures to enable all the queries that we talked about.
A
Obviously, if you want to do all of that in one go, in a synchronous way, what will happen is that it might work in the beginning; but as we add more supporting structures and tables to support our queries, the performance will be affected, and we would probably only notice it when it's a little bit too late. The other thing is that we also introduced RabbitMQ into the event service, to notify interested listeners of business events.
B
Carrying on: you'll notice that we are putting in a number of indexes there. These are actual Cassandra tables; we are not using Cassandra's secondary indexes. The reason for that is that there are some use cases for which a secondary index would be good for us, and others where it's not so much; so we decided to just go for the uniform approach and use index tables. These are not denormalized copies of the data, by the way, as you'll see in a minute.
B
And yes, we are decoupling again; the clients here are seeing the benefits of the decoupling. The clients are totally unaware of what's going on in the background, and they don't need to be aware. All they care about is an intuitive REST interface, which works as you would expect it to work. We can extend this architecture without changing the clients, we can extend it via the pub-sub mechanism, and our disk consumption remains reasonable.
B
There are a couple of downsides as well to doing what we are doing. First of all, we are sacrificing latency somewhat: obviously, going through a REST interface will not improve that, and writing to multiple CQL tables synchronously, to guarantee a sufficient level of persistence, will impact your latency. Our service code also becomes more complicated, but we are shifting this complexity from the clients into the service, which is isolated from our clients. And I mentioned the cluster hotspot problem, which we didn't eliminate.
B
So, in our index structure, the Cassandra tables have an ascending and a descending version of each. That is because of a Cassandra 2.0 limitation: you cannot always just do a reverse query in Cassandra by declaring that you want an ascending ordering or a descending ordering; that doesn't work with limits in all clauses, because Cassandra can't reverse the ordering there. So we store ascending and descending indexes for everything, so that we can return whatever is needed. And our indexes are normalized data structures.
B
They reference the primary events table by ID, so you don't store the same event data every time in every index, just a reference to it. Obviously, therefore, we need to do more queries when reading the data. The event index structure looks like this; this is the structure supporting querying by type. As you can see, the partition key is a time bucket and an event type, and you have the event ID as the clustering key, which is the TimeUUID.
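Putting the last two slides together, the by-type index might be declared along these lines: two tables differing only in clustering order, referencing events by ID. This is a hedged reconstruction; the table and column names are illustrative, not taken from the talk:

```python
# One index table per sort direction: as discussed, Cassandra 2.0 could not
# always reverse the clustering order at query time, so both orderings are stored.
INDEX_TABLE_TEMPLATE = """
CREATE TABLE events_by_type_{suffix} (
    time_bucket text,      -- e.g. '2015-09-22-14' for hourly buckets
    event_type  text,
    event_id    timeuuid,  -- reference into the main events table
    PRIMARY KEY ((time_bucket, event_type), event_id)
) WITH CLUSTERING ORDER BY (event_id {order});
"""

def index_table_ddl(descending):
    """Render the DDL for one of the two sort-order variants."""
    order = "DESC" if descending else "ASC"
    return INDEX_TABLE_TEMPLATE.format(suffix=order.lower(), order=order)
```

The composite partition key (bucket plus type) is what confines each query to a known set of partitions, at the cost of the write hotspots mentioned earlier.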
B
So it's going to be ordered by time. When you insert an event, it will be appended to the end of the index, and you can do a range query on what happened in a given time frame. This is the example that I'm going to walk through here: how do you query from this structure, returning the results through the REST interface?
B
How do you query a set of events, or a list of events, for a given time range, and how do you paginate the responses? Obviously, when doing REST, you don't want to return a couple of thousand events in a single HTTP response. You want the client to be able to flip through the pages as it can process them, and that's how we implemented this. So you execute a GET with some parameters against your service which, as I mentioned before, mandates that you specify a time range for that query.
B
First of all, we use the query type itself, which in this particular example is querying by the type of the event, to select the index table that we want to use and to select the type in the partition key; and we use the time range to select a range of buckets, or a list of buckets, that we want to query. Now, that's not quite enough to do the time range.
B
We also need to constrain the query within the time range, because the first and the last buckets can contain events which do not fall into this particular time range.
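The bucket arithmetic described here, expanding the time range into partition keys and then remembering that the first and last buckets need extra trimming, can be sketched as follows. Hourly buckets and the key format are assumptions for illustration; the talk does not state the actual bucket width:

```python
from datetime import datetime, timedelta

BUCKET = timedelta(hours=1)  # assumed bucket width

def buckets_for_range(start, end):
    """Return the list of time-bucket keys covering [start, end].
    Queries against the first and last bucket still need an explicit
    event-ID range restriction, because those buckets can contain
    events outside the requested range."""
    bucket = start.replace(minute=0, second=0, microsecond=0)
    keys = []
    while bucket <= end:
        keys.append(bucket.strftime("%Y-%m-%d-%H"))
        bucket += BUCKET
    return keys

keys = buckets_for_range(datetime(2015, 9, 22, 10, 30),
                         datetime(2015, 9, 22, 12, 15))
# keys == ["2015-09-22-10", "2015-09-22-11", "2015-09-22-12"]
```

Because TimeUUIDs cluster each partition in time order, the trimming inside the edge buckets is just a range condition on the clustering key.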
And lastly, we apply the limit: you want a certain number of items to be returned in a single HTTP response, and you want to be able to flick through them. So, in this particular example, say you want to get five events in each response, in each page: what you select is not five but six IDs from the index table.
B
The
reason
for
the
61
is
to
generate
the
debt
continuation
URL,
that
you
can
see
there.
So
this
is
how
response
looks
like
we
return
a
list
of
events
that
fulfill
your
query
parameters
and
if
there
are
more
in
the
event
store
for
that
time
range
and
for
the
type
we
return,
a
continuation
link
which
will
give
you
the
next
page
and
the
next
page,
and
so
on.
B
B
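The limit-plus-one trick can be sketched as follows: fetch one more ID than the page size, and if the extra one comes back, use it to build the continuation link. The URL shape is invented for illustration and is not the service's actual link format:

```python
def paginate(ids, page_size):
    """Given IDs fetched with LIMIT page_size + 1, return the page
    and, if there was an extra row, a continuation link for the
    next request."""
    page = ids[:page_size]
    next_link = None
    if len(ids) > page_size:
        # The extra ID marks where the next page starts.
        next_link = "/events?from_id={}".format(ids[page_size])
    return page, next_link

page, next_link = paginate(["e1", "e2", "e3", "e4", "e5", "e6"], 5)
# page == ["e1", "e2", "e3", "e4", "e5"], next_link == "/events?from_id=e6"
```

Fetching one extra row avoids a second query just to discover whether another page exists.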
A
So, probably, to summarize: the first thing is, again, not to approach this problem, which is naturally performance-driven, just from the performance perspective. Aside from scaling for more data, we needed to be able to adapt what we have for future requirements, future types of events, and so on. We knew that these things would happen, but we didn't know what or when exactly. This is why we focused a lot on the architecture, and just decoupled everything, so that we are free to change parts of the system without impacting the whole system.
B
The other thing that I would recommend to everybody is to experiment. We didn't arrive at that final solution on day one. We did lots of proofs of concept: we experimented with Cassandra, we experimented with how the architecture could and couldn't work, and then we arrived at a solution that we were happy with.
A
Yes, and simplification has been a theme for us. We simplified the structure of an event: we allowed enough structure to be able to achieve our use cases, but at the same time not just any structure, and we found that this actually worked well, even at the Cassandra layer, where it allowed us to use maps and types and so on. The interface is extremely simple, which allowed us to evolve the implementation from something almost trivial to something fairly complex that can keep evolving.
B
Last but not least, regarding Cassandra: yes, you should be using the CQL interfaces, because they are a lot better; but you need to understand what the data model underlying them is and what Cassandra is doing. How is it storing your data, and what's happening behind the scenes? That is much closer to the legacy Thrift model of reading and writing data.
A
Okay, so this is probably the future-improvements slide; it's good that I kept it to the end. So yes, again, our requirements evolved. Initially it was very simple: let's store the event, because we have no idea what to do with the event later, but at least we need to be able to read the event back; this was the first version, what we called an MVP. Then: let's query the events in different ways. The next step for us is to do analytics.
A
So what we do now in terms of processing is just the pub-sub (and this was a late requirement that we could include), but what we are exploring next, obviously, is analytics using Spark. This side hasn't been developed yet; it has only softly started lately, and this will be the next requirement we add.
A
Yeah, obviously the service, as we have it now, is optimized for these online, almost real-time use cases, where the assumption is that you are going to need to create events in certain ways and read a subset of the events. What helps us to design what we need is that it will be driven by the activity of the user, or something like that, so we know where the starting point is. But the offline, big-data analytics things that you can do with Spark: that bit hasn't been developed yet.