From YouTube: 2022-11-29 Product Analytics Session Discussion
A
Cool, we're all here today to talk about sessions and how we're going to collect them.
A
What is a session? Let's define it, and determine what we need out of sessions for our dashboards, specifically for the internal preview. I think there's a lot we can dive into with sessions in general, especially with stuff from the proofs of concept, in terms of recording sessions and things like that, but for the purposes of this meeting we'll want to keep it defined, or scoped, to the internal preview: specifically, how we can tie the event data that we're already collecting to sessions, and then what we want out of it for the first preview. I guess, Tim, you wanted a demo from Max about the seeder.
C
Sure. So I've been looking today mostly at what pre-aggregations are and how we can use them best, especially for larger projects, which leads us nicely into how we can use sessions, as I realized in the last couple of hours.
C
The catch: the examples of how to implement sessions are very much tailored to using AWS Redshift, not ClickHouse, which will present some interesting problems. So, good news first: I was able to create one of our product analytics stacks and import around a million sort-of pseudo-anonymized events with about a thousand unique users, and I'm able to query that using Cube with the pre-aggregations that are set up. That's segmented by anonymous user ID and can be searched by hour, minute, even second, and it kind of works as advertised, which is pretty good. And, depending on how our infrastructure is set up, we can set that up to automatically refresh every hour, every minute, every 10 hours, whatever we want, and that just kind of works. I'll just share my screen quickly.
C
So that's just in your regular schema. You can define a bunch of pre-aggregations; here one called "all users", which uses a categorical index of the anonymous ID at hourly granularity and refreshes every five seconds, which is probably not great in production but is great for testing purposes, and then down to as fine as every second, which we'll refresh every hour. In terms of actually querying it, it works pretty much as you'd expect. So here's the standard GraphQL query, and I'm pulling the last 2000 users.
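For reference, a minimal sketch of the kind of Cube schema being described here, assuming illustrative names (`Events`, `analytics.events`, `allUsersHourly`, `allUsersSecond`) rather than the actual ones from the demo:

```js
// Hypothetical sketch of the demoed schema; all names are illustrative.
cube(`Events`, {
  sql: `SELECT * FROM analytics.events`,

  measures: {
    count: { type: `count` },
  },

  dimensions: {
    anonymousId: { sql: `anonymous_id`, type: `string` },
    timestamp: { sql: `timestamp`, type: `time` },
  },

  preAggregations: {
    // Hourly granularity, refreshed every five seconds:
    // fine for testing, probably not great in production.
    allUsersHourly: {
      measures: [CUBE.count],
      dimensions: [CUBE.anonymousId],
      timeDimension: CUBE.timestamp,
      granularity: `hour`,
      refreshKey: { every: `5 seconds` },
    },
    // Per-second granularity, refreshed every hour.
    allUsersSecond: {
      measures: [CUBE.count],
      dimensions: [CUBE.anonymousId],
      timeDimension: CUBE.timestamp,
      granularity: `second`,
      refreshKey: { every: `1 hour` },
    },
  },
});
```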
C
Let's up the stakes a little bit: we can do that by second. Watch it not work now that it's been flawless so far... there you go! So there you go: you can see all the information. It's going to make my browser crash, but the data's there, and it's returned pretty quickly over about a million or so records.
C
If you try and display it in a chart in Firefox, in my browser, it doesn't work: the data is retrieved, but then the front end crashes, because there's too much to put into a pivot table in my browser. So that's a front-end problem; in terms of the back end, that seems okay. And you can limit it by the number of individual users you want, or you can even create pivot tables as well, which is pretty cool.
C
The interesting thing here is that you can see which of the pre-aggregations apply to a query. In this one I'm segmenting by second; if I were to update this to hour, you can see the pre-aggregations it will use, any of them, because a per-second pre-aggregation can just be re-aggregated up to hours.
C
So
that's
where
I
am
in
terms
of
sort
of
understanding
how
pre-aggressions
work
and
the
fact
that
they
do
what
we
need
to
do
in
answer
to
Dennis's
question
earlier,
which
is
what
difference?
Does
it
make
if
I
don't
have
pre-aggregations,
at
least
in
my
local
machine?
It
just
doesn't
work
the
query
just
times
out
and
it
doesn't
function
at
all
whether
that
will
be
the
case
in
a
productionized
clickhouse
instance
I,
don't
know,
but
it
gives
you
some
idea
of
how
necessary
this
is.
C
Without them it sometimes works, sometimes doesn't. I'm wondering if it works when it's stored in memory and returns from that: maybe by starting the query I'm generating it, it's drawn into memory, I run it again, and then it returns the data. But it's super inconsistent. As soon as I set up pre-aggregations, it just works.
C
So that's pretty good, which leads us on to Cube's event analytics, which isn't a feature in itself; it's just a set of tutorials about how to set this up using Cube. As far as I can tell, this all looks good and all works. The thing I haven't figured out yet is this line here: when we're creating the sessions table, it uses this LAG function which, from what I understand, is about searching between windows; I want a 30-minute window from a particular point.
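For context, Cube's event-analytics tutorial derives the sessions table with a LAG() window function over each user's event stream. A hedged sketch of how that might translate to ClickHouse, which spells the same idea `lagInFrame` inside an explicit frame; all table and column names are illustrative:

```js
// Hypothetical ClickHouse-flavored sessions cube: a session starts at any
// event that is the user's first, or that follows a gap of 30+ minutes.
cube(`Sessions`, {
  sql: `
    SELECT anonymous_id, timestamp AS session_start
    FROM (
      SELECT
        anonymous_id,
        timestamp,
        lagInFrame(timestamp) OVER (
          PARTITION BY anonymous_id
          ORDER BY timestamp
          ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING
        ) AS prev_ts
      FROM analytics.events
    )
    WHERE prev_ts = toDateTime(0)                      -- first event ever
       OR dateDiff('minute', prev_ts, timestamp) >= 30 -- 30-minute gap
  `,

  measures: {
    count: { type: `count` },
  },

  dimensions: {
    anonymousId: { sql: `anonymous_id`, type: `string` },
    sessionStart: { sql: `session_start`, type: `time` },
  },
});
```

Session end and duration would then be recovered by pairing each session start with the user's last event before the next session start, which the Redshift tutorial handles with further window functions.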
A
That's similar to what we both, or I think all of us at this point, have encountered with funnel analysis: the tutorials' queries aren't all applicable to ClickHouse, or, if they are, they may lack support, be it in the Cube driver itself, or ClickHouse just has its own way of doing it. I think in the case of funnels we just have to define it ourselves in terms of the query: we'd use similar concepts, like defining it as a separate cube, but we would use something else for the query itself.
A
I think answering those questions will help give you a better idea of what to be looking for, and that will also help define things. Okay, we have these dashboards; of course, we're still trying to define exactly what we ultimately want in them, but that will tell us what cubes we need set up and how that gets replicated. We haven't really touched the Cube configuration at all, right? We've got the stack set up, we've got it integrated with GitLab, but we haven't really covered what schemas need to exist to power these pre-built dashboards, so I think answering the questions I've called out in the agenda will at least get us a little bit closer to that. Cool. So, as far as defining sessions: the way Cube has defined it is, they've taken a look at the Segment analytics SDK and just define a session as any period or string of activity that happens basically within, you know, a 30-minute activity window; that is what separates sessions. I guess the first thing to poll the room on is: do we agree with that? Is 30 minutes enough between sessions, or do we think it needs to be 15 minutes or something else? Any opinions on that?
D
That matches what I would be expecting, so yeah, I think 30 minutes is probably fine. Is this a two-way door decision if we decide we want to change it, or if customers decide they want shorter or longer sessions?
A
Customers are a different story, because that means they have to define their own Cube schemas, which is a whole other thing: do they have access to Cube, and do we offer that with our pre-built dashboards? That's a whole different can of worms. As far as our own decision on how we want to define it: yes, we can change that. We could start with 30 minutes and later decide to increase or decrease that window.
D
I think that's okay; I don't know how commonly it would be asked for. But if it's a code change for us, and everyone says we want to change the default, that's fine. Based on that, 30 minutes is probably a good length: if you walk away from your computer for half an hour, I think we should consider that you stopped using the computer and are now using it again.
A
It's a very opinionated topic, because I'm sure some people will say 15 minutes is more common, or whatever, but we can tweak that as we go. As far as customers having that request: if we don't offer it as a configurable option, they can ultimately deploy their own stack, go into the Cube schema themselves, and change that window. So they still have control over it if they really want to.
A
Yeah, at the end of the day everything is within their reach to define; it's just a question of how much easier we make the job for them. Okay, so we're okay with the definition for now; we'll move forward with that, and if we need to tweak it we can go from there. Cool. Tim, did you want to call out your point here?
B
So, just to make everyone aware and on the same page: we got some pushback on using Jitsu in the first place.
B
How I see it at the moment: it's a shortcut to get us to tracking as soon as possible, and at a later point we can still replace it, because it's a part of the system that sits behind an endpoint. But, since we also have the product intelligence team, which would be taking this over at some point, I would like to see what we would need to get a Jitsu replacement in place, what the benefits of it could be, and, on the other hand, how much work it would take to get us there, so that we can do good strategic planning.
B
Do we do this sooner rather than later? This is the impact, these are the costs, this is the benefit. One of the things I had in mind, looking at how Cube is doing sessions with those views: do you think they are doable on a large scale too? Would you rather do this at big scale through these Cube views, or would you rather take the other route, given we will definitely need some sort of pre-aggregation and post-aggregation anyhow?
B
That also covers the data enrichment we have right now, which a Jitsu replacement would need to do as well: user agent string parsing, location, adding pseudo-anonymization, full anonymization.
A
Yeah. Just from what I see, the logical steps of progression here are: Max has figured out the pre-aggregations, and now we're going through the motions of getting the cubes set up. But what I'm curious about is: can we set up pre-aggregations on those cubes, then, to help with the scale and make that more manageable? And then, from my perspective as well: what is the impact on the hardware itself, in terms of storage and memory, of doing so? And then modeling that: okay, this is how it performs, this is how much memory or storage it uses for a million events, or 10 million, or 100 million, and eventually extrapolating that to production scale, in a shared-pool environment or not. I think we're in the process of figuring that out, but I don't think we have an answer yet. I think that's what we're going to be figuring out very, very soon.
C
If it's a priority to get hard, solid numbers on that, that's fine; we can do that. Today I definitely hit a limit of what my laptop can handle, but that's fine: we've got the cloud, and we can figure out the size of those data sets fairly simply. Bear in mind that we've got a pretty fixed list of database columns, so the size the database uses up, given most of those columns are always filled, should scale fairly linearly. So if we want to set up an example with one million, 10 million, 100 million events, then see how quickly it responds with the pre-aggregations and how much space those pre-aggregations take up, we can totally do that. I think the short answer is: a lot.
C
A bit of everything, really, mostly without the pre-aggregations. But I wasn't hitting any space limits; I was hitting processing limits.
A
Yeah, because I think it'll be important to figure out whether this is something we can actually even offer in a shared-pool environment. This would likely also determine how we structure Cube: whether we need specific Cube instances dedicated to namespaces or things like that, similar to how we are doing the Jitsu tables.
C
My gut feeling is a shared pool is probably fine, depending on how big your deployment is. That's my guess right now.
A
Right, yeah. What I'm trying to say is it would be good to see how Cube scales, so that we can scale the cluster or clusters commensurate with how Cube needs to scale, but yeah, it should be fine. So I guess, Tim, to answer your question of whether sessions through Cube would be usable on a big data set: signs point to yes, but we're still working to fully validate that, so we can answer with higher confidence.
C
They do suggest that in production you end up using things like Redis Sentinel or a Redis pool... or actually, no, they're replacing Redis with something called Cube Store, which I assume scales horizontally. But yeah, there are a lot of unknowns here.
A
If, for example, without naming names, large customers are going to require their own 192-gig machine to run Cube Store on, or something like that, we just need to know how that'll work.
C
That's a definite possibility. If you're doing big queries that aren't pre-aggregated, they have to be loaded into memory, and that's not cheap.
B
My gut feeling tells me that an aggregated version of exactly that kind of data would help us earlier rather than later: basically crunching it as soon as the events are coming in and checking up on those.
A
Because the alternative is that, instead of storing it in memory, we have separate ETLs that pre-process it, or we can queue that work so that it eventually gets crunched. It's more manual control on our end, but there are ways around it, so we can keep the resources a bit more constrained if that's a major concern. But we'll still have to answer those questions.
C
And you can set up Cube to do the pre-aggregations on a separate thread, process, even machine, I think. I don't know the details, but if you can offload that into some background job, then at least we're not hogging too many resources on one single shared machine.
B
Yeah, my fear would be that if we do both in a full production-size version, meaning pre-aggregation plus the views on sessions, this becomes super heavy during processing time. If you do this when the events come in, it's like: oh, that's my start, that's my end, let's write this to another table. The only problem you'd have there is that you most probably get sliding schemas over time, because you figure out: hey, we need something else on the session table, and you can't do this. Well, you can, but it's another type of lift, basically going back in time, if we figure out at some point that for a session we also want to know, say, the temperature at that location and write it into that table too.
B
So my idea would be that there is basically a cron job, a classic one, going around all the time, looking at the events that came in and checking whether, for a given user, any event arrived within the last 30 minutes. If not, okay, then we can wrap up that session, write an entry to a session table, and say: okay, that was the starting event we tracked, it was at 12 a.m., and the session ended at 12:35 a.m.
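A rough sketch of that cron idea, not an agreed design: a scheduled job that wraps up sessions for users who have been idle 30+ minutes. The `pending_events` staging table and the @clickhouse/client usage are assumptions for illustration:

```js
// Hypothetical cron-style sessionizer: close out any session whose user has
// been idle for 30+ minutes, recording its start and end.
import { createClient } from '@clickhouse/client';

const clickhouse = createClient({ url: 'http://localhost:8123' });

// Intended to run on a schedule, e.g. every minute.
// Assumes `pending_events` holds only events not yet rolled into a session.
export async function rollUpIdleSessions() {
  await clickhouse.command({
    query: `
      INSERT INTO sessions (anonymous_id, session_start, session_end)
      SELECT
        anonymous_id,
        min(timestamp) AS session_start,
        max(timestamp) AS session_end
      FROM pending_events
      GROUP BY anonymous_id
      HAVING max(timestamp) < now() - INTERVAL 30 MINUTE
    `,
  });
  // A real job would also delete or mark the rolled-up rows; omitted here.
}
```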
B
They do it by writing into Redis and having basically a session storage. We could also write a timeout key over there and say: okay, the last event for user XYZ came in at this time. Then we basically have a checker against this Redis to see: okay, we didn't receive any more events for 30 minutes, good, let's wrap this one up.
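A hedged sketch of that Redis variant, assuming node-redis and an illustrative key scheme; the 30-minute TTL mirrors the inactivity window discussed above:

```js
// Hypothetical Redis-backed session window: each incoming event refreshes a
// per-user key with a 30-minute TTL; once the key expires, the user has been
// idle for 30+ minutes and the session can be wrapped up.
import { createClient } from 'redis';

const redis = createClient();
await redis.connect();

// Call on every tracked event.
export async function touchSession(anonymousId, eventTime) {
  await redis.set(
    `session:last-event:${anonymousId}`,
    eventTime.toISOString(),
    { EX: 30 * 60 }, // sliding 30-minute inactivity window
  );
}

// A checker can poll this (or subscribe to keyspace expiry notifications)
// to decide when to write the finished session to the sessions table.
export async function isSessionOpen(anonymousId) {
  return (await redis.exists(`session:last-event:${anonymousId}`)) === 1;
}
```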
C
Okay, and in theory that should be something we can do with pre-aggregations in Cube. But, like you say, if we're doing that and we're doing a Jitsu replacement, it becomes potentially heavy. It depends how heavy the pre-aggregation process is; it doesn't seem massive, but it depends what it is you're pre-aggregating, I guess.
A
Yeah, so I can start measuring that. I'm going to set up a separate cluster to experiment with it and see what the machine's doing, just to get a better look into that, so I'll take that over this week. Okay, were you okay with that answer, Tim?
B
Let's at least keep in mind that we could throw away Jitsu sooner rather than later, and if we do, we would do the sessionization there. So I don't want to go completely over the top with anything in Cube. I think it's totally fine for now to do pre-aggregation; that's something we'll need anyhow. And if we have this view in Cube for sessions and we get it working fast, rather than it taking a couple of months, then let's do that too. But keep in mind that, if we see we need some more work on top of this, then let's figure out the Jitsu plan again. What I will do now: I will write down everything that I believe we would do or need in a Jitsu replacement, an own self-built thing. It's not only just sessions; it's also the data enrichment that we would have better control over. One thing I definitely can see, in all that we do in this post-aggregation where we would basically combine data for sessions, is that there is tons of ML stuff we can do around funnel analysis and funnel discovery, basically crunching down data that would be way too heavy to process at a later point. So anything we can do there will definitely give us an advantage earlier rather than later. But it boils down to a strategic decision: when do we start with the Jitsu replacement?
B
There's simply the ask, and the push, from a couple of people, including Sid, to see whether this is a component that we want to rely on for a big, long-term future. I personally still see Jitsu as something that gets us to market super quickly by setting up a couple of things and having tracking, which is super boring to build; Jitsu also has those scripts for data enrichment and stuff like that, so there are a couple of things we could do there as well. But I simply want to create an epic now, write down everything that we need, and most probably also do some t-shirt sizing, basically giving an estimate and saying: okay, if we want to replace Jitsu, we need to invest five months of work.
A
Cool. Pre-aggregations for X-active-users, I believe, would be the same; sorry, I'm jumping around the agenda. So Cube has that reference doc, and it does include a number of additional metrics, like number of events per session, bounce rate, things like that. I think some of those will lend themselves to the dashboard, at least from what was initially presented in the vision proof of concept. So I don't know if we want to discuss this synchronously right now or give it some thought later.
A
But I was just posing the question of what, from the sessions data, we actually want to surface in the audience dashboard. For reference, we have the visual from the vision proof of concept: users, new versus returning, as single-stat components, which would effectively be new versus returning sessions; the number of total sessions, which I think is a given; an average session duration; and an average sessions per user. Are we okay with that as the initial offering? Let's start with that question.
D
That sounds good to me. The metrics you just enumerated, Dennis, are the ones people care about when they think about how users engage with their product or application, and they're a common enough bar that they're a good place for us to start from; people can customize on top of them from there.
A
That's good, because I think bounce rate and those other things are nice to have, but I don't want to scope-creep right now. So if we're okay with the first four, new versus returning sessions, number of total sessions, average session duration, and average sessions per user, then I think we can start with those and go from there. Next... oh, it's in the...
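To make those four metrics concrete, a sketch of how they might be modeled as Cube measures and dimensions, assuming a sessions table that also carries `session_end` and a per-user `session_rank` (a ROW_NUMBER over each user's sessions); all names are hypothetical:

```js
// Hypothetical measures for the four agreed metrics.
cube(`SessionMetrics`, {
  sql: `SELECT * FROM sessions`,

  measures: {
    // Number of total sessions.
    count: { type: `count` },
    // Average session duration, in seconds.
    averageDurationSeconds: {
      sql: `dateDiff('second', session_start, session_end)`,
      type: `avg`,
    },
    usersCount: { sql: `anonymous_id`, type: `countDistinct` },
    // Average sessions per user, as a derived measure.
    averageSessionsPerUser: {
      sql: `${count} / NULLIF(${usersCount}, 0)`,
      type: `number`,
    },
  },

  dimensions: {
    // New versus returning: a user's first-ever session counts as "New".
    sessionType: {
      type: `string`,
      case: {
        when: [{ sql: `session_rank = 1`, label: `New` }],
        else: { label: `Returning` },
      },
    },
  },
});
```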
C
Sorry, okay. So I can start working on that over the next couple of days and see if we can overcome the issue with ClickHouse's LAG. Looking at the thing Tim sent me, I think it should be relatively easy.
A
It's always a fun time manually verifying the data is as it should be, but your effort there is appreciated. I just noticed some topics, just to go through the agenda. I called this out earlier: I'm quite curious about the performance impact, as we've discussed, of Cube pre-aggregations, as well as the views, and how that will scale per namespace, per project, as those namespaces inevitably collect more and more events. I'll start setting up a cluster, separate from our main sandbox cluster, to start getting some visibility into that. Tim?
C
Okay, I haven't confirmed it myself, but there are lengthy things in the Cube documentation about how to implement DAU and WAU, and how specifically, so based on that I would think it can be done. Having a look at the SQL measures that they provide, that's what I'm expecting to see. So I don't want to say yes definitely, but yeah, probably.
A
There are certainly ways to manually query for that, but there's already Cube documentation for it. It's yet another view, another cube; I think it's called a rolling window attribute, where you can define for what intervals you want to define your active users. So if you can think of more funny-sounding acronyms, I'm sure it can support them.
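For reference, a sketch of that rolling-window attribute as Cube documents it for active users, with illustrative cube and column names:

```js
// Hypothetical active-users cube using Cube's rollingWindow measure
// attribute for daily / weekly / monthly active users.
cube(`ActiveUsers`, {
  sql: `SELECT anonymous_id, timestamp FROM analytics.events`,

  measures: {
    dailyActiveUsers: {
      sql: `anonymous_id`,
      type: `countDistinct`,
      rollingWindow: { trailing: `1 day`, offset: `start` },
    },
    weeklyActiveUsers: {
      sql: `anonymous_id`,
      type: `countDistinct`,
      rollingWindow: { trailing: `7 day`, offset: `start` },
    },
    monthlyActiveUsers: {
      sql: `anonymous_id`,
      type: `countDistinct`,
      rollingWindow: { trailing: `30 day`, offset: `start` },
    },
  },

  dimensions: {
    timestamp: { sql: `timestamp`, type: `time` },
  },
});
```

Queried with the `timestamp` time dimension, each measure then reports the distinct users seen in the trailing window ending at each point on the time axis.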
A
Well, the pre-aggregation is just storing results, storing the result of the query. But I think you can run pre-aggregations on top: cubes basically create a result set, right, and then you can run pre-aggregations on that. So you could use both, and I think we would actually want to use both.
C
Yeah, the link that I've just put in the agenda, for daily, weekly and monthly active users, doesn't use pre-aggregations at all; it's just how you would query them, assuming you didn't need them. So yeah, you're right, there's a lot of crossover there.
A
That's a good question. We have an analytics stack repo; I can put it there for now until we have a better place for it. Actually, you'd want something shared between the analytics stack and the dev kit, because the analytics stack is the production version and the dev kit is the dev version, so I think we'd want a separate one. I know we created one before, Tim, in the analytics section space.
B
I thought of storing just the Cube files. We could also simply go the way of doing it first in the dev kit, in those iterations, and as soon as we're happy with the schema, for example, we take it, copy it over to our bigger stack, and then do more work in that kit. A manual process, so that we don't have one thing where rolling something out to the dev kit could also break the production version.
A
I like that idea: starting in the dev kit and graduating to the analytics stack once we're happy with it. In the process we'd be testing it anyway on some cluster, in a production-like environment, just to make sure it's good to go.
A
On defining what we want to get out of sessions, I think, as far as remaining items, there are a couple of different things. I'll set up a cluster to start trying to see the performance impact of what Cube's doing, and also just to build my own understanding of what all this fun Cube stuff is.
A
At the same time, Max is investigating going from pre-aggregations to setting up cubes, correcting the queries to make sure they're compatible with ClickHouse, then seeing which tool is best for our case here, and how much work is involved to actually surface the session data we want to present on the dashboard.
A
On top of all of that, we'll get a bunch of different issues set up. I'll transfer this back into a single source of truth, and I think we're good to go as far as this session is concerned, unless we have anything else we want to cover.
B
Yeah, I will update this Jitsu replacement epic with more information and priorities, and also post in the analytics section Slack channel as soon as I have some info in there.
A
Yeah, and I think it's worth noting that we shouldn't worry about user-defined Cube schemas for now; this is purely for the internal preview and pre-built dashboards. This can get very complicated very quickly, and I just want to make sure we're approaching it from a more limited scope here.
A
Yeah, sounds good to me. Awesome. With that, I wish everyone a good evening, afternoon, morning, whatever it is in your time zone...
A
Time is irrelevant at this point, but yeah, have a good whatever, and see everyone next time.