From YouTube: 2023-05-02 Product Analytics Group Sync
A
So hopefully everyone can see my screen. Let's get into the open issues. We've got a lot here, so we'll just go through the ones that are assigned. Product analytics: designer outputs incorrect visualization definition. Rob, is this one actively being worked on, or should we...
C
It's not actively being worked on. No, it's something that just needs to be checked when we're doing the save demo work, to see if it is actually a problem or not.

A
Got it.
A
I think we can just go ahead and get onto the workflow issues. So we have an issue in design, and Kevin just returned, so hopefully we can get some progress there. And then the live cluster is kind of on hold right now. Cube.js is not running in production mode, and so there's a little bit more we need to do there to get that ready, so I've put that on hold for now, unless we're actually a little bit further along with the Snowplow switchover issues, which I don't believe we are; that's still in progress.
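For context on the production-mode gap mentioned above: per Cube's documentation, dev mode is toggled with the `CUBEJS_DEV_MODE` environment variable, and a production deployment also requires an API secret. A minimal hedged sketch; the variable names are Cube's, but the values are placeholders, not the team's actual configuration:

```shell
# Hedged sketch: disable Cube.js dev mode for a production-style deployment.
# CUBEJS_DEV_MODE and CUBEJS_API_SECRET are documented Cube env vars;
# the secret value below is a placeholder only.
export CUBEJS_DEV_MODE=false
export CUBEJS_API_SECRET="replace-with-a-generated-secret"
echo "CUBEJS_DEV_MODE=$CUBEJS_DEV_MODE"
```

In practice these would be set in the deployment environment (e.g. the Helm chart) rather than an interactive shell.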
A
So that's the current status of the cluster update. Ellen has been working on adding instrumentation details to general settings, or at least been refining it, but we can check in on that async. This one's not really ready for development, because it's blocked by the cluster update, but yeah, I'll update that issue and move it back.
A
But we've got fetching dashboard configurations from the back end, which hasn't been started yet from Beyond, and he's out, so we'll just keep going. Workflow in dev: tracking usage of product analytics, weekly active users, and products making data. Max, anything to report here?
D
Yeah, I've sort of rewritten the implementation plan after discussions with James and Busty. We're going to focus on Redis counters rather than database counting, which is fine. So I've just sort of started work on that today, as a way to ease myself back in and remember what it is that I do for a job.
A
Awesome, cool. I know we've got improving management of secrets within the stack. Rob, anything to mention here?
C
Hard pivot on this, because I wasn't in the last one; it was in the APAC time. So there's a hard pivot on this: we were originally going to use a secrets library, but Dennis found out that infra now has a HashiCorp Vault setup for GitLab, so we'd much rather make use of the existing infrastructure if we can. It's also tied into Okta, so we can provide team member access and all that sort of stuff securely.
C
So I'm pivoting on that. I've got an MR out to get us group access.
C
I've started working on an MR to add that to the infrastructure-related repo. I had some conversations with infra around that when I was last in, on Friday, so I've got progress forward, and I'm working on the MR to make that conversion. So it's in progress. Slow progress, but progress.
A
Progress nonetheless. We've got putting an add-dashboard button on the analytics listing page, which is effectively bringing in the dashboard designer experience, from Yarn, but he's out, so we'll just continue onwards. We've got updating the product analytics backend for Snowplow compatibility. Alan, anything to update us on here?
E
I've got a draft MR for that. I was waiting for Max's MR and my BYOC MR to get merged, and now they are merged, so I'll be moving that forward.
A
Awesome, cool. And next up, you've got improving the values YAML usage by pulling the environment-specific variables out of the deployment YAMLs.
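To illustrate the pattern being described (not the team's actual charts): with Helm, environment-specific settings can live in per-environment values files layered over the base values at deploy time. All file names and keys below are hypothetical:

```shell
# Hedged sketch: pull environment-specific variables out of the base values
# into a per-environment overlay file. Names and keys are illustrative only.
cat > values.yaml <<'EOF'
# base values: environment-agnostic defaults
replicaCount: 1
image:
  repository: registry.example.com/analytics   # hypothetical image
EOF
cat > values-production.yaml <<'EOF'
# production-only overrides, kept out of the base deployment values
replicaCount: 3
env:
  CUBEJS_DEV_MODE: "false"
EOF
# At deploy time the overlay is layered over the base, e.g.:
#   helm upgrade analytics ./chart -f values.yaml -f values-production.yaml
echo "wrote values.yaml and values-production.yaml"
```

The benefit is that the deployment manifests stay identical across environments and only the overlay file changes.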
B
A
Cool, and you've also got the next one in review here, improving deployment of persistent volume claims using StatefulSets.
B
Yeah, so when I started working on it, the idea was that, because the PVC was already bound, it was giving an error on rolling update. That issue is fixed, but it's under review; it's in the second level, so it should be fixed soon. But then we noticed another issue while working on it, which is the need to increase the replicas of the StatefulSets, but I don't think we will be handling that one in this issue.
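For readers following along, the PVC and rolling-update interaction above comes from how StatefulSets provision storage: each replica gets its claim from `volumeClaimTemplates`, and an already-bound claim is reused across updates rather than re-provisioned. A minimal illustrative manifest; every name, image, and size here is hypothetical, not the team's actual chart:

```shell
# Hedged sketch: write out a minimal StatefulSet whose storage comes from
# volumeClaimTemplates. All names/images/sizes below are illustrative only.
cat > statefulset-sketch.yaml <<'EOF'
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: analytics-db            # hypothetical name
spec:
  serviceName: analytics-db
  replicas: 1
  selector:
    matchLabels:
      app: analytics-db
  template:
    metadata:
      labels:
        app: analytics-db
    spec:
      containers:
        - name: db
          image: clickhouse/clickhouse-server:latest   # illustrative image
          volumeMounts:
            - name: data
              mountPath: /var/lib/clickhouse
  volumeClaimTemplates:          # one PVC per replica, reused across updates
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi
EOF
echo "wrote statefulset-sketch.yaml"
```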
A
Cool, sounds good. And then, Rob, we've got the fix for the conditioning parts of persistent volume claims, which I think is kind of superseded by his.
C
Yeah, very much so. I'm just testing his issue right now for a final, like, hard manual test, testing everything, and I get a strong feeling that I can actually close my issue in favor of his in its entirety. So I'm just checking that as well, and if so, I will close it out.
A
Hopefully that will also make it Snowplow compatible, but yeah, we can check in on that. Owen's also working on deep-link URLs presented as top pages in the behavior dashboards, kind of just refining a little bit of our dashboard behaviors there. And then I've got an issue here to improve credential usage in ClickHouse and Cube; that merge request is in review, so hopefully it gets us to the point where we can start to actually fold this into the secrets work that Rob is working on as well. But it's just improvements across the board for our chart. Cool, moving on to verification now, and you've got these two issues.
E
Sorry, I was just looking for the unmute there. No worries. Yeah, both of those are just waiting behind the Snowplow flag, so whenever we're ready to enable that and verify in production, then we can close these.
A
Sounds good. I think we've got one remaining front-end issue, as far as actually retrieving those Snowplow-specific dashboards, and then we should be able to actually start verifying all this. Once we're at that point, I'll go ahead and update the clusters, barring the Cube.js production deployment. Cool. Then, looking at our bookmark here, I think our last one was presenting calls to action to configure dashboards.
A
So we've got quite a few issues since then. You know, it's not here, but we have removed the product-analytics-specific dashboard listing; everything points to the shared dashboard now, so that's great. Max, we've got the initialization working with the Snowplow configurator service. Anything you wanted to call out here?
D
A
This is basically just when you go through the manual kind of initialization process, not connected to the...
B
A
...making sure that the ClickHouse database and the tables are set up. Or, sorry, it's now the table that is set up, for the project. So yes, thank you, Dennis. Yes, no worries. Next up is determining where to put the instrumentation details after sharing the dashboard listing. Kevin, I'm not sure if you're on the call, but we're going to put the instrumentation details in the Settings > General page.
A
For now. There was a discussion about the amount of information we're adding there already, and whether or not it makes sense to set up a configuration screen, but that resolved with, you know, at least it doesn't block us from onboarding customers. So we'll just continue to move forward with that.
A
Lorena has documented the available existing visualization options, so I think we can just go ahead there. We've made some improvements in terms of indicating to the user that they're in edit mode, so updating the handles and other indicators there. Next, you've got an issue for dropping the ClickHouse connection string column. Anything you wanted to call out here?
B
Not much. Just one thing, maybe: I created a follow-up ticket for the next 16.1 milestone, since we need to remove the ignore rule, but the column is dropped already. Yeah.
A
Cool, awesome. Max is back from PTO, so welcome back, and you've got the updating of the Audience and Behavior dashboards to call from the Snowplow-specific cubes. Is there anything? Yeah.
D
F
Hey everybody. Q2 for the fiscal year kicked off officially yesterday, so on the product side of the house we do have one key result applied to the "continue to be GitHub" objective, I think, is what it's called; they probably named it the wrong thing. But it's specifically to get to beta during Q2.
F
So that's going to be kind of the overriding objective for us: to continue to validate through the first users. We have the first customer who's ready to go as soon as the stack is ready to onboard them, which is super exciting. And then six weeks after that (we've kind of time-bound it), or once we get feedback from five users...
F
...whichever comes first, we'll be moving into our beta phase. From a functionality perspective it's very much the same; we're just hoping to validate from users that they're getting value out of the existing functionality, or what other problems they would like to solve with product analytics, and to validate that the stack is ready to go. You can click through to both the individual KR and the broader list from the product side. And, Dennis...
F
...you and I talked briefly, I think, in an issue. Is there something coming from the engineering side for product analytics as well, or product intelligence, that we should be aware of?
A
Yeah, I don't know if it's going to warrant a new KR, but I'm trying to kind of formulate the plan for this. There's a lot of scaling we need to do, and requirements that are coming out of my findings from, like, getting Cube to run in production mode, and ClickHouse, and separating that out.
A
So overall we want to start to have an actual (I think our term for it at GitLab is an "architecture blueprint") plan, in terms of what we're going to need to actually be able to effectively run this in an official beta phase. And I'll start to... I mean, actually, I was about to drop a link in the group channel.
A
It kind of clues people in on the requirements for Cube, but this may inform our... basically, we want to achieve a stable architecture, as far as our goals for this quarter, to help us get to beta. But, as you'll see soon...
A
F
I know the ClickHouse working group is doing a lot of great work, and it looked like there was some suggestion last week to use smaller instance sizes for ClickHouse, which might make it a little bit easier for smaller customers to stand up and run a product analytics backend. Theoretically that could help us in the long term, with more people being able to use it for self-managed. But we still want to explore the option of providing it as a managed service, even for self-managed folks.
A
Yeah, that was more around, and I'm actually leading that investigation, trying to see how small a ClickHouse instance we can use: basically, what is the minimum amount of resources required to be able to run the analytics stack. Part of that is, of course, ClickHouse, and so part of that testing is going to be: if we just limit ClickHouse to four gigs of memory, for example, how much data can we support?
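As a concrete illustration of the kind of cap being discussed: ClickHouse exposes a server-wide `max_server_memory_usage` setting (in bytes) that can be dropped into a `config.d` override. This is a hedged sketch, not the team's actual test configuration; the file path is illustrative:

```shell
# Hedged sketch: cap ClickHouse server memory at 4 GiB via a config.d override.
# max_server_memory_usage is a documented ClickHouse server setting (bytes);
# the file location below is illustrative.
mkdir -p config.d
cat > config.d/memory-cap.xml <<'EOF'
<clickhouse>
    <!-- 4 GiB = 4 * 1024^3 bytes -->
    <max_server_memory_usage>4294967296</max_server_memory_usage>
</clickhouse>
EOF
echo "wrote config.d/memory-cap.xml"
```

The interesting experiment is then exactly what's described above: load data and run dashboard queries against the capped server and see where it falls over.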
A
How much can we query? That gets a little bit more complicated, especially, as I'm discovering, with Cube, because it's got its own... it can run in a distributed manner, but it has its own requirements for being able to scan a number of rows.
A
So, all that to say: hopefully we can figure out a way to have a kind of minimal setup for self-managed customers. But at the same time, if they're wanting to scale this, it might require quite a bit of resources on their own. We'll try to figure out what the MVC is for them.
A
I'm about to drop a link to Cube's documentation. It's got a whole production architecture, which is good, but its baseline setup is asking for, like, four to eight CPUs and eight gigabytes of memory. We've been able to run everything so far with minimal requirements, though, so it's just a matter of outlining what can be achieved with minimal specs, and then what you should be running it at.
A
But, you know, it's part of the process of us figuring it out, so yeah.
F
Giving folks plenty of time to click through and read, if they were ignoring our conversation. So, any questions back to the group on what's in the KR? If you haven't had a chance to read it, feel free to do so async, and I'll answer questions either in the document, in an issue, or in Slack.
F
Cool. The last thing I'll add, which I didn't add to the agenda, was the weekly update that I put out last week; I'm planning on doing that every week for us in product analytics. Basically: what's our active user count, what did we learn from users last week, and how that shaped the roadmap for us. If I can get thumbs-up on those to say "hey, I'm looking at it and I'm finding value in it," that'll be great.
F
If we get a couple of weeks of no thumbs, then I can discontinue it if we're not finding value in it, and it can be more of a self-serve thing rather than me pushing it into the channel; that data should always be available somewhere.
A
Then we're at the end of the agenda, unless there's anyone... anything we'd like to show-and-tell, or anything like that.