From YouTube: 2023-09-26 Product Analytics Group Sync
A
Hello, everyone, and welcome to the September 26th Product Analytics group sync. Given the small number of attendees (or the intimacy of the meeting), I think we're going to just go through and do a quick sync stand-up, then a little review of the board, and then we'll just go from there.
A
So I'm happy to start. I was just clearing an issue. Y'all remember the bad message in Kafka backing things up? That happened again, though Vector isn't getting stuck in quite the same way. The previous problem was that ClickHouse didn't know how to handle it, because it had a Kafka engine that was pulling in a message and then trying to insert it.
A
Now it's happening the other way around: Vector is trying to insert into ClickHouse, but it's having an issue with one of the values, and that got backed up. The current setup is that there's an enriched events topic, which all the enriched events go into, and that's partitioned three ways. We have three Kafka brokers, and the idea is that we split the work so that the three Vector aggregators can then, like...
A
You can only have... sorry, the quick Kafka architecture overview is: you can only have one consumer per partition of a topic. The way you parallelize that work is that you split a given topic into N partitions, and then you have a number of consumers, which in this case are Vector aggregators, to actually consume each partition. But for some reason (I'm still investigating this)...
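The one-consumer-per-partition rule described here can be sketched as a small simulation (plain Python, not a real Kafka client; the aggregator names are illustrative only):

```python
def assign_partitions(num_partitions, consumers):
    """Round-robin assignment illustrating Kafka's rule that each
    partition of a topic is consumed by at most one member of a
    consumer group, so parallelism is capped at the partition count."""
    assignment = {c: [] for c in consumers}
    for p in range(num_partitions):
        assignment[consumers[p % len(consumers)]].append(p)
    return assignment

# Three partitions across three aggregators: one partition each.
print(assign_partitions(3, ["vector-0", "vector-1", "vector-2"]))
```

With more partitions than consumers, each consumer simply owns several partitions; with fewer, some consumers sit idle, which is why the partition count sets the parallelism ceiling.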
A
They
all
like
one
got
stuck,
and
then
they
I
think
the
other
aggregators
tried
to
help
out
and
they
all
got
stuck
and
then
basically,
it
stopped
concealing
altogether
I'd
like
to
know
if
we
can
kind
of
keep
them
specifically
assigned
to
each
each
partition.
But
basically
things
got
backed
up.
Messages
got
queued
up
in
the
hundreds
of
thousands
and
basically,
if
nothing
had
happened,
Kafka
would
eventually
run
out
of
disk
space
events
stop
coming
in
Kafka
crashes.
A
What I was focused on... the good news is that, despite having a backlog of, I think, 1.3 million events, as soon as I was able to grab the bad messages (I'm going to dig into those later), clear them out of the queue, and restart the Vector aggregators, it cleared everything out in the span of a couple of minutes.
A
So
the
good
thing
is
is
that
the
vector
agents
have
enough
horsepower
in
them
to
like
clear
things
out
when
things
get
backed
up.
So
that's
nice,
that's
a
new
learning,
but
bad
news
is
it's
like.
Why
is
it
like?
Vector
needs
to
be
able
to
say,
hey
I've
tried
this
enough
times
and
we
need
to
just
get
rid
of
it
or
do
something
better
with
with
that,
so
that
problem
is
still
with
us,
but
hopefully
we
can
kind
of
dig
into
that
anyways.
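The "tried this enough times, set it aside" behavior wished for here is essentially a bounded retry with a dead-letter queue. A generic sketch of the idea (this is not Vector's actual configuration; the sink and names are hypothetical):

```python
def drain_with_dead_letter(messages, sink, max_attempts=3):
    """Retry each message a bounded number of times; messages that
    still fail are set aside in a dead-letter list instead of
    blocking the whole pipeline behind one bad value."""
    dead_letters = []
    for msg in messages:
        for attempt in range(1, max_attempts + 1):
            try:
                sink(msg)
                break
            except ValueError:
                if attempt == max_attempts:
                    dead_letters.append(msg)
    return dead_letters

# A sink that rejects one malformed value keeps the rest flowing.
delivered = []
def sink(msg):
    if msg == "bad":
        raise ValueError("malformed event")
    delivered.append(msg)

dead = drain_with_dead_letter(["a", "bad", "b"], sink)
print(dead)  # ['bad']
```

The point is that the bad message ends up somewhere inspectable while the good messages behind it still get delivered, instead of the queue backing up by the hundreds of thousands.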
A
A few of us have signed up to help out, but I want to make sure that the context is set so that we have clearly defined outcomes before we throw people on that.
A
So
that's
my
next
focus
and
then
after
that,
I've
been
Gathering
key
metrics,
so,
as
you'll
see
in
the
agenda
max
is
working
on
working
with
someone
from
the
someone
on
the
reliability
team
or
more
specifically,
observability
I'm,
just
trying
to
refrain
from
using
that
team
name,
because
we
have
an
observability
team
on
our
side
but
to
set
up
logging
and
monitoring,
and
so
that's
just
getting
the
metrics
over
from
the
services
over
there.
But
I
I
want
to
come
up
with
the
key
metrics
of
things.
A
We
need
to
actually
pay
attention
to
to
each
service
so
that
we
can
actually
use
those
metrics
to
actually
pay
attention
to
what
we
need
to
and
set
up
a
learning
metrics
based
on
that.
So
that
was
a
really
long,
stand-up
update!
That's
what
I've
been
focusing
on!
Who
wants
to
go
next.
B
I'll go next, but I don't have anything assigned to me on the board. I've been working on getting a little bit ahead on 16.6 planning, which is really just in time.
B
So last week we hit a high on users viewing dashboards, which is exciting, so I'll share that in the Slack channel for wider distribution. And like I mentioned before the recording started, I'm getting together some content, some things that we've created and some things we haven't created yet, for internal reviews, getting ready for GA. I know we're not yet to Beta, but I'm trying to get ahead of all the internal approvals and all of the various work streams, fulfillment and revenue and all of those places, that will need to take place as we consider pricing and packaging.
C
Oh, the things that I've got on the go: I've got the stacked mode in charts issue. That's been merged now; I'm just waiting on a new version of the UI to flow through to the application to do some verification on that. So that's nice: we can actually read our legends now. We've also got the error state in the new visualizations drawer.
C
So if the visualizations fail to load, that drawer displays an actual nice message. That one's just awaiting maintainer review. And then I've got this other bug which I haven't got very far with; I'm still trying to figure out what to actually do when you've got a whole big block of legends that's bigger than the dashboard panel it's inside of. There isn't really a nice way to handle that, but I've just been trying to figure out what a nice solution might look like. And then there are a couple of things that aren't on the board.
C
We do have issues for them; I think renovate bot just doesn't tag them, however. Got it, yeah, right, yeah, cool. I can have a look and see what would be involved in getting it on the board.
A
James, what do you think about... So that was interesting, about how we have an uptick in usage. I had a sneaking suspicion that it has a lot to do with the GitLab instrumentation being live in the dashboard. One thing for the team to be aware of as well: we definitely want to start focusing on better supporting custom events, and I think what would be really interesting is if we can dogfood that.
A
If we were the first to create our own custom dashboard, to say what's interesting to us: to compare numbers, but also just to be able to say, "hey, here's what we're looking at as far as usage for GitLab.com." And we can start to see, within product analytics (we're gonna get real meta here), who our top users for product analytics are, for example.
B
Yeah, that'd be interesting. Yes, that's definitely where we want to go. Max and I... sorry, not Max. Rob and I started talking about that a little bit yesterday, and tried to figure out a little bit this morning what the solution space looks like for these self-describing events, and how we are going to query those. All of the description about them is buried in the JSON; it could get to be a very expensive query.
B
We kind of brainstormed some solutions there.
B
So the "how many people are looking at a dashboard" one might need a little bit of work, a little bit of a code change, but it should be a great first candidate for that. And even if that was all it was, that'd still be a great Inception kind of moment for anyone curious about product analytics: the chart you're looking at is how many people look at product analytics. Right, right.
A
Yeah, it'll be interesting, because we lost that flexibility moving to Snowplow. I don't know if, with self-describing events... and to be fair, I haven't looked at this in a while, but I don't know if we already have a de facto custom event key and value that we already extract. But at least going by that convention, we can say: hey, if you're tracking a custom event, you at least have something already extracted out of the schema.
A
So that query is faster, because having to query a JSON blob as a string is not going to work; we're already having to investigate query performance with full scans right now. Maybe another ClickHouse cluster can scale that high, but it's definitely not an efficient use of our credits. But again, we want to figure out a good balance: you can collect whatever you want, but querying it may have some limits.
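The cost difference being described, filtering inside a JSON blob versus filtering on a field promoted to its own column at ingest time, can be illustrated with a toy in-memory version (field names like `payload` and `event_key` are hypothetical, not the real schema):

```python
import json

# Hypothetical rows: the event payload is stored as a JSON string.
rows = [{"payload": json.dumps({"event_key": "click", "n": i})} for i in range(4)]
rows.append({"payload": json.dumps({"event_key": "scroll", "n": 99})})

# Filtering inside the blob: every row must be parsed, i.e. a full scan.
clicks_blob = [r for r in rows if json.loads(r["payload"])["event_key"] == "click"]

# If the key is promoted to its own column at ingest time, the filter
# reads a plain field, which a column store can index and skip over.
with_col = [dict(r, event_key=json.loads(r["payload"])["event_key"]) for r in rows]
clicks_col = [r for r in with_col if r["event_key"] == "click"]

print(len(clicks_blob), len(clicks_col))  # 4 4
```

Both filters return the same rows; the difference is that the blob version pays the parse on every row at query time, which is where the full-scan cost comes from.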
A
But
then,
if
you
want
to
really
collect
certain
pieces
of
information,
you
have
to
use
these
fields
which
will
be
extracted
and
yeah.
So
but
yeah
it'll
be
interesting
to
see
what
we
can
do
with
Customer
Events
and
starting
a
show.
You
can
do
with
custom
dashboards
and
getting
people
excited
about
that.
All.
B
Right. For our purposes during Beta and leading up to GA, a boring solution of "here's how you create a custom event; we've predefined it for you" is probably fine, because it gets you something. I think for the use cases we've heard of, like our own, and for those wanting to see click events, that's a great first step. I think you could start to use the visualization designer for a lot of those things. Yeah.
A
We just have to make sure we have a good balance. Let's say, for example, three fields: here's your event type (a click, or whatever you want to call that event), and then here's the key and the value for it. I think that's a good first start, to say: here's what you can reliably expect to be extracted out, so that you can query it with a greater time range or fewer limits than if you were to query a JSON blob, and then just kind of go from there. And then, as long as we document that, if you work within those quote-unquote constraints, you should be able to reliably report on any custom events.
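The three-field convention being floated (event type plus one key/value pair, promoted out of the payload) might look something like this in miniature; all field names here are hypothetical, not an agreed schema:

```python
def normalize_custom_event(raw):
    """Sketch of the convention discussed: every custom event carries
    an event type plus a single key/value pair, promoted to top-level
    fields so they can be queried without parsing the JSON payload."""
    return {
        "event_type": raw.get("event_type", "custom"),
        "event_key": raw.get("key"),
        "event_value": raw.get("value"),
    }

print(normalize_custom_event({"event_type": "click", "key": "cta", "value": "signup"}))
```

Anything outside those three fields would stay in the raw payload, queryable but with the time-range or scan limits mentioned above.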
A
Definitely. I think that's something we'll start to really focus on over the next couple of milestones.
A
Cool. Anything anyone else would like to talk about?
B
I did notice today, while I was doing a demo with a customer (I'm going to try to share that recording later), that the drawer in the dashboard designer is now nice and separated and shows the different types. Super slick; super impressed with that. So thanks to the team for continuing to iterate there and get through those issues. That experience is much better than it was before. Great work. Yeah.
A
Hopefully we can give the visualization designer the same treatment; it'll be nice to have a full end-to-end UI flow for that. I'm gonna start flexing that, but yeah, it's coming along. Awesome, cool. Well, I'll keep it short and sweet and give everyone back 15 minutes. It's good to see everyone. If there's nothing else, have a good rest of your Tuesday, or beginning of your Wednesday, and I'll catch you at the next meeting. Cheers. All right, yeah, bye.