From YouTube: 2023-07-31 Analytics Section Meeting
A
All right, thanks everyone for joining today. This is the Analyze section meeting; it's July 31st, 2023. James, myself, and Tanuja have the first item to talk about: the Q2 OKR review. To review, just as we said, we focused on last quarter's progress and the results we achieved. James, do you want to review yours to start with?
B
Yeah. One of the key results we're measuring that by is getting feedback from five customers who are using product analytics in the experiment phase; that graded out at about 20. We got some great feedback from internal customers, but we haven't gotten to external customers yet. Pricing proposals submitted: that has been done. We're working through figuring out some of the workstreams with Fulfillment so that we can charge for this product as we get closer to GA. And then there's getting feedback from five-plus customers.
B
Sorry, on getting feedback: we got feedback from internal customers, but we didn't have five-plus external customers using product analytics in the experiment phase. We did get a lot of great feedback; the score is 60 because we did receive feedback from three or four teams internally.
B
That gave us a lot of good insights about how they were using it, how they wanted to use it, and some validation of the next iterations for product analytics. But we really wanted to get to external users and didn't quite get there during the quarter. So that's how those scored out for product analytics and for getting product analytics into development of the beta phase.
A
All right, awesome, thanks for that. I will review Tanuja's for the Analytics Instrumentation group as well. Let me share my screen here. For this quarter we had two OKRs for the Analytics Instrumentation group, the first being introducing an easy-to-use approach for instrumenting GitLab that gives us room to grow. This is really referring to the new internal tracking set of APIs that we started developing, and I think the team did a really great job putting these together. We measured that by two different outcomes.
A
Key results: the first being that we wanted to convert at least five existing events to the new tracking API. That was completed successfully; things are working as expected, is my understanding, so great job to everyone there. We also wanted to officially deprecate the existing Redis HLL and Snowplow tracking APIs.
A
We didn't quite get to the point where all of the work needed to say it's officially deprecated was completed, but the team got close. That's something they're going to be finishing, either this milestone or the next, to wrap up some of the things we weren't able to get to due to some of the other scheduled work, from the AI impact as well.
A
That includes some of the infrastructure work we've done with that group. The other key result this group focused on is building a strong foundation for instrumentation of customer apps, and the way we were going to measure against this was providing at least five language-specific SDKs that customers can use to instrument their own apps. That way, customers can really focus on: I have an existing application, and I can use exactly what GitLab has out of the box to go ahead and instrument it.
A
We didn't get all the way to 100 on this one, but we made good progress, completing the Node.js, Browser JS, and Ruby instrumentation SDKs. That gives us a really good foundation to start with, and we're working on finishing up those last two. And then this other one, getting feedback from five-plus customers on these instrumentation SDKs, is highly correlated to what James was talking about: we needed external customers to actually start using them before we could get feedback from them, for the reasons James just talked about.
A
One of the things that jumped out to me in hindsight is that some of these were very highly correlated. Maybe that's a pattern: we need to be more mindful not to pick things where, if we can't do one, we can't do the other, because that made this progress percentage go lower even though it was ultimately the same root cause for both.
C
Just on your point, Sam, about understanding the correlation between objectives and whether they share dependencies: are you saying that we should avoid those, or just be aware of them and be more mindful? I guess, how should this be treated going forward in future quarters?
A
The latter, rather than strictly the former. If we have a good reason to pick two different things that have some correlation, I think that's okay; we can do it on a case-by-case basis, but we should be explicit about being okay with it. I don't know if we necessarily had an explicit conversation about that for this quarter. Maybe we will get to a point in the future where we say we should only pick things that are fully orthogonal to each other, but I think we're fine for now.
C
Yeah, I just wanted to make sure it wasn't that we had picked things that were mutually exclusive or something like that. I think it is useful sometimes if we have a common goal; it's just unfortunate when it blocks us, as in this case, where progress didn't go as far as the stage would have liked, but yeah.
C
I think there's importance in having some alignment, or in them overlapping a little bit, but yeah.
A
Great; that's a good question, though. So let me go ahead and share my screen. This is linked in the agenda if you want to follow along at home as well. This is the draft we've started putting together for the current quarter's OKRs. We're still waiting for the CEO and senior management OKRs to be finalized, but this aligns with what's drafted there. Really, this quarter we want to focus on providing visibility for teams to understand how their apps are performing and being used.
A
This really speaks to the fact that Analyze now includes not only product analytics and analytics instrumentation but also observability. We have a really unique opportunity now to start telling a Better Together story to our customers when they use GitLab and the Analyze stage: are your customers getting the value they expect out of your applications? Are they using the features that you just shipped? Are they not using them? But we can also help them understand: is the technology behaving like you expected?
A
Perhaps the reason no one is using the feature is that it's broken, and you didn't have visibility into that previously. So this is going to be a really powerful story I think we can tell, and it will really help us differentiate against other point products on the market, as well as other platform products, specifically GitHub. That's what the draft OKR is really going to be focused around. To make that a little bit more specific, it is going to be broken into one key result per group. For product analytics:
B
Launch is more of a marketing action or go-to-market motion, so this is: we're going to open it up and make it available, and then deliver a blog post about it. That will be the core of our promotion for the beta, as well as re-engaging with the field team to let them know that customers who are interested in the open beta, or were interested in the experiment, can now opt in on GitLab.com, and then holding some number of customer feedback calls, yet to be set, with experiment participants. A little bit of that is dependent on how many experiment participants we get.
A
Awesome, thanks for that. For the Observability group, this key result is going to be focused on bringing our first iteration of tracing within GitLab to market. The first part of that is to actually ship the GA and make it available within GitLab, similar to what James was talking about.
A
This really reflects more of the technology release, not necessarily a marketing launch and go-to-market set of actions; that's what the next one is really about: generating market awareness for tracing with a blog post, just so that we can start getting more in front of customers and get out there in the conversation so we can drive usage. And then we also want to be dogfooding it by applying it to pipelines.
A
There's some discussion over which project we want to do the dogfooding on specifically, but dogfooding what we've just shipped as GA is really a key activity, because if we at GitLab can use something, that'll help us learn; and conversely, if we at GitLab cannot use what we're shipping, it's not really fair for us to ask customers to use it. So that's what the Observability group's OKR draft is right now. And then for Analytics Instrumentation:
A
We want to enhance our instrumentation offering for internal and external customers, and really this is going to be about a few different things. One, we want to put together a roadmap and do user research around the cross-stage data vision; that's a strategic area we want to take the group in, so we need to start getting more understanding of that problem space specifically, as well as what that plan would potentially look like.
A
Then we also want to continue what we started last quarter by establishing internal events as the primary and trusted source of instrumentation within GitLab. That'll be finishing up some of the things we didn't complete last quarter, as well as some new things. And then finally, we want to make sure our Browser and Ruby SDKs are rock solid. These are the SDKs we will primarily be dogfooding our own products with, and we expect a lot of our customers to start here compared with some of the other SDKs we've shipped.
C
For the product analytics one, I'd posted a comment there. I think beta is definitely ambitious, so that checks off that box, and I call that out specifically because of the challenges we've had with getting out to external customers. I think once we get to experiment, it would be possible to get out to beta, but what really needs to be ironed out with beta is that, theoretically, any number of users can sign up for it or opt into it.
C
We need to figure out what limits we can put in place to help mitigate the fact that we don't have good infrastructure coverage right now; I think that's a truth at this point. So I'm curious to see what we can do to define some limits around that, so that we can be comfortable, or have some moderate sense of confidence, about actually opening up beta for everybody.
C
And yeah, I'm open to ideas from anyone in engineering as well, related to putting limits in place.
B
I'll add that topic to our group call for later this week and link to the limits and abuse issues that are already opened up. Having a health check is a good counter to that really ambitious KR, because we could open it up but not actually have the service available if it's constantly down from abuse, or from people accidentally DDoSing us just through how they use the service, not as malicious actors.
C
But it was about a retrospective on the fact that we had switched to Snowplow and had been working on the SDK. If someone has that link handy, I'd appreciate it if they could add it while I explain. Basically, it's about seeing how we can better collaborate across groups in order to avoid being surprised when we get some of the metrics wrong, and things like that.
A
We'll get some time back. It was great seeing you all; we'll talk soon. See ya.