From YouTube: 2023-09-18 Analytics Section Meeting
B
Yeah, I just wanted to say great job everyone, a couple of huge milestones in the last week. I guess I just need to go on more PTO: first customer onboarded to product analytics, and the .com, gitlab.com.
B
It looks like the number of events is ticking up this morning. That increases the percentage, so great job, everyone. I know it's a huge amount of work to get to this point, and it's great to see from the data that it's going through. So thanks.
C
Yeah, so first of all, awesome. I think this is really a good step forward, especially for testing things, getting feedback, and being able to get a bit more of an understanding of whether our gitlab.com instrumentation is working as expected.
C
I created a Sisense chart just to have a comparison between what we collect with our normal Snowplow, the old Snowplow setup, versus what we can see in our new setup. In the end the numbers should overlap, because we just have the same Snowplow SDK running two times, sending off the same page view twice, for example. So that should give us a number over the next days and weeks to compare against.
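The comparison described here can be sketched as a small script. This is only an illustrative sketch, not the actual chart's logic: the function names, the daily-count input shape, and the 5% tolerance are all assumptions.

```python
# Hypothetical comparison of daily page-view counts from the old
# Snowplow pipeline versus the new setup. Since the same SDK sends
# each page view twice, the two series should roughly overlap.

def daily_drift(old_counts, new_counts):
    """Return {day: relative difference} for days present in both series."""
    drift = {}
    for day, old in old_counts.items():
        new = new_counts.get(day)
        if new is None or old == 0:
            continue  # skip days without a comparable baseline
        drift[day] = (new - old) / old
    return drift

def flag_divergent_days(old_counts, new_counts, tolerance=0.05):
    """Days where the two pipelines disagree by more than `tolerance`."""
    return sorted(
        day for day, d in daily_drift(old_counts, new_counts).items()
        if abs(d) > tolerance
    )
```

A near-zero drift over days and weeks would be the signal that the new instrumentation is working as expected; sustained divergence on particular days would point at a collection gap.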
A
Definitely, I'll check out that chart, and then once we get the 100% going we can have a really accurate count of what's going on, so it'll be good to reconcile that. Again, I mentioned it on Slack, but I'll just say it for the recording, or anyone who missed it: really happy that we got to this point, so congrats, everyone, on the work, and looking forward to making more progress to get to Beta and eventually GA. The journey continues, but we're making some good progress.
A
I can give a little bit of an update along those lines, a quick overview, in a way to think quick on my feet, because I need to provide a good update on what's left to finish up the Experiment release, as James had requested. Not to call you out, but I'm holding myself accountable to it, so this is me practicing for that.
A
Async
update,
basically
so,
what's
exactly
left
to
get
to
experiment,
finish
up
experiment
yet
officially
into
beta
and
then
eventually
G8
as
well.
This
speaks
to
obviously,
of
course,
our
product
development
in
terms
of
what
we're
doing
and
polishing
and
building
a
pond
with
with
that
Designer
visualization
designer,
but
also
what
we
actually
do
practically
to
kind
of
get
this
going
and
then,
as
I
spoke
about
that
I
just
thought
of
another
thing.
A
So
just
Excuse
me
while
I
as
everyone
knows,
I'm
super
good
at
typing
and
talking
at
the
same
time,
so
I'm
just
going
to
type
first
and
then
talk
later
so
a
couple
things.
Definitely
on
the
infrastructure
side
to
pay
attention
to
logging
and
monitoring
is
something
that
is
both
useful
for
us
to
better
understand
how
everything's
working
health
and
Service
status
wise,
and
so
we
can
have
that
feeds
into
on
one
hand,
cost
and
performance
analysis
to
be
able
to
figure
out.
How
much
does
it
cost
to
run
the
infrastructure?
A
How
can
we
can
we
optimize
that
in
any
way
not
to
over
index
on
it,
but
basically
is?
A
Runways
and
infrastructure
effort
to
help
kind
of
streamline
that
I've
spoke
to
John
Jarvis
who's,
a
staff
SRE
as
well
to
kind
of
see
what
our
options
are.
What
is
the
easiest
thing
for
us
to
do
to
get
logging
and
monitoring
going
as
we
are
now
versus
in
the
future
when
we're
in
an
instruction
managed
environment?
So
all
these
are
related.
A
So
that's
a
long
way
to
say
is
that
that
visibility
into
how
everything's
performing
will
be
useful,
but
all
for
us,
but
also
required
for
us
to
really
get
approved
to
be
running
in
production
in
a
general
availability
capacity.
It's
also
useful
for
us
for
beta
as
well,
just
because
we
want
to
have
some
confidence
that
you
know
we
have
n
number
of
customers
being
able
to
opt
in
that.
A
We
have
that
visibility
to
see
if
things
are
going
to
become
unhealthy
or
if
we
need
to
turn
off
that
toggle
to
to
kind
of
slow
down
the
pace
of
people
opting
in
and
to
really
just
have
that
Baseline
confidence
of
okay.
This
is
ready
for
people
to
really
start
using,
because
we
will
start
getting
into
areas
of
unpredictability
in
terms
of
how
are
they
going
to
use
it?
A
Are
they
just
going
to
try
to
you
know,
we've
I'm
sure
we're
familiar
with
some
customer
incidents
in
the
past,
where
they've
they've
managed
to
DDOS
Us
in
some
way,
and
things
like
that,
so
we
definitely
want
to
have
a
little
bit
more
confidence
there
getting
into
the
cost
and
performance
analysis.
Part
of
that
some
I
thought
I
had
since
basti
reminded
me
with
the
science
chart.
I
was
curious.
A
What
the
level
of
effort
would
be
to
start
importing
previous
data
just
so
we
can
kind
of
get
that
scale
of
data
and
see
how
much
that's
going
to
cost
like
run
over
time.
So
we
can
start
getting
into
the
conversation
of
well.
What
is
our
data
retention
policy
going
to
look
like,
because
if
a
customer
stores
x
amount
of
events
over
so
many
months
or
years?
How
much
is
that
going
to
cost
us
and
how
do
we
kind
of
shape
our
policy
around
that?
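The retention question above reduces to a back-of-the-envelope model. The sketch below is purely illustrative: the bytes-per-event and dollars-per-GB-month figures are made-up placeholders, not real ClickHouse or cloud pricing, and the real answer is exactly what the historical-import experiment would measure.

```python
# Rough steady-state storage cost once `months_retained` of data has
# accumulated, billed monthly. All defaults are illustrative assumptions.

def retention_cost_usd(events_per_month, months_retained,
                       bytes_per_event=200, usd_per_gb_month=0.02):
    """Estimate monthly storage cost for a given retention window."""
    stored_bytes = events_per_month * months_retained * bytes_per_event
    stored_gb = stored_bytes / 1e9
    return stored_gb * usd_per_gb_month
```

Plugging in a customer's observed event volume and candidate retention windows gives the "x events over so many months or years" cost comparison, which is the input to choosing a stricter-first policy.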
A
It's
a
it's
a
signal.
Of
course
we
can
always
kind
of
just
start
with
I
mean
James.
We've
been
talking
about
it,
like
start
with,
just
like
a
stricter
policy
and
then
it's
easier
to
loosen
over
time,
but
it'll
be
good
to
have
that
information,
but
Boston
you
had
a
sub
point.
There.
C
Yeah
I
don't
think
it
should
be
kind
of
prohibitive
to
from
a
effort
point
of
view
to
to
import
those
I
think
it
would
be
restricted
to
the
page
views
because
events
just
look
completely
different
from
a
structural
point
of
view
and
then
importing
them.
They
just
would
be
usable
but
page
views
I,
don't
think
there
should
be
a
problem
with
those
and
those
are
all
in
S3
back
in
an
S3
bucket
or
Pockets.
C
The
the
events
from
the
last
years
so
I
think
it's
just
on
to
kind
of
writing
a
script
and
finding
an
efficient
way
to
send
those
into
into
clickhouse
yeah
yeah.
So
the
S3
Imports
I
guess
there
would
be
an
option
and
then
we
can
modify
them
slightly
to
so
that
they
adhere
to
our
structure
but
yeah.
If
it's
really
helpful
to
kind
of
understand,
especially
performance
for
I,
don't
know
years
of
or
at
least
months
of
data,
then
let's
do
it.
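The backfill script idea could look something like this. It is only a sketch: the old and new field names and the table layout are assumptions, since the real schemas would come from the old enriched-event format and the new ClickHouse DDL.

```python
# Hypothetical remapping of an old-format Snowplow page-view row into
# the new table's shape, ahead of a bulk insert into ClickHouse.

def remap_page_view(old_row):
    """Translate one old-pipeline page-view record to the new schema."""
    return {
        "event_id": old_row["event_id"],
        "page_url": old_row["page_urlpath"],       # old column name assumed
        "occurred_at": old_row["derived_tstamp"],  # old column name assumed
        "event_name": "page_view",
    }

# One efficient route (an assumption, not a decision): skip downloading
# in the script entirely and let ClickHouse read the bucket directly
# via its s3 table function, with the remap expressed in SQL, e.g.:
#
#   INSERT INTO page_views_new
#   SELECT event_id, page_urlpath, derived_tstamp, 'page_view'
#   FROM s3('https://<bucket>.s3.amazonaws.com/enriched/*.gz', 'TSV', ...)
```

Whether the remap happens in Python before insert or in the SELECT after reading from S3 matches the "before or after it gets into ClickHouse" question raised below.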
A
Yeah
I
mean
I
think
on
one
hand,
it'd
be
interesting
just
since
we,
because
we
have
data
basically
dating
back
to
when
we've
first
introduced,
no
plow
tracking
right
so
like
that's,
also
pretty
interesting
to
be
able
to
kind
of
test
out
performance
of,
like
those
cubes
start
to
complain
or
click
Out
start
to
complain.
Once
you
started
going
back
far
enough
and
things
like
that,
but
yeah
I
think
it'd
be
really
interesting,
and
and
perhaps
it
yeah
an
experiment
that
we
can
run
to
kind
of
just
see
yeah.
A
Can
we
import
it
directly
from
S3
and
then,
if
we
have
to
remap
we
can
we
can
do
that,
whether
that's
before
or
after
it
gets
into
click
house,
but
whatever?
Whatever
is
the
the
easiest
way
to
get
there?
But
yeah.
C
There should be an action item there. If not, I can create one, or you just tag me on it, and then we will make sure that we prioritize it.
A
Cool
awesome,
yeah
kind
of
going
along
in
terms
of
what
else
is
left
in
terms
of
like
the
the
main
points
for
getting
to
Beta
at
least
is
like
really.
We
want
to
get
these
running
in
the
information,
managed
environments,
we've
we're
still
technically
running
in
the
cloud
sound
box,
and
you
know,
while
we're
not
hosting
red
data
and
transiting
it
like
ultimately,
and
we're
very
close,
because
we
want
to
be
running
an
infrastructure,
managed
environments.
So
Pierre
is
I,
think
actually
waiting
for
me
to
provide
some
information.
A
So
we
can
get
the
pre
and
staging
environment
set
up
and
then
once
those
look
good,
then
we
can
move
on
with
production
and
figure
out.
How
we'll
do
the
switch
off
from
running
on
the
cloud
sandbox
cluster
to
the
new
one?
The
good
news
is
that
they
should
be
connected
to
the
same
clickhouse
cluster.
So
it's
just
a
matter
of
changing
endpoints
in
terms
of
the
onboarding
and
the
cube
services,
and
things
like
that.
So
that's
of
course,
yeah
definitely
a
requisite
step
for
getting
us
out
to
Beta
And
GA.
A
You can correct me if I'm getting this bucketing wrong, but usage quotas will be interesting for us: to have visibility, beyond just looking into the raw data, into how much usage we're having, but also to really make it effective for people joining in the Beta, and to have an idea, as we get toward announcing pricing and things like that, whether it's viable for them or not.
A
So
really,
the
usage
information
should
be
visible
in
the
beta
in
the
MVC,
whatever
MVC
fashion
we
we're
kind
of
working
towards,
and
then
the
billing
side
is
really
connecting
that
information
to
the
Fulfillment
workflow
to
make
sure
that
we
are
able
to
build
them
in
some
way,
which
will
definitely
be
something
where
we
before
we
we
need
to
figure
out
before
we
can
actually
say
we're
generally
available,
so
they'll
get
that
right.
James,
that's.
A
I am technically running the live clusters, and this is on recording, on some unreleased code, so I appreciate everyone's patience with that, but we'll get that sorted very soon. That's pretty much everything that's going on from my side as far as what I think needs to get done for GA. Does anyone have any questions around that at all?
A
Cool, well, great work, everyone. I will go back to seeing if I blew up a cluster, because I turned up the fire hydrant of events from .com to 50. But it's good to see everybody. Hope you have a good rest of your Mondays and week, and then see you in the next call.