From YouTube: Product Analysts and Mathieu talking about Service Ping
Description
Carolyn, Dave, and Mathieu are talking about the Service Ping data source.
A
Let me just come here. Cool. So basically, I wanted to do this because we just talked about the SaaS usage ping deliverable yesterday, or last week, I believe, which is kind of a complicated project but also a niche project. And I think there's a lot of confusion around these projects, a lot of teams involved in the ping, and so on.
A
So I believe I also owe you a more general presentation about Usage Ping and Service Ping, because that's a very complicated data source, and also a very important data source and a very important system for the company. And I guess Dave is a bit more aware of it these days, but yeah.
A
So basically the goal is also to have a kind of office hour, I would say, between you and me: just trying to understand what you know, getting the conversation going, and letting you ask as many questions as possible, so that you can drop even the most random questions. That way I understand more where you are, and you understand more as well.
A
I understand where we should start from, and we just try to get together to a place where you feel a bit more comfortable with the data source.
A
So I want to start with some very, very random questions, like: what do you know about Usage Ping? So what is it? How often is it run, for example? Where is it stored? Do we have the data for all customers? I listed some questions, Carolyn; you can just give me short answers or whatever, and then we can evaluate from there.
B
Yeah, so my general understanding of Usage Ping is that it's these kinds of aggregated metrics that we get from self-managed instances, and then there's this whole other side of how we're producing or replicating something similar on the SaaS side. I definitely don't understand that part, but my understanding is from the self-managed side.
B
We only get it from about 30% of our customers; folks can opt out of it, and we have to very deliberately instrument which events we think we'll need, with whatever nuance they have. But it's mostly just aggregated counts. We don't necessarily have the data at a user level; it's more of an aggregate.
B
Those are a few things that I wanted to know. And then I know there have been some questions surrounding discrepancies on the SaaS side, like manual versus automated generation of similar data, and I feel like I don't even know enough to truly form that question, other than I've seen it pop up a bunch of times as a problem, something that we need to figure out how to reconcile, or switch over data sources.
A
Yeah, cool, that's a lot of things to cover. I also think the main problem is that we are very bad at naming in this company, so we call a bit of everything "usage ping". And when we're not creative with naming, we just add a prefix, like "SaaS". So we call another system "SaaS usage ping", which is actually a totally different system from the initial Usage Ping, now Service Ping.
A
So now it's called Service Ping. Initially it's just one thing: a product analytics data source. It's a product analytics service integrated in the product that allows us to calculate aggregated metrics at the instance level for any instance that has GitLab installed. So instance-level metrics for all the instances: CE, EE, and SaaS itself being one instance giving us aggregated metrics.
A
But that's the first thing. So why is it the case? Two things: one, we are an open source project, and two, our users, customers, or just people, developers, care a lot about privacy and security. They don't want to be tracked, and they want to know what we track from them.
A
So that's the whole reason for it. And we're not the only ones; we're not pioneers in this. A lot of open source companies do the same, and software companies in general do the same. For example, when you download Sublime Text, I think, they actually want to send telemetry data back to them. It's mostly done for non-internet, non-SaaS projects, as opposed to websites or apps.
A
So it has been running since, I think, 2017 on a bunch of instances, every week, and it runs on a random day of the week for each instance: it could be Monday for one customer, Tuesday for another, and Sunday for a third. So once a week; that's some background, right?
A
So that means it's the most complete data we have. Are you here, Carolyn? Okay, you just turned off the camera, I wasn't sure. So that's the most comprehensive data source we have, because it's the one, and the only one, that covers SaaS and self-managed at the same time.
A
So that means it can give us data for the whole universe, or part of the universe, of the users that use GitLab products and services. As you said, a user can turn it off, so they can say no to it, and in that case we will never get any data from them.
A
There is a project to enforce that they send us at least a strict minimal set of data, what will be called the critical usage data, so we would always receive that. I don't know when it's going to happen, but when it does, we're going to renegotiate contracts and change, I think, privacy policies to get this decided. That means that, for example, looking at key metrics, we will have the data for everyone.
A
Another thing that needs to be said is that this is a service that lives in the GitLab source code. That means it's release-based: from one release to another, usage ping counters are added, new ones are removed, existing ones are improved, and so, depending on the release an instance is on, the number of counters we're going to receive varies.
A
An instance could still be on a release from two years ago; that's a very rare case, and we have KPIs around this, so we can track, for example, the percentage of instances that are on the latest release, the percentage of instances that are on the last three releases, and so on. It's on the product adoption dashboard. But that's something that is important to know.
A
I believe all of them are documented now. The documentation is pretty light and sometimes doesn't say exactly what a metric is, but it exists, and at least you have more information: the group that implemented and owns the metric, or counter; we're going to call it "counter", I need to stick to that; and the tiers on which it's available.
A
Also the current status, and so forth. And this is powered by YAML files: each counter has a YAML file, and in it you see the definition. Two very important things that I wanted to point out are row 11 and row 12. Row 11 is the time frame. Initially we had only one type of metric: they were all-time metrics, so they were capturing, and I can show you.
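A metric definition file of the kind described here might look roughly like this. This is an illustrative sketch following GitLab's metric dictionary conventions; the specific values are hypothetical, not a real counter:

```yaml
# Hypothetical metric definition file (one YAML file per counter)
key_path: counts.ci_builds
description: Total CI builds created since the instance started
product_group: group::pipeline_execution
value_type: number
status: active
time_frame: all        # "row 11": all-time vs. a 7d/28d window
data_source: database  # "row 12": database (SQL) vs. redis_hll
tier:
- free
- premium
- ultimate
```

The `time_frame` and `data_source` fields are the two rows being called out.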
A
I'm just writing the date first here. So they are capturing, for example, the number of CI builds that have been created since the instance started. That's this query, which is very simple; it's a SQL query. That's what we call the SQL-based counters, or database counters.
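As a sketch of what such an all-time, SQL-based counter boils down to: SQLite and the tiny `ci_builds` table here stand in for the instance's Postgres database, and the schema is made up for illustration.

```python
import sqlite3

# Stand-in for the instance's Postgres database (illustrative schema).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE ci_builds (id INTEGER PRIMARY KEY, user_id INTEGER)")
conn.executemany(
    "INSERT INTO ci_builds (user_id) VALUES (?)",
    [(1,), (1,), (2,), (3,)],
)

# An all-time database counter is a single aggregate over the whole table:
# one number per instance, with no per-user or per-namespace breakdown.
(ci_builds_total,) = conn.execute("SELECT COUNT(*) FROM ci_builds").fetchone()
print(ci_builds_total)  # 4
```

The weekly ping records just this one number under the counter's name.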
A
And then the second important thing is this data_source field here: what we call Redis HLL. We have two types of counters, or data sources. One is what we call database, or, as I've called it since the beginning, SQL-based counters; the others are Redis HLL. So, talking about the SQL counters first, because the notion is quite simple to grasp: as I showed here, we have queries; we define some counters in the code, and the Ruby code translates them into SQL queries.
A
We run these queries against the database, we get a number, and we record the number. The others are Redis HLL, which are incremental counters that are incremented on specific actions happening in the product. The action could be a page view, a click on a button, a back-end event, a front-end event, an API call, any kind of action. Those are not SQL-based counters, and they can't be replicated, or are very hard to replicate, in the data warehouse.
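A minimal sketch of the idea behind these Redis counters: a plain Python set stands in for the HyperLogLog structure Redis uses, and the event name is made up for illustration.

```python
from collections import defaultdict

# Redis HLL counters track *distinct* users per event key. A set is an
# exact stand-in for the approximate HyperLogLog structure Redis uses.
events = defaultdict(set)

def track_event(event_name, user_id):
    # Called from the product on specific actions:
    # a page view, a button click, a back-end event, an API call...
    events[event_name].add(user_id)

# Simulated actions happening in the product:
track_event("i_code_review_mr_created", user_id=1)
track_event("i_code_review_mr_created", user_id=1)  # same user counts once
track_event("i_code_review_mr_created", user_id=2)

# The ping reports only the aggregated distinct count per instance:
print(len(events["i_code_review_mr_created"]))  # 2
```

Because only the final count leaves the instance, the warehouse cannot reconstruct which users acted, which is why these counters are so hard to replicate downstream.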
C
So, as we know, we can't report on SaaS for Manage or Monitor, because of how it uses Redis.
A
Yeah, so basically, Usage Ping metrics are instance-level. Instance-level counters are fine for a lot of metrics that we want to track, like, let's say, Manage MAU, because for that we get the number of people that use the Manage stage; the Manage stage is tracked with this counter, and we have a result for this counter.
A
The problem is when you try to be more granular. When you try, for example, to get something like SpO, so the number of namespaces that are using the Manage stage, you can't get it at the moment, because you don't have access to this information: Usage Ping is instance-based. And you can't replicate it with other data sources; potentially, in the future, you could replicate it with Snowplow.
A
That would give unique identifiers, and you'd be able to say: I can count the number of users that are part of this project, or this namespace, that are using the Manage stage. They won't be real IDs, just something relatable to an ID: the namespace IDs on gitlab.com are going to be hashed, transformed, secured, and so on, so that an analyst can't say "this namespace ID had this number of users using this feature". But that's the future.
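The pseudonymization described here could be sketched like this; SHA-256 with an instance-side salt is an assumption for illustration, not necessarily the actual scheme.

```python
import hashlib

SALT = "per-instance-secret"  # hypothetical salt held by the instance

def pseudonymize(namespace_id: int) -> str:
    # One-way hash: analysts can count distinct namespaces without
    # being able to recover the real namespace ID.
    return hashlib.sha256(f"{SALT}:{namespace_id}".encode()).hexdigest()

# The same namespace always maps to the same token, so it can be counted...
assert pseudonymize(42) == pseudonymize(42)
# ...but the token does not reveal the underlying ID.
assert pseudonymize(42) != pseudonymize(43)
```

Stable tokens allow distinct counts and trend analysis while keeping the real gitlab.com namespace IDs out of analysts' hands.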
A
So for now, with all the Redis counters, you have one result per instance. You can't go further: that means you can't calculate user-level metrics, and you can't calculate namespace-level metrics. The only thing you can do is report how many users, at the instance level, or how many actions, at the instance level, have been done in the last 28 days.
C
Yeah yeah, that's very clear. Thank you for the explanation.
A
So we receive it for all the instances that have this turned on, and you see, for example, what it's going to record. Some non-public PII is sent to us, such as the hostname, and you also have the license and all the latest license data, so you are able to map it to a license ID and also to a subscription ID.
A
So here you know they're using EES; the S stands for Starter, if I don't forget. Yeah, Starter. So Renault, the French brand, a French automotive brand, has a Starter subscription, and the subscription ID is this one.
A
Yeah, "monthly count of the number of merge request users": that's a very bad description. What it means is users interacting with a merge request. So that means that, on the whole Renault instance, last month you had 1525 people that interacted with a merge request: they either opened a merge request, pushed some commits to a merge request, or commented on a merge request.
B
And that's a count of distinct users, right? Okay, great.
A
And you have an active user count here, 6255, which is the number of users that have the status "active", so users that actually have an account and can access the account. So, for example, you can do basic math: you have 25 percent of the whole user count using the Create stage, something like that. You can't go further: you can't know that, say, user two uses both Create and Verify, because you don't have that; you just know the totals.
A
There's also one field that I think is very important: what we call the UUID. It should be a unique identifier per instance, which is not entirely true, surprise, because here everything is complex.
A
So when you spin up an instance, a database is created, a PostgreSQL database, and in this database this UUID is written. So normally it should be unique. The only problem is that, for example, you can clone an instance, and when you clone an instance there's no drop and recreation of the database; the database stays the same, so the UUID could be the same.
A
So that's the basics; that's what we get every week from the different instances. You said 30% of the customers turn on the usage ping; it's actually more like 65 percent. We believed for a very, very long time that it was 30; actually, that's not true.
A
So we receive it from roughly 65 percent of our customers, but that's the only proxy we have, because actually we can't know the real number. Why? Because we don't actually know how many users are using instances that never send us any kind of data: they are using the CE version, or EE Free. So it might actually be that for CE and EE Free, only ten percent of the active users, or active instances, turn on the usage ping.
A
So that's what we get from the Version app, and then from Version we go to XMAU and estimated XMAU, of course, but I don't know if this is the right moment to talk about XMAU. I think it should be another meeting.
A
So what I just talked about is just these upper parts, and you can just ignore this part; it's just a new analytics service that is getting created, but it's a totally different topic. The main flow that needs to be understood is this part, which is the standard one, and that's the main data source for doing some very high-level calculations of product usage, product adoption, and features. And then other things got plugged into it.
A
And it's coming from actually two different teams. One is the Product Intelligence team, which is kind of responsible for this whole architecture of the Service Ping. The Product Intelligence team runs the SaaS usage ping manually at the moment: the generation of the usage ping for SaaS, for gitlab.com, is manual. That means that every week an engineer goes on their local machine and starts...
A
...triggering the usage ping: they connect to the prod database, run queries for six to eight hours against this database, then store the obtained usage ping in the Version app. It's a bit more automated than what I'm saying, but it's pretty much that. So that means queries run against the production database for six to eight hours, and it's highly reliant on manual steps: an engineer takes over and takes care of the usage ping generation for a week.
A
What happens when engineers work too much and don't have time to think about it? What happens when they are sick? Actually, two of them are sick right now, and there's no one in the house who knows it; people don't even know that they've been sick. So the usage ping can be skipped for a week, for two weeks, or more, and we run into problems, because what we want is a cadence of one usage ping per week, per instance, and especially for SaaS.
A
The second problem is completely different, but it kind of got regrouped into the same bucket, which is now called the very mysterious "SaaS usage ping" stuff. Technical account managers are asking for more product data: they want to know more about the customers, and they want to know more about what they do on the website.
A
So the problem is that, as you see here, usage ping is at the instance level. That means it's actually super easy for self-managed, because you have a subscription, you have an instance, and if they turn on the usage ping, you get granular enough data to know which stages they adopt. Do they use Verify? Do they use this feature? They are on Ultimate but they barely use the Secure stage, and so on and so forth. So the TAMs, with a bit of training...
A
...they are capable of building a use case around what features this customer used and didn't use, what we should demo to them, how we should try to lead the conversation toward an upgrade, an extension, or something else. But we didn't have this data for SaaS, and the problem is that SaaS is slowly becoming the primary revenue source for GitLab, or at least the curves are going to cross at some point.
A
That's what we all think about and aim for. And we didn't have this data for SaaS, because an organization, the "O" of SpO, is the namespace, or the ultimate, top-level namespace, on SaaS, and we didn't have granular enough data on SaaS from the usage ping.
A
So what we tried to do, and what the ask was, was creating this through this system. These two asks were for the same stuff: leveraging the gitlab.com database to recreate artifacts of the usage ping in the data warehouse, one being at the same granularity as the one already generated by the engineers, one being a totally different service at another level.
A
I think the one that you heard about the most was the first one, the one at the same level, which is just trying to automate it and remove the manual steps currently in the process, this manual generation of the usage ping done by the engineers.
B
The latest initiative is specific to SaaS. It is using the gitlab.com database to essentially recreate what we get at the instance level for self-managed, but trying to do it at the top-level namespace level.
A
So basically, I'm going to go over what we did to build the instance-level usage ping and just explain a bit of the basics of this project. I don't think you should be asked to do anything on this, since it's more a conversation that we should have between Product Intelligence and their team. I think you're being blocked because everyone is a bit scared now, and they're like: what are we doing? How?
A
When can we stop? And they try to find touch points to ping, and they're nagging people just to try to get the attention of the largest group possible. But what we did is basically this.
A
So for the Redis counters, actually, it's very simple: the Redis counters are super quick to generate. They are just keys, Redis keys; it's automated, and in three minutes we can generate all the Redis counters. So it's very simple to do, and the Product Intelligence team...
A
...created an API endpoint that we can call whenever we want; we just get a response with all the values of these counters, period. That's it. So let's look at the Redis ones: right now everything is zero, because it's a new instance, but basically this can be generated in one minute, I believe, and we get numbers for these counters at the instance level. That's super easy. For SaaS, I showed you the queries themselves already.
A
So the goal was to transform this data to make it runnable against the Snowflake database in our setup, and that's what we did. For example, I think the best would be to show you: here is an example of what we did with the ci_builds counter. We transformed the ci_builds query into this query, which is roughly the same.
A
We just added the counter name, we changed the FROM table, and we removed all these backslashes and so on. So instead of being ci_builds, it's a Snowflake-style table name, gitlab_dotcom underscore ci_builds dedupe source, and basically we run this against the Snowflake database; those tables are in Snowflake. That's pretty much how it works. So the whole flow is that you have the Postgres database here...
A
...you just extract everything every day into the raw database of Snowflake; that's where everything lands. Then from raw it goes to prod, and prod is what you have access to as analysts. In prod, what we were doing before is that we took raw and deduplicated these tables, because the raw data is full of duplicates, and we were renaming columns at the same time.
A
For example, an id was becoming ci_build_id, a type was becoming ci_build_type. So when you run the query that I showed you before, it will fail against this table, because the query just has the names of the Postgres columns, not the names of those Snowflake tables. So what we decided to create was a layer in between raw and the source tables that is only deduplicated, with no column renaming, so ci_builds.id will stay the id column in this table.
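A toy illustration of why the renaming matters: SQLite stands in for Snowflake here, and the table and column names are simplified stand-ins for the real warehouse tables.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# prod-style table: deduplicated AND columns renamed (id -> ci_build_id).
conn.execute("CREATE TABLE prod_ci_builds (ci_build_id INTEGER)")
# New intermediate layer: deduplicated, but column names kept as in Postgres.
conn.execute("CREATE TABLE ci_builds_dedupe_source (id INTEGER)")
conn.execute("INSERT INTO ci_builds_dedupe_source VALUES (1)")

# The instance-level counter query references the Postgres column names:
query = "SELECT COUNT(id) FROM {table}"

# Against the renamed prod table, the query fails...
try:
    conn.execute(query.format(table="prod_ci_builds"))
    failed = False
except sqlite3.OperationalError:  # "no such column: id"
    failed = True

# ...but it runs unchanged against the dedupe layer.
(count,) = conn.execute(query.format(table="ci_builds_dedupe_source")).fetchone()
print(failed, count)  # True 1
```

Keeping the Postgres column names in the intermediate layer is what lets the original instance-level queries run against the warehouse without rewriting them.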
A
So this is done and pretty much working; a little bit of improvement is needed, but that's going to be done, and all of this will work. Once this works, we're going to test it against what is generated at the moment, which is what you were talking about: compare it with what is generated manually by the engineers, and just ask: do we have a good match? Is there going to be a deviation?
A
The deviation just has to be small, I hope less than a percent. So that's the whole SaaS usage ping thing. And please, I want you to know what it is; I just don't want you to get into the weeds, because I don't think it should be your team's job, and if you start doing this, I think it's going to be very time-consuming for you.
A
And yeah, that's what I wanted to talk about. Maybe I'm going to leave these last 10 minutes for more questions from you.
B
Mathieu, this is super helpful and clears up a lot of things. I guess one question that I have is: what are your kind of top gotchas about this data? I know you highlighted that the UUID hypothetically would be unique to an instance unless you clone an instance. Is there anything like that you can think of that could be a very bad thing, or just kind of a funky nuance in the data?
A
Beyond the UUID, I mean, it's a very complicated data source. First, it's very hard to know what the unique instances are, as I said, and there are several cases, actually. When you think about it, it should be pretty simple: you have an instance, you have a license, you have a subscription, and all should be one-to-one. But actually everything is many-to-many. So, for example, there are sometimes subscriptions attached to several instances.
A
Sometimes you have an instance attached to several licenses over time. So the whole scheme of joining the data sources together, usage ping, license, and Zuora data, is also very complicated, and something where we didn't manage to be 100% sure that what we're doing is right. At the moment, it is something which is very, very complicated with these data sources.
A
Just joining financial data with product data, and I think Dave knows it also for gitlab.com, is something which is really hard, and it's as hard for Version; we're trying to get better at it. So that's also a big gotcha: don't rely on the license data either; I think that's a very bad...
A
...bridge data source, I mean. We tried to get everything through it. My understanding would be that the source of truth is the usage ping and the Version app; everything outside of the Version app as a product data source is wrong, or at least less accurate than the usage ping data. And to join usage data to Zuora data, use the usage ping data.
A
That's the big gotcha. Then, don't try to do more than what this data source allows. I think this data source mainly allows high-level calculations, high-level analysis, high-level trends. Trying to do more than this is very hard, so always think ten times before starting a project like that, because you can start digging your own grave, working on it for six months and finishing pretty stressed.
A
One gotcha, which is very, very simple: the Product Intelligence team is very, very helpful, and if you have any questions, I think you should go to them. Alina has been here for two years; Nikolai has been here for a very long time as well, and they are very reliable engineers. Alper also knows a lot, but Alper is a...
A
To be honest, I think there are very interesting analyses around how people upgrade, and that could be tackled, because how people upgrade is kind of a black box at the moment. I think it could be a very interesting product analysis: trying to find some patterns in who upgrades, what, why, and when. Is there a flow depending on how heavily people use the product?
A
Is there also a correlation between upgrade rate and expansion or renewal of subscriptions? That could actually be something as well, like measuring NPS; I think that's also possible. So it's a data source that has a lot to provide, but it has to be approached with very focused queries, and carefully.
B
I think that covered most of what, at least, I know to ask right now, but this was all super helpful. Do you mind sharing the link to that presentation?
A
Yeah, I'm going to work a bit more on the presentation and then share it. I'm going to share it anyway; it's just that there are a lot of data flows in here that, in the end, I don't think belong here; they should be in the handbook. So I'm going to transfer them to the handbook at some point, but I can already put it here.
C
And just to add something else for you, Carolyn: in addition to all the things that Mathieu knows from a data perspective, he's legitimately the best deck maker I know. So that's just another skill to add to the tool set.
A
Cool. So I think that for next week, what I'm going to do is schedule the same type of conversation for XMAU. I think it would be cool if Nicole were here as well. I don't want to invite Neil, because I feel I don't want to overwhelm him with such technical details.
A
I think you can explain it to him afterwards. The goal, for XMAU specifically: I took over this project two years ago, or one and a half years ago, and I was the only one working on it, so I built the entire pipeline, which is stable now. But I think you shouldn't be the one tackling the modeling part; you should definitely understand the concepts, though, and also be able to answer questions like: what is the estimation actually for?
A
When it decreases, why does it decrease over time? And also, what I would like to transfer is the conversation we had with Sid, Scott, and Kenan back then around potential improvements to this pipeline and this estimation, because this estimation is still a version, a stable version I would say, but it can certainly be improved, and I'm sure we are underestimating some stuff.
A
So I want to explain to you the conversations we had. There is a lot of documentation already written, so I'm going to link it in another doc, or in this doc, so you can also read it in advance. But the goal will be for you to not freak out if someone asks a question about this, to be able to say "okay, I kind of know what this is about", and to feel confident and not sweat it.
A
Yeah, and also it's a very closely watched KPI, because I think Sid is monitoring it very closely and almost taking too many big decisions based on these numbers, budget decisions sometimes. So I think it would also be good to transfer it to you.
B
That sounds great, yeah. Looking forward to that. Thank you so much for taking the time and pulling this all together; this is enormously helpful.