From YouTube: 2023-01-30 Analytics Section Meeting
B
Yeah, I see it now. Sorry, I'm still confused sometimes by the UI compared to Google Meet. No worries. All right, so welcome everyone to the analytics section meeting. I think I have the only topic today: I wanted to quickly talk about the proof of concept around Snowplow as a potential replacement.
B
Maybe I can show it to you very quickly so that it's a bit clearer to everyone what we're talking about. I don't know if everyone checked the MR beforehand; I think there's already a bunch of good questions there. I really wanted to use this opportunity to get some input around potential blockers and thoughts that people have. Just to give everyone the right context on why we even did this, because I don't know where everyone was on it:
B
In general there have been some reservations around Jitsu: how scalable is it, how well supported will it be in the long run, because it's a pretty new startup, and a bunch of those kinds of considerations. There was an investigation into how much it would take to completely replace it and build something similar from scratch, which turned out to be quite a long time. But then there's also Snowplow, which we already use on the Product Intelligence side for our event tracking in GitLab right now.
B
So the idea came up whether we could theoretically use Snowplow as such a replacement, because it's a lot more mature. It has been around for a long time, I think about 10 years now, maybe even more.
B
We use it ourselves, so we know that it's scalable, at least to GitLab's scale, without a lot of problems, and we don't really have any infrastructure concerns with it. So we wanted to investigate that, and to do so I tried to build a proof of concept to answer two main questions. One of them was:
B
How would we do local development, and how could it be packaged as a single entity, for example as a Kubernetes setup? The Snowplow docs focus a lot on using Snowplow on one of the existing public clouds: there's a lot of documentation on deploying it on AWS or on GCP, but it then always uses some GCP- or AWS-specific queuing system, and we wanted to figure out whether it works without those.
B
That's the one question, and the other one is: how do we get data from Snowplow into ClickHouse? There's also no obvious answer there. There's a bunch of loaders around Snowplow for getting events into different database systems like Postgres, BigQuery, and Snowflake, but there's no official loader into ClickHouse. So we wanted to answer those two questions. No decision has been made around replacing Jitsu at all; for now this is an investigation, but I think we're now at a point where we can move closer to making an actual decision.
B
Can you see the screen? All right, so this is a drawing of how the general Snowplow setup works right now. I connected it to my local GDK; in the future it could be connected to GitLab or anything else. Snowplow always consists of two main services. The first is the Snowplow collector.
B
That's the service that events get sent to. It mostly just takes events, validates very briefly whether each one is generally a valid event or a malformed one, and then puts them into a queuing system. I'm currently using RabbitMQ for this in the proof of concept. In our deployed Snowplow version we're using Amazon Kinesis; it also supports a bunch of other queuing systems. So it puts them into a raw queue.
B
Then another process takes over, the so-called Snowplow enricher, which takes these raw events and can apply things to them. One thing that we're currently doing in GitLab itself is pseudonymizing (I hope I'm putting that right) some data from the user, for example the user's ID or the URL, so that we are more privacy compliant.
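The pseudonymization and IP-anonymization steps described here could look roughly like this; a minimal Python sketch, assuming a salted SHA-256 hash for PII fields and octet-zeroing for IPs (the real enricher's configuration and algorithms may differ):

```python
import hashlib

# Hypothetical salt; in a real enrichment this would come from configuration.
SALT = "example-salt"

def pseudonymize(value: str) -> str:
    """Replace a PII field (e.g. user ID or URL) with a stable hash."""
    return hashlib.sha256((SALT + value).encode()).hexdigest()

def anonymize_ip(ip: str, keep_octets: int = 2) -> str:
    """Zero out the trailing octets of an IPv4 address."""
    parts = ip.split(".")
    return ".".join(parts[:keep_octets] + ["0"] * (4 - keep_octets))
```

The hash is deterministic, so the same user still correlates across events without exposing the raw ID.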
B
You can also do something like IP address anonymization, and then it's put into an enriched queue, from which it can be delivered onward. In the enriched queue, all these events are actually just tab-separated values: each event is one line of tab-separated values, which you can then pick up. What I'm doing in the proof of concept is this: ClickHouse has a RabbitMQ engine, so it can take events directly from a RabbitMQ queue.
B
Let me zoom in a bit. Is it readable? Looks all right. Okay, so this is how these events look, so maybe this already answers some questions. Snowplow has a general column structure for events, which I think is always the same; there are around 120-ish different columns. There's a lot of things like user IDs and, theoretically, things that you can derive from the IP.
B
If you want, the page URLs, a lot of stuff, and a lot of this might not actually be filled. I think the most important thing is these contexts.
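For illustration, picking up one enriched line and splitting it by tabs might look like this; the column names below are a hypothetical subset of the canonical event model, not the full ~120-column list:

```python
# Hypothetical subset of Snowplow's canonical enriched-event columns; the
# real format has around 120 of them in a fixed order.
COLUMNS = ["app_id", "platform", "collector_tstamp", "event", "user_id", "page_url"]

def parse_enriched_line(line: str) -> dict:
    """Turn one tab-separated enriched-event line into a column -> value dict."""
    return dict(zip(COLUMNS, line.rstrip("\n").split("\t")))

row = parse_enriched_line("gdk\tweb\t2023-01-30 15:09:00\tpage_view\t42\thttp://localhost:3000/")
```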
B
What you see here is where you can apply your own schema and enforce certain things, where you can tell Snowplow that events need to have, I don't know, for example a user ID of your service, so a gitlab.com user ID needs to be sent with them, otherwise they're not real events, or whatever you want. So this is where you can enforce things around the events.
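A minimal sketch of the kind of rule such a context schema would enforce. In Snowplow the enforcement is done with a JSON Schema registered for the custom context, not in application code; the Python check and field names below are purely illustrative:

```python
# Illustrative field names; the real rule would live in the context's
# JSON Schema, and events failing it would land in the bad queue.
def validate_context(event: dict) -> bool:
    """Accept only events whose gitlab context carries a numeric user ID."""
    ctx = event.get("contexts", {}).get("gitlab_standard", {})
    return isinstance(ctx.get("user_id"), int)
```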
B
There's a bunch of things that clearly show that Snowplow has been around for some time: there's stuff like whether Silverlight or Flash is enabled in the browser. And if I go back, these are ordered by time; you can see the latest one is from a few minutes ago. If I click around here (I hope this works, now demo time) and do a few page views...
B
So now there are new events from this minute, from 15:09 UTC. You can see a bunch of new events that came in, which in this case are mostly just page views, and also specific events that we're sending. I think that should hopefully give a general overview of how this is working. Again, I think for now the question we need to answer is whether it is a viable replacement.
B
The advantages that Snowplow would give us are, again, the maturity. One big advantage on the Product Intelligence side is that they have a lot of different SDKs we could package up: they have a Ruby SDK, which we're using currently, but they also have Go SDKs, Python SDKs, a lot of SDKs that we could just use, which would be a lot less work for us compared, for example, to Jitsu, which doesn't have that many pre-built SDKs.
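As a sketch of what those SDKs abstract away, a page-view hit for the collector is essentially a small set of parameters. The parameter names below follow the Snowplow tracker protocol as commonly documented (`e=pv` for a page view, `aid` for app ID, and so on), but treat the exact set as an assumption:

```python
import uuid
from urllib.parse import urlencode

def page_view_payload(app_id: str, user_id: str, url: str) -> str:
    """Build the querystring a tracker would send to the collector's
    GET endpoint for a page view (Snowplow tracker-protocol style)."""
    return urlencode({
        "e": "pv",                 # event type: page view
        "aid": app_id,             # application ID
        "uid": user_id,            # user ID
        "url": url,                # page URL
        "p": "web",                # platform
        "eid": str(uuid.uuid4()),  # unique event ID
    })
```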
C
Yeah, I had the first one, which my colleague actually succinctly answered for me already, but I'll voice it anyway. In product analytics we started thinking a while ago about having a common taxonomy, because eventually we'd like to be able to retrieve data not just from front-end page views but also back-end events, just like the Snowplow stuff you've just been showing, and you'd want it in a unified schema, so the data looks the same whether it's coming from the back end or the front end.
C
So I was asking if Snowplow gives us that full control. Jitsu doesn't, and many other collection systems don't give you that full control; they hide a lot of things behind the way their collectors are built. It's already been responded to here that they give you the ability to custom-build your own schemas, which is great. That is for the JavaScript tracker, though; does that also apply to all the other SDKs as well, or is it just for the JavaScript side?
D
It applies to every SDK. Snowplow provides a few types of events. The one being presented is what's currently used by GitLab, which is the canonical event; it has a bunch of columns, way more than we might want to use, like the marketing ones, which might be confusing.
D
GitLab was also using custom self-describing events, but we dropped that support, only because we lacked a policy for when and why to define these events. But for the purposes of analytics, the way I would put it is that analytics can use those defined events to define the schema and then only present this schema to the end users. That would alleviate the problem that we are trying to escape from, where we had like 10 different schemas; each of the groups had their own schema.
D
They were not matching each other, and we kind of went: okay, there's only one schema you can use, and you can only customize the contexts. So we have at least common attributes that all the events share, but it's possible to build an event completely from scratch, and it's regardless of the SDK used.
B
To add to what they have on the roadmap: I don't know if we could use it, but what's kind of interesting is the idea that if you do this, you can then even create custom SDKs.
C
And it looks like we have to submit those schemas to an external, open, public body as well, doesn't it? Which is actually beneficial for us, because one thing product analytics will eventually be looking to do is opening up the ability for customers to use their own stack, not just the stack that's built into gitlab.com, but their own personal stacks. So having a way to share a schema that anyone can connect to and look at would be very useful, which kind of brings me to my next point.
C
The stack is owned by us, but as I said, we will be looking to give users the ability to set up their own. The way it works is: you click a button, and in the background we start a worker which sends an API call to Jitsu to create a new project and key in Jitsu, which then creates a new table in ClickHouse under that GitLab project's ID. Then, when Cube goes to query, it will query against that specific ID, and we also get back an API key.
C
That key is used by the collector to send data only to that Jitsu project and no other, if that makes sense. I'm assuming that going down the Snowplow route will mean that we won't have this one-step automated process; we will actually have to manually set up each of these individual elements.
B
I think there's a lot of ways to do this, so it shouldn't be a blocker. Just for understanding: even when you put the data into ClickHouse...
B
What it's currently doing is: there's this queuing table, which you cannot even look at directly, and then you create separate tables, where a materialized view puts the data from the queue into the actual table. So you could imagine a system where, for every project that gets created, a new table gets created and a new materialized view that only takes the events that correspond to that project.
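The per-project table plus materialized-view setup described above might be sketched in ClickHouse DDL like this; the table names, columns, project filter value, and RabbitMQ engine settings are all hypothetical and would need to match the actual deployment:

```sql
-- Queue-backed table: rows are consumed from RabbitMQ and cannot be
-- read directly; a materialized view has to drain it.
CREATE TABLE events_queue
(
    app_id String,
    collector_tstamp DateTime,
    event String,
    payload String
)
ENGINE = RabbitMQ
SETTINGS rabbitmq_host_port = 'rabbitmq:5672',
         rabbitmq_exchange_name = 'enriched',
         rabbitmq_format = 'TSV';

-- One table per project...
CREATE TABLE project_123_events
(
    app_id String,
    collector_tstamp DateTime,
    event String,
    payload String
)
ENGINE = MergeTree
ORDER BY collector_tstamp;

-- ...fed by a materialized view that only passes that project's events.
CREATE MATERIALIZED VIEW project_123_mv TO project_123_events AS
SELECT app_id, collector_tstamp, event, payload
FROM events_queue
WHERE app_id = 'project-123';
```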
B
So the raw events go into the queue, but then we can say: okay, based on some database query that we do, or some list that we keep (in Redis or similar) of the applicable project IDs, we could filter them into good events and bad events.
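The good/bad routing by project ID could be sketched like this; the source of the allow-list (a database query, a Redis set) is left abstract, and the field names are illustrative:

```python
# Hypothetical allow-list; in practice this might come from the database
# or a cached list in Redis.
ALLOWED_PROJECT_IDS = {"123", "456"}

def route(events):
    """Split incoming events into good and bad queues by project ID."""
    good, bad = [], []
    for event in events:
        target = good if event.get("project_id") in ALLOWED_PROJECT_IDS else bad
        target.append(event)
    return good, bad
```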
B
It automatically filters out any malformed, incorrectly structured events: there's automatically a bad queue that any unstructured events go into. That's something we already have set up, for example, on gitlab.com, and I think that's around 60,000 events per day currently that go in there for some reason; to be honest, I didn't check deeply into that yet. But this is anything that's completely malformed or doesn't apply to the schema you defined.
A
Okay, so it has to enter our pipeline before we can actually filter it out. With Jitsu we have that validation at the front. Okay, so we'd have to build something if we wanted that at the front, then. Okay.
B
Yeah, so.
C
Either way it could be handled. I think, for product analytics: the current queue for Snowplow is gitlab.com only, more or less, isn't it, at 60,000 events? Product analytics using the shared stack, I guess you could call it, would be, you know, whatever that project's product is: their main website with millions of views a day, or whatever, times that by how many customers make use of this, and they will have multiple products, multiple projects connected to this.
C
Theoretically, not just one. So we need to think that, from a product point of view, we need to make it so the stack is recreatable or siloed in some way, so we're not spreading someone's data into, mixing it in with, everyone else's if it's on a shared stack. But more importantly, as I said before, we also need to come up with a way of packaging this.
C
So product analytics will have to think of a way of packaging it so others can run their own version too. Yeah, yeah.
B
But that's exactly what I was trying to do with this proof of concept, because we currently have around, I think, 40 to 50 million events per day going into our own Snowplow instance, but that's run on Amazon Kinesis, and so it's not really transportable, because you would need to run it on the AWS stack.
B
You just need one instance of it, and it handled up to 1,000 events per second or so; it doesn't really need a lot of infrastructure thinking, and that is roughly our current GitLab software-as-a-service size. Beyond that, you would need to start thinking a bit more about clustering and things, but up to that point it's possible, and in theory...
B
You could also set up a separate system where you say: okay, if you go beyond a certain point, you connect to Amazon Kinesis or Google Cloud Pub/Sub, I guess, if you don't want to take care of the infrastructure with RabbitMQ.
A
Thank you. Sorry, excuse me. That gets into my point: moving on from this POC, and maybe I need some education on what role Snowplow Micro plays as well. Are we planning to use RabbitMQ, which is currently experimental, and are we prepared to switch to something like Kafka?
A
If we need to. Because the portability, and being able to set up your own self-managed cluster, is, I think, key to our strategy from a product offering standpoint. So, you know, what's the strategy to move this out of POC?
B
So yeah, the strategy itself, I think, is something we would need to decide if we choose to do this. Maybe going back to Snowplow Micro: it doesn't really play any role here. Micro itself is just a tool to test your Snowplow setup locally; we use it, for example, in the GDK right now to be able to test...
B
...whether events are generally going through. It's one Docker container; I don't know if it now sets up an Elasticsearch instance or something, but it just shows you: okay, these good events came in, these bad events came in. That's it. So I completely replaced Snowplow Micro for this use case. This is just a Snowplow instance, as you could also set it up in the cloud, and I think that fits our use case, where we actually want to build an analytics product.
A
That's how I understood it, but in the merge request I read it as "reuse the existing Snowplow Micro implementation in the monolith to redirect Snowplow events", so for some reason I thought it was still in play here. But looking at the Dockerfile, you can see that the collector and the enricher are there. So I just wasn't sure if Micro was actually in play or not.
B
Maybe I didn't put this correctly. The only thing I did, to make it easy to set up on the GDK side, is that I left Snowplow Micro generally enabled, so that GDK sends the data out to the Snowplow Micro port, but then turned off the Docker command that starts Snowplow Micro when GDK starts.
B
So it uses the implementation that we currently have in place in GDK, where in development mode it sends things to localhost on a certain port, but instead of Snowplow Micro, the POC is sitting on that port. (Yeah, got it.) And I think you made a good point.
B
What is experimental are the RabbitMQ enrichers and collectors. Snowplow announced them, I think, sometime towards the end of last year as experimental, with the same intention that we have: to have something that is theoretically easy to run locally, but also a good way to package it up and put it on any kind of cloud. There is active development on those, so the Docker images are getting rebuilt and they do active development there.
B
So there is a possibility for us, if they actually drop it, to take it on. Or, as Nikolai also wrote, they have a lot of other options: there is a way to just put your events onto the local file system, which I think is what Jitsu is actually doing right now as well.
B
So for a very minimal use case you don't even need a queue; for local development that would probably be enough, and maybe even for small GitLab instances or small products. For bigger products, what they have is Kafka, which, as far as I understand, is completely mature in their system, so you can use it; there's no risk of it being dropped anytime soon.
B
As far as I understand, it is a bit more effort to set up. I never set it up on my own, but my general understanding is that Kafka, because it's built to be clusterable from the start, assumes that you have multiple nodes and so on. So there's a bit more thinking involved, but in theory you can also set that up in any Kubernetes cluster, so it provides the same portability.
B
It provides everything the same, and I think if we move on with Snowplow, there should be another investigation into whether we should go with RabbitMQ, or whether Kafka wouldn't even be easier from the start, and it's only me, not being familiar with it, who would rather pick RabbitMQ.
A
Yeah, I guess my follow-up question is: I just wonder if that's something we have to figure out before we decide to move forward with Snowplow, in terms of making sure it is packageable. RabbitMQ definitely sounds easier from an infrastructure point of view, but if they decide to drop it, then we don't want to be caught off guard as we're, you know, migrating away from Jitsu and having to reconfigure part of our pipeline there, or the cluster image.
B
Yeah, we could move forward, for example, with trying to set up something similar with Kafka.
B
It shouldn't be too much of a hassle to do that locally too, just to verify that it's also not that much more work. Because as I understand it, Jitsu, I think, is even internally building something based on Kafka as well, but I'm not 100% sure about that.
E
Since the support for RabbitMQ is experimental: are they tracking anywhere publicly what they need to achieve, like a set of criteria, before they mark it as GA, or before they decide to cut it loose? Because if it's just marked experimental, we don't necessarily know how that's different versus "it's experimental, but these are our steps before we sign off on it as fully supported". That might help us make a decision here.
B
I think they have a public roadmap, or at least they had one; I need to check into it again. To my knowledge, right now it's kind of: this is experimental, we are starting to do it, we are actively developing it. They didn't lay out any criteria around their decision yet, wherever I looked, but I think we can definitely dig a bit deeper there to figure this out.
E
Okay, cool. Because also, if we can connect with whatever teams are responsible for this and say "hey, we at GitLab are interested in consuming this", that's extra customer and market-usage data for them, to help prioritize it if the question does come up with them, you know: is anyone actually even using this?
B
Yeah, that's something we could actually do. I think that's actually a question from my side, because I'm not sure how we are handling this currently with Jitsu, how aware they are of our usage. Because Snowplow makes their money with a deployed version of Snowplow, where it's kind of: you don't need to take care of it.
B
"We deploy it into your cloud, into your AWS or GCP account, and we take care of managing it." So I'm also curious about the relationship aspect: would we tell them what we want to do, and how do we handle this? Because potentially there's some conflict of interest in us going onto their turf by providing a hosted version of Snowplow.
E
On the Jitsu question: I'm part of their Slack, but I think that's about as far as any sort of formal relationship with Jitsu goes right now.
B
If there are any questions, also feel free to try it out and voice your questions there. I think that would be a good place to document all the questions and start discussing them, so that we can figure out whether we want to pursue this further or go with Jitsu after all. I think, from my point of view, we just need to go through these questions once now and then make a firm decision: okay, this is the way we want to go.