From YouTube: [REC] Key Meeting - Product (Public Stream)
A
B
So I was just looking at hila's comment on number three in the agenda. I tried to go through the issue, and I've been part of some of the conversations, but I'd love to nail down how we're feeling about our action plan around the gitlab.com data syncing issue, because we're a few days behind. So just a quick summary, I think, would be good here.
A
I'm happy to provide what I know. justin or kanan, do you all wanna…
C
We need to make sure we address it, but a downstream effect is that we're unable to replicate data and use it for analytics purposes. And one of the data team folks is on the call. As far as where we're at right now, the postgres 12 upgrade is sort of the big milestone the team has been working on, the infrastructure team. I believe that was delayed about a week or two, someone might know better, but that's where we're behind right now.
C
So once that's done, as far as I understand it, the replication will pick back up, excuse me, and we'll start to get data back in there. So that's fixing the immediate problem. I think we also need to step back and look at the pipeline as a whole and understand what we might need to do in the future to harden ourselves against potential issues. Again, maybe that's part of sharding, maybe there's other things.
C
D
E
Related: I can give you a high-level technical summary, and then we can go into detail if we have to. We have a primary, we have a replica, and then we have the data team reading from the replica. The replica is backed up; that means it doesn't have the latest information from the primary. It's backed up by about three and a half days or so, so it's stale and it's behind. This can often happen when you have a lot of load on your primary.
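To make the lag figure above concrete, here is a minimal sketch of the arithmetic; the one-day freshness budget and the timestamps are illustrative assumptions, not values from the meeting. (On a real Postgres replica you would read something like `now() - pg_last_xact_replay_timestamp()` instead of passing timestamps in by hand.)

```python
from datetime import datetime, timedelta

# Assumed freshness budget for analytics reads; illustrative only.
STALE_AFTER = timedelta(days=1)

def replica_lag(primary_last_commit, replica_last_replay):
    """How far behind the replica is, as a timedelta."""
    return primary_last_commit - replica_last_replay

def is_stale(lag, budget=STALE_AFTER):
    return lag > budget

# "Backed up by about three and a half days or so":
primary = datetime(2020, 4, 24, 12, 0)
replica = primary - timedelta(days=3, hours=12)
lag = replica_lag(primary, replica)
print(lag.total_seconds() / 86400)  # 3.5
print(is_stale(lag))                # True
```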
E
So then the primary has to decide whether it's going to serve operational use cases or try to catch up the replica. Some trade-offs can be made to say: catch up the replica later, continue to serve the operational use cases first. That can create this data drift, and our data pipelines now have to understand that. Okay, some tables are 10 days old, some tables are three days old; can I really build my dashboards? Probably not, so all my dashboards are playing catch-up.
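The mixed-age situation described here ("some tables are 10 days old, some are three days old") can be expressed as a simple freshness gate a dashboard job might run before trusting its sources; the table names, ages, and budget below are hypothetical, only the mixed-staleness idea comes from the discussion.

```python
# Hypothetical last-sync ages (in days) per warehouse table.
table_age_days = {"users": 3, "issues": 3.5, "ci_builds": 10}

def dashboard_ready(required_tables, max_age_days=1.0):
    """Trust a dashboard only if every table it reads is within the budget."""
    return all(table_age_days[t] <= max_age_days for t in required_tables)

def stalest(required_tables):
    """The effective staleness of a dashboard is its oldest input."""
    return max(table_age_days[t] for t in required_tables)

print(dashboard_ready(["users", "issues"]))  # False with a 1-day budget
print(stalest(["users", "ci_builds"]))       # 10
```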
D
F
And so I don't know the technical answer either, but I do know that, for instance, christopher gave me a heads up about this yesterday. This is his productivity metric, the development department MR rate, and he's like: this chart is like 20 days behind. So I don't think what's been talked about so far is the root cause. I think we need someone from the data team or the infrastructure team to determine why.
F
Why is the data warehouse behind, or inaccessible, or out of date? I know that, from a people perspective, there is a dependency of the data engineering team, which sits within finance, on our infrastructure department, and our infrastructure department has obviously been busy doing the pg-12 upgrade and other things. But I think it's as simple as: something's broken, they are unable to prioritize fixing it, and our data team is like, we can't do it. So there's been this long-standing question of how we reduce that dependency.
F
How can we build the capabilities in the data team where they can be self-sufficient managing the infrastructure, versus right now, where I think they're stuck at a higher level? But, like I said, we should get infrastructure to answer exactly, or more precisely, what's going on there.
G
We do need to get the infrastructure team involved here, and I don't know who just put in this summary from dennis, but it's a good summary, talking about the end-to-end process and the lag situation.
G
I think the other thing we need to talk about is that it's not, you know, I'm not trying to create a kind of simplistic view of it, but it's actually very complex because of the different data areas. So it's not just one database, right; we're grabbing pieces of data because of the way the dot com platform is structured.
G
So I don't think we want to take this entire meeting up, but we can definitely dig into some of the root cause, and the reality is: the replica is not keeping up to date to then feed a data pipeline into the data warehouse. So that's what we have here.
G
I think one of the big root causes, as eric pointed out, was to get that upgrade done for postgres 12, which has been pushed to may 8th, and then we're going to see if we get some real improvements. And then I think there's some architectural designs that we need to dig into. So I know infrastructure and the data team are working hard to understand, end to end, the problem.
D
F
While this meeting is going on, why don't I poke around on slack and just ask some of the people involved. This meeting can move on to some of the business questions, and if I get an answer in the next 20 minutes, I'll bring it back here so we have it; if not, I'll get it async and I'll push it over slack so we know. Yeah.
D
B
Yeah, so on to the business question. One of the key pillars in our operating plan is the security pillar, and it looks like we're doing really, really well, but I'd love to hear from the team how they think their roadmap is going and progressing towards the aspirations they set out during the planning process. And then I have a follow-up, which is the thing I'll talk about at the top of the hour.
H
Yeah, I would say we're making really good progress driving the maturity of Secure.
H
In fact, we are in the process of doing the category maturity scoring for SAST, and it's actually scoring just short of lovable, so we'll be moving to complete here.
H
Yeah, I'd say we're a little bit behind where we initially thought we would be at this point. That has to do with some of the rapid action type stuff. Over the last couple of months, we actually moved our entire container scanning, our entire dependency scanning team, to working on database optimizations for a couple of releases, so that's kind of slowed us down a little bit there.
H
The last part is, when we decided to put hiring on hold last year, we weren't done staffing some of the teams, so that's also slowed us down. But I will say it's a big credit to todd and wayne for being able to optimize within the teams to keep things still moving forward, despite sometimes having two or three developers on a…
B
H
B
No, two parties; sid wants two parties, so good. Just one follow-up on the integration of the acquisitions. I saw that, and let me get my details correct, because the names are very similar: I saw that the peach fuzzing API Security is scheduled by the end of july, but the fuzzing API did not have a date. I guess there was a dependency there. I'd just love more color on that and how that's going, and do you feel good about that?
H
Yeah, so I will say there's a little bit of confusion, because API Security is the name of the peach product we acquired, and we're using it for two different purposes. The API fuzzing, I'll talk to lra, we probably should move that to saying complete, because the API fuzzing part is fully integrated and it's running in the pipeline.
H
The second part, which is in the DAST row, is still ongoing; we're doing integration. After, I'd say, the february milestone, where we considered API fuzzing done on integration, we started doing the work on the DAST side, and I checked with the team today and we're still on track for the summer time for that to be fully integrated.
B
It does, yeah. I just realized we hadn't checked in on the integrations in these meetings, even though it's reported on every time. So thanks for continuing to push that. I know it's really good, and it's a great integration.
I
B
D
Yeah, I think I answered my own question, but it's remarkable how much more conversion the company accounts have, and I don't think I can make a case for the non-company accounts. So I'm going to put it on the egroup agenda for monday to stop signups with free, non-company email addresses.
J
So I think one thing we should look into there: there's definitely kind of, I kind of think of it like dark matter, it's like dark ARR. We get these engineers that sign up, that use it personally, and they recommend it to their company, and that company then purchases. We do see that in user interviews: when we talk to signups, they talk about their personal use and then recommending it to their company later on.
J
I don't necessarily disagree with you that we should prioritize company signups, I definitely think we should, but I think we should leverage the data team and the product analysts and try to understand what that correlation is between personal users and getting companies to sign up, and whether we can understand that at all.
D
Cool, that makes a ton of sense. I'll just remove it from the egroup agenda, and thanks for sharing this.
B
And sid, the marketing team also just made some adjustments to their lead scoring to account for that capability as well. So it's good; I think there's good alignment around this problem and understanding of what's happening, and I think it's going to be a good optimization.
I
Yeah, one thing I want to add: cinder's analysis is more focused on using company email. Ours is slightly different: we kind of filter the data by whether, when they sign up, they indicate "I'm signing up for company use." So those are slightly different, but we have a data issue; we'll talk with marketing and finance to see which one is better in terms of more sensitivity and also lower effort to implement.
K
So it really makes a lot of sense to prioritize business emails, and if they do have personal emails and business emails, I think we're better served just requesting their business emails: just give us your business emails and we'll continue communicating there, rather than encouraging the personal emails as well. And then the pricing change, when we took our price up, basically exacerbated the discrimination between the amount of ARR we got from our personal and business emails, so that was the other, I guess, economic shocker there.
D
J
Yeah, totally agree. I think gila and I can partially take this on with the product analytics team and start digging into the relationship between the two. Thanks.
B
Great. One other part of the operating plan I wanted to check in on. Sorry, are you done? Did you finish with that topic? Yep, awesome, cool. So one of the other things the operating plan talked about was product qualified leads, that concept, and I know this came up in the compass meeting, but I thought I'd just check in here, and I know we are adding two resources within the growth team to focus on that.
B
But I'd love to get a sense of how we're thinking about the timeline to start to do that and start feeding that into salesforce.
I
Yeah, so those two resources are still in hiring, but sam and I have been thinking about the roadmap, so we added a PQL section into the growth handbook. We have some MVCs that are already live, and with the resources we can expand on those, but we are already kind of making some efforts there. sam, you can add more details.
J
Yeah, so we have some very light MVCs in the product; I mean, it's not driving volume at this point. Once we get these two additional resources hired, ideally we'd really start driving volume in the coming milestones. My plan is to really start to work with the UX team to build a backlog for the PQL work so that, as these people get hired, we can hit the ground running, and I'm happy to share that with you as it gets built up.
B
D
Cool. This is a very positive surprise, to see the unique monthly active users growing six percent month over month. That's pretty extreme. Now, march was a bit of a longer month, but still, six percent; that's a really good number. Actually, I'm surprised that it's outpacing the paid user growth. Does anybody have any idea of why this is growing so fast?
L
I guess I'll just share, from looking at things like SMAU for the ops section: we had a general dip through november, december, january and february, and I had assumed that that was because of the shortness of months and people not working various days, and we also saw a significant spike in march that basically put us back on track to where we were in the fall.
A
Yeah, that was a very common pattern. I don't know how to explain why it grew faster than paid, but definitely there was a big ramp in usage in march, pretty much across the board. Yep.
B
Thanks. So, my last question; I've been trying to articulate it. Sorry, I was a little behind today, so I didn't have as much time to explore the product materials, and they're really, really good, there's a lot of great depth in there, so thank you to the team. I've been trying to articulate this question, so I don't think I did it justice, but I was really interested in the experiment.
B
The growth experiment, where we know that as we invite users and we get more user engagement, that's really good for the user experience, really great for adoption, and then great for them converting to paid. And then I saw a note about how we're optimizing that flow from, I think it was, 20 clicks to 10 clicks, which I think is great. And then the other thing I noted is that we started doing a hard email confirmation.
B
I assume that means you're going to verify email as part of the process, which is really good, because, you know, obviously there's spam and things like that, and it's good to do that. I guess I'm just trying to understand the learnings the team had from that experiment, and then does that apply to any of the registration steps too, like when we verify email, and things along those lines. So I guess that's my question.
J
So I can speak specifically to the invite side. When we turned on hard email confirmation, it broke some of the existing logic in understanding that someone was coming from an invite email.
J
So we quickly fixed that, and that's where we saw a big uptick in the accepted invite rate again. But there's still just a lot of low-hanging fruit in optimizing that flow: right now the user also has to go back to their email to confirm it, even though they accepted a link in their email.
J
We have an open MR right now that's fixing that, so we're just chipping away at it, and I'm really confident we can get the clicks down by at least 50 percent, which should be a nice improvement on the invite experience. And then myself and the other growth PMs, along with gila, are actively thinking about how we can unify this sign-up and onboarding experience and really get it as crisp as possible. I know that's something mike is thinking a lot about and actively working on with his team, so I don't know if he wants to add anything there. Yeah.
M
I can jump in there, sam, thanks. There are also, just in the actual new account flow, some challenges with the hard email confirmation. So, things like: you confirm the email from a link in the email you receive, but then you land back on a sign-in page that's empty, so you've just entered all this information. Some of the things we've been exploring is how we can remove some of the friction there. So can we, as a first pass, just fill in your email address?
M
You don't have to type it in again; just little things to remove all the clicks and friction right now, because, yeah, it's 22 clicks for the invited team member, and it's nearly as much as that for the sign-up as well. So we explored things like a magic link, auto-logging you in there, but there's actually some security challenges with that. So we're really just trying to remove as much of that friction and the difficulty of getting signed up, so they can get into the product and start experiencing the value.
M
B
M
There is, and then even if you think about getting into the product and getting started, there's a whole bunch of friction there, where we want to start removing aspects from the first mile flow, bringing them into, like, a dashboard where you log in, and start to get some data collection there. The continuous onboarding work that sam's team is doing, that could be the home for it.
M
I want to add: melissa has been collaborating with us as well; it's a group effort. And this is kind of a case where, because we're experiencing a high volume of spam, we're enhancing a lot of security measures, but every time we implement things like this, it will have a real business impact. So we are collaborating with various teams, trying to minimize those business impacts.
F
Yeah, so from conversations in slack, for clarification: there isn't a dedicated replica of the gitlab.com database today for replication; it's reading off of the real replicas.
F
There is some delay, some lag, between the primary and the replicas, but, as indicated, if it were a three or 20 day lag, the application wouldn't work; that's not happening. There's a relatively small amount of lag that really doesn't affect the gitlab.com experience. However, the process that the warehouse is using has zero tolerance for any lag, so it seems like it's just creating gaps in data the moment we cross some threshold, and no data is copied over for certain things.
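A tiny sketch of the failure mode being described, next to one common alternative: the zero-tolerance extractor drops the whole pull once lag crosses a threshold, leaving a gap in the warehouse, while a watermark-based extractor copies whatever the replica has consistently replayed and picks up the rest next run. All names and numbers here are illustrative, not the actual pipeline.

```python
def naive_extract(rows, lag_hours, max_lag_hours=1):
    """Zero tolerance: if the replica lags too much, copy nothing at all."""
    if lag_hours > max_lag_hours:
        return []  # the warehouse gets a gap for this window
    return rows

def watermark_extract(rows, replayed_up_to):
    """Tolerant: copy everything up to the last consistently replayed point."""
    return [r for r in rows if r["updated_at"] <= replayed_up_to]

rows = [{"id": 1, "updated_at": 10}, {"id": 2, "updated_at": 20}]
print(naive_extract(rows, lag_hours=5))            # [] -> a gap
print(watermark_extract(rows, replayed_up_to=15))  # keeps row 1
```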
F
Like massive pulls, like CI builds and whatnot. So we've got sort of a non-distributed-systems-tolerant process that's very naive, and for any of a myriad of reasons that needs to be fixed and improved eventually, anyway, down the line. There is a short-term solution, though, and because we moved the pg12 upgrade out to may 8th, it looks like we may have somebody able to look into it on monday and just create a dedicated replica that won't have any lag and will kind of just work.
F
So that way, the warehouse can continue to run this non-fault-tolerant process and get what it needs, and then the charts, like the one that I displayed that are 21 days out of date, will be sort of replenished. Long term, we're still going to have to develop a more tolerant process for the warehouse to use, because it can't be so brittle, where the moment it can't talk to anything it just flakes and zeroes out and doesn't record data. So, short-term fix: maybe someone can do that monday.
F
Long-term fix: we'll still have to look into that process, because in any scenario databases are up or down, or lag is introduced, or the network between two things will go down, so it has to be more fault tolerant. But we think we should be able to have a short-term solution we could possibly have someone do in the monday time frame. Awesome.
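One concrete starting point for the "more fault tolerant" direction described above is simply retrying transient failures with backoff instead of zeroing out. A minimal sketch; the flaky fetch function is an artificial stand-in for a replica read, not anything from the actual pipeline.

```python
import time

def with_retries(fetch, attempts=4, base_delay=0.01):
    """Retry a flaky read with exponential backoff instead of giving up."""
    for attempt in range(attempts):
        try:
            return fetch()
        except ConnectionError:
            if attempt == attempts - 1:
                raise  # exhausted: surface the failure, don't silently drop data
            time.sleep(base_delay * (2 ** attempt))

calls = {"n": 0}
def flaky_fetch():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("replica unreachable")
    return ["row"]

print(with_retries(flaky_fetch))  # ['row'] after two transient failures
```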
I
F
And steve and I have a one-on-one later, so we'll do a little write-up and we'll share that doc around so people understand current state and next steps, so it's clear to everybody and we're all on the same page.