From YouTube: Quality (Mek) Group Conversation (Public Livestream)
Description
Thursday, November 7⋅12:00 – 12:25am
A: Anyway, thanks for the question. Yes, I think we are going to adjust a little bit based on my conversation with Eric. I think the golden ratio is still one-to-one with a product group, but for more complex features, for example Geo, where you're setting up a primary and a secondary and really failing over, those numbers may change with the content you're looking at in those areas, and you add that exception class there. But we will try to stick to one-to-one, similar to the designers.
A: And if we split up the groups differently, then the numbers for the ratio would change. Cool, any other questions?
A: I would like to keep the ratio aggressive, because I want to be loud and clear about the way we do quality at GitLab: we're not the QA team that does the tests. It's the engineering teams who share the responsibility in writing the tests, and we're responsible for the testing infrastructure. The role is slightly evolving into developers for tests, so I think focusing on all the tests across the whole stack, and pruning the tests, that's the goal here.
A: Man, this is a hard one. This is a good question. I think there are two things that come to mind: deleting old tests and pruning them. I think we are really good at adding tests; sometimes we need to remove them. The lowest-hanging fruit is not that, but one that costs us more money, which is throwing more resources at it, and given the scale at which we're increasing the pipelines, that's an expensive route to take. And sometimes we need to be more drastic in testing the right things.
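The pruning idea above could be sketched roughly as follows, assuming (hypothetically) that each test's average runtime and the number of real regressions it has caught are tracked; none of these fields, names, or thresholds come from the call itself:

```python
# Hypothetical test-pruning heuristic: flag slow tests that rarely catch
# real failures as candidates for removal. Fields and thresholds are
# illustrative, not an actual GitLab tool.
from dataclasses import dataclass

@dataclass
class TestStats:
    name: str
    avg_runtime_s: float   # average pipeline time this test consumes
    failures_caught: int   # genuine regressions it has detected
    runs: int              # total executions

def prune_candidates(tests, max_runtime_s=60.0, min_catch_rate=0.001):
    """Return names of tests that are slow and almost never catch anything."""
    out = []
    for t in tests:
        catch_rate = t.failures_caught / t.runs if t.runs else 0.0
        if t.avg_runtime_s > max_runtime_s and catch_rate < min_catch_rate:
            out.append(t.name)
    return out

suite = [
    TestStats("login_flow", 12.0, 40, 5000),
    TestStats("legacy_export", 300.0, 0, 5000),  # slow, never fails -> prune
    TestStats("geo_failover", 240.0, 25, 5000),  # slow but valuable -> keep
]
print(prune_candidates(suite))  # ['legacy_export']
```

The point of the sketch is that removing tests is a data question, not a guess: a slow test that still catches regressions stays, while one that only burns pipeline time is a prune candidate.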
A: Yes, and a lot of those hang on the hiring plan, because we are slapping a band-aid on, sorry for using violent terms. The biggest fire was just the release, right? We need to make sure everything that goes out, that runs every day on staging, is crisp, to help the delivery team deploy, plus the triage that comes in from the enterprise side. That's a full plate for the team; we have absolutely no resources, no staffing, to plan ahead. There are some discussions now, and I appreciate Dava and Michele bringing this up: they would like us to help with risk analysis earlier on, not at kickoff, way before kickoff. So when the kickoff comes, people know exactly what tests to write, and then it gets factored into capacity planning. I think we would love to reach that state.
B: I've got one more, yes please, on slide 11. I'll wrap up as soon as I'm done, but on slide 11 I was looking at the team-wise coverage percentage. Can you explain what that metric means, just so I make sure I understand it?
A: Sure, and we have the whole department here as well, more than happy to help take notes. This is the enterprise GitLab end-to-end test gap; this is what we planned. I believe you can click on the link here to the list of test case coverage that we wanted to close out in Q3. The progress is overall 63%. This is the end-to-end planned test coverage. It doesn't capture the unknown unknowns, just to be clear, but what we know we need to have tests for; and then the bottom one is progress.
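As a rough illustration of the metric being described (the exact formula isn't stated in the call), the planned-coverage percentage could be computed per team as closed planned test cases over total planned; the team names and counts below are invented, chosen so the overall figure lands at the 63% mentioned:

```python
# Illustrative sketch of a "team-wise coverage percentage": planned
# end-to-end test cases closed out vs. total planned. All data is made up.

def coverage_pct(closed: int, planned: int) -> float:
    """Share of planned test cases already implemented, as a percentage."""
    if planned == 0:
        return 0.0
    return round(100 * closed / planned, 1)

planned_tests = {"Create": (45, 70), "Verify": (30, 50), "Plan": (51, 80)}

for team, (closed, planned) in planned_tests.items():
    print(f"{team}: {coverage_pct(closed, planned)}%")

# Overall progress across all teams: 126 / 200 -> 63.0%
total_closed = sum(c for c, _ in planned_tests.values())
total_planned = sum(p for _, p in planned_tests.values())
print(f"Overall: {coverage_pct(total_closed, total_planned)}%")
```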
A: If you look at it by feature tiers, features in Ultimate are at 61%. I believe we went with Starter first, because that's where most of our customers are, and we're working towards Premium. Ultimate is... sorry, did I get that wrong? Tania, Rhonda, did we start with Premium first or Starter first?
A: So I see them smiling; we only have one person, and they were also pulled into, you recall, the load balancer issue on GitLab.com. We were helping to reproduce that, and that was another bigger fire we were pulled into before. So we couldn't make any more progress there, yeah.
B: Cool. This is just a suggestion: if Product prioritizes this area, one thing you might consider is a headcount reset. I know it's disruptive, but if you feel like this is an area... that was just my reaction. If Product says, "oh yeah, that's fine, we'll eventually get to that," that's great. But if we feel like this is a better area of focus than what's being prioritized, it might be the right thing to do at this point. Yeah.
B: Interesting question, because I think a lot of customers... well, I don't know the statistics on this, but I'd love to see whether this is validated or not. My suspicion is that a lot of customers pay for our product even though they're using free-tier features, just because of their scale; scale is a feature. So you can't necessarily just look at whether something is a paid-for feature, because it's the features that need to work consistently and work
B: well, all the time and at that scale. And that would be my biggest concern: in the case of Verify, if they're all free features but a vast majority of our customers are using them, and they start breaking in new and unique ways, that's going to take it from a lovable portion of our product to a completely unlovable one. So I don't know if we're necessarily in danger of that extreme scenario, but it just feels that way.
A: This might be the first time we are unveiling that number, so next time we look at it we'll have more background on it. But we did discuss that, and the intent came out of a retrospective with an existing customer; they requested it: hey, please make sure your previously released features have end-to-end tests to cover them. That is where it's coming from. If I had more resources, we would definitely have the coverage for the free features you spoke about.
B: And to be clear, this is not a commentary on performance by any stretch of the imagination; it's all about areas of focus, and just thinking about it from that perspective. So it sounds like your diligence was done here. Thanks for entertaining my foray into this area; this is what good conversations are for.
G: Okay, I had this question regarding the slide on workload and the general responsibility of everyone for testing. Has there been any thought on whether offering some training focused on effective testing methodologies would be helpful in spreading that load out, especially, you know, in terms of newer features coming online?
A: Should we do something like that again? To be clear, maybe the carrying of the message, the marketing, wasn't done as effectively, so the turnout wasn't great. But we're definitely thinking of training materials for the entire org, and I could start with a page in the quality engineering handbook that instructs, or gives tips on, how to test and what to consider, especially around the limits and scope that myself and the team have been able to provide; they're called test heuristics.
F: We worked quite heavily on documentation in the previous quarter too, but I think we can maybe explore other resources as well. One resource that I usually use is Test Automation University; they have lots of free courses on many different topics in test automation that could be used by anyone inside or outside GitLab, inside or outside the Quality department as well. So I think we should explore other resources too.
H: I'd just like to add that I think it's valuable for the QA counterparts to establish relationships with their teams. A lot of those relationships are just being built now, but I think that will improve things, because it's one thing to train and know what you're doing; it's another to have regular, frequent conversations and use QA support like we should.
B: I'll keep my camera off, but I'm curious about the application limits on slide 25. I see it's a large epic with many, many issues; maybe just a quick summary of the key challenges of either setting, or understanding, what limits should be enforced.
A: I'll be completely honest: I haven't read through the whole epic yet, but I appreciate the engineering work, especially from Development and Infrastructure, that went into the research. I think we're getting data now. The concern is the challenges: obviously, the unknown unknowns; we don't know the unknown unknowns.
A: The ones that we know, I think documenting them should only take about a quarter; just put it in the handbook. Because we were starting with an MVC, start with one section and just put either Dev or CI/CD in. But yeah, the challenge will be: what have we not uncovered yet? Gotcha.
B: So the mystery factor, yes. I can help a little bit. Stephen, is your concern, or your question, more around what the limits should be, or more around just the general challenges we have in this area? Not a concern, really, at all; I was just curious, yeah. In other words, understanding what the limits should be is always, I think, the most difficult part, and it got me thinking about this morning: we had an outage where some resources got saturated.
B: Basically, in implementing these limits, I think we have a pretty good idea of what the initial draft limits should be, and once we have those, we can figure out whether there's a potential impact with scaling associated with them. In general, because of our iterative nature, the product was not thought of in terms of putting limits in by default. In fact, if you read some of the documentation, it says something to the effect of:
B: if you put limits in by default, that's actually a negative customer experience. But at scale, on a .com site, you need to have limits in place, just because people do things in new and creative ways, and that can change previous assumptions about how the system is being used, which then causes other system effects.
B: So it's just a matter of looking at each area, trying to determine what the limits should be, which shouldn't take a whole lot of time, and then having Product appropriately prioritize them; that was kind of a distribution based on previous outages associated with this. This morning's is kind of interesting, because my understanding, at least at this point, is that it was primarily due to imports, and that's a good example where we probably need to think about rate limiting.
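The rate limiting mentioned here could be sketched, for example, as a token bucket: each client gets a budget of requests that refills at a fixed rate, so a burst of imports gets throttled before it saturates shared resources. This is a generic illustration under that assumption, not GitLab's actual implementation:

```python
# Generic token-bucket rate limiter (illustrative; not GitLab's actual code).
# A client may burst up to `capacity` requests; tokens refill at `rate`
# per second, bounding sustained throughput.
import time

class TokenBucket:
    def __init__(self, rate: float, capacity: float):
        self.rate = rate          # tokens added per second
        self.capacity = capacity  # maximum burst size
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self) -> bool:
        """Consume one token if available; otherwise reject the request."""
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=2.0, capacity=5)
results = [bucket.allow() for _ in range(7)]
print(results)  # the first 5 pass; the burst beyond capacity is rejected
```

Setting the initial draft limits discussed above then becomes a matter of picking `rate` and `capacity` per endpoint from observed traffic and past outages.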
A: Excited for our hiring pipeline for quality engineering managers. I think we have two really strong candidates right now that we can get; it'll be really, really good for GitLab. So, excited for the hiring, and excited to contribute, actually.