From YouTube: Quality Group Conversation
B
Sure, thank you. So everybody, welcome to the group conversation for the quality department. I'm going to quickly link where the slides are; it's in the document, but I'll link it in the chat as well. Before we start, we'll just let people read the slides for a few minutes, and then we'll get started on the conversation.
B
Sure, thanks. So GitLab Insights came about as a way to track the metrics on bugs, and it evolved from that into tracking the velocity and the metrics of our engineering department.
B
There are multiple uses for this. First, we need to know that we're moving fast, and then whether it's a reckless velocity or a good velocity. If you're pushing merges fast, but the number of bugs is going up, or we aren't resolving the bugs fast enough for the SLA, and one of those metrics is lagging, that's kind of a reckless velocity, and that's not productive, because then you need to fix those bugs again when they go to production, right?
B
So
a
good,
a
good
productivity
Ignis
velocity
is
I,
give
you
if
the
troopers
are
going
up
and
then
the
bugs
are
going
down
and
the
Bucks
are
not
a
lot
getting
faster.
That's
that's
a
sustainable
velocity
and
we
want
to
move
in
that
manner
because
I'm,
that's
what
that's,
what
I
would
categorize
as
being
productive
because
you
were
your
main
shoe
is
high
quality
from
the
start
and
it's
not
gonna
come
much.
Come
come
back
to
buy
this
towards
the
end
and
from
then
on.
A
And that's actually what I was going to say: typically those slides are available, people put them up before the meeting, but it doesn't always mean that everybody is reading them beforehand. So it is helpful, I think, to link them at the beginning in the chat, and, you know, if people are prepared with questions they can dive right in.
B
So slide number six is the GitLab Insights migration. I think we've made significant progress here; there's a cool proof of concept that actually renders the graph inside the product itself, and there's just some scaffolding going on, so it's looking great so far. We expect to have the production-ready graphs on issues and bugs inside the product itself, and this is something we are very excited about.
B
It's
a
big
wins
and
the
other
big
win
is
a
slide
number
nine,
which
is
we
will
we
were
able
to
get
tests
to
be
green
for
six
days?
That's
the
max!
We
have
so
far
a
lot
more
fault,
tolerance,
McCaslin
is
being
built
in
and
a
lot
of
it
also
tied
to
how
we
do
the
deployment
and
release
and
staging,
because
it's
not
always
the
latest
master
but
yeah.
This
is
the
lot
of
progress.
It's
not
perfect
yet,
but
yeah
kidding
we're
getting
that
to
be.
B
Great question. So we do want it, and enlisting help would be great, but we need to be specialized, right? I think enlisting help is great, but we need to make sure that they can help in a productive way, which means we need to build the tests in a way that is fault-tolerant and easy to write and read. If we enlist help and people spend a lot of time just reading the test framework and understanding the tests, it's kind of a waste of other people's time.
B
There are still a lot of test plans to be closed out, and a lot of that work has been facilitated and done by the quality department team. That would be the first iteration of enlisting help, while we work on things like the test retries and the dynamic locator validation that Dan has added. Those are going to pay dividends down the line in making the test framework more fault-tolerant.
B
Another thing I wanted to highlight, which is a really high priority, is performance tests. We are running into this head-on, and this is a performance test bed that's going to be shared by Stan, some of the engineering fellows (as mentioned earlier), and also the memory team as well. Right now, staging and production are not a good representation of the environments that on-prem customers are going to run. Ideally they're running a 10k-user environment, but this is our first iteration, so we're not starting by creating an environment with 10k users.
B
Creating an environment with 10k users is not a first iteration; it probably comes later, so we have to transpose the load down. If they're running 10k users, we transpose that down to our smallest environment, which maybe supports 400, and we'll load it up with 500 or 600. That tests it at scale as a first iteration, and then we iterate forward. Right now the focus is on NFS.
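[Editor's note: the load transposition described in this turn amounts to simple proportional scaling. As an illustrative sketch only; the function and the numbers are hypothetical, not GitLab's actual test tooling.]

```python
def transpose_load(reference_users: int, reference_load: int, target_users: int) -> int:
    """Scale a load profile from a large reference environment to a
    smaller (or larger) target, keeping the load-to-capacity ratio."""
    if reference_users <= 0:
        raise ValueError("reference_users must be positive")
    return max(1, round(reference_load * target_users / reference_users))

# Illustrative: a 10k-user reference driven at 12,500 simulated users,
# transposed to an environment that supports 400 users.
print(transpose_load(10_000, 12_500, 400))  # prints 500
```

Loading the 400-user environment with 500 users, as mentioned on the call, corresponds to driving it slightly above its rated capacity, the same way a 10k environment would be stressed at around 12.5k.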
B
That's
where
a
lot
of
enterprise
companies
configure
gitlab
us
as
we
get
better
this
I
envision,
more
environments,
there's
people
they
could
be
a
NFS
and
then
not
NFS,
and
then
maybe
a
bigger
environment
later
fit.
This
all
relies
on
all
the
great
books
ready.
The
structure
teams
are
doing.
I
know
the
Anthony
is
working
on
making
creating
new
test
environments
easier
by
terraform
and
all
the
scripts,
so
I
it's
dependent
on
that
as
well.
But
this
is
the
first
iteration
it.
It
call
it's
a
lot
of
help
from
everybody
to
get
this
happen.
E
B
No, it's recent, like yesterday's recent. Okay, so the performance tests that we run on staging and later on production are the same sets of criteria that will be run on-prem as well, but maybe transposed down, as loads always are. But yes, it's on our radar, and if you have any more information, please send it to us; it could be something that is production-related in the short term.
B
But do you know specifically what part of the application is slow? Is it issues or merge requests? In the performance tests that Rama has been working on, we create an issue with a lot of labels, we create a merge request with, like, five thousand lines of diff, so we're making progress in that regard. But if you have more specifics, please do share them with us, so we can help close out the gap.
B
I see Dahlia on the call, so I might just pull her in for a brief discussion on throughputs. Is there anything else that you think we're missing, Dahlia, for the dashboards and throughputs that you would like to see in the product?
F
So for throughput itself, I think we're in a good place as far as that chart goes, but I did like that you mentioned that the development metrics we're trying to build are complementary. We don't just want to track throughput, we want to track quality; we don't want to only look at throughput, we want to also look at cycle time. So yes, there are definitely additional metrics that we'd love to put together to start to have a comprehensive dashboard.
B
Yeah, I just want to use that to highlight chart number seven, because we do have an engineering working group, and it's in the handbook, open to everybody at GitLab for transparency, underscoring that the average merge requests per month and throughputs are just a number. I think productivity cuts two ways: if you're moving fast but you're incurring a lot of bugs, that's not productive, because it's going to come back and incur costs later. So yeah, I just want to underscore the effort from that working group there.
D
I think this falls under that. If you had an epic, Dalia, to link to that, that would be awesome. But the reason I raise it is that, going through something like a material feature freeze, it was interesting to see productivity shoot up quite a bit, but obviously it's a very high-stress time as well. I'm wondering what, in terms of maybe developer happiness or comfort or stress levels, we're actually measuring or taking steps to measure, and how that contributes to quality or even just throughput. Yeah.
F
Sorry, so thank you for the question, Lucas. Yes, absolutely. If you look at my OKRs for this quarter, I am working on an MVC for tracking this. I started off calling it a happiness survey, but I think happiness is a bit of an overloaded term, so what I'm going with is a pulse survey. The intent of it is to just get a weekly pulse from the team on how they feel about their work, how they feel about their manager, how they feel about the company.
F
Obviously these are all contributing factors. This is not going to be the kind of data where I look at it and say: if all the numbers are high, then my teams are happy and everyone is enjoying life and it's fantastic. It's just a good data point, and one of the ways to measure something that is really hard to measure and sometimes very subjective. So Lukas, you have not been exposed to this, because I've limited the MVC, just in the interest of learning and iterating quickly, to the Configure back-end and Monitor back-end teams.
F
So I'd love to hear their thoughts. I've been getting some comments, and the managers have been able to get some feedback on that. But that's the purpose of it, and my hope is that, as this practice matures a little bit, in the next iterations I would love to include other teams as well, if they would like to participate.
F
So that's another metric that I'm proposing to add to this development dashboard, because I agree with you: in a high-stress time with high throughput, where the teams are burning out, you need a different measure to get a sense that that's not a very sustainable practice. Again, does that answer your question, or do you need more clarification? I will link to the epic as well.
B
Yeah, I think we're going to do an MVC on happiness too, which I think Dahlia is leading. So yes, happiness is on the radar, and I think engineering management is really well aware of that aspect.
B
Thank
you,
sir.
So
we
do
have
plans
to
incorporate
usage
pings
for
on-prem
customers
and
maybe
record
hard
times
from
premises
in
the
wild.
We
might
not
be
able
to
do
that,
but
we
would
love
to
collaborate
with
your
team
on
this
and,
if
you
fine,
so
essentially
record
like
the
disastrous
times
like
a
tree
ring
where
like
this
is
a
spike
in
CPU
is
a
spike
in
in
usage
and
then
yeah.
G
Basically, there's not any specific plan yet, but it is something we're talking about, because of course we need to figure out how we would operate that, how we would fund it, how we would communicate it as a feature to customers, and how it would get paid for, because of course it would require a moderately serious amount of investment to operate an entire monitoring cluster for our customers.
B
Okay, yeah, we'll keep you in the collaboration here. Once we are focusing on the on-prem testing on our side, we will also want to know if some on-prem account is suffering from hardships; that would give us information ahead of time, and we could contact them earlier on.
G
The resource usage is not terribly high. You know, the remote streaming protocol sends a lot of data, but it's compacted and reasonably efficient. It's more about operating the large-scale distributed storage cluster to receive data from many customers. Just to run GitLab.com, we have about a dozen different Prometheus servers monitoring various components, plus a Thanos storage cluster and all that kind of thing.
G
No, I would love to see that go forward, because it would make things really easy. Of course, a lot of customers are privacy-sensitive, and they may or may not want to do that, or it might take some convincing along the lines of: hey, look, this is the sanitized list of data that we're sending; it doesn't include any of your private data, it is just the performance numbers coming from the system.
D
You said we're not able to get to the GitLab Docs and GDK work, so I know it's not an OKR, but I'm curious: is it capacity that's blocking that? What are the initiatives? Is it really just around having capacity to maintain those projects, or is there active work on improving, like, the developer tooling?
B
Capacity is needed to start making incremental changes to improve the tooling. Productivity is also the ability to set up the GDK and make sure you can test things in your local environment before you send an MR, so these fall into that team's responsibility. We are just really stretched thin right now, and there are also some GitLab code reviews and maintenance of GitLab Docs that we would love to help with, but we're not able to get to.
B
I want to say thank you to the team for taking notes as well. I think we're at 25 minutes; Ashton, do you think we should just let people prepare to join the company call? I don't think there are any more questions. No, I think that's good, thanks. Great, thank you, everybody; this was the first time using this format for our department, so next time I will advertise the slides a bit further, in addition to the Google Doc. Thanks for participating, thanks for your time, and I'll see you next time.