From YouTube: [REC] Quality Key Review (Public Stream)
A
And we are now live streaming. Welcome, everyone. This is Mac, VP of Quality, and this is the quality key review for the month of November, calendar year 2021. I've already provided an overview of our metrics, and I hope that everyone has digested it, so we can jump into questions right away. I believe there are some questions from Sid already, so we can go ahead to number five.

B
Yeah, thanks for that, Mac, and thanks for the video. I just want to say the metrics look great, so great work. It's really impressive!

B
Very small thing, but can we remove the header that is in the agenda link in the calendar invite? It sends you to the wrong place in the doc, and I'm always disoriented for like two seconds.

B
It's a bit of a more personal-interest question, but have we heard of Data Robot? Apparently it's a popular way to do end-to-end testing, but I haven't heard of it. I wonder whether that's because I'm from the Ruby on Rails clan and they don't use it, but it seems that it has companies using it. If it's any good, why haven't I heard of it? That's basically my question. Probably because I've been out of the game for a long time, but I don't know.

B
I think I'm confused about what it's even called. Give me a second... Robot Framework. That's what I meant.

B
I was looking for a kind of open-source UiPath slash Microsoft Power Automate. I wonder whether there's an open-source offering there.

B
We want to get time-to-first-failure down, and one way to do that is to run the tests that failed last time first. I wonder what the fastest way to do that is, because I think I heard something like "oh, we need all kinds of infrastructure before we can do that", but it seems to me you can just look at the log of the last build and see where tests failed.

B
If a test failed, just run that test again. That seems like, I don't know, a two-day job if you do it only for RSpec or something like that, something that we have a lot of tests in. But I'm probably missing something.

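A minimal sketch of the "just look at the last build" idea, in Python: parse a JUnit-style report from the previous build (RSpec can emit one with a formatter such as rspec_junit_formatter) and build a command that re-runs only the failed specs first. The report contents and spec paths below are hypothetical stand-ins, not anything from our pipeline:

```python
import xml.etree.ElementTree as ET

def failed_tests(junit_xml):
    """Return spec locations for test cases that failed in a JUnit-style report.

    In practice the XML would come from the previous build's artifact;
    here it is passed as a string for illustration.
    """
    root = ET.fromstring(junit_xml)
    failed = []
    for case in root.iter("testcase"):
        # A <testcase> with a <failure> or <error> child did not pass.
        if case.find("failure") is not None or case.find("error") is not None:
            failed.append(case.get("file") or case.get("classname"))
    return failed

# Toy report standing in for the last build's artifact.
SAMPLE = """
<testsuite>
  <testcase file="spec/models/user_spec.rb" name="is valid"/>
  <testcase file="spec/models/order_spec.rb" name="totals correctly">
    <failure message="expected 3, got 2"/>
  </testcase>
</testsuite>
"""

# Re-run only the previously failed specs before the full suite.
rerun = "bundle exec rspec " + " ".join(sorted(set(failed_tests(SAMPLE))))
print(rerun)  # bundle exec rspec spec/models/order_spec.rb
```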
D
Yeah, we essentially did what you just described: running the failed tests. We had to build a little bit of tooling, but we're evaluating results this month. It landed in 14.5, towards the end of the milestone, so at the next key review we'll have an update on what we found coming out of this. We have been doing selective tests, so for MRs prior to approval, running the previously failed tests is already happening with our minimal pipelines; the new tooling covers tests after approval.

D
It
was
really
just
internal
tooling,
like
in
internal
pipeline
tooling,
so
the
jobs
that
run
that
take
the
r
spec
artifact
and
run
those
so
you're
right.
It
wasn't
in
the
release
post,
but
it
because
it
was
more
for
internal
usage.
At
this
point.
B
Cool. And they'll need custom code for every framework, right? If you run tests in Python, you need different code to run those.

D
Yes, yeah: we can get the failed tests from the product, but piping that into whatever is running the tests is where the custom tooling comes in, correct.

B
Cool.

D
I would say it's worth exploring more and optimizing further. In our pipeline we have a lot of setup up front that still has to run before these jobs, so looking to eliminate that setup could accelerate this further, and that's the future iteration that I want to look towards. Cool.

B
It's probably hard to productize this, but I'm very interested in productizing it. I think over time we're going to go to probability-based testing, where the number of tests that you run, and the length of time you wait, depend on the risk. If this change is also deploying, if the tests succeed I'm going to deploy to production, then I'm going to run the full test suite. Versus an experienced engineer who doesn't make a lot of mistakes, doing something on a low-risk part of the code base, where we're pretty sure they're not even going to try to merge results like that: that's lower stakes, and I'm not going to run all the end-to-end tests, because we just tested that whole code path like two minutes before, on the previous commit.

B
So I think our approach of running a hundred thousand tests on every commit is not going to scale, and I don't think it should for other people either. Right now there's a lot of thinking and debate about this at a lot of companies. I think automation should take that role, and it should be a combination of an ML model, well, not even an ML model, probably some statistical factors, and it should make the call.

A
Thank you for the context and the strategies; I agree. I think running the tests that matter, at the right place and the right size, really resonates, and I can see that there are a lot of challenges in many other companies if they have to roll their own. If we can solve this and productize it, there's much to be gained here.

D
Yeah, and the other shout-out I'll give related to that is: we've had an initiative in EP to run fewer tests per MR, and our initial results on front-end and back-end test optimization are that we reduced from about 189,000 tests run to about 30,000 prior to approval for an MR. Once we reduce the flaky-spec impact, we can look to shift that later and work towards merge trains: run the full set of tests on the merge train and really evaluate that full risk at that point, if needed.

D
The error rate that we see is somewhere between 5 and 12 percent; it's hard to estimate with flaky test failures in there, so we're trying to eliminate those, and then look to refine this further and provide that feedback to the testing group for solution validation. At a high level, it just looks at the files that are changed, says these are the tests that map to those files, using dynamic test mapping, which Rails just has built-in tooling for, and runs only those tests.

B
I assume there are no other questions yet; let me double-check that. And then: GitHub recently came out with what we call merge trains, which we released a couple of years ago, if I remember correctly. Now that GitHub also has it, people have started to ask for kind of merge dependencies.

C
Yeah, this is very timely. We just had this exact conversation on a call with JiHu last night. This is something that they desperately need; it will speed up their processes quite a bit. Essentially, community contributors could also use this quite a bit, to help sort of untangle the dependencies of having to wait for something to be merged and make it into master before you can start to work on your next MR. Kyle or Mac, anything you want to add to that?

A
Yeah, so to add to the context: the need here is that there's a chain of MRs that is being upstreamed, and we need better visibility on how they interplay with each other, which one needs to merge first, and which dependencies need to be merged and tested first. So there is a need here at the review level, and I think the conversation is not only at the review level but also at the level of running pipelines and merging, and getting confidence there.

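The ordering question raised above, which MR in a dependency chain needs to merge first, is essentially a topological sort of the dependency graph. A toy sketch, with entirely hypothetical MR names:

```python
from graphlib import TopologicalSorter  # standard library, Python 3.9+

# Each MR maps to the set of MRs it depends on (which must merge first).
DEPENDS_ON = {
    "mr-feature-ui": {"mr-api"},
    "mr-api": {"mr-schema"},
    "mr-schema": set(),
    "mr-docs": {"mr-feature-ui"},
}

# static_order() yields dependencies before the MRs that depend on them,
# and raises CycleError if two MRs depend on each other.
merge_order = list(TopologicalSorter(DEPENDS_ON).static_order())
print(merge_order)  # ['mr-schema', 'mr-api', 'mr-feature-ui', 'mr-docs']
```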
A
So I think the two pieces match, and we are starting the conversation with our product counterpart on how to move faster in the review. I think there's a gap that we need to fill and plan out: what does this look like when it comes to the ops section, and when it comes to the test section? So yes, I think this is where we should put our focus to be sharper, yeah.

A
I can take the action to have that conversation with our product counterpart, and then we can review it the next time we have a key review; you can track it here. So, great. It's a low-pressure ask, yep.

B
And then we can also start making the roadmap for GitHub three years out, and then they'll also have it, which is kind of good; this industry should go a lot faster, so, cool. And I think the most important thing is an issue with kind of mock-ups of the interface, so that people can wrap their brain around it, because it's so abstract.

B
But this is not a group conversation, sorry; that's for group conversations, and this is a key meeting. But yeah, great work, really appreciate it. And also, Mac, you're taking on additional responsibilities with all kinds of other projects; really appreciate you stepping up.

A
Thank you. If there are no more questions, we'll conclude our key review, and we'll see you next time. Thank you very much for your time; we appreciate it. Thanks, man.