From YouTube: Quality (Mek) Group Conversation (Public Livestream)
B
Sure, so I'm Fernando Arias, a front-end engineer on the Secure stage. Yeah, I had a question. I find myself often, when I put up a merge request, trying to test the changes I make, to make sure whatever I implemented doesn't break things, and I write unit tests. We do have this challenge in Secure with writing automation tests, because the workflows are pretty complex and rely on a lot of data being put into GitLab, you know, results from running different scanners and analyzers.

So my concern is that a lot of the time I try my best to test all the happy paths and edge cases as part of putting up my merge request, but sometimes we still have bugs slip through, things that are not covered in the unit tests we write but that would have benefited from end-to-end tests. And there are challenges with writing end-to-end tests that pretty much prevent us from having that coverage. So my question was: how do we fill those gaps?
A
Thanks for bringing this up. As a challenge, we have two sets of documentation that we want to reconcile going forward. There's the test statistics page under Test Engineering that can guide you on the permutations to consider right now. Manual testing is more on an as-needed basis, depending on who you're working with, and with the SETs and test planning there's also an opportunity to do some ad hoc testing.

Once your changes hit staging, in the QA/release issue, quality is everybody's responsibility; there's no team dedicated to doing manual testing here at GitLab. Though I understand that the process in the Secure and Defend areas may not be covered right now, because the release process is a bit different. I think this is a room for improvement that we can help with, especially if you have some documentation for the product in your area.
C
Sure, I'll vocalize it. So, if I understood correctly, the quality team did load testing of GitLab itself with k6? Can we add that as a feature to GitLab, so that if you use Auto DevOps it's automatically going to do some load tests? I used k6 myself this weekend and it seemed pretty straightforward. It'd be a bit more work to add it to Auto DevOps, but I'd love to see us make progress here, and I can help if needed.
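(For context, a k6 load test is a small JavaScript file. A minimal sketch of the kind of smoke test an Auto DevOps load-test job might run is below; the target URL, user count, and thresholds are placeholder assumptions, not GitLab's actual configuration.)

```javascript
// Minimal k6 smoke test (sketch only; URL and thresholds are placeholders).
import http from 'k6/http';
import { check, sleep } from 'k6';

export const options = {
  vus: 10,          // 10 concurrent virtual users
  duration: '30s',  // run for 30 seconds
  thresholds: {
    http_req_duration: ['p(95)<500'], // fail if p95 latency reaches 500ms
  },
};

export default function () {
  const res = http.get('https://example.com/'); // placeholder target
  check(res, { 'status is 200': (r) => r.status === 200 });
  sleep(1); // think time between iterations
}
```

(Run locally with `k6 run script.js`.)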
A
Thank you, Sid. On that, I did not follow back up with an issue for this, but I'm sure we have an issue for it. I've discussed this with Tanja as well. We will likely take this on as an OKR, and I think the team is up for the challenge and excited for this. We could work closely with you on it, as well as with our product counterparts.
C
I don't have any opinions on how to implement it, so if you have a good idea, just go at it and knock one or more off. Thanks. The next question: how does the on-call process work? I see the handbook link, but maybe also, how has it been going so far? What's the result been, and how are people experiencing it? Do people know how to find on-call?
C
Thanks for that, great progress there. On slide 20 you talked about costs, and it seems that the cost is reduced, even though we're not at the end of the month yet. If that's a recent graph, it seems we made a dent in that. Is that the case? What are the next steps? And how much money are we talking about? I think it's about a million dollars that we're spending on testing our own code.
A
Yeah, like you said, I was very excited to see this graph as well. I believe that we are starting to see an ROI on cancelling unneeded pipelines, and yes, we plan to remove all unneeded tests. Kyle, the interim engineering manager for the Engineering Productivity team, is on the call as well. Kyle, do you want to speak more to that? You're closer to it than I am.
D
Yeah, I think right now it's just starting with optimizing pipeline frequency, so cutting that down. We're also working with the rapid action for CI runner minutes to shift to private runners, which will reduce our cost for jobs across the whole group that right now use the shared runners, which are more expensive. Then we'll start looking at how we optimize which tests we run within the pipeline a little bit more, so starting broader and then focusing down from there.
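(To make those two levers concrete, here is an illustrative .gitlab-ci.yml fragment; the runner tag name and rules are assumptions for illustration, not the team's actual configuration. It routes a job to private runners via tags and cuts pipeline frequency by skipping the suite on scheduled pipelines.)

```yaml
# Illustrative fragment only; tag names and rules are assumed, not actual config.
rspec:
  tags:
    - private-runner            # route to group-managed runners instead of shared ones
  rules:
    - if: '$CI_PIPELINE_SOURCE == "schedule"'
      when: never               # cut frequency: skip this suite on scheduled pipelines
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
  script:
    - bundle exec rspec
```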
A
Could be. I'm assuming here, but it could be the specs that differ between private and shared public runners that account for the costs. That's been the pattern we've seen so far, the difference in costs. If anybody here from infrastructure can chime in, I'm more than welcome to be corrected.
C
Thanks. Yeah, I hope that we can build functionality in GitLab that can just run the tests that are most likely to fail, and only run those; I don't think it should be a game of chance. We can make something there that we can also ship to our customers, because we're not the only ones with this problem. All of our customers see CI costs getting out of hand quickly too, so think about how we can do something that benefits everyone.
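(As a rough sketch of the "run the tests most likely to fail" idea, not the in-progress GitLab feature: one could rank test files by historical failure rate and spend a fixed budget on the riskiest ones. The data source and names here are invented for illustration.)

```typescript
// Sketch only: rank test files by historical failure rate and pick the top N.
interface TestStats {
  file: string;     // test file path
  runs: number;     // times this file ran in recent pipelines
  failures: number; // times it failed
}

function selectLikelyFailures(history: TestStats[], budget: number): string[] {
  return history
    .map((t) => ({ file: t.file, rate: t.failures / Math.max(t.runs, 1) }))
    .sort((a, b) => b.rate - a.rate) // most failure-prone first
    .slice(0, budget)
    .map((t) => t.file);
}

// Example: run only the 50 most failure-prone spec files in an MR pipeline.
// const files = selectLikelyFailures(statsFromRecentPipelines, 50);
```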
E
We do have something like that in progress in the Testing group. It's early right now, so I'll go find the issue for that and link it in our document.
C
So suppose our test cost with this is a hundred thousand dollars a month. That's over $3,000 a day, like $5,000 per working day (a hundred thousand spread over roughly twenty working days), that we're spending on this. That's a real cost: it's as if you went and set $5,000 on fire every working day. That's what's happening right now, so it might be good to think about a timeline for this.
F
I have the next one, and just to connect the dots with what was just said: if we're spending that kind of money, amplify that by how many users all of our customers have who don't have good visibility and control over their CI pipeline costs; it's an even higher magnitude for the combination of all of our customers. My question was about the new quad planning process with SETs. Obviously the product management organization has been really tied into this process with the SETs, Mek, so how has that been going?
E
Yeah, it's going well so far. I've noticed that we had some false starts, and I've noticed one gap in the documentation right now as to where we come into the process; I'm going to get that corrected here shortly. But overall everyone seems to be responsive, and I've been able to engage early with my team, the Testing group, and Verify, and I know other SETs are having success as well. We probably have some further training we could do internally on that, just to help.
G
Yeah, so my question is with regard to the priority and severity definitions. I'm really happy they're in the handbook; I don't know if this was a recent thing or not, but I was curious: we have these SLOs for priority labels. Is that just something that we track, or something that we measure? Is it even possible for us to measure? Maybe we do and I'm just not aware of it.
A
We do, and it's actually in the form of GitLab Insights, in our dashboards. However, we are planning to evolve this to the next step, because SLOs should be tied to severity, and the next iteration to improve this is to move the SLO to the type of bug and severity instead. Right now, priority is just a guideline on timeline: if you set it as a P1, it's roughly one release, or one month. So that's the next iteration that we're looking to make.
H
Yeah, so Edmund here, with a slide 10 question. I noticed that we have, or are working on, the ability to auto-prioritize based on, essentially, labels, and I was wondering if you could point me to how we would set that up. I think this is something a lot of customers would love to see, and looking at the issue it looks like it involves some changes to Ruby code, so I'm wondering if there's an easy way we could productize this for customers.
A
This was actually highlighted at GitLab Commit SF, and we're dogfooding our own features: it's GitLab Serverless. When a label change event happens, the serverless function reacts. If you look at that slide, if the issue has availability and you bump up the severity, the priority will automatically be bumped up within the acceptable range.
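(The mechanism described, in sketch form. This is not GitLab's actual function: the payload fields follow GitLab's documented issue webhook shape, but the label scheme, the severity-to-priority mapping, and the handler setup are assumptions for illustration.)

```typescript
// Sketch of a label-reaction function: when an issue carries the
// "availability" label and a severity::N label, mirror it to priority::N
// via the issues API. Mapping and deployment details are assumed.
interface IssueEvent {
  object_kind: string;
  project: { id: number };
  object_attributes: { iid: number };
  labels: { title: string }[];
}

function derivePriority(labels: string[]): string | null {
  if (!labels.includes('availability')) return null;
  const severity = labels.find((l) => l.startsWith('severity::'));
  // severity::1 -> priority::1, severity::2 -> priority::2, and so on.
  return severity ? severity.replace('severity', 'priority') : null;
}

export async function onLabelChange(event: IssueEvent): Promise<void> {
  if (event.object_kind !== 'issue') return;
  const priority = derivePriority(event.labels.map((l) => l.title));
  if (!priority) return;
  // Issues API: PUT /projects/:id/issues/:iid accepts add_labels.
  const url = `https://gitlab.com/api/v4/projects/${event.project.id}` +
    `/issues/${event.object_attributes.iid}?add_labels=${priority}`;
  await fetch(url, {
    method: 'PUT',
    headers: { 'PRIVATE-TOKEN': process.env.GITLAB_TOKEN ?? '' },
  });
}
```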
C
So, a small request to update the title: I think we're going to transition from the quality dashboard to GitLab Insights, at least that's what the summary says, but the title still says the quality dashboard, or maybe Periscope. I really encourage everyone to migrate to GitLab Insights, so our customers can also benefit and we can dogfood our own product. I also want to mention, Mek, thanks for a great presentation; I think there's a ton of progress.
A
Ooh, that would be interesting. So let me clarify: the approach is to still preserve the single source of truth that is in Periscope. That means we move out, but the core of the logic can be GitLab Insights, and we can supply the information to Periscope: a CSV sheet load can have an export, and GitLab Insights has an API call. So the information looks the same, but the engine that churns out the data is actually in GitLab, we add value to GitLab there, and our customers have a choice to use Insights, or maybe Periscope.
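(For reference, GitLab Insights charts are driven by a .gitlab/insights.yml file in the project. A minimal example in the documented configuration format might look like the following; the dashboard name, labels, and periods are illustrative, not the quality team's actual config.)

```yaml
# Illustrative .gitlab/insights.yml; dashboard name and labels are examples.
bugs:
  title: "Bugs dashboard"
  charts:
    - title: "Monthly bugs created"
      type: bar
      query:
        issuable_type: issue
        issuable_state: opened
        filter_labels:
          - bug
        group_by: month
        period_limit: 12
```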
C
I think if it's in GitLab, I don't think anybody cares; well, maybe people care, but I think if it's in GitLab, that should be good enough. Then the next thing is, "oh, but I can't set a goal line," and stuff like that, and I would just keep adding that to GitLab Insights. I'm totally for it; I think that should be the priority over having it in Periscope. Agreed. Thank you for that.