From YouTube: Code Quality category handoff discussion
Description
Taylor, Thomas, Becka and James talk through what's left for Testing to complete before handing the category off to the SAST group.
A: And so this is the UX handoff for the Code Quality category, Testing handing off to SAST. I put together a quick agenda and linked to it in the invite. There is a UX transition issue that I pulled some things out of. We can go through the issue, we can go through the agenda.
A: Some of my key things that I want to talk about today are just making sure that there's shared understanding about where we're at in Code Quality, what our roadmap looks like, and why. For the next couple of milestones, there are some things in there that are important to us as a group in Verify that may not be important to you as a group in Secure, and so we can probably back-burner those or put them into the backlog, especially around things like performance indicators.
A: If we're transitioning this out of our PIs and it isn't going to fit into yours, it doesn't make a lot of sense to continue moving forward with some of those issues. So I wanted to touch base on that and talk to you about where we're at in our current category maturity.
A: There's research that is in flight, and then I'll touch real quick on some of the other administrative stuff: triage issues, documentation, ownership, things like that; see who else we need to loop into this, and some things we need to take care of just so nothing is left lingering on either side.
B: I'm not entirely sure, to be honest. What do you think? Do you have an opinion either way, James?
A: So we should be looking at the agenda doc, just talking through what's currently scheduled and what we're working on. In the current milestone is our MVC around showing Code Quality violations in the MR diff. This is the implementation where we show, just in the header of the changed files, that there's a new Code Quality violation associated with the MR, introduced as part of that merge request, so that someone can click into that, go into the widget, and see what the problem is.
A: That is in flight right now. I think we're in... yeah, our workflow is in dev, but it's really in verification; this should be ready to go soon. Oh, I'm sorry, this is not the issue I was talking about. This is the full annotation that is in dev. We anticipate this carrying over into 14.0 and then actually having the in-line treatment of things. Let's see... I think we have... yeah, we have our design here, and Becca and Taylor,
A: I think you weighed in on this as we were going through the process, because we anticipated that the SAST scans would take this as well.
A: Yeah, we talked to Phil and Pedro both quite a bit about this early on in the process from a performance standpoint. We're tackling this much the same way we did the test coverage annotation, the test coverage visualization. So this is a pipeline artifact as opposed to a job artifact, and all the processing is happening on the back end, so from a performance standpoint there should be minimal impact here. And Becca,
A: I know you're looking, as you incorporate the SAST stuff into this, at making it an option to turn off this visual indicator. I think that would clean things up for folks who don't want to have that as part of their MR review experience. All the feedback that we got around "I want to use Code Quality and I'm putting this job into place" is: I want it because of code review, so this is the place I'm looking for it and where it's the most helpful.
A: From a research perspective, I'm blanking on what we did for research for this. I know that internally... let's see, thinking back now, like 12 months ago when we started talking about this, we did interviews internally. Because we were moving from minimal into viable, all the problem validation and solution validation would have been with internal stakeholders, probably a lot of it very ad hoc and not recorded in the traditional manner. From a solution validation standpoint,
A: I would say we did that through the issue, again with our internal stakeholders: ping the folks who are on the dogfooding issue, or the team within Create, within Source Code and Code Review, to look at this and validate: hey, is this (a) not too intrusive, but (b) also solving the use case that we're looking to solve, of: I am doing a code review, and I see that Code Quality violations have been added to it.
B: Okay, yeah, that sounds good. So just for transparency, in 14.0 I am going to be doing solution validation of the solutions that I've come up with that integrate SAST in, considering, you know, in the future, Secret Detection and possibly some other findings as well. I've talked to Pedro here and there one-on-one, and he's really pushing for thinking about this in terms of a framework, thinking more holistically rather than just a solution for Code Quality, SAST, and Secret Detection.
B: But if there are more teams in the future who want to jump on board, how can we accommodate that? So, sure, I have a solution validation issue open that I will add to this agenda as well. So, more research to come in 14.0. Cool.
A: All right, so that is what's lined up for 13.12 and 14.0. The other item in 14.0, and this is where I think the roadmaps might diverge, is tracking for users who view the Code Quality annotations. That's an Ultimate-tier feature, so for us it would go into Paid GMAU, which is what we're tracking, and we're tracking Paid GMAU a little bit differently than everyone else.
A: We're actually tracking not just users who are at an Ultimate tier, or at a paid tier, using a feature, but users who are using a paid feature. Sounds like the same thing, but it's tracked very differently. I don't think that that is the performance indicator for the Secure team, or for the SAST group, rather.
A: So if not, then we would probably put a pause on this, because it's not going to contribute to our performance indicators after this handoff is done, and we'd get some space back in that milestone, especially with all the deprecations that are happening. And Taylor, I'd put that question to you: how does this contribute to your performance indicators?
C: Yeah, it's a good question. I've looked at your PI page and I don't have a good sense of how that's going to transition today. Secure largely looks at job runs, so we could easily just look for the code quality job name and have that contribute to it. I don't think that's necessarily the most valuable, though. We are trying to transition some of that to more specific measures, like how many vulnerabilities are being tracked and how many users are interacting with those.
A: We do have usage ping tracking for how many users look at the full report on the pipeline page. That's primarily been our big-ticket item, our big driver for Paid GMAU; it's at least half of our users every month. So you'll get that as part of the transition, and the team was looking at instrumenting it the same way through usage pings, so you'd have both SaaS and self-managed.
A: Yeah, yeah. Part of that is... I'm going to skip ahead and then jump back. What we found is that users don't know that data's there, that full report. That surfaced through discussions we've had informally with users, on Twitter and the forum, and then the last interviews that I did last week with some internal stakeholders, trying to figure out where we're at maturity-wise on Code Quality. When we asked them to find the full report, neither of the users, who are internal GitLab employees, could find it.
A: They just didn't know it was there, and so one of the issues that we have here is adding a link from the widget to the full report, because that will drive additional eyeballs and additional Paid GMAU. We think that's some low-hanging fruit to bump that number up. Yep, yep, all right. So then, jumping back in: really, after the annotation and the MR diff, the big thing for us would be getting away from Code Climate, and for that we need an MVC for a non-Code Climate scan.
A: As we've been talking about this as a group, we think that doing something like RuboCop or one of the JS linters is going to be immediately valuable internally, and it's going to validate for us that we can run a scan and translate its output into the Code Quality JSON, so that we don't have to change all of the other features as well, and then get that parity for at least one or two languages.
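
For reference, a minimal sketch of what that MVC shape could look like in `.gitlab-ci.yml`. The job name, image tag, and the `convert-to-codeclimate.js` step are hypothetical placeholders; `artifacts:reports:codequality` is the actual CI keyword for uploading a Code Quality report.

```yaml
# Hypothetical sketch: run a linter directly (no Code Climate engines,
# no Docker-in-Docker) and publish its findings as a Code Quality report.
eslint-code-quality:
  image: node:16
  script:
    - npm ci
    - npx eslint --format json src/ > eslint.json || true
    # Placeholder for whatever step translates the linter's native
    # output into the Code Quality (Code Climate-style) JSON format.
    - node convert-to-codeclimate.js eslint.json > gl-code-quality-report.json
  artifacts:
    reports:
      codequality: gl-code-quality-report.json
```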
A: So that was how we were approaching that MVC. We think that we probably should, in the spirit of being good teammates, continue down that path and finish it up, so we can at least hand that off to the team. That also gets us out of the Docker-in-Docker. So that would be the other big thing that I would point out: we should finish this before we complete our handoff as far as Code Quality is concerned.
A: Once the report is uploaded, you're fine, you're done. You don't even need to really run the scan as part of your... I mean, you need to run it as part of your pipeline, but if you uploaded a report that was generated somewhere else, it would work the same way. All of the features would continue to work the same way, is what I am trying to say.
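
In other words, the downstream features key off the uploaded artifact rather than the scanner itself. A hedged sketch of that (the job and file names are illustrative):

```yaml
# Illustrative: the report was produced elsewhere and is simply
# published here; the MR widget, diff annotations, and full report
# all consume the artifact the same way.
publish-code-quality:
  script:
    - cp prebuilt-reports/gl-code-quality-report.json .
  artifacts:
    reports:
      codequality: gl-code-quality-report.json
```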
A: On the base pipeline compared to... or, from the target pipeline compared to your source pipeline, that's where it does that diff, yeah. And that's what's going to display as part of the MR diff: it's just the new violations that are found that show up there. So if you have one new violation in one file that you changed, and you introduced it, that is what is going to display there, not every violation in that file. So it gives it to you contextually.
A: Okay, okay. So we'll plan on continuing to move forward with that MVC in 14.1. The next couple probably aren't as super important: grouping issues by file, which should be pretty straightforward. Within that, we've sorted by severity; now we want to, excuse me, pull everything together by file, so things are a little bit easier to look at. Then the link from the MR widget to the full report that I mentioned, and then support for multiple reports.
A: This is support for multiple reports. We fixed this for, I think, the widget, but not for the full report. Yeah, in the full Code Quality report: if somebody is running multiple linters that spit out into a codeclimate.json, and you can do that in multiple jobs, we'll only take the last one today. So this functionality will say: give me all of those, I'll put them all together, and then display all of those results.
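
Roughly what that looks like from the user's side (job names and scripts are illustrative; the conversion of each linter's native output into Code Quality JSON is elided):

```yaml
# Two jobs, each uploading its own Code Quality report. Today only the
# last report is used for the full report; the proposed work would
# merge both sets of findings together.
linter-a:
  script:
    - ./run-linter-a.sh > gl-code-quality-a.json
  artifacts:
    reports:
      codequality: gl-code-quality-a.json

linter-b:
  script:
    - ./run-linter-b.sh > gl-code-quality-b.json
  artifacts:
    reports:
      codequality: gl-code-quality-b.json
```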
A: If we enable support for multiple reports feeding into the full report, I could see some users just picking up their own linters, running them, and uploading the artifacts, and then they get all of that Code Quality goodness, especially the MR diff. That's going to drive some Ultimate usage for you, so that could be a good thing. I would pick that up and drop the two above it, the grouping and the MR-widget-to-full-report link.
A: I'd anticipate that most of the usage you're going to pick up from that support for multiple reports is going to be around the MR diff. So if we do those two things, enable the diff to show all of the new violations, and then let you upload your own reports across multiple jobs, that's going to be the best way to go forward.
A: Yeah, the biggest source of escalations comes from TAMs, and primarily it's that they can't get the widget to show up. By and large that is due to there not actually being a base report to compare against, for any number of reasons: it's expired, or they started their branch not off of that commit on the base. We've done a lot of work in trying to better expose the errors of why you don't have a base report, or why it looks like you might have a base report but it's not actually what we're comparing against.
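
One of those failure modes, the expired base report, is visible in plain CI config. A hedged sketch, assuming the default artifact expiry is what's invalidating the base report (the one-week value is arbitrary):

```yaml
code_quality:
  artifacts:
    # Keep the report around long enough for MR pipelines to still
    # have a base report to diff against.
    expire_in: 1 week
    reports:
      codequality: gl-code-quality-report.json
```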
A: So the team has continued over the last couple of milestones to iterate on that and try to improve the experience. Ultimately, where we landed was: we don't want to guess at which report you want to compare against, because worse than not showing you anything is showing you something that's incorrect. So that would be the biggest one, but it's... I'll get maybe one ping a week in Slack.
A: Yeah, that would be the biggest escalation as far as troubleshooting. The biggest escalation as far as what they want next is getting away from Docker-in-Docker, because for folks who can't use it, for whatever reason, security or whatnot, it's just slow to load. That's the biggest point, and that's maybe one or two pings a week I'll get about: hey, when are we getting rid of Docker-in-Docker? Can't you guys just do it? SAST did.
A: All right, the current maturity research: I've conducted two interviews internally as we're moving from minimal to viable, once that MR got approved. Last week, though, I put a pause on that, but the Dovetail project is linked there, as well as the issue. I'm going to close that issue out, probably at the end of this week, but that research is there, the insights are there, and the interviews are uploaded.
A: I was scheduled to move this from minimal to viable in Q2, so at least you're in a spot where you can continue that if you want to take it on, or if, like you said, you're really busy with other work and want to just put a pause on this, you can extend that out as part of a direction update, no worries. So you've already completed the... Completed, yeah: I've got two interviews done. I was having a hard time finding internal folks to interview for this.
A: I think I've been trying to get at least four; I think I used four for my last category that moved from minimal. I think five is preferable, though, yeah.
A: What I was scoring on was: can you get what you need out of the MR widget? That scored well enough that it would move forward. The other, secondary job, which I was more curious about than actually doing the research on, was the full report, finding it, and that scored very, very poorly. Cool, and then the...
B: Yeah, that's okay! Thank you, Thomas. I was just wondering if you have any MRs with Code Quality findings. I haven't yet had time to go through the entire UX transition issue that Hayana opened, but I'm just trying to get a sense of what these findings look like. Every MR I've been to so far in the GitLab project, anyway, I'm not seeing any findings. Yeah.
A: I can, and that goes back up to your solution validation question that I think I skipped over a long time ago.
B: Yeah, I think my other questions are in the administrative section below, which also touches on my question above. I'm just not very clear at the moment on what exactly transitions. There's the configuration page; I assume we're not adding Code Quality to that. We have a security configuration page. We also have a vulnerability report.
B: I'm guessing we're not adding Code Quality findings into that, since they're not necessarily security vulnerabilities. But do we move it in the documentation? It's currently under the Testing category, so I'm just not sure how this transitions internally. Not wanting to shift the org too much, but just wondering, internally versus, you know, surfacing this to users: how do we categorize this?
A: So, taking the documentation first, just the moving of bits: we can definitely move that over into the SAST category. And then I would hand it off to Taylor as far as how this incorporates into the vulnerability report; really, customers point at that as the kind of report they want, with our Code Quality information in it as well.
C: Yeah, I think initially we'll probably update the configuration page to just have static text pointing to the documentation for getting it turned on. I think that should be an easy lift and help customers really understand that it's part of our security tools. In terms of the vulnerability report, I don't think we're going to change that anytime soon; it's unclear to me what the UX should do with this.
A: Yep, include the template, merge that into your default pipeline, and then branch off of that, and you'll get the MR widget. On that first pipeline run, though, you'll get the full report on the pipelines tab, or on the pipelines page.
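
That setup, roughly (this uses the stock Code Quality CI template):

```yaml
# .gitlab-ci.yml — pull in GitLab's stock Code Quality template.
# Once this is merged into the default branch, MRs branched off of it
# get the widget; the full report shows up on the pipeline page from
# the first run.
include:
  - template: Code-Quality.gitlab-ci.yml
```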
A: It is not, if you're hosting the Code Climate Docker image, or container image, on your local registry; we have instructions on how to do that on the documentation page.
A: We have a community contribution right now that we're working through to also improve the Docker Hub rate limiting, because we're hitting Docker Hub, or some folks are still set up to hit Docker Hub. So there's a workaround that we're going to add in that will help with that.
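
A hedged sketch of that local-registry workaround, assuming the template's image can be overridden with the `CODE_QUALITY_IMAGE` variable (check the Code Quality docs for the exact mechanism; the registry path is a placeholder):

```yaml
include:
  - template: Code-Quality.gitlab-ci.yml

code_quality:
  variables:
    # Pull the Code Quality image from a local mirror instead of
    # Docker Hub, avoiding Docker Hub's rate limits.
    CODE_QUALITY_IMAGE: registry.example.com/mirror/codequality:latest
```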
A: There's... my triage issues today: our Verify triage issue got blown up with a bunch of Code Quality stuff that didn't have a category, because the category has changed, so I'll open a request with Engineering Productivity to get that fixed. That was the thing that triggered me to say: hey, we should talk about administrative stuff. Other than that, I think most everything is wrapped up, or we can finish it off in the next couple of days or weeks.
A: Okay, cool, all right. So it looks like we have a couple of takeaways here. I'm going to post this up on Unfiltered for the folks who were unable to attend, and as we finish up some of those last couple of issues, we'll get set up to do an engineering handoff as well. Thomas, if you guys want to pair up on anything during that time, just let us know.