From YouTube: Verify: Testing 1 and 3 Year Vision Discussion
Description
Discussion between James (Product) and Juan (UX) about wireframes for the 1-year vision for the Testing category, and some of the problems that could be solved in 3 years, alongside an early wireframe.
A
This is Juan and I talking about designs for the vision. We're going to go through the one-year vision designs that Juan has already done and just talk through those, have a conversation about them, and then I'll be getting a first look at some of the three-year vision ideas that Juan has had to build on those things. So I'm going to hand it over to Juan. Why don't you take it away?
B
Sure, so I'm going to share my screen.
The goal is to identify certain areas of work, right? Things that we believe are going to be fundamental in the next three years. But I think the idea here is to see what we can land in terms of the foundation that we've got to build, looking forward to the next year. So, some of those areas: the first one is hot spots in the code base.
We also added accessibility, and we also have web performance and load performance. So we are great at proactively telling our customers when things go wrong, right? But what we don't have yet is holistically telling them: okay, things seem to be going wrong in this particular area, and this particular file seems to have a lot of trouble, or this particular set of files has a lot of trouble. So that's one thing.
Probably a better description of what's going on, yeah. And I think that's where the opportunity lies, right? Not even just showing the hot spot, but why it is a hot spot. Right now we can't provide that reason: the problem, and why it should be addressed. I think we're very passive about the things that we just bubble up for the customer, and they should be more actionable.
The second thing was identifying tests that are flaky and, in general, getting test insights. That's another interesting one, because right now we provide all this underlying infrastructure to run different test suites and different test frameworks, and we are collecting, or at least processing, all the data about how the tests are doing. There's a lot of value in just understanding that in the context of time: how many times did this particular test fail in the last 10 days, or 14 days, or one month, or two months?
But even beyond that, there's value in understanding the nature of the whole project in that sense: what's flaking and what's not. Perhaps the whole suite is flaky, or perhaps there's just one file that has all the flaky tests. If we can provide different vectors of data and allow people to start seeing how one thing relates to another, we might not necessarily pinpoint those things for them, but if we show enough vectors of data, they might be able to figure that out pretty fast. So: test insights in general, like how the tests are performing and how long the test stage is taking in terms of execution time, and even more down the line.
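A minimal sketch of the failure-history computation being described here, assuming each test run is available as a (test name, finish time, passed) record; the record shape, test names, and window sizes are illustrative, not an existing GitLab schema:

```python
from collections import Counter
from datetime import datetime, timedelta

def failure_counts(runs, days):
    """Count failures per test within the last `days` days."""
    cutoff = datetime.utcnow() - timedelta(days=days)
    return Counter(
        name for name, finished_at, passed in runs
        if not passed and finished_at >= cutoff
    )

now = datetime.utcnow()
runs = [  # (test name, finish time, passed): illustrative records
    ("spec/checkout_spec.rb#pays", now - timedelta(days=2), False),
    ("spec/checkout_spec.rb#pays", now - timedelta(days=12), False),
    ("spec/cart_spec.rb#adds", now - timedelta(days=1), True),
]

# "How many times did this test fail in the last 10 / 14 / 30 days?"
for window in (10, 14, 30):
    print(window, dict(failure_counts(runs, window)))
```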
A
Do you want to jump into that design and just start talking through these there as well? (Sure, yeah.) That's a great call. As I add in a little color on the flaky tests: I've seen in my own experience, I'm sure Juan you've seen, and we've heard from tons of customers and internal stakeholders as well, that you end up with these suites where you have one or two flaky tests. You'll run your pipeline, it'll take 45 minutes to get through, and one test failed, and you know that it'll pass
if you re-run it. But then you have to rerun the whole thing, and that's another 45 minutes. So I think you had a great idea the other day: as we build the infrastructure to identify or mark a flaky test, being able to grab all of those flaky tests that failed (or tests that a user has said, hey, these might be flaky) and just rerun those at the end of the existing pipeline. Because then, if they all pass, you get to a green pipeline in a lot less time.
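A rough sketch of that rerun-the-flaky-subset idea, assuming a pytest suite that writes a JUnit-style XML report; the report attributes, file names, and the flaky-test list are illustrative, and this is not an existing GitLab feature:

```python
# Rerun only the failed tests that were flagged as flaky, instead of the
# whole 45-minute suite. A green rerun means the pipeline can go green.
import subprocess
import xml.etree.ElementTree as ET

# Tests a user has marked "hey, these might be flaky" (illustrative IDs).
KNOWN_FLAKY = {"tests/test_checkout.py::test_payment_retry"}

def failed_tests(junit_xml: str) -> set[str]:
    """Collect node IDs of failed or errored test cases from the report."""
    failed = set()
    for case in ET.parse(junit_xml).getroot().iter("testcase"):
        if case.find("failure") is not None or case.find("error") is not None:
            # pytest's junitxml records the source file and test name.
            failed.add(f"{case.get('file')}::{case.get('name')}")
    return failed

def rerun_flaky(junit_xml: str = "report.xml") -> bool:
    """Rerun just the failed-and-flaky subset at the end of the pipeline."""
    targets = failed_tests(junit_xml) & KNOWN_FLAKY
    if not targets:
        return False  # real failures (or no failures): no shortcut to green
    return subprocess.run(["pytest", *sorted(targets)]).returncode == 0
```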
B
And yeah, just to add: even internally with GitLab, it happens all the time that I make a contribution, the pipeline runs, and I have the feeling that something failed because of something I did. Then I go and check, and it's nothing to do with that; maybe there was a networking error or something. So if I retry the job, the most likely thing that's going to happen is that it's going to succeed, whether or not that's flakiness.
I mean, it's debatable, but I think the fact that many times just retrying a job fixes the issue indicates that we should at the very least allow them to rerun the job, if not do what you said, which is identifying all the tests that are failing and having a new job run all of those.
If the job with the flaky test fails because of that flaky test, I think we should give the customer the ability to retry the job, because they know there's something flaky in that particular job, right? Now, the problem is that the job could be huge; it could be the largest part of the pipeline, so that's not going to achieve a lot in terms of efficiency. But it's a start.
I guess so, yeah. Some of the things that James was saying, just talking on top of the design: right away, the whole idea here is that, assuming you are a customer who has multiple projects, you want to get insights from all your projects in terms of code quality, test executions, and, in general, the behavior of your code base against your testing solutions.
The whole idea here is to provide them those overall performance metrics. I put some in (this is not set in stone, it's just part of the vision), but of course one thing that I put in is the overall group testing coverage across all projects.
What's the average, or better, the P90 (I'll talk a little bit about P90 versus average), but what's that number across all my projects, right? So if it's taking 15.2 minutes, how can we make that trend in the right direction? It's going to save money, it's going to save time, it's going to make us more productive.
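A quick worked example of the P90-versus-average point, with made-up durations: one pathological run drags the mean well above a typical run, while the 90th percentile still reflects what nine out of ten runs stay under.

```python
import statistics

# Nineteen ordinary runs plus one four-hour outlier (illustrative numbers).
durations_min = [12] * 5 + [13] * 5 + [14] * 5 + [15] * 4 + [240]

mean = statistics.mean(durations_min)                # 24.8, skewed by the outlier
p90 = statistics.quantiles(durations_min, n=10)[-1]  # 15.0, the typical upper bound

print(f"mean = {mean:.1f} min, p90 = {p90:.1f} min")
```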
B
Ultimately,
I
think
the
many
of
these
things
are
signals
that
help
people
to
rethink
their
testing
in
such
a
way
that
they
keep
shifting
to
the
left,
which
I
think
it's
the
goal
you
know
so
it
happens
many
times
that,
like
your
your
pipeline,
your
tests
fail
because
of
linkedin
issues,
for
example,
and
I
I
believe
that
like
if
we
could
provide
no,
I
don't
have
that
here
in
the
designs,
but
if
we
could
provide
something
like
your
pipelines
are
failing
because
of
trivial
things,
you
know
like
linking
errors
or
you
know
unused
classes
or
unused
variables
or
whatever.
then we should be able to pinpoint that and tell them: you might have a problem here, and you might want to shift more to the left, or create something so your developers are not using the pipelines to test what they should be testing in their IDEs. But that was just a parenthesis, a random thought. So, in general, all these things that you see are metrics across all your projects, which the code quality widget shows.
This is basically based on Code Climate, so those are metrics that Code Climate currently provides: the churn, the code complexity, duplication, maintainability. There's this thing called trivial checking, which I don't understand well; I included it because I think it's kind of interesting.
I think it has a lot to do with how meaningful the checking against the code base is: whether you are spending a lot of resources just because you changed a letter or something like that.
A
Meaningful checking, yeah. So, if I'm looking at this right, this sums up a lot of the data that we offer today across projects and rolls it up to the group level, and then under that you have a table that starts to break it down: within your group, here are your projects.
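A small sketch of the roll-up being described, assuming per-project metrics are already collected; the field names and the choice to weight coverage by project size are illustrative:

```python
# Aggregate per-project numbers to the group level; the same shape could
# roll groups up to the instance level for self-hosted.
projects = [
    {"name": "web",     "coverage_pct": 82.0, "lines": 120_000, "flaky_tests": 7},
    {"name": "api",     "coverage_pct": 91.5, "lines": 60_000,  "flaky_tests": 2},
    {"name": "billing", "coverage_pct": 64.0, "lines": 30_000,  "flaky_tests": 11},
]

def group_rollup(projects: list[dict]) -> dict:
    total_lines = sum(p["lines"] for p in projects)
    return {
        # Weight coverage by size so a tiny project can't skew the number.
        "coverage_pct": sum(p["coverage_pct"] * p["lines"] for p in projects) / total_lines,
        "flaky_tests": sum(p["flaky_tests"] for p in projects),
        "projects": len(projects),
    }

print(group_rollup(projects))  # {'coverage_pct': 82.14..., 'flaky_tests': 20, 'projects': 3}
```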
We could even go further. So this is kind of the one-year vision: start rolling all this up to the group, taking the data that we already collect and starting to look at it over time, and then we could roll that up to the instance level for self-hosted, and that would be pretty awesome. You could start to look at how things are going across all of your groups. (Right? Correct, yeah.) Cool, and then, do you have...
B
Yeah. So I think the idea here is to think about these graphs as a lens to see your data, right? You have a lot of things like test coverage, and things like duration of the CI/CD testing jobs, but it's not only showing them; it's being able to plot one project against another, just so you can see, across your whole body of work, which one is trending up or down.
Maybe there's some efficiency that you can get from project A to project B; maybe they're pretty similar and project A is doing better. (This is not captured in the design; it could be something that we find out as we validate more with customers.) But I even thought about, you know, testing data between environments.
How's my testing data in staging? How's my testing data in pre-prod? Perhaps, for some reason, one is more performant than the other, and you might want to identify what that is. Maybe you're not running certain suites.
B
Maybe
you
are
skipping
certain
things
or
maybe
you
know
it's
there's
many
things
that
then,
by
the
way,
it's
so
important
to
be
opinionated
in
terms
of
like
the
solutions,
but
not
the
data
or
like
the
insights
that
you're
gonna
get
from
these.
So
I
think
that
the
importance
to
create
something
that
allows
them
to
identify
whatever
they
need
to
identify
and
then
go
and
solve
the
problem
you
know.
So
I
think
that
has
been
the
goal
with
these
things.
A
What does this code look like? How many of the tests passed in our pre-prod environment, as part of our development stage, and how many of those tests actually pass in production? Do we have funky data out in prod that means these tests never actually pass, so any time we slow down to fix them in our development cycle it's just a waste of time, because they're not going to pass in production anyway? Right, yeah.
That's something to think about as we transition into talking about the three-year vision: showing this kind of data, which we can start to expose in one year, against production-level data as we get closer to three years out.
B
Yeah, that's a great point, and I think you're right about that. The fact that a test isn't failing in staging doesn't mean the actual code being tested isn't going to fail in production. And it does happen: production data is wildly more complex, unless you have something where you're really testing with production data somehow.
Maybe then you would get those same results in staging, but that's unlikely, right? Most tests are written with mock data, and it's just up to the developer to include all the cases, but there's always going to be something they're not thinking about. So yeah, that's an interesting one.
A
Juan, let's jump ahead into the three-year stuff for the rest of the time. I'm going to put a link in this video: did you put your discussion of the one-year vision out on Unfiltered, or is that link somewhere else?
I think I can link to it in the issue; either way, I will put a link to Juan's great discussion, because you walked through these mocks, and we kind of went through them already. I'll put that in, so that you can talk through the rest of the mocks. But let's jump ahead to those last three, where you started doing some ideation on your vision: going from that signal of "there are problems here across your org, or across your project," to more of a signal within merge requests or pipelines of "hey, you have something wrong, and we might be able to fix it for you." So, starting to push that further left.
B
You have a merge request, you run the pipeline, and then something fails, and on that summary widget we show you: okay, certain things failed. Today we just show how many of the total tests that were run failed, but there are so many other things that we could tell the customer at that first glance. Something that's coming is showing test history inside of it: how many of those tests have failed more than once in the last 14 days. But as I was exploring this, another thing that came to mind was that there are also going to be things that are potentially fixable in an automatic or semi-automatic way.
Many of those things I mentioned before are likely linting issues and that type of stuff, very, very small stuff. But the vision is that, if you have those tests and we can correctly identify what type of issue was created from that test (for instance, this one doesn't follow the proper code styling conventions),
then there's no reason to make the developer do that manually if we can apply it from here. The goal would be: because we know what the issue is, and we know that we just need to run, whatever, yarn prettier, or whatever command is the one you need to run for linting, you could just go and hit "fix error," and what that will do is basically write, or make, that diff, and then we run the pipeline with the diff. The details are still up in the air.
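A minimal sketch of that "fix error" flow under those assumptions: run the project's auto-fixer, capture the diff it produces, and hand the diff back so the pipeline can be re-run with it. The fix command and the idea of surfacing the diff as an applyable suggestion are illustrative, not an existing GitLab API:

```python
import subprocess

def propose_lint_fix() -> str | None:
    """Run the project's auto-fixer and return the diff it produces, if any."""
    # Whatever the project's fix command is: `yarn prettier --write .`,
    # `eslint --fix`, etc.; assumed to be configured per project.
    subprocess.run(["yarn", "prettier", "--write", "."], check=True)
    diff = subprocess.run(["git", "diff"], capture_output=True, text=True,
                          check=True).stdout
    return diff or None  # an empty diff means nothing was auto-fixable

if __name__ == "__main__":
    patch = propose_lint_fix()
    if patch:
        # In the envisioned flow, this diff would be shown on the merge
        # request as an applyable suggestion, then the pipeline re-run.
        print(patch)
```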
It could be a very "apply suggestion" type of thing, or it could be a "fix error" button and it's just magically fixed. There are many other issues like that which could be fixed this way; we could identify the ones that are within the realm of possibility. Of course, not everything is going to be fixable this way, but I put in another example of the many other things that could happen here. For instance, we will identify when something is becoming flaky, or seems to be flaky.
From our perspective, we are not claiming that it's flaky; we're just going to tell you that this seems like a flaky test, because, from the data we have been tracking about these tests, we have a feeling that it's a flaky test. There's probably very little we can do about that directly. I mean, we cannot fix a flaky test for you, because there's something else going on that we don't understand. But we could definitely shortcut the
"I need to start fixing that somehow" moment, by just having a call to action to create an issue. What I did in this example is this: if you create an issue,
it takes you to the new issue form, and the form is going to have a pre-populated title that says something like "Flaky test:" plus the name of the test, or whatever it's going to be. It already gives you a pre-populated idea of what the issue needs to be, and it also auto-populates the form with the things that are important for you to start triaging: things like the test name, the suite, the test file, and even run data and links to the trace.
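A sketch of how that pre-population could be wired up: GitLab's new-issue page accepts issue[title] and issue[description] query parameters, so the call to action can link to a URL built from the test metadata (the project path, description layout, and helper name here are illustrative):

```python
from urllib.parse import urlencode

def flaky_issue_url(project_url: str, test_name: str, suite: str,
                    test_file: str, trace_url: str) -> str:
    """Build a new-issue URL whose form comes up pre-populated."""
    description = (
        "Suspected flaky test.\n\n"
        f"* Test: `{test_name}`\n"
        f"* Suite: {suite}\n"
        f"* File: `{test_file}`\n"
        f"* Last failing trace: {trace_url}\n"
    )
    params = urlencode({
        "issue[title]": f"Flaky test: {test_name}",
        "issue[description]": description,
    })
    return f"{project_url}/-/issues/new?{params}"

print(flaky_issue_url("https://gitlab.com/group/project",
                      "test_payment_retry", "checkout",
                      "tests/test_checkout.py",
                      "https://gitlab.com/group/project/-/jobs/12345"))
```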
So you can just go and see whether it's always failing for the same reason. It gives you that issuable, a place for people to go and discuss what's happening, and it could be richer than this. And it doesn't need to be only issues. We know that test cases are coming as a new issuable type, so it could be that we identify here an error that isn't contained, where there doesn't seem to be a scope for it within the test
B
Suite
I
mean
something's
failing,
but
it's
failing
just
at
runtime,
because
for
whatever
reason
then
we
could
pinpoint
their
these
failed
and
you
want
to
create
a
test
case
for
these.
You
know
I
mean
those
are
the
ideas
that
I
had
so
far,
but
there's
probably
many
other
things
that
you.
A
Cool. I really like this, because we can apply it kind of across the board. Accessibility comes to mind; browser performance comes to mind. We know those reports come back with "this is what is wrong, because you didn't match the style."
We can start to say: apply that correct style, or the correct formatting, or do this thing that will fix that issue for you. And then, as we start to capture things like "in this file, this line (or this code fingerprint) matched up with this violation type, and you created an issue and you fixed it," as we build a history of those, we can use it to go back in and create more things that can be fixed automatically. Like: oh, you had an issue;
you had this violation in this similar-looking bit of code, and here is the commit where it got fixed. What if we tried to insert that, or at least pointed you at it and said, "issues like this have been fixed with these changes," or "violations like this have been fixed in these five issues, in the same type of code"? At least it gives you a leg up: oh well, there's prior art I can go to, and instead of hunting it down myself, it's right there in the interface. I think we can apply that same kind of format and pattern to almost all of our categories: accessibility, browser performance, and code quality comes to mind too, where you have those violations, and if you see a fix that fixes the code quality part of it, then we can probably apply that for you later.
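A tiny sketch of that "prior art" lookup: code quality violations already carry a fingerprint, so a history from fingerprint to the commits that fixed it could surface past fixes next to a new occurrence. The store and field names here are hypothetical:

```python
from collections import defaultdict

# fingerprint -> commits that resolved a violation with that fingerprint
fix_history: dict[str, list[str]] = defaultdict(list)

def record_fix(fingerprint: str, commit_sha: str) -> None:
    """Remember which commit resolved this class of violation."""
    fix_history[fingerprint].append(commit_sha)

def prior_art(fingerprint: str) -> list[str]:
    """Commits that fixed the same kind of violation before."""
    return fix_history.get(fingerprint, [])

record_fix("abc123", "9f2e1d0")
print(prior_art("abc123"))  # ['9f2e1d0'] : "violations like this were fixed here"
```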
B
Yeah, that's an excellent point, because another thing that can happen, of course, if we create an issue here: the whole idea is that the next time you see this error, we're not going to tell you to create an issue again, because then you would end up with duplicated issues for the same thing. The idea is that next time we will show you: hey, there already seems to be an issue where someone is thinking about this.
So you could be any contributor, and you already know that someone is thinking about it, or you could just go and keep contributing, because you keep seeing the error. It creates a story around testing, a narrative that's more cohesive, and it's not just an error that you keep ignoring, or that maybe one day you decide you want to fix. So, yes.
A
You just see that issue there. And then, going back to what we looked at initially, we can start to bubble these up to those first report-type views, or dashboard-type views: here are all the issues associated with the errors, and how many you have. So as, say, a VP of technology, you can start to look at how big your tech debt is (how many flaky tests do I have, how many code quality violations do I have, how many issues do I have) and look at that along with some of the other analytics that we have.
Yeah, and I think these mocks are great, because they show it in one example that's pretty concrete, but it's one that we can apply across the board. Like I said, it would be a really similar pattern, I think: if we can fix it, we're going to try to get you to fix it right there in the widget.
If we can't, we're going to let you create an issue, or mark something as flaky, or rerun, or ignore, whatever it might be, and we can play around with the behavior with some solution validation. (Yeah, 100%.) Awesome, cool. I mean, that covers really well a lot of those problems that we wanted to solve in the next year, and it starts to get at some of the things that we want to look at in the three-year vision.
As well as things like: I see all of these accessibility issues and I have no idea how to fix them; maybe we can start to apply some fixes for you. Same thing with code quality or browser performance, even load testing. I think if we put these things out there and make them available to the open source community, and we can start to gather back that data on how these issues were fixed, we can then turn around and give that back to the open source community as well as to customers.
You know: 50,000 times this bug has been fixed this way, or this code quality issue has been fixed this way. So it's a well-vetted bit of data, potentially.
B
Yeah, totally; I totally see what you're saying. Just creating that path from the error to the issue, in the context of what you're testing, makes a lot of sense. Many times, when you Google something, you're going to find either a Stack Overflow question or a GitHub issue or something else, right? (Yep.) And this is something else that adds to that broad idea; maybe it should be more accessible, in general terms, for the programmer.
Yeah, that's how I see it. Because if we allow them to create this (because we see these tests a lot, and we kind of start tracking those errors a lot),
B
We
also
have
a
good
idea
of
how
how
often
that
error
is
happening
right
in
general,
just
across
the
board
for
open
source
issues
like
open
issues,
you
know
yeah,
but
I
just
what
I'm
thinking
is
that
it's
it's
just
simply
the
idea
that
creating
an
issue
for
something
shouldn't
be
as
hard
or
like
you
shouldn't.
Have
that
much
friction.
You
know
and
yeah.
A
I think that's something we can tackle in the next year and make part of our one-year vision. Then, when we get to the three-year vision, we talk about how you take action against these things. How do you improve overall test coverage? Can we do that with some AI and ML (buzzword, buzzword)? I mean, just making these things easier for developers, so they can improve their code quality overall.
Cool, well, that was awesome. Thanks for taking the time, Juan. We'll get this posted up to Unfiltered so that other people can watch the discussion and check out those mocks. Cool.