From YouTube: Verify:Testing Internal Customer Call - November 2021
Description
Today we dug deep into what's next for test history at the project level and how it can help the Engineering Productivity team run fewer tests and move closer to its goals of shorter pipelines.
* https://gitlab.com/groups/gitlab-org/-/epics/3129#note_718919880
A
All right, so this is the Testing internal customer call for November 2021. We do these every six weeks, and as always it's trackable. This month, first on the agenda, I have a couple of things: welcome to Robbie, who's going to be the new product manager for Testing. Robbie, if you want to do a super quick intro, tell everybody about yourself; everybody being Joanna and Kyle.
B
Sure. This is my second week, and I'm still wrapping up a few loose ends with onboarding, so you may not see me surface doing a lot of product management tasks just yet, but expect to see that by the end of the week. I really enjoy streamlining processes, and I'm told there is an opportunity in Verify:Testing to do that. So I'm really excited to dive in, start to prioritize, and then help the automation work as well as it can to make customers happy and meet their needs.
A
So, thank you. Next: the roadmap deck.
A
After the review work on build artifacts, we're going to be working on some ex... well, "we" being Testing; I'm stepping out of that. Robbie and Scott (welcome, Scott) and the team are going to be working on some experiments further up the funnel, as we think about customers starting to create their pipelines and then use their pipelines, based on what we learned in our most recent research about how people are testing and whether they're using those test summary features, to try to drive more adoption there.
A
We think that will be helpful and will in turn drive community contributions beyond usage, which will help accelerate the flywheel a little bit by getting some of those community contributions. So that's the slight tweak in the roadmap, but otherwise that deck is up to date and ready for you to review. Kyle, go ahead.
C
Can you make the connection for me? You talk about the flywheel and increasing community contributions; I didn't quite get that when I went through the deck. Can you talk more to that?
A
Yeah, that was a pretty spontaneous comment on my part. We have a good number of parsing issues, report support issues, in the backlog, and we think that if we can get community contributions around those report types, like we did for test coverage visualization, it'll help us increase our usage, which in turn brings more customers to the platform. Those will all end up being core contributions, or core features, which in turn lets us drive more community contribution.
A
That's what we expect as we expand our user base with more report types.
C
Got it, that makes sense. This is tangential, and it's also an area I get to play in, for good or bad, but we've been trying to figure out how we better direct community contributors to a subset of issues that are maybe a fit for them.
C
So what you're talking about is very apt to that goal: it sounds like you have a set of issues working towards a bigger theme that we should direct community contributors to, whether back-end or front-end. Somewhere down the line I'd be really interested in you just looping me into the epic or issues for that, because it would be another opportunity to highlight issues that are a good fit for collaboration. Sorry, go ahead.
A
No worries. I was going to say we haven't been able to tag as many issues as I'd like. It was one of the possibilities for team OKRs this quarter, setting a target number of issues that are ready for community contribution, but it didn't make the cut for us in Testing.
A
I don't know what the right label is, but that is something I think Robbie and Scott could definitely work with you on going forward: set a big direction, lay out some of the low-hanging fruit around easy adoption of those features, and create documentation about how to build those features or add to those report types. We have an epic set out to add them.
A
The last thing I'll add there is that I think Package has done a great job with that. They've had a good number of contributions around entirely new repository types, registry types, and package types that users have added completely on their own, and that came after they built documentation on how to contribute. So that might be a good bit of work to invest in across Verify overall.
A
Yeah. Well, I'm going to jump over the dogfooding and test history follow-up and jump into Mac, your first topic, if you are ready and want to voice that over.
D
Thanks for picking up this conversation. We have had some discussions on the side about the need from stakeholders to know how each test is doing, how often tests have been failing over a period of, say, six months to a year, and I want to make sure we are keeping this group informed and that it gets discussed, because I would hate to rush ahead and implement further, only to find out that it's eventually on the roadmap in the test history report.
D
So I'm trying to dig up the agenda item from our reliability stand-up. Essentially, we show how often a test has been quarantined and whether it's still quarantined, and we want 80 percent of our tests to be green, working, and stable over time. I'm glad that we have... is it 15 days of history?
A
We track two weeks, 14 days.
D
14 days, okay. And eventually, how can we dogfood this further? This is going to land in the product, and on our side it's in the ops instance, so it won't be as in-your-face as customers using it, but we could move towards dogfooding further, wherever things land.
A
Oh, okay. I think we're combining the two topics, dogfooding test history reports and speedy, reliable pipelines. I was lumping those two together, but yeah, sorry, I'm mixing topics; that's my bad. So what we're working on currently, one of the MVCs the team is in, is the project quality summary, which will start to show the execution data.
A
I think the next logical step down that path is the historic test data for projects. That will be the next click: you go from a project showing how many tests are currently passing, being skipped, and failing into a history of those metrics over time. That's also the next step beyond what we're tracking over 14 days, which is how many times this has run in master. That's really shallow, a super MVC.
A
We can expand on that data set. The concern has always been how much data we're going to retain, so I think there's still a spike in there for the team to noodle on: how do we get good directional data, a good amount of detail for the user, without having to store everything and blow up the database, so that GitLab doesn't suddenly become super slow when you're trying to load any sort of report page, or the pipeline page, if we start to pull that data in there.
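For anyone who wants to poke at that 14-day default-branch view before it lands in the product, a minimal sketch of assembling it from the existing REST API is below, using the pipelines list and per-pipeline test report endpoints. The token, project ID, and branch are placeholders.

```python
# Sketch: aggregate default-branch test results for the last 14 days
# using the existing GitLab REST API (pipelines list + pipeline test report).
# Assumes GITLAB_TOKEN is set; PROJECT_ID and BRANCH are placeholders.
import os
from datetime import datetime, timedelta, timezone

import requests

GITLAB_URL = "https://gitlab.com/api/v4"
PROJECT_ID = 278964          # placeholder project ID
BRANCH = "master"
HEADERS = {"PRIVATE-TOKEN": os.environ["GITLAB_TOKEN"]}

since = (datetime.now(timezone.utc) - timedelta(days=14)).isoformat()

# List recent pipelines on the default branch.
pipelines = requests.get(
    f"{GITLAB_URL}/projects/{PROJECT_ID}/pipelines",
    headers=HEADERS,
    params={"ref": BRANCH, "updated_after": since, "per_page": 100},
).json()

# Pull the test report summary for each pipeline and tally results per day.
history = {}
for pipe in pipelines:
    report = requests.get(
        f"{GITLAB_URL}/projects/{PROJECT_ID}/pipelines/{pipe['id']}/test_report",
        headers=HEADERS,
    ).json()
    day = pipe["updated_at"][:10]
    totals = history.setdefault(day, {"success": 0, "failed": 0, "skipped": 0})
    totals["success"] += report.get("success_count", 0)
    totals["failed"] += report.get("failed_count", 0)
    totals["skipped"] += report.get("skipped_count", 0)

for day in sorted(history):
    print(day, history[day])
```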
D
Yeah, when do we think it's going to land? Of course we have engineering allocations and reliability coming in as well; I'm not asking you to firm up a date, just realistically: when do you think there's going to be a preview that we could enable on our ops instance and start to see data, or even in the monorepo, so we can get some metrics? That would be a good start.
A
Yeah, it doesn't look like any of our issues have a fiscal-year quarter on them. We did tentatively slate the project-level test history for the default branch: I had put a FY23 Q1 label on it, so that would be happening at some point. We just kicked off Q4, so that's starting in February and running through... sorry, March, April.
A
And really, I say that, but Robbie's the DRI on this and on the prioritization, so you'll want to sync back with him going forward on this.
D
And I added the links under my name; there you have it, the numbered bullet points. Under my name there's a link to the daily stand-up, which is now down to twice a week. Our team did the work around this, so things are in the ops instance. Infrastructure already has Grafana pipelines data ingestion, so we piggyback on that boring solution, pipe it out to Grafana, and then we have another post-processing step that uploads to our PIs eventually.
D
So eventually, I would love to have this shown in the product, built from the feature set and not using Grafana. This gets us moving for now, and it also lets you see how we're using it; if it helps with product validation and product-market fit, then by all means, please leverage the links.
A
I want to say job duration over time, and it's really just comparing what is happening on main versus what is happening in that pipeline, or in that MR rather. I don't know how far they've gotten into that, but that's another area where we want to look at tracking metrics like job duration, test duration, things like that.
C
The metrics report on the MR, that's already done, but what we can't do is block or warn if you're exceeding a threshold. So historically we can use that to say, "Oh, here's the MR that created a performance regression for job A," and that's great, but it's almost easier... it's not almost, it is easier to find that data in Sisense than anything in the product.
A
That
makes
sense,
it
sounds
like
the
outcome
would
be
that
if
you're
beyond
some
threshold,
though
you
want
to
block.
C
Yeah, whether it goes red or just says, "Hey, are you aware that job A is higher than the p90?" or "This is 20 percent higher than the p99?", anything like that. That sort of configuration is what I have in mind, where it keeps you informed.
C
I think that's the best fit for our workflow, because we'll see different changes have impacts on individual jobs versus the entire pipeline. They may not manifest in the total pipeline duration, but one job might go up by 60 percent in duration unexpectedly.
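As a rough illustration of the kind of informational duration guard being described, here is a sketch that could run at the end of a job: it pulls recent successful runs of the same job from the Jobs API, computes the p90 duration, and prints a warning when the current run is above it. The token variable and the warn-only behavior are assumptions; this is not an existing product feature.

```python
# Sketch of an informational duration guard: warn when the elapsed time of
# the current job exceeds the p90 of its recent successful runs.
# GITLAB_TOKEN is an assumed CI variable with API access.
import math
import os
from datetime import datetime, timezone

import requests

api = os.environ["CI_API_V4_URL"]          # predefined, e.g. https://gitlab.com/api/v4
project = os.environ["CI_PROJECT_ID"]      # predefined
job_name = os.environ["CI_JOB_NAME"]       # predefined
headers = {"PRIVATE-TOKEN": os.environ["GITLAB_TOKEN"]}

# Recent successful jobs on this project, filtered down to this job name.
jobs = requests.get(
    f"{api}/projects/{project}/jobs",
    headers=headers,
    params={"scope[]": "success", "per_page": 100},
).json()
durations = sorted(j["duration"] for j in jobs
                   if j["name"] == job_name and j.get("duration"))

if durations:
    p90 = durations[math.ceil(0.9 * len(durations)) - 1]
    started = datetime.fromisoformat(
        os.environ["CI_JOB_STARTED_AT"].replace("Z", "+00:00"))
    elapsed = (datetime.now(timezone.utc) - started).total_seconds()
    if elapsed > p90:
        print(f"WARNING: {job_name} has taken {elapsed:.0f}s, above the p90 of {p90:.0f}s")
```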
A
Great, so we've covered the points I wanted to cover on dogfooding the test history reports. Thanks, everyone. Anything else to close that out, or are we ready to move on to speedy, reliable pipelines?
A
On speedy, reliable pipelines: testing is usually the biggest, or the longest, tent pole in how long pipelines take, and I think that's especially the case for us, which is why we talk to Kyle so much about how we speed up these tests and run them more reliably. But I think there are other opportunities there.
A
I had an MR open, and I have a research item open now that the MR is closed, to figure out if there's an opportunity to create a pipeline efficiency group around things like "hey, my pipelines are too slow", or "they're getting slower over time", or "hey, this job is always the slowest of all the jobs, even though it has, you know, only two commands in it". Answering questions like that, more exploratory questions and hopefully fewer alert-type things, is probably naturally where we could go beyond that.
D
Yeah, this is mostly a double-check from my end, because we've been hearing that we need to dogfood more and do more product validation. I wanted to triple-check that this stakeholder meeting is what we should continue to double down on, and that we can lean on you to collaborate with the rest of the groups in Verify, so I'm not proposing to change anything.
A
And there's a new Verify direction page that Jackie spun off. I don't think it has anything more specific, but it does talk about the direction of the stage a little bit more outside of Ops, because the Ops direction page is very broad. It's very similar to the deployment, or delivery, direction page that Kevin put together recently. I'll link to that in the agenda as well.
A
We haven't seen any progress on that front, with the engineering allocations and some of the other work we've prioritized recently around stability. That is a huge opportunity, I personally think, and we'll work on research to validate that. That's a place we can move forward on for Pipeline Execution, my new group, and I think Robbie can move forward with it in Testing.
A
If
it's
the
best
opportunity-
and
I
think
that
darren
is
starting
to
think
about
that-
and
the
enterprise
runner
management
as
well
and
I'll
have
because
here
a
little
bit
on
the
design
front
there.
But
those
are
all
areas
where
I
think
that
we
could
start
to
gather
data.
Put
interesting
views
together
for
users
to
get
more
out
of
the
platform.
D
Cool. I just want to voice my understanding: I know engineering allocations and reliability are key, so that was the highest priority last...
A
All quarter. I think our last point is yours, Mac. Do you want to voice it over?
A
There's nothing I had on the roadmap around this specifically, or around the template itself. Drew has moved to a different group, and he was kind of the champion of this effort; as with everything else, it just hasn't been a priority for us right now. Last I remember, and I don't want to throw this at Kyle too much, but Kyle and Albert were working on some experiments and we had a good number of results from that. I don't remember where we ended up, but I don't think the ball...
A
The
ball
wasn't
back
in
our
court
to
make
improvements
that
we
thought
would
move
the
needle
on
overall
pipeline
duration.
I
think
the
team
is
going
in
a
different
direction,
that
this
was
not
going
to
work
as
well,
as
we
had
thought
kind
of
dinner
that
raider
might
totally
weigh
off
and
throw
you
into
the
bus.
C
So
engineering
productivity
is
looking
at
how
we
can
use
dynamic
mapping
in
our
project
pipeline
to
run
a
selective
set
of
tests
that
apply
most
to
the
mr,
so
mac.
I
I
just
want
to
kind
of
reiterate:
that's
a
tooling
implementation,
not
a
product
implementation,
and
I
think
the
long-term
goal
is
that
maybe
we
can
learn
from
our
tooling
usage
to
see
how
we
can
bring
something
to
the
product
to
run
a
selective
set
of
tests,
especially
for
large
projects
like
that
are
similar
to
git
lab,
where
there's
187
000
tests.
C
I don't even know if it's still being used; that might be a better question. I know Zeff did that MR; he might know, or whoever the counterpart is for that group. I was just looking at the CI configuration and I don't see it in there anymore, so I don't think it's being used.
A
I see, okay. I can take a to-do to follow up with Zeff and see where that ended up; I owe him a coffee chat anyway. The other context I think I can bring to this is that we just did a number of customer interviews around, you know, flaky tests, green pipelines, and when pipelines fail.
A
You
know
flaky
tests
and
green
pipelines
and
when
pipelines
fail-
and
we
found
really
interesting
data
around
customers
who
can
run
all
of
their
tests
locally,
really
don't
care
if
they
have
to
rerun
the
pipeline
as
much
and
if
they
have
to
run
every
test.
The
microservice
architecture
was
the
primary
use
case.
A
So
I
think
that
we
could,
as
we
try
to
niche
this
down
and
focus
on
who
this
solves,
for
it
is
really
gitlab
and
the
gitlab
type
of
repository
in
projects
where
there's
just
so
many
tests
that
writing
them
locally
is
an
onstarter,
I'm
just
not
going
to
do
it
and
I'll
wait
for
ci
to
run
all
of
those
tests.
For
me,
if
that
helps,
simplify
our
efforts
here,
I
think
that
those
are
good
lessons
learned
from
that
research.
D
Yeah, I want to hone in on that: it's big-shop customers that likely have a really big backlog of tests written over five, six years. It would be different if you're starting fresh in a new startup with 20 tests; then there's nobody not wanting to run them locally, right? So, yeah.
A
I
think
that,
like
we
talk
about
not
running
as
many
tests,
I
wonder
if
there's
opportunities
around
not
having
as
many
tests
if
there's
other
ways
that
we
can
solve
this,
you
don't
want
to
run
as
many,
but
if
you
don't
have
as
many
you're
not
going
to
run
as
many
are
there
stale
tests
that
just
don't
exercise
code
anymore,
that
exists
and
helping
teens
understand
those
dynamics
as
well
as
we
kind
of
back
away
from
what
this
solution
is
and
think
more
holistically
about
the
problem.
C
I guess, are you hearing similar things from customers, that they want to run fewer tests? I see that in the industry for more established companies quite a bit, so I just wanted to make sure we're not coming full circle back to "let's remove tests from our test set" and overlooking something.
A
I'd
say
the
feedback
was
I
want
a
pipeline,
that's
when
I
started.
I
think
it's
going
to
run
green.
So
if
part
of
that
solution,
space
is
don't
run
a
bunch
of
tests
that
are
flaky
or
can
tell
me
that
tests
might
be
flaky
and
run
those
ones
first,
I
think
there's
a
lot
of
solutions
to
that
problem.
A
What
was
interesting
in
that
research
that
we
did
was
for
those
smaller
customers
if
they
got
still
three
quarters
of
the
way
through
a
pipeline
and
it
failed,
and
then
they
knew
that
retrying
it
was
the
fix,
they
were
still
super
annoyed.
Their
pipelines
were
only
five
minutes
long
though.
So
what
do
you
talk
about?
I
went
from
five
minutes
to
eight
minutes
in
my
pipeline,
and
that
was
super
painful
scott
and
I,
who
are
doing
the
research
just
kind
of
chuckled
to
ourselves,
like
you,
think,
five
minute
pipelines
are
long.
A
You
should
look
at
our
hour
and
a
half
long
pipelines
like
when
that
runs
45
minutes
and
then
fails.
That's
super
annoying,
but
that
pain
point
is
still
real
for
them
of
they
expect
that
feedback
loop
as
fast
as
possible.
So
when
things
get
in
the
way
of
that
that
just
a
retry
fixes
it's
super
annoying,
so
there's
lots
of
opportunity.
I
think,
within
that
problem,
space.
C
And
I'll
say
one
thing
one
plug
I'll
kind
of
put
it
here
is
as
we're
looking
at
contributor,
tooling
and
gdk.
One
of
the
things
we'd
like
to
do
is
allow
people
to
easily
run
that
subset
of
tests
locally,
whether
it's
the
tests
that
most
recently
failed
or
the
tests
that
most
apply
to
the
change
based
on
the
the
main
line.
C
We're
engineering
productivity
is
in
discussions
on
like
how
do
we
provide
that
data
in
a
way
that
can
be
consumed
with
a
cli
tool,
so
people
we
shift
out
of
that
mindset
of
I'm
gonna
push
my
code
go,
get
a
coffee,
wait
for
feedback
and
see
if
my
code
change
even
worked.
We
enable
people
to
do
that
locally
or
in
a
cloud
development
environment
of
their
choosing.
C
The tests that most apply to a change, that's something that is computationally expensive. We have to compute it on every MR based on the diff and align it to the dynamic spec analysis output, or just use the "find related tests" approach, and it varies based on tech stack. So it depends on which problem we're optimizing for there: the most recently failed tests, or the tests that most apply to the change. It's in progress.
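To make the "tests that most apply to the change" idea concrete, the simplest mapping-based flavor looks roughly like the sketch below: join the git diff against a pre-built file-to-specs mapping. The mapping file name and format here are illustrative assumptions, not the actual dynamic spec analysis output.

```python
# Sketch: pick the specs most related to a change by joining the git diff
# against a pre-built source-file -> spec-files mapping.
# The mapping file name and format are illustrative, not GitLab's actual output.
import json
import subprocess

# Files touched by the change, relative to the default branch.
changed = subprocess.run(
    ["git", "diff", "--name-only", "origin/master...HEAD"],
    capture_output=True, text=True, check=True,
).stdout.splitlines()

# Hypothetical mapping: {"app/models/user.rb": ["spec/models/user_spec.rb", ...], ...}
with open("test_mapping.json") as f:
    mapping = json.load(f)

related = sorted({spec for path in changed for spec in mapping.get(path, [])})
print("\n".join(related) or "No mapped specs; fall back to the full suite.")
```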
C
Yeah, I think that would be great. We have a bunch of detect-tests jobs that create a JSON file that then gets consumed later in the pipeline, or in subsequent pipelines, to know which tests to run. And then the other thing, the other product feature, that we find problematic with this is that our CI configuration allows us to split a job across lots of builds, so we can say parallel: 20, take our RSpec jobs, and run them on 20 runners simultaneously.
C
That's awesome; however, when we run a subset of tests, there's no configuration option to say "automatically adjust the set based on the duration data that is known", based on how long these tests take. So we're still running 20 executors, but maybe we're only running four tests on those 20 executors, so 16 of them run nothing. They just spin up.
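For context on the parallel: 20 point: each copy of a split job sees CI_NODE_INDEX and CI_NODE_TOTAL, and the usual pattern is for each node to take its slice of whatever list the detect-tests job produced, roughly as in the sketch below (the JSON artifact name is an assumption). The gap described here is that the 20 is fixed in the YAML, so a four-test subset still spins up all twenty nodes.

```python
# Sketch: each parallel node takes its slice of the detected test subset.
# CI_NODE_INDEX is 1-based; CI_NODE_TOTAL is the `parallel:` count.
# "detected_tests.json" is an assumed artifact from an earlier detect-tests job.
import json
import os
import subprocess
import sys

index = int(os.environ.get("CI_NODE_INDEX", "1")) - 1
total = int(os.environ.get("CI_NODE_TOTAL", "1"))

with open("detected_tests.json") as f:
    tests = json.load(f)          # e.g. ["spec/models/user_spec.rb", ...]

my_tests = tests[index::total]    # round-robin slice for this node
if not my_tests:
    print(f"Node {index + 1}/{total}: nothing to run (subset smaller than parallel factor).")
    sys.exit(0)

sys.exit(subprocess.run(["bundle", "exec", "rspec", *my_tests]).returncode)
```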
C
There is an issue, I think, for automatically determining that parallel factor, which I've seen before. But my point in talking about that is that there are multiple different problem areas; if you shift things into GitLab, then there's a different kind of problem that I think will surface. Sure, good to know.
A
Oh
well
lots
of
great
discussion
today,
robbie.
Hopefully
that
gave
you
a
lot
of
good
context
around
some
of
the
problems
that
we've
been
trying
to
solve,
for,
I
think
the
entire
time
I've
been
here
with
engineering
productivity,
some
of
the
efforts
that
we've
made.
A
I
think
we
dug
into
a
lot
of
the
stuff
that
we've
done
and
I
have
a
takeaway
to
circle
back
to
zest
on
that
field,
fast,
template
and
see
where
that
ended
up
and
what
next
steps
might
have
been
make
sure
we
get
that
ball
picked
back
up
all
right.
So
I
have.
E
A little bit of context around that: it turned out that the Customers team had already made some enhancements to their pipeline that sped it up, so when we implemented the fail-fast template, it didn't save them any time. I think it actually even added a couple of minutes, which was why we disabled it, but I don't know what the next steps after that were going to be.
C
Yeah, it's something I guess we could look at, to see how we can use the template more. But we have our own fail-fast logic around, I think, rspec foss-impact, where we'll mark a pipeline as failed if those jobs, which run earlier in the pipeline than everything else, fail. And I think we'll stop... actually, I think it marks it as canceled, because you can't technically stop all the jobs without canceling everything.
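For anyone unfamiliar with that fail-fast behavior, the shape of it is roughly: a cheap early job runs the most relevant specs, and if it fails, a follow-up step cancels the rest of the pipeline through the API instead of letting the long jobs finish. The sketch below shows that cancel step under those assumptions; it is not the actual rspec foss-impact implementation.

```python
# Sketch of the cancel half of a fail-fast setup: if the early spec job
# failed, cancel the whole pipeline via the API so later jobs stop.
# Assumes a token with API access in GITLAB_TOKEN and an assumed
# EARLY_SPECS_RESULT flag set by the early job.
import os
import sys

import requests

api = os.environ["CI_API_V4_URL"]
project = os.environ["CI_PROJECT_ID"]
pipeline = os.environ["CI_PIPELINE_ID"]
headers = {"PRIVATE-TOKEN": os.environ["GITLAB_TOKEN"]}

early_job_failed = os.environ.get("EARLY_SPECS_RESULT") == "failed"  # assumed flag

if early_job_failed:
    # Cancelling is the only way to stop every remaining job at once,
    # which is why the pipeline ends up marked as canceled, not failed.
    requests.post(
        f"{api}/projects/{project}/pipelines/{pipeline}/cancel",
        headers=headers,
    ).raise_for_status()
    sys.exit(1)
```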
A
Okay,
I'll
still
follow
up
with
zeff
and
then
just
get
that
run,
follow
good
stuff
just
to
talk
steph
and
we'll
go
from
there
about
what
we
did
actually
implement
and
what
next
steps
were
so
kyle.
I
will
probably
ping
you
an
engineering
productivity
and
make
sure
that
I
have
that
all
wrapped
up
we'll
get
an
update
out
to
everyone
about
answering
that
question.