Description
Today we had an in-depth discussion about the current state of the Find Failures Fast project and next steps to help Engineering reach their goal for Cost / pipeline (https://about.gitlab.com/handbook/engineering/quality/performance-indicators/#average-cost-per-merge-request-pipeline-for-gitlab) by running fewer tests in MR pipelines. We also touched on the priority balance for Verify:Testing between internal customer needs and our primary performance indicator of Paid GMAU (https://about.gitlab.com/handbook/product/ops-section-performance-indicators/#verifytesting---paid-gmau---count-of-active-paid-testing-feature-users).
A: This is the internal stakeholder call, or internal customer call, for the Verify:Testing group for August 27, 2020. I'm going to jump right into the agenda. There was a question from Mek about recording this, which we always do, and then posting it for internal usage, which we also always do. It's always on Unfiltered, often as an unlisted video, just because we do sometimes talk about customers in here, but I always link to the recording in the agenda.
A: If you go back, you can see the recording links for the previous meetings, and they get posted into our group testing channel. I can post that more broadly; absolutely, I'll start to post it into the quality channel as well, since y'all are an internal customer, so that you have more ready access to it. Then, moving into the roadmap deck: we're not going to go through it today, as is our MO. I just want to call out a couple of the changes of note. The epic for the performance...
A: The other thing, as I said: we have changed the name from JUnit report to test report, just to stay away from saying it's only for JUnit tests or supportive of Java, and to speak more broadly. So that was a verbiage change that you'll see in documentation and in the category material around the site, and we're starting to try to use it more. And then, of course, across most categories we're trying to work on smaller MVCs, and I linked to a smaller iteration of the test unit MVC, or test history MVC, rather.
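For context, the report keyword itself did not change with the rename. A minimal .gitlab-ci.yml sketch, assuming an RSpec job and the rspec_junit_formatter gem purely as illustrative choices:

```yaml
# Any framework that can emit JUnit-style XML can feed the test report;
# the rename reflects that it is not only for JUnit or Java projects.
rspec:
  stage: test
  script:
    - bundle exec rspec --format RspecJunitFormatter --out rspec.xml
  artifacts:
    reports:
      junit: rspec.xml  # existing keyword; only the feature's name changed
```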
A: We had started with test history at a project level. As we started to dig into that, we realized the architecture for the data storage for it was going to be hairy at best, and so we broke it down even further into a smaller, simpler change. And there'll be another MVC issue that pairs along with that one to display this history on the test summary, and one to display this history on the unit test report.
A: So, to present the same data in both places. I'll link that back in as well; that just came out of our team meeting this morning, and I just haven't had a chance to make that update here. So those are just the quick things to call out about what has changed in the deck.
A: I wanted to talk a little bit today about our current balance: balancing the needs and wants of internal customers with continuing to move forward on our key performance indicator for the group, around Paid Group Monthly Active Users, Paid GMAU. So we're really focused on implementing new features, or improving features, that are at the Premium or Ultimate level, and a little bit at the Starter level as well. The good news is that a lot of those are dogfooding features too, things that we're already building for internal stakeholders.
A: Things like screenshots attached to the unit test report, which is at, I believe, a Premium level (maybe that one is at Core), and enhancements to the code quality feature set, which are all at a Premium or Starter level. So we're accomplishing both goals with a lot of our tasks; we're not sacrificing one for the other right now, and we'll continue to try to balance that, because so many of our categories are at a minimal maturity.
A: One of the key components of moving to viable, which is always part of our direction, is that the category is broadly used within GitLab. So there are a couple of questions here. I'm going to go ahead and verbalize Mek's question and then answer it, and then, Joanna, I think you have a question as well. So Mek's question was: do you have insights on what's next, the next list of work that will improve Paid GMAU? We do.
A: That is, first, adding telemetry for all of the paid features; once we can count those things, we can show them on a graph. So that is step one, and we have an epic for it. You can link to the epic from the performance indicator page, which includes our graph and all of the improvements that we're working to make. And then, from a new feature set perspective, the code coverage data for groups will be a Premium feature, and so we'll be working to promote that with TAMs, through social...
A: ...all of those things to get users working with it, or engaging with that feature. That's the next new feature in that regard. After that, it'll likely be enhancements to the code quality report, which is also a paid feature, just making it a lot more usable. That is currently the one thing that we are tracking, and it has very low usage, so we'd love to see some enhancements there to make it more functional, more widely used, and more engaged with. And Joanna, I'll let you verbalize your question, because I've been dominating.
B: Sure, thanks, James. Yeah, so my question was kind of along the same lines: have we picked an MVC, or a category, that we want to focus on moving higher in maturity level?
A: Yeah. So if you look at our maturity plan... code... sorry, here we are. So, today being August 27th, Code Testing and Coverage is the next category that is scheduled to make a move.
A: We've started the process of going through the category scorecard, the category maturity scorecard (the CMS), to measure whether we are at a viable state for it, and you can actually see at the top of the page what constitutes viable. This is a new process that we're going through in measuring maturity for a category. So Code Testing and Coverage is the next one; after that we'll be looking at Code Quality, and part of that is working on those dogfooding items.
A: We've had some great discussions internally with the folks on the Gitaly team and some other folks about what is preventing them from more widely using the code quality feature as it stands, and we've identified some key deliverables there. They're included in a maturity epic for code quality, which I'll link back to in the video notes and in the agenda, covering what we need to do to solve the problems and get to the outcomes they need in order to make more broad use of it.
A: The nice thing is that that also overlaps with a lot of the feedback we're getting from customers about how we can make the code quality feature set better for them and start to replace some of the other code quality vendors they have in place. That helps us become the single tool across their DevOps toolchain, so that rather than relying on integrations or other third-party toolsets outside of GitLab, you're seeing all of that in one place.
A: And then Kyle, who also is not present today, had a status update for us, thanking us for the collaboration on the work that we're doing on the test file finder. They've replaced the... the GitLab... or, yeah?
C: So they've merged it into the main GitLab project, using the tff tool, the gem, instead of using the built-in GitLab helper function. So now tff is, in fact, being dogfooded by the main GitLab project. I believe the other merge request he has opened was to put that into one other place where they were using the built-in tff helper function instead. So they did most of it in the first MR, and then the second MR is to do the tricky bit with the CE and EE pipeline file matching. So that looks like it's all coming down the pipe, and it looks like it's going well, so I'll just continue. I'm pretty happy about that, now that we're dogfooding it.
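As a rough sketch of what that dogfooding can look like in CI (not the actual GitLab configuration; the job name, mapping file name, and tff flag spelling are assumptions):

```yaml
# Merge request job that asks the tff gem which specs map to the changed
# files, then runs only those specs instead of the whole suite.
rspec-selected:
  stage: test
  rules:
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
  script:
    - gem install test_file_finder
    - CHANGED_FILES=$(git diff --name-only "$CI_MERGE_REQUEST_DIFF_BASE_SHA"..HEAD)
    - MATCHING_SPECS=$(tff --mapping-file tests.yml $CHANGED_FILES)
    # Only run RSpec if the mapping matched anything.
    - if [ -n "$MATCHING_SPECS" ]; then bundle exec rspec $MATCHING_SPECS; fi
```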
C: I think I can feel confident to try and get that gem included in the Projects part of Product list for metrics, because we are using it in GitLab proper now, so adding it to that list should be easy. But I guess we'll see; it still requires executive approval to add it to the Projects part of Product CSV file, get it into our metrics, and allow merge requests into that file, or into that project, to be counted in the MR rate.
A: So I'll verbalize Mek's couple of comments: "Great to see this update. I'd request help to ensure that documentation is pulled in early for the add-ons we are shipping."
A: I think that's following our existing process of working with Marcel to ensure that we have documentation updates for tff as we're making them. Another point from Mek: "Once this lands, we get to see if this helps with the customer portal. From my understanding, tff is currently disabled there." My brain's not connecting what "customer portal" means, though; can anyone help?
D: Sorry, that's where I enabled the actual template initially, but they had optimized their pipeline so well that it actually ended up costing us quite a bit more to have it enabled than not. So I've gone back and forth on a few different implementations of just trying to get the tff gem enabled. The most recent one was trying to go back and see, after my last talk with... our brief interchange about, what was it, interruptible? Yeah... to figure out if I could find a way, just through CI, to go back and do it. And I'm about to loop Drew back in on some bash stuff, because I bashed my head against the wall on that long enough. So that's kind of the next step there.
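The interruptible keyword mentioned there is the stock GitLab CI one. A minimal sketch, with a placeholder job name, of how it keeps redundant MR pipelines from accruing minutes:

```yaml
# With the project's auto-cancel redundant pipelines setting enabled, a
# newer pipeline on the same ref can cancel this job while it is running.
tff-select:
  interruptible: true
  script:
    - echo "safe to cancel when a newer pipeline supersedes this one"
```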
C: And I think we're talking about the same thing here. I remember seeing a CI minutes breakdown from you; that was super helpful. My read of it was that the tff logic looks good for the failure cases, and the less useful part was the template: the CI configuration of tff ended up being inefficient because, while the failure cases were good, overall we saw a huge increase by adding the job to the pre stage.
C: And so the goal here is going to be trying to reconfigure tff in a narrower way, to not add all those minutes. Is that accurate?
D: Right, right. It's scaling it down to just a single job within a pipeline, as opposed to the entire stage. So that's kind of similar to what Albert's doing. But what Albert is doing, the logic there, is not going to be something that we use in the customer portal, because we don't have this CE versus EE issue going on. So he's got some more complicated logic.
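To make the contrast concrete, a sketch of the two setups being compared; the template path is the Fail Fast Testing one as I recall it from the docs of this period, so treat it as an assumption:

```yaml
# Wholesale: the fail-fast template wires its own job and early stage into
# every pipeline, which is what drove the extra minutes described above.
include:
  - template: Verify/FailFast.gitlab-ci.yml

# Narrower alternative: skip the template and define a single selective job
# scoped to MR pipelines only (see the rspec-selected sketch earlier).
```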
B: I was just gonna say, I think that answered my question, which was whether this work that Albert had been working on overlaps with what Zeph was working on. My concern was that we were reinventing the wheel a little bit, but it sounds like there is some overlap and we still kind of need the two different approaches. Is that right?
D: Sorry, James.

A: No problem, we said exactly the same thing. But that answered your question. So, real quickly, the roadmap for the rest of dogfooding. There are three items on here (I should share my screen to show this), one of which is not scheduled, which is just to add an example to the examples page. The other two...
A: I think I bumped one of these out a milestone and left the other, but I think they would go sequentially, not in parallel: incorporate coverage detection for Ruby, and then incorporate coverage detection for more languages. That is pretty hand-wavy as far as what those languages might be, and whether there's a way to do it that is a little more agnostic, kind of the way we did with running the tests for the new and modified files, where it would pick things up across multiple languages through a really simple mapping.
A: So I'd love to hear from the customers we have here, Zeph and Joanna, and from the engineers we have, Ricky and Drew: what makes more sense to tackle next? And does the timing of it, scheduling it a couple of milestones out, where we deliver in about two and a half months, make sense, or is this something that just isn't the highest priority right now and we can further backlog it?
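For reference, the "really simple mapping" pattern looks roughly like this: a hypothetical mapping file in the style the test file finder consumes, where the exact schema and the tests.yml file name are assumptions:

```yaml
# Each entry pairs a source path pattern with the spec file(s) to run when
# a matching file changes; %s is filled in from the capture group.
mapping:
  - source: 'app/models/(.+)\.rb'
    test: 'spec/models/%s_spec.rb'
  - source: 'lib/gitlab/(.+)\.rb'
    test: 'spec/lib/gitlab/%s_spec.rb'
```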
C: To look at the Shopify... that Shopify blog post about selectively running tests, and to see if we could somehow incorporate that type of strategy into the gem. That seems like a big thing, though, especially because it involves a lot of... like, we'd probably have to execute a bunch of different programs from within the gem, on the terminal, so that we could build the traces out, so that we know what tests are covering what things, and so we know what tests we need to run in a given situation, without actually using coverage. Because I think, as the Shopify blog post says, coverage reports aren't necessarily a great and truthful mechanism for deciding what tests you need to run all the time, particularly if your goal is to only ever run a subset of the tests on merge request pipelines.
C: So, from my perspective (don't quote me, but I think that's what they're doing, which is good), if we could get it even tighter, where, okay, there are backend changes, but we're only gonna run these tests because they're the only possible tests that would pass through the part of the code that was modified... yeah, that'd be cool.
A: So the dogfooding plan that we have is based on kind of the steps that Albert laid out in one of our initial feedback issues. So the next step would just be to validate the latest change that we have: does it get us to an improvement that we're happy with, where, with incorporating the gem into the main project, we have what we want?
C: Yeah, I think I was almost thinking that it would be cool to just kind of skip over the coverage stuff, because it's kind of been there, done that, and a lot of different tools provide that functionality; if we could leapfrog right into the kind of final goal, that might be cool. I also think that the dogfooding has begun, we're officially dogfooding it, and either way I think there are two avenues forward. We kind of need to talk to engineering productivity and figure out, okay...
C: ...what's the next thing that's on your mind, now that you've started using this gem? And then the other avenue is, with Zeph, trying to figure out how it can be made effective for the customer portal. So I think those are kind of our two natural dogfooding pathways, but I don't know; that's the way I'm thinking about it. I don't know what you all...
A: ...think. Zeph or Joanna, do you want to weigh in on that?
B: Yeah, I'm also, I think, with Ricky about digging in more, a bit deeper, on our approach to how we're picking the tests, so that we have a solution that feels more mature, rather than having more MVC-type implementations that do a little bit but don't quite get our customers there.
A: Okay. So I think next steps, then, would be to go back to engineering productivity. I can set up some time with Kyle in a couple of weeks to talk through how it's going with the current implementation they have, and whether we've met the objectives of the dogfooding issue so we can close that out. And we can talk at that time about some broader outcomes, and we can start to work with the wider community along the lines of: now, great...
A: ...we want to see how you're using this; what are the other outcomes that we can help provide? Here's the direction we're thinking about going, piggybacking on that Shopify blog: you know, running fewer tests based on code paths and what's likely to be executed based on the files you've changed.
C: I think it'd be interesting to ask, when we're having that conversation with Kyle... I guess: what would it take for us to convince them to run a subset of tests instead of the whole suite? I know that was one of the takeaways from the Shopify blogs, that, you know, "we just don't run all the tests anymore."
C: They run the entire test suite before they deploy; right before their release goes out, they're running the entire test suite on the primary branch, right? But I'm interested in what kind of confidence we would need to be able to give them, as a customer, to say: okay, you know, this really is good enough, we're going to stop running all the tests. I don't know what their evaluation criteria for that would be, but I think it'd be cool if we did have that.
C: That seems like a good, defined objective.
D: Yeah, and we don't have to start from scratch with that. The whole Shopify dynamic analysis approach is something that some folks at Toptal had created; it was an algorithm that...
D: ...so you can see exactly where these things happen, both in the app as it's running and within the tests, so it can map the two together, and then you select those tests based upon that mapping. There are some difficulties that Shopify calls out, like the mapping getting behind, some things like that, but we have some work already done for us if we want to figure out how to make use of it and take that further step.
C: One thing, just to, you know, continue to beat this horse, is...
A: Does that sound...?

C: Great, yeah. I think we'll just basically replace however they're doing it now with the gem. I don't know what they're doing right now to pick and choose whether they're going to run the backend tests or not, but it's probably doable with the tff gem, and if it's not, then that's a good point for us to dig in and do a small improvement so that we can get it there. Yeah.
A: It could be interesting, too, to start reaching out beyond our internal customers, just thinking more broadly as the Testing product manager, to our customer base, who potentially are also using or developing in Ruby on Rails, because that seems to be where it's going to be most applicable, or where it's going to be easiest for us to support, and say: hey, have you started using this? Would this replace whatever methodology you have to try to run fewer tests in your MR pipelines?
A: I wanted to just touch on, real quick, the other dogfooding things that are going on. We talked a little bit about code quality already; there are some key things that we're going to develop there, including blocking a merge on a degradation, and including the code quality information in the MR diff, which the team has a lot of experience with, parsing data and presenting it there, based on a current community contribution. I'm hoping that we can leverage some of that knowledge to make that an easy thing.
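For orientation, those deliverables build on the existing code quality CI plumbing. A minimal sketch using the documented template and report keyword, with the report file name as the template's default:

```yaml
include:
  - template: Code-Quality.gitlab-ci.yml

# The template defines the code_quality job; its report artifact is what
# the planned MR diff annotations and merge blocking would consume.
code_quality:
  artifacts:
    reports:
      codequality: gl-code-quality-report.json
```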
A: What do we need to do to start including the accessibility template as part of our standard pipelines, so that as we're making front-end changes and deploying review apps, we're starting to run a scan there and look at how the accessibility is changing? And then visual reviews, which I'm starting to poke at a lot more, because (a) it has really low usage and (b) it is a paid feature. So: starting to understand how we can dogfood that and tell the story about how it's helped us internally get better feedback faster out of the review apps we're already deploying, so that engineers can more quickly make the changes they need to make based on feedback from the rest of the team or the product manager.
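A minimal sketch of that template inclusion; the template path and the a11y_urls variable are the ones the docs describe, as I recall, and the URL is a placeholder:

```yaml
include:
  - template: Verify/Accessibility.gitlab-ci.yml

variables:
  # Point the scan at the review app deployed for the current branch.
  a11y_urls: "https://my-review-app.example.com"
```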
A: Whoever that might be. And then, Parker, you said no need to verbalize, but you were at Commit all day yesterday; we'd love to hear how it went.
E: Yeah, sure. I didn't know if we would have time or not, but it was just a cool confirmation of what I've seen on the direction page and kind of what I've gathered from some conversations here around code coverage. So there's a customer who is a DevOps engineer at USAA; if I remember correctly, I believe his name was Michael, Michael Baker.
E: And he came in and we talked for about an hour, because it was a slow period during the booth, and he started asking some questions, and I'm talking about what really aligns with the Validate... Verify and Testing category. And he was saying it was really neat that we had this direction page; he had no idea that we were building this, and they are currently building essentially a code coverage for groups internally right now. So I told him, you know: hey, keep an eye out, right?
E: I sent him to the issue and said: leave a comment, you know, say "feel free to participate," give a thumbs-up, right? Let us know that you're working on this. That will be very helpful.
E: So I thought that was cool to see, and that's validation for you all and what you're working on: that it will be useful for customers and that it'll save them time too, right? Because they're obviously investing in that right now. And he said, at the end of the day, you know, what would be really awesome...
E: ...he's like: I don't want to have to do a lot of work to find out the information I need. If the badges just signify, you know, the info that we need to see now, and we can have a trend and visualize it, then my manager can just look at it and go, bam, you know, this is what I need to see. That's kind of what he was getting at. So that was a really cool thing.
E: I mean, we've talked about that a few times, so I just thought that would be neat for y'all to hear. That's awesome. Overall, Commit went really well, though it's hard when you do a virtual event like that. I was at the demo booth for most of it and then checked out some other things, but traffic was steady and it wasn't overwhelming, so we were able to have conversations like the one I just described, consistently throughout.
A: All right. As always, thank you for helping take notes; Joanna, Ricky, you're rockstars at that, and I am not. I'll get this posted up to Unfiltered before lunch (my lunch, which is only in about an hour) and put the link back into quality, group testing, and the agenda, so everyone has access. Thanks, everyone.