From YouTube: Create & Verify Frontend Group Session: Availability in Lovable Stages (February 2021)
Description
Right after the end of FY21-Q4 we had a call to discuss and share tips on dealing with Availability in Lovable Stages — or to be more exact, Lovable product categories: https://about.gitlab.com/direction/maturity/
In this call we had Frontend Engineers from the groups:
* Create: Source Code
* Create: Code Review
* Verify: Continuous Integration
* Verify: Pipeline Authoring
* Verify: Testing
This video captured that call, and the following document holds the agenda:
https://docs.google.com/document/d/1_yyaaMatg0et4OigNGsjJPZDnV0I4sB8xy9MHoVPg78/edit
A
Hi everyone, welcome to the quarterly meeting of frontend group sessions, where we're going to be addressing the topic of availability in lovable stages — which, I should say, it's more correct to say lovable product categories, and we do have a couple. If you look at the product category maturity board, Code Review, Source Code, and Verify are the stages where they live, so that's why we're all here together.

A
We are looking to learn from each other, and we're going to have an open discussion about this. We have five sections. The floor is open for discussion; if you want to say something, just put your point in the bullet points and we'll do our best to cover for each other taking notes, so please try to do that. The big question here is how to deal with availability — ensuring availability while maintaining a good velocity in shipping work to our customers.
B
I don't have edit access to the document. I don't know if other people do, but I can't help add notes.

A
Loving the ASCII art. All right, now that everybody has access — can someone confirm you have access and can edit? Type something. There we go, I see a colon, so there is access. Can we go on? Thanks so much. There's a link to the maturity board in case you want to see it. The first section — I'm going to dedicate a couple of minutes to it — is basically just defining availability, and we do have the definition.
A
So I'm going to read the definition so that we can all see it: issues with the availability label directly impact the availability of the GitLab.com SaaS solution, and it is considered another category of bug. So basically it's about making things available — making sure the features that customers use are available and don't stop working all of a sudden.

C
I would say, for me, one of the things that I've always tried to keep in mind is that new work should never impair older work. Whatever I'm adding to the code base shouldn't influence the previous features that were already working for our users, and for me that's also availability.
B
I have a lot to say in a later section about all of the different ways and things that we're doing to keep things unbroken, but it can't just be this tenuous thing — really good availability is a whole system that makes it easy to make things available.

A
I can add my perspective while you're thinking. It might not be straight-up availability from the perspective that the users will see the features working, but I've grown to feel like even the minor, or the smallest, most minute visual regression or misalignment —
A
They also tie in with availability. Not that the feature is not available to them, but in our stage I feel like the impact is kind of the same. If you have misaligned icons, it starts to become a nuisance, but if it's on those very heavily used paths, and you always bump into the same annoyance, it starts to become pain. So I feel like at times visual misalignments take on the importance of availability. Thoughts?

C
Yeah, I'd like to add on that, because I think that's a very good point. I used to be a QA engineer back in the day, and one thing that we would do is we would have something called a RICE score, which is a concept where, when you're assigning a weight to an issue, you don't just take into account how badly it impacts the feature, but also how often and for how many of your users.
C
I think that ties into this idea that it's not because it's a small issue that it's not important — it might even be more important sometimes. A misalignment, if it's seen by everyone and it's seen all the time, is worse than one user having to reload their page once, and that can tie into availability. We can see it sometimes in the pipeline graph: for a misalignment we get dozens of comments, whereas a completely broken page might not even get someone raising an eyebrow. So it really depends.

A
We do have some of that factored into the severity, by the way. If you look into the severity table it does have — let me just check, give me one second, maybe we don't — the size of the crowd, the impact that it had.
A
Awesome, thank you for the contributions. Let's move to our next section: feature flags, friends or foes. We have 25 minutes to discuss this, and this section is more focused on how we can leverage feature flags to be an aid for efficiency and not become an obstacle along the way. Do you use any special techniques, or have you had any learnings in the past couple of months of rolling this out, that you'd like to share with the team? Miguel, you have our first point, take it away.

F
Yeah, I want to say that they are friends, and I want to point out that they have become easier to use. In the past few months we got the YAML files that indicate which team they belong to, who's the owner, the rollout issues.
F
I think there are some friction points, at least for me. Slack is very inconvenient, especially because I forget the commands that I have to use; it's very easy to forget that we have to put --staging, and then you have to be careful with that. So it's not a very fun experience to modify them, but I think that's something we can work on to improve.

F
I do think they become enemies when we have them stale for many iterations and nobody knows what they are anymore. The worst ones are the ones that are stale and defaulted to false — like, what happened with that? Nobody will answer, and they were just an idea. So I think it's important to control the age of a feature flag, because once they get very old, they are definitely a sign that something went wrong.
B
Yeah, I love feature flags. They've been really helpful for a lot of different projects. I feel like feature flags are so, so important for our stages for a couple of different reasons. It's helpful just to have ChatOps so that, if something breaks, we don't have to worry about getting a hot patch out there or how long it's going to take to roll everything out — which is an obvious sort of baseline statement about them, but it has come in handy where it's like: that's very broken, okay, now I turned it off, without having to deal with any other parts of it. It's also been really helpful because GitLab has this ideal that we're shipping all the time and we're shipping things incrementally, but I think with our stages we're starting to hit the edges of that philosophy, and one of those edges can be replacing things or reworking things.
B
I've been working for a number of months now on basically a recreation of our pipeline graph, for many reasons: it was brittle in many different ways, it has just outgrown what it was built for, and it got to the point that adding features to the graph was very hard. So we were like, well, we need to redo this. But you can't redo the pipeline graph and roll it out incrementally, right? You can't be like, okay guys, we took away your whole pipeline graph and now you have one job, and we're going to build it all up again. You just can't do that, so being able to use a feature flag for that has also been super helpful. One tip about that approach that we've come to: using async loading with the feature flags has been really important, because since we're essentially building a second pipeline graph in the same path as the first pipeline graph, we don't want everyone loading code they can't see.
B
So in our bundle file we've been using async loading with the feature flags, so that we're never loading code you can't use, which I think is a pretty good tip.
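As a rough illustration of the async-loading tip above — a minimal sketch only, assuming a Vue-style app where the backend exposes flags on `gon.features`; the flag name, component paths, and helper names are made up, not the code from the MR referenced later in the call:

```javascript
// Only fetch the bundle for the new pipeline graph when the flag is on;
// users still on the old graph never download code they cannot see.
const usesNewGraph = Boolean(window.gon?.features?.newPipelineGraph);

async function mountPipelineGraph(el) {
  if (usesNewGraph) {
    // A dynamic import creates a separate async chunk in the bundle.
    const { initNewPipelineGraph } = await import('./new_pipeline_graph');
    return initNewPipelineGraph(el);
  }
  const { initLegacyPipelineGraph } = await import('./legacy_pipeline_graph');
  return initLegacyPipelineGraph(el);
}
```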
B
The one downside is with tests, I think: our RSpec tests default to the flags being on, and in Vue unit tests you have to inject the flag, which isn't inherently terrible, but it's a small friction point where you're like, great, I have to go fix this, and it's not really clear in our RSpec tests. Something I'm working on now is trying to figure out how to handle that — you just have to double your tests to make sure you're covering both paths while you roll it out.
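A minimal sketch of covering both paths in a frontend unit test, assuming Jest with @vue/test-utils and a component that reads flags from an injected `glFeatures` object; the import path and flag name are illustrative assumptions:

```javascript
import { shallowMount } from '@vue/test-utils';
import PipelineGraph from '~/pipelines/components/pipeline_graph.vue';

// Helper that mounts the component with an explicit flag value, so each
// test can exercise both the old and the new code path during the rollout.
const createComponent = ({ newPipelineGraph = false } = {}) =>
  shallowMount(PipelineGraph, {
    provide: { glFeatures: { newPipelineGraph } },
  });

describe('PipelineGraph', () => {
  it.each([true, false])('renders with the flag set to %s', (flagValue) => {
    const wrapper = createComponent({ newPipelineGraph: flagValue });
    expect(wrapper.exists()).toBe(true);
  });
});
```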
A
Thanks, Sarah. Two things — thank you so much for that. I added a little to-do for you: please, after this call, not now, add a link to an MR that does exactly that, so that we can all have a reference for asynchronously loading the code depending on the feature flag. The other is about the thing you just said about covering both paths.

A
One of the things that I can share, that really bit us in the past, was when we were developing features that were interdependent. In this particular case it was batch loading the diffs, which Justin was working on, while Thomas was working on not loading the inline and parallel data at once — we'd only load the one we needed, and then when we switched we would go get the next one. They're not obviously related, but they're dealing with the same part of the code base; they're still dealing with the same part of the logic of the merge request app.
A
So what we did was we had two feature flags for that, which is great, but then they compounded. Now we had to cover the scenarios for batch diffs on, batch diffs off, and then, for each of those, the inline and parallel split on and off. This was a mess to roll out, and even though they were thought of as being done separately, they ended up being rolled out at the very same time.
B
And
that
reminds
me
of
one
other
note.
I
just
wanted
to
add
that,
because
of
the
feature
flag
2,
we
actually
have
just
duplicated
some
code.
We
ended
up
leading
leaning
into
you
know
like
there's
some
code
where
okay,
I
can
inject
whether
we're
using
the
graphql
or
the
rest
end
point
and
know
that
so
that
like
because
you
have
to
look
up
data
slightly
differently,
there's
like
a
whole
other
talk.
B
I
will
do
someday
on
how
we
did
this,
but
one
of
the
things
we
did
do
was
to
a
certain
extent
we
just
duplicated
files,
and
I
was
sort
of
loathed
to
do
that
at
first.
I
remember
having
a
whole
conversation
with
peyton,
where
he
was
like
just
duplicate
things,
and
I
was
like
no.
We
cannot
do
it
and,
like
the
answer,
was
three
layers
of
those
files
are
just
duplicated
and
they'll
get
deleted
when
they're
done
and
all
their
specs
will
get
deleted
when
they're
done.
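A minimal sketch of the "inject which endpoint we're using" idea, using Vue's provide/inject; the component, method, and value names are illustrative, not the actual pattern from that work:

```javascript
// The entry point decides the data source once, based on the flag, and
// provides it to the whole tree, e.g.:
//   new Vue({ provide: { dataSource: useGraphql ? 'graphql' : 'rest' }, ... });

export default {
  name: 'JobsList',
  inject: ['dataSource'],
  methods: {
    async fetchJobs() {
      // Look the data up slightly differently depending on the source,
      // without every child component re-checking the feature flag itself.
      return this.dataSource === 'graphql'
        ? this.fetchJobsViaGraphql() // hypothetical helpers
        : this.fetchJobsViaRest();
    },
  },
};
```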
A
Cool, thanks Sarah. Moving on, one of the frustrations I have with ChatOps, which I wanted to share here in case anybody has any tips, is that in the usual rollout process we have to enable the flag in staging first, then in specific projects in production, and then go for everything, all projects on GitLab.com. And there's a step that I don't have a particularly good solution for, where you have it enabled for the gitlab-org/gitlab project and the gitlab-com/www-gitlab-com project.
C
Yeah, the only problem with that — and I've talked about this before — is that if you do that and then you do a rollback, let's say you disable it and you just say disable, the actors will take precedence over the blanket disable. So then you're only partially disabling it: if the ones you specified before — let's say gitlab.com is on, and it's on for this user, and then it's on for everybody — and then you say just "off for everybody", it will still be on for gitlab.com and the other actors.

C
You have to disable all of them one at a time.

G
Yes — if you already have actors defined and you enable it for all, it will work for everyone. But there is the caveat that Fred mentioned.

A
Okay, that's good to know, but I could swear that I tried it and it didn't work. I'll revisit that; no need to linger here. Thanks for that. Oh wait, I'm putting this on the wrong point! Thank you so much, Jose. So, noting it down: enabling for all should work, and Fred said rolling back will keep the actors on. Okay, moving on.
A
The ChatOps — ChatOps is something that we developed. I believe there's a project somewhere in GitLab; I don't remember exactly which project it is. Yeah, gitlab-com/chatops, and that's the project that has, I guess, the code for everything. Miguel asked: is this an internal project? Yes, it is. Definitely feel free to go in there and see if you can open an issue for it.

A
And that's fair in usability terms — in usability we always say that if something is difficult to use, it's not on the person, it's on the system. So definitely open issues with suggestions to improve, Miguel; that's the best way to go. Anything else to add, Miguel? I cut you off there, sorry. Good, awesome. My next point, which is something that we have been wrestling with over the past couple of weeks on Code Review, is that we're struggling with the "always default to false" part of the process.
A
Yes, most often that's what we want — especially when we're rolling out features in pieces and steps, we don't want to make it default on — but sometimes, just sometimes, it makes sense to skip the overhead of rolling it out and have it enabled by default. Product has been pushing for this; we, on the engineering side,

A
resist it, because some of the times when product pushes us to skip levels we have had regressions, and we had to revert the code, which is never pleasant. But this seems like a good halfway solution: if we have the confidence that the code is well tested, and it has been well tested, and we still want the assurance of being able to turn it off if things go sour, it seems like a good way to go.
C
No matter what, we never avoid the rollout, to my biggest sadness in life. But — and this also ties into the next point, so I'm kind of merging them together — sometimes product is expecting a feature to roll out in a certain milestone, and the code will ship, but because it's defaulted to false, the rollout might take longer than the milestone limit. So the code is done, but it's still off, so they can't do the release post. We felt that pressure in the last two milestones, at least in Pipeline Authoring, where it's like:

C
well, why isn't it already in? Why isn't it already on? And it's like, well, it's defaulted to off and we're rolling it out, so you can't make a release post about it because it's not defaulted on. That's where the friction comes from, and sometimes, again, it's: is it a rush, and should we just distance ourselves from what product wants in terms of releasing — they should just do the post when it's done and it's on — or are there instances where it's safe to just turn it on and ship?
A
But I think we as EMs and ICs have to start thinking about iteration and giving time for the rollout. If a deliverable is too large to fit in a milestone and still fit the rollout after the merge, we have to cut it down into shorter steps, and that starts in the planning stage.

A
So I understand product's frustration. We have to be aware of and empathetic to them: they're trying to ship something to customers, they announce it at the kickoff, and all of a sudden it's merged two days before the freeze — not the freeze, but the end of the milestone — and all of a sudden the customers won't get it because you have to do the rollout. That creates pain for them, and it creates this thing of: the engineer developed the feature, why is it not done, why is it not visible?

A
So we have to find the middle ground here, which is: cut smaller steps and make sure that you account for the rollout. The one thing that I would say is that sometimes features are by themselves opt-in, which was one of the examples: the file-by-file review of merge requests sat in user preferences, so the users had to go there to enable it. It didn't make sense to default that to false.
B
I'm curious what other people think about this, but I feel like this is just a tension between what GitLab theoretically wants sometimes and the actual physical limits of the universe. When you're in a very mature stage, sometimes even things that seem small are big. I think that cutting it down is a good plan when it can happen, but I also feel like, a lot of times in our stage,

B
we talk to the PMs as well about just accepting where the stage is, and that sometimes means something is going to be built in one milestone, it's going to spend a milestone on .com default off, and you're really just going to announce it at the next milestone, and no one's going to die — that's just sort of how it is. I feel like it's that tension there, where I'm not uninterested in our company values.
H
Yeah, so we had a regression not too long ago with one of our back-end engineers that kind of went through a few different people and a few different things, and so we had some things put in the docs that say: hey, let's always default to false. I think that's a good idea for safety, because with lovable stages the last thing we want to do is let our customers see a bug, and there are different use cases, like —

H
you know, just do a lot of testing for that. But if it's really customer-impacting, I would like to always default to false, for these reasons: it's come back to bite us in the butt in the past, and the cost of implementing that feature flag defaulted off is a low cost compared to actually having that regression or that bug and having to go through the headache of rolling back, or that whole process.
A
Yeah, I don't completely disagree, but I think the value of having an option to still turn it off, in case something sudden and unexpected happens, makes it worthwhile shipping it with the feature flag there even if you default it on. We've had a situation in the past where we were able to mitigate an S1/P1 in 40 minutes because the feature flag was in; otherwise we would have had to wait for a deploy. The customers were very upset, but after those 40 minutes it was mitigated. Any more thoughts?

H
Yeah, so it was defaulted off — I mean defaulted on — and then the S1/P1 was there?
A
Sorry, no, in that case it wasn't defaulted to on. It was a regular rollout of the feature, and then after months of the feature being online — two months, I think; it was the repository browser — the customers had branches with hashes in them, octothorpes, and that broke the app, the Vue app. The fact that we still had the HAML implementation in place allowed us to quickly toggle back to the HAML implementation.

A
So what I'm saying is: the value of having a feature flag there is separate from what the default value was.
H
Right, right, right, yeah, I can totally agree with that, because we actually ran into that this milestone with something that's been in the code base probably three or four months, and that feature flag — it wasn't an S1, so we could push a little easy fix, but having that flag still in the code base would have been beneficial there.

H
It's kind of hard to say. I guess it really just depends on the feature whether we're going to default off or on. A lot of times I find that I just don't ship with a feature flag if it's going to be defaulted to true, but, you know, in some cases like that it can come back and help you in the long run.
E
I just wanted to quickly say that when we do ship something with default on, that's affecting on-prem customers, and we can't change their feature flags once they've installed the new version of GitLab. So defaulting to true before it's tested on GitLab.com isn't as safe as it feels. Even though that feature flag is there, we have to reach out to the customer and say, hey —

H
Yeah, you would have to go into the Rails console at that point: give the customer the feature flag name, and they'd have to go into the Rails console and do Feature.disable with the feature flag name, which is kind of a quick fix, but it's a pain for on-prem customers that are running their own servers.
C
I have a question that might lead to an unpopular opinion. Because we now have rollouts, and we default features to off, and we are in a more lovable stage —

C
isn't it, in a way, encouraging us to build larger features so that we don't do as many rollouts? It kind of encourages rolling out the feature only when it's more stable, so that you do one rollout when everything is looking good, instead of one quick feature and iterating on that. It's more of a question than an opinion, but I feel like with the rollout and all the steps and the time it takes,

C
it's very tempting to cram more features under that flag and under that rollout, and then release the thing that should work — and if it doesn't work you can turn it off — instead of: this is the first iteration this milestone, so I have to spend a week rolling it out, and the next milestone I'm going to spend another week rolling another one out for v2, and then v3 and v4, and there we go:

C
I have a working version and I had to do four rollouts for it. So that might be something that seems to contradict our values, but in terms of workflow and efficiency,
C
it actually is better from an engineering perspective. So it's more like: customers have to wait longer before they get something, but at the same time we're leveraging the fact that we now have feature flags, and that we are in a larger stage that might require us to spend more time before we can actually ship something that the customer will get value from. At least we've seen that — the pipeline visualization is a good example, where we had something quickly done, and then we talked with our Staff and Distinguished Engineers in CI, and it was like: well, you could ship it this way, but even though it works and it has some value, because it doesn't support everything it should support,

C
it's actually going to turn people off from using this feature. So we developed behind the disabled feature flag for a couple of milestones, then we turned it on, and it was fully working, and we had only one rollout and people could start using it. But I'm curious what other people think about that.
A
Yeah, I get your point. I don't think there's an easy answer to it. I think the bottom line is that the "low level of shame" bar in lovable product categories is harder to work with. We tried having a low level of shame; we had the situation with the rollout of reviewers, where everybody felt the pain, where we shipped something to production.

A
We enabled — this was more about enabling it or not enabling it, but it's kind of the same thing as the default question — a feature that was known to have problems in the workflow. We enabled reviewers without having the merge request navigation update the count, and for a couple of weeks people were missing reviews. We knew that this problem existed.
A
Cool. Sam, you said "read only" — can I skip it, or do you want to talk about it? Cool. Justin?

A
Cool. One of the things that I learned by looking at one of the merge requests that Thomas shipped a couple of days ago was that you can set the default enabled in the code to check the YAML — mind blown, that was nice. All right, do we have any more points on this section? We're out of time, but —
A
Please, thanks. Right, moving on to section two and restarting the timer: regressions — how to react efficiently.

A
We know how we should react to regressions: if it's something that we pushed, regressions take precedence over the work that we're doing, and they're important to fix as quickly as possible.

G
Yeah, I do have a lot to say about that. Because we're lovable — there's a concept in distributed systems called the three pillars of observability: one is logs, another is traces, and the other is metrics. And because right now we have a lot of eyeballs looking at all of the features that we build and ship, that's something we should always take into consideration when we have an issue.
G
If it is a regression — a visual regression — we don't have a very specific system for how to classify them, how to fix them, what the priority is. Sometimes we say: hey, it's a different color that is not aligned with our design system, so we change it. Sometimes it takes a little bit longer, because it's a small change. But what happens if everything is misaligned, or perhaps it goes even further and nothing shows up, nothing loads, not even an error message?

G
That's where those three pillars come in, and we already have part of them implemented. We have Sentry, we have Prometheus in the case of the new pipeline graph, we also have Sitespeed, and we have the LCP metrics that tell us how well our pages are doing in load times.

G
From my understanding, we should always take the context as well — there are a couple of places that say there are four pillars instead of three, and the fourth pillar is context. In this case, when you have a significant amount of logs and metrics going all over the place, you should probably take a look into those; or if our LCP metric is high, we should look into why it is high.
H
Yeah, so those visual regressions — I remember when we were doing some different OKRs where we were changing some buttons and different things, and each one would have to be visually inspected with screenshots, and it was just such a pain. In GitLab UI we have visual screenshots; in the regular GitLab code base we have DOM dumps, you know, just normal snapshots. But if we had that in place, some of those visual regressions would not happen.

H
So I would love that. And another thing with those regressions — this is super inefficient, but if we had manual QA testers, that would be so great: I could just ship something and say, hey, you test the heck out of it and let me know if anything's broken. But I know we're more in the automated world now, yeah.
A
I'm glad you brought that up, Peyton, because I think we've worked at places where we do have those teams — other companies use them — and I think it's important to address that. At GitLab, the reason why we don't is not so much because we have automation; it's mostly because, if you had a manual team to do that, psychologically it would kind of offload to them the responsibility of checking those things, and at GitLab, from the very start, we've thought of quality as everyone's responsibility.

A
So we do want to have co-ownership of quality, and we're all on board with it. But I still share your perspective that automated visual testing with screenshots would definitely help. I was trying to recover one of the projects that Tim developed, which is a small tool that we can use for those kinds of efforts.

A
Yes, GitLab Screener — I'm putting the link on the agenda.
A
Where are we — okay, this is here, and I'm going to put the link there. That's something that might be useful for anyone doing wide efforts, because we've seen a couple of visual regressions come out of those very worthwhile, very valuable efforts of cleaning up the code base regarding buttons and moving things to GitLab UI.

A
They did have some side effects, and side effects in lovable product areas are a little bit more visible and severe than in others. Those tools will catch them, so I think it's important to bring awareness to that: while we don't have automated visual screenshot testing, we might get by with using GitLab Screener for a while.
H
And another point: my personality is to ship fast, fast, fast, and I've had some regressions in the past, and I think a lot of that relates to shipping so fast, fast, fast.

H
So I think slowing down can help with, or prevent, a lot of those regressions.
B
I just wanted to point out that we're actually adding visual tests for the pipeline rollout now, specifically because — it's not just that it's helpful in general, it's really helpful any place where you have the combinatoric level of the pipelines. It's not actually possible for any human being who wants to get their work done to test every possible pipeline setup you could have, in terms of jobs and dark mode and combinations and stuff. So it is coming.

B
We can report back when it actually happens, but it's part of the plan for the big pipeline graph work.
B
Yes, the goal right now is to get it so that we can have a manual job for it in our CI tests, because I don't want it to be a problem for anybody who's not working on this very specific thing. But I do want people who are changing the CSS on pipelines to have some confidence that they're not breaking things — because they definitely are, and we should know what they are.
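A minimal sketch of one way to test a combinatorial set of pipeline layouts, assuming Jest and a hypothetical `createGraph` helper; the actual visual-testing tooling being added for the pipeline graph may work quite differently (for example, pixel-based screenshots rather than markup snapshots):

```javascript
// Render the graph once per combination of layout-affecting inputs and
// snapshot the result, so a CSS change that shifts the layout shows up
// as a diff instead of a silent visual regression.
const fixtures = ['single_stage', 'many_stages', 'needs_links'];
const themes = ['light', 'dark'];

describe.each(fixtures)('pipeline graph with %s fixture', (fixture) => {
  it.each(themes)('matches the snapshot in %s mode', (theme) => {
    const wrapper = createGraph({ fixture, theme }); // illustrative helper
    expect(wrapper.element).toMatchSnapshot();
  });
});
```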
A
Okay, please report back, this sounds very interesting. In a related effort we're having regarding transient bugs, which we'll talk about eventually, we're discussing how we can help developers prepare their local environments for particular scenarios — for example, how do I prepare a project that has different kinds of MRs with different kinds of pipelines?

A
In our particular case, we have a bunch of combinations regarding project settings — squash on merge, fast-forward merge, all the situations — and all of that takes a lot of time to replicate locally, and when you send it to review, the reviewer doesn't have that at all, so it's hard to replicate. We're talking about that too, so let's keep sharing if we make any progress there.
B
Yeah, and Fred has been making a generator for us for pipelines. That's a little thing — we don't have to dwell on it a lot — but it's good to know that we've also been doing that. So we should have a second summit about auto-generation or something.

E
Let me go — I've linked to the issue where we're working on that as well, on Sarah's point a little further up, if you want to look.
F
Cool, so yeah, I'm just looking at the RICE scoring method — I just opened it, I had no idea this existed — and comparing it with our triage model, it looks like the RICE model is more sophisticated, at least at first glance. So it's possible we are ready to move to something better. I'm not sure who wrote the triage process —

F
I don't mean to offend them, but I think it's complicated enough that if we replaced it with a RICE scoring method or something like that, it could be good for us, especially in the lovable stages. So it could be something we bring up to them: hey, we're interested in looking into, for example, things like the audience we are impacting, so that the severity is not just this very simple thing. That could be an action point there.
A
On that, Miguel, I would say that we don't call it RICE explicitly, but we do have some part of the RICE framework built into our triaging. We do have that part that I showed you about the percentage of the audience and so on, so I think we have some of it.

A
What would be interesting to see is what things we're missing from the RICE framework and to come up with suggestions for our quality team — pinging Ramya, or maybe Mek, would be good on those issues that you create. But it's important to look at.
C
I would be curious to see if we are actually using the reach part of the severity when we apply the label, because to me it feels like it's more "how angry are people" and then we just stick a label on it. Maybe it's just my perception, but I think we could use a more thorough and researched approach, because the RICE score —

C
what is good about it is that you're exposing the values of each of these categories, and then it generates a total score, and you can have a board which is sorted by score. You can still apply a special label on top of it and say, right, this is breaking things for specific customers,
C
so even if the RICE score says it's 50, it should be treated like 200 just because of that. But it's more fair, in the sense that you might have a higher RICE score on something that seems tiny — well, it's affecting everyone, it's a tiny issue, but we might fix it faster than this new feature that adds a button, because the RICE score is lower on the new feature. So it also resurfaces bugs and technical debt.
F
Fred, are you aware of issue tracking systems that have RICE built in?

F
I'm saying we could — this could be an Enterprise Edition feature. Unless I'm missing something and we already have it.

C
And Jira had the option to have custom fields, so you could create a custom field, make it a RICE score, then create subfields that it would calculate from, and then have a table that sorts by RICE, because you can sort by any value. So you can say "sort by highest RICE" and you get your board, and that was pretty convenient to see.

F
Now, the cheaper version of that is to have the GitLab bots calculate things for us — that could be the boring solution to it, but it would not be a product feature.
A
Cool. I'm trying to look, because one of the things that I've heard in the past — all right, I think I found it. One of the changes that the quality team is rolling out, and this can be expanded to other areas, is that any issue that is labeled as merge request, UX, and bug gets an automated severity bump.

A
Yeah, found it — there, that's the link. And Mek — the idea is that quality is changing the automated triage to consider certain parts of the product and bump up the severity. That's kind of the summary.
A
You are absolutely correct — yeah, we had that discussion on Tuesday and I still mixed it up. What's raised, what's defined as the minimum, is the priority, not the severity; the severity stays the same category. So on that board — and for the sake of the people watching the video I'll just share my screen really quickly, there we go — for merge request bugs, across severity one, two, three, four: severity two will automatically get, as a minimum, priority one.
A
This means that if there's an S2 reported, potentially that will be disruptive to the current work, because priority one is supposed to be "drop everything and address it". Severity three goes to priority two — those would usually end up in the backlog, so this is already a good improvement — and then severity four goes to priority three. So it's trying to treat these bugs a little bit differently in this area, and this little label could be replaced by something for the Verify area, so keep that in mind.
A
That's a fantastic question, Sarah. First, I could write a book on that question, but I won't, so I'll just give you the TL;DR on UX bugs. We just went through this in the transient bugs working group, and the fact is that what we consider engineering bugs is code that is working with a defect; in terms of user experience,

A
bugs are user expectations that are not being fulfilled. They're both bugs, just different categories of bug: one is UX bugs, the other is engineering bugs. The way we settled it in the transient bugs working group is that UX bugs are bugs, and the perspective is that some of them are things we simply weren't building. For example — and everybody knows what I'm talking about — when we update the status of a merge request by merging or by closing, there's a bunch of places on the page that don't get updated. Now, is that a bug?
B
No — but it's okay, it was a very valiant effort. I think I have seen things that sometimes aren't — your example was a very good one. There are other things sometimes, like the MR button might be one that many people on Verify are familiar with: is this thing mergeable, what's the pipeline widget, what states can you have — and people come to you saying, this is a bug, and you're like, well, it works the way it was designed to work.

B
You know what I mean — it's this sort of feeling of: that's a feature that we should do, but it's still a feature. Because I think there's a certain way, when people talk about UX bugs, where there's this implication that it's because you didn't build the thing right, when that's not necessarily the case — it's more like, we did not foresee this, so we should fix it. I'm always a little leery of that. But —
A
Yeah — I'll say this to wrap it up, because we have to move on: that's where the humans in the process come into play. I've had cases where we had discussions and the three of us — all the managers: product manager, back-end manager, front-end manager — were kind of like, I don't know if this is a bug, and then we decided on handling it regardless of whether it was a bug.

A
All right, I think we're done — anything anybody wants to say in 10 seconds about regressions? Then we're moving on to section three, preemptive action. We have 20 minutes to discuss this — let's see where we are. Right, we're already kind of tight, but let's do it. Prime directive: what are the approaches you use while developing or planning your features to ensure a reliable delivery, free of bugs? Anything around leveraging instrumentation and testing to ensure we catch production bugs as early as possible?
B
I'm sorry I'm talking so much; I will try to say it as fast as I can and then hear what everyone else is doing too. For me, I feel like we try to start from the very basis of it: you can't have availability if you don't have good structure in your code, because the messier your code is, the more stuff you're just going to break. I think we all know and agree on that, and so I think we have been trying to find ways to keep the code less breakable.

B
Some of that is CSS utility classes — that's such a core one. So much of the pipeline graph's fragility, and its CSS fragility, is that it's very old-style nested CSS that I think made sense when it was built, but it means that a little change has cascading effects. So I think focusing on using utility CSS whenever possible helps. Same with being data-driven in an architectural sense — not in a "what do the metrics say" sense: don't put magic numbers into things, but read stuff off the DOM.

B
That was another big problem with the pipeline graph: lots of magic numbers that made sense once. Now part of our rewrite is: let the browser lay it out, because it's good at laying things out, then look the number up from the browser, make that programmatic, and let the data drive how the rest of it is going to work. Small components with as little intermingling as possible — we all know that; we've been using more higher-order components.
B
Vue has nice error handling — Vue itself has an errorCaptured hook that you can use — and then we actually have an errors pattern in PA that we've been using a lot for how we capture and bubble up our errors to the wrapper. That's helping us with Sentry, because we have very clear places where we can put those interventions, because we have a known pattern. And then we're also adding some Prometheus calls to check the performance — drawing the links is new and different.
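A minimal sketch of the "bubble errors up to a wrapper" idea, using Vue's errorCaptured hook and Sentry's captureException; the wrapper name and the exact reporting call are illustrative, not the actual PA pattern:

```javascript
import * as Sentry from '@sentry/browser';

// Wrapper component: children can throw; the wrapper is the one well-known
// place where errors are reported and a fallback state is shown.
export default {
  name: 'GraphWrapper',
  data() {
    return { hasError: false };
  },
  errorCaptured(err, vm, info) {
    this.hasError = true; // drive a fallback/alert in the template
    Sentry.captureException(err); // `info` could also be attached as context
    return false; // stop the error from propagating further up
  },
};
```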
B
We can only know so much about the graphs, so we've been working with the back end to add an endpoint where we can send some link-rendering performance metrics. We'll see how helpful that is in helping us understand what's going on with the links in the graph. And then, what else do I have to say — yeah, visual tests we talked about, and tests in general, as in "how do we preemptively solve this", are, I think, a double-edged sword.

B
They're really great, but they take a long time to write, and no one wants to delete them when we don't need them anymore. No one wants to say, I think we don't need this test, because it's not testing what we need it to; but managing and updating them takes a lot of time. So they're great, but I have concerns. And sorry, yes, PA is Pipeline Authoring. Awesome.
A
Thanks, Sarah. I'm going to bring it back to Thomas, because he has a point on your "data-driven in the architectural sense" and reading things off the DOM. Thomas?

E
Yeah, I kind of want to know more about that — whether that's something you apply as a general rule, or more like only when you're trying to pull, say, the width of elements that have been rendered, or something. My take has been, for many years, that the DOM is an artifact of the software — it's how we display the software. It's never the thing we read from to get information for our software.
B
I do — a lot of DOM code is what I've ended up doing a lot of. For us it's been things like width, but I think of the DOM as not just an artifact: it's our runtime. It's your Java, it's your JVM, it's your whatever. The browser actually does a lot for us, and it does it really quickly, so I think, to the extent that the browser already knows something you want to know, sometimes getting it from the browser rather than recalculating it —
C
I think I can add on that. For this specific example, we can see how it used to be done, where you would have CSS with hard-coded values. Say you want to draw a line between two jobs, so you say: okay, I'm assuming the column width is always the same, so I'm going to move 30 pixels down, then 40 pixels to the right, then 30 pixels up, and then I have a link. But that means anyone that changes anything in the graph breaks the link, because those values change. Whereas if you read from the DOM you can ask: what is the current width of this column, and use a percentage-based value to determine how far the line should go. Those will never break, because they're relative values that you read from the DOM. I think that's why, in this instance, it's a good idea to read from the DOM, for example.
B
Yes — if what you want to know is its height and width, you should just ask the DOM. And likewise with other things: we shouldn't be recalculating data. Like the merge widget meeting we had today, where the back end is going to do it and can just provide that information — we shouldn't be recalculating it. That, to me, is making it data-driven again: reading in what we need as opposed to adding to it ourselves when we can, because it makes for less code and less fragility.

A
Okay — careful with the reflows, though, measuring elements is expensive. But anyway, thank you so much, food for thought; I'll move on for the sake of time. Frederic, we have a point there, go ahead.
C
Yeah. So basically, one of the pain points that I've had on a lot of new features is that when it's time to write the RSpec or the end-to-end tests, I am less competent in Ruby than I am in JavaScript. Usually, the way we've been doing it — I mean, when I was in CI it was more: you write the spec and then the QA engineer comes after you. In PA we didn't have an SET —

C
that's going to change soon, but we didn't have a QA engineer to help us with the RSpec. One way to be more preemptive about issues, obviously, is to write good tests, both RSpec and end-to-end tests, and historically we didn't get a lot of help writing them. So as front-end engineers we end up writing all the feature specs, and just having more availability from QA engineers — I think in the past it's been a resource problem

C
more than anything: we don't have enough QA engineers to really help us write the tests, so they usually come after and give tips on how you could improve them. But it would be much better if we could have an actual dedicated issue to implement these tests, and make sure we cover every use case we think we need to with end-to-end tests, and not break the platform.
F
I can say yes — I also find it painful to write Ruby. I don't think I've had to write anything in QA, so I think that part is very well managed by the QA team, but the feature specs are something that, when they come up, are very slow for me.

F
It's not an efficient way to spend my time, and I wish we had more support — a part of the process that involves more support from QA. I think we could make an exception for front end, because we don't know Ruby as well as back end does. So we might have to protest a bit more about how we can get help from QA for the front end to do the changes we need.
A
So, on that note, Miguel — I wish Samantha was here. I had the same conversation when Samantha joined, and she wasn't very comfortable with RSpec. She went on a journey of learning and getting more comfortable with it, and that came through collaboration with back end: directly reaching out to colleagues, reaching out to counterparts to ask for help, and over time she became much more resilient. Because I don't —

A
I don't think we should focus so much on the Ruby, because the tests fulfill different goals and they do help us a lot. Especially when you're doing refactors, you want to make sure that you have strong test coverage, not just on the code but also on the feature side, to make sure we don't break any expectations from the feature side even though the code was rewritten.

A
So I think they are valuable as a front-end responsibility, but our collaboration value is strong, and if you're feeling slower, or just slower than usual, I don't think there's anything wrong with reaching out to back end for collaboration, and basically even scheduling time for that.
F
I wish it was more part of the process of development than something ad hoc where I have to reach out every time I'm confused. It could be a training for all the front-end engineers — a training on RSpec and how it works — so at least we'd have a process for that, instead of suffering through MR after MR, and maybe somebody will have time to help us sometimes, or maybe not, and then it will just be disorganized.
D
Yeah, I think it's an excellent point about how people will tend to do the thing that is easy for them and that they're good at, and I feel like some of our pain points in testing come from the narrow view of what we're comfortable with on the front-end side. I've had a lot of pain points dealing with Rails and Ruby very recently, and it kind of structures our tests in the wrong direction, whereas we could have one larger integration test that more accurately covers a particular feature or component.

B
I think there's a flip side to that, though, which is that the Capybara tests take a lot longer to run than unit tests. Literally, the test file for the pipeline is 1,200 lines of Capybara, which is easily a thousand lines too many — we spend so much pipeline time running those. So I agree that sometimes we should be writing them, but I also agree with everyone that having more of a framework would be helpful in helping people make the right decisions around them.
B
Also, our back end — I don't know how it is with Create, but getting attention from our back end is incredibly hard. They're responsible for so much CI stuff, and so many jobs, and the jobs tables, and so much other stuff, that the front end asking "can you help me write my Capybara test" gets "I have so many other things to do, that's not it."

D
Yeah, I totally agree — and at the risk of Sarah and I just taking over this whole call and ranting about testing, I think the problem with Capybara tests taking a long time to run is way less that they're Capybara tests and way more how we're writing those tests. And this is highly opinionated, this is totally my opinion, but I think we test a lot of the wrong things, and we test a lot of meaningless things way too much.
D
So we chew up a bunch of cycles re-iterating over things. I've refactored, I think, two tests in the last week where we had a beforeEach spinning up a component that was a display-only, no-state-change component, but the test iterated over it: oh well, I'm going to test this one piece of it, now I'm going to destroy the component, rebuild it, and test this other piece that was a static value. And I see that a lot, so yeah.
C
And definitely on that point — I've seen that, and I have fought against it a lot, where some people will say, you should write an end-to-end test, and what I'm being asked to do is go and assert that some text is rendered. And I'm like, well, this is a waste of Capybara's capability. I should assert functionality, making sure the user can do the right thing; whether the text and the button are there, that's all unit tests.

C
That's faster, it's easier to maintain. And I think the biggest problem for me — and I brought this up before in another CI meeting —

C
is that if there's a need for end-to-end tests, there could be a separate MR that lands either in parallel or right after, and once it's done you turn on the flag in a separate MR and then it's out. Also, it can make the MR very big, so you have to do different commits or different MRs and then turn on the flag. So why not just have someone who's way more competent

C
tell the engineer: if we need a test, I will take care of it. What is this feature? Yes, we need a Capybara test, here's what I'm going to do, does that sound good? Yes — merge everything together, and then we have a much better workflow. That would also avoid bloating the end-to-end tests, because then people write the right, meaningful tests, and not just "this copy is okay".
A
Let me just summarize one thing and then move on to you. I see two things here. One is the quad-planning part of the process, which is supposed to have the SETs — the Software Engineers in Test — come in and help with the deliverables after they're assigned, and weigh in on what the needed test strategy is; and I heard that you got one recently.

A
If that's a recent thing, I would suggest, Sam, reaching out to them and making sure that they're aware that we're expecting that of them — a little bit more guidance on the issues themselves about what sort of tests these need, ahead of the development being done. The other is collaboration in terms of back end, because I feel like they don't have time to support you.
G
That was me. I just wanted to mention — I agree with a lot of those points, that perhaps we're not testing the right things, but there is another point that we have to take into consideration, and that is the release team. The release team tends to use the end-to-end tests to be able to say, hey,

G
this is a release that we're actually going to release to the general public. Once a release is scheduled, and you get all the Omnibus images and all of the installers that we actually support, they look at the reports and say: hey, everything is working, yes; everything is rendering, yes. Because before, we didn't actually have this kind of automation —
H
Sometimes — a quick hack that most probably won't use, because sometimes the features are too big, but a lot of times, to be kind of sneaky I guess: I don't like writing Ruby tests, most don't, so a lot of times I'll pair up with the back-end engineer on a feature and we'll work off the same MR, and typically they're just happy, because it's part of their work, to just continue doing that.

A
Collaboration. All right, we need to move on, we're at time for this section. Anybody have a last-minute, ten-second thing to say?
F
I have one: the SRE shadow — we used to do an SRE shadow, where the front-end, the Monitor engineers, were shadowing the SRE team. I'm wondering if it could make sense to have a QA shadow, where we chase around a QA engineer and can learn everything there is to know from them about RSpec. It could be cool.

A
What are some techniques we can use when the audience is so large that there is strong support for opposing alternatives? It's not rare in lovable stages that you have users clamoring for one solution that goes directly against the other solution that other users are asking for. Any techniques for dealing with users' and customers' expectations in our stages? So, Miguel.
F
My technique is to pass that responsibility to the PM and let them deal with the users. It sounds like a joke, but I think it has some seriousness to it, because I think they are the ones that are detached from the product enough that they can sometimes see things more clearly than us. And if a user says something mean, they are not going to take it personally, because they are not as close to it as we are — we built the thing.

F
So I think it's good to offload that to them, and then they can decide if they want to listen or not. I would rather not pay too much attention to it.
A
Yeah, Miguel, I agree — I understand that it is a responsibility of the product manager. The reason why I put this section here is because I think we are making decisions as engineers on the ground every day. Just as an example: Phil was building the merge request navigation dropdown with counts for the reviewers and the assignees, and users were complaining:

A
oh, but now I have to click twice, and I have to open this thing to get to where I usually used to get to my assignees and stuff. In that moment, without any product manager involvement, there was a decision to have the top link of the dropdown carry the href to the assignees list. Now, this is a decision in development that we make that affects how the users will use this feature.
F
Yeah, my boring answer is that we should involve the PM in those moments — so we need good communication with them — to explain what the pros and cons are, what is easy to make, what the boring solution is. But I don't want to feel too involved in deciding whether users will love this or not, because I think that would cloud my work and make it more difficult for me to make the right engineering
A
Choices. Food for thought, I guess — thanks, Miguel. Me, I'll just read my point very quickly: I would like everyone to think about it — we are in the driver's seat of a lot of the stuff we do. We can contribute to things being pushed onto the radar of PMs to be scheduled. One of the examples was file-by-file: it was an engineering-pushed feature that eventually turned out to be very useful for product as well; some customers love it. And Thomas — I will, yeah, just read it.
E
I'll summarize real quick. I think our app, at least on the diffs side, on the MR side, is trying to accomplish goals for multiple types of users, and what that means right now is that the UI is kind of cluttered, but also the code is really cluttered, because we're like: if this, go over and do this thing, otherwise do this other thing. I think it would be really useful to both us and our users to just split those up into different modes.

E
Like, hey, are you a single-file-modality person? Here's the single-file app, you can just go do all this stuff over here. Having different modes of users for our apps would be really interesting, because that way, users who really complain when we change the height of an element — maybe that would never occur in the mode that they're using, because they're never going to see that change. Something like that.
A
That goes back to our example of file-by-file versus non-file-by-file, but more tied to potential personas, that sort of thing. We've had content writers complain about a few things that don't make sense for developers. So those are — and again, create the issues, create proposals. We've had great success with starting with a merge request: one of the features that Phil shipped this milestone was something he worked on outside of the deliverables in the last milestone — hey, this would be really handy.

A
All right, thank you so much. And the last section — I'll shorten it from five to four minutes: any other tips, anything we haven't discussed, anything that comes to mind? And be succinct if you can.
H
Yeah, I have one. I can't see the screen, so did anyone put an agenda point down? My monitor died over here; I didn't want to skip in front of anyone. No?

H
So recently, on CI, we've been working a lot with tech debt. One of the things we've done is evaluate a lot of our code base and start creating technical debt issues — we've got 20 or 30, whatever it is — and we're trying to spend some time tackling those issues.

H
I think it's very valuable to assess your domain every now and then, create those technical debt issues, and try to get them scheduled with your PM, to make your life easier as an engineer maintaining this code base and your tests. Eventually it will allow you to ship things faster and easier and make our customers and product a lot happier.
C
Yeah, just to be reminded that sometimes we have weight that we can use with the PM and our EM, so sometimes be proactive with refactors. There are sections you can sometimes see are going to be a problem as they grow; just push for the refactor to be scheduled and try to communicate how important it is for the future of that section. I've found no PM not listening to that.

A
100%, Fred. I'll just say that if it's a quick refactor that doesn't even affect the delivery of the deliverable, you don't even need to ask; but if it changes the delivery, always bring it up, create the issue, and push for it to be scheduled in the next milestone. I'm all for it. And I think Sam is next — Justin, or?
D
Real quick: push for more decoupling of back end and front end. We've got a couple of issues where I've had a straightforward feature be denied because it wasn't physically possible.

A
Yeah, GraphQL will save us all. All right, we're out of time, everyone, we made it. Thank you so much. Hopefully this session was productive — it was for me — and thank you all for taking an hour and a half of your time to discuss your tips and tricks. I'll upload it to YouTube and share it on the channel; Sam will share it with you all as well. Have a wonderful Thursday, and I'll see you at the retrospective.