From YouTube: Refactoring the Pipeline Graph: An Adventure
Description
You may notice your big pipeline looks a little different on GitLab and has a callout about feedback! If you're into learning more about how we got to this new pipeline graph (in 50+ MRs, with contributions from 6+ engineers and at least three UXers and two PMs) this is the video for you.
If you are interested in: project planning, deleting code, modernizing in place, GraphQL, observability, or failures, there will be something here for you.
Learn more at https://gitlab.com/gitlab-org/gitlab/-/issues/328538
Okay, so I actually have this full screen, so I can't see any of you, just FYI; if you need to break in, you should break in and say something. Anyway, it is April 27th, I am Sarah Groff Hennigh-Palermo, and this is the tale of the pipeline graph.
So this is a story about changing a graph that looked like this. Please notice, by the way, these very exciting lines and arrows here; they're very important to the story of changing this graph to look like this other graph.
It's a very lovely new graph, and you might be wondering: why is this interesting? Why should you care? I think it's not just that the feature itself is interesting (although it is, and we'll talk about SVGs and fun stuff), but also that I think we'll be seeing a lot more projects like this: projects where we're looking at how we want to stabilize and extend older features in our code base, and at working with an eye for stability and safer rollouts when we're sending them out to people.
So you can put your questions in the agenda, and if you have deeper questions that are not addressed here, please by all means set up another call with me and I will go into it in detail; I'm really excited about it. But I wanted to go for breadth, so we're going to have a rambling talk that will cover the background of the project, its goals, and the story of the project itself.
That story is the main part; it has all the code in it. Then we'll talk about where we are now and what we expect in the future. But first I wanted to share some stats. This project covers 50-plus MRs, and 10-plus teammates were involved. I wanted to give extra shout-outs to Laura, Frédéric, Marius, José, and Andrew, who helped by writing back-end code, helping me figure things out, and doing all of the maintaining and reviewing. Thank you to all of these people.
It also involves tens of thousands of data points, now in Sentry and Prometheus, and 12, 16, 5, or 6 months; it really depends on where you start counting. If we want to start counting at 16 months, we can look at the background: in 12.2, directed acyclic graphs were introduced, and the feature flag was removed in 12.10. Essentially, DAGs make it possible for jobs in pipelines to execute based just on which jobs they explicitly need.
A
That's
why
we
have
this
cool
needs,
keyword
and
that's
neat,
but
it
causes
a
problem
or
a
challenge
when
you
look
at
our
graph.
This
is
a
graph,
that's
probably
very
familiar
to
you,
and
we
can
see
now
that
it's
like
stage
by
stage,
but
some
jobs
have
already
executed
some
jobs
haven't.
What's
the
order?
What's
the
rule,
we
don't
even
know
man,
and
so
we
started
looking
at
different
ways
to
make
this.
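The `needs` behavior described above can be sketched in a minimal, hypothetical `.gitlab-ci.yml` (all job names invented for illustration): `deploy` starts as soon as `build` and `unit-tests` finish, without waiting for the rest of the test stage.

```yaml
stages: [build, test, deploy]

build:
  stage: build
  script: make build

unit-tests:
  stage: test
  script: make test-unit

lint:
  stage: test
  script: make lint

# `needs` opts this job out of plain stage ordering: it runs as soon
# as the listed jobs succeed, even if `lint` is still running.
deploy:
  stage: deploy
  script: make deploy
  needs: [build, unit-tests]
```

This is exactly the situation that makes the stage-by-stage picture confusing: the stage columns no longer predict what runs when.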
This is actually a slide from Dimitri from about a year ago, looking at different ways the graph could look. It has one of my favorite comments ever in a slide deck. We had some problems; it was a big challenge.
We narrowed it down and focused on creating this separate graph that shows the relationships between jobs, but not any statuses or actions. It was basically the most pared-down way to start to answer the question of how we calculate and show these relationships in a useful way, and of whether we could do it without messing with the main pipeline graph. So we made this, we rolled it out, and users came back to us and basically said no: it needs to be on the main pipeline graph.
"All I want to see is this on the main pipeline graph," right? So it's like: okay, great, you want the thing that's hard. But maybe this was also an opportunity. The graph was cramping our style. It was a little creaky. We couldn't show links beyond the next column; they were CSS borders that could only go there, and the graph was very brittle in terms of layout (I called your attention to those lines not lining up). Basically, any time we wanted to add a feature to the graph, it was like this.
So there were a lot of things we wanted to fix, and GraphQL was on the horizon for us. That's partially because we wanted to keep the section in line with evolving approaches: we were using the mediator pattern, which is our pre-Vuex pattern, and it was a little old, so we wanted to keep up to date. We also wanted to allow front-end engineers to better self-serve. This is really vital on Verify.
I think it's probably vital on all teams, but on Verify, the back-end engineers (if you've been seeing the rapid actions and the CI scaling work, those are often the same engineers) are the people we go to when we ask for help supporting product work with back-end changes. So to the extent that our front-end engineers can self-serve, it's a really big win for us. Moving to GraphQL also allows non-contiguous apps to share caches, because the pipelines pages are old.
We still have a mix of Haml and Vue, but using GraphQL means you can share the default client, and so apps can share caches, which was a bonus for us. And we'd already done a small test with GraphQL: that DAG graph I showed you before was the first thing we moved to GraphQL. We chose it because it was new and small, but luckily it covered a bunch of the same data that was in the pipeline graph.
A quick note here: I never say "execution order" when we're talking about this view, because technically something that depends on four fast jobs will execute before something that depends on one slow one, and I've found that our users have very idiosyncratic ideas of how things work. So we should only say words that are very, very true, and "execution order" is not exactly true here.
We wanted to show the links between jobs across multiple columns, and we wanted to do all of this in a way that would let a version of the graph also live in the pipeline editor section. This is the pipeline editor section, and though the data shown in the jobs is different right now (it's just names, and it might grow into other things), it's conceptually important for them to be the same graph and stay connected. So that was a primary goal as well.
So: planning the project. This is a big project, right? How did we conceive it? How did we plan it? How did we get away with doing it? It essentially sprouted in a space where there weren't a ton of new front-end features, and there was some turnover going on on our team. So maybe there were a lot of engineer ideas and not the most supervision, and I know it sounds naughty to say that.
A
We
started
this
when
no
one
was
looking
and
then
we
just
went,
but
I
think
it's
important
to
think
about
that,
so
that
we
can
start
thinking
about
how
we
make
these
kinds
of
projects
less
of
a
surprise
or
a
skunk
works
approach,
because
the
outcome
is
super
worthwhile
and
I
think
that
as
the
company
continues
to
mature
we're
going
to
want
to
continue
refreshing
parts
of
the
ui
in
bigger
ways,
and
we
should
keep
that
in
mind.
So
you
know
we
broke
it
up.
I wanted to still have a spirit of iteration, so we did a spike, we replaced the graph, we added the higher-order components, we updated the specs, and we added the links. There were surprises everywhere, but I'll talk about them as they come up. And then we did the rollout. In terms of communicating the project, it was also a little bit different and a little bit of a challenge. The spike was, as you'd expect, one issue, where we listed the questions we were going to answer.
I wanted to be really responsible and say: this might take some time, but we're not going into a hole and just coming out with something at the very end. Then we broke that into an initial plan that I expected to take three milestones. You can see down there it says 13.8; it's 13.12 now, and both flags are just defaulted on.
It also meant we could leverage GitLab and use the related merge requests feature; I used it to look up old merge requests all the time on this project. And then for smaller communication, I used a Slack channel for the feature. We had done this with the DAG, and in that case there was a lot more back and forth; this one was more me talking to the world, but I really liked using it. I felt like I could be more informal and keep everyone up to date.
I did get feedback that some people felt it was harder to follow because it wasn't in issues, so as you're doing projects like this, it's definitely an approach worth considering, but one with pros and cons. Okay, that's all the organizational stuff; now let's talk about the code. The first step of the project was a month-long spike. I had a theory that all of this would work, so I just wanted to test it out with non-production-level code. This is my favorite MR summary I think I've ever written: "never merge me."
So we did a non-production-level change. It gave me something to pair on with Fred for our theories about how to link the two graphs, and as I worked through it, I was able to work with our back-end engineer, Laura, to create the GraphQL endpoints we would need for the production level. So we did a spike, but we combined it with getting ready to go into the next phase, and I think that was really successful. As Laura mentioned, we used client resolvers to do this, which I think is a pattern we've been seeing people use more and more. And Laura mentioned that it wasn't just nice because I could identify the shape of the data and share it with her; she loved being able to pull the branch down so that she could actually test her code against the front end. So that was really successful as we moved into the production code. We needed to replace the graph in place, right? Unlike other iteration work, you can't take away a whole graph and then slowly give users a graph back; that's not possible.
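A minimal sketch of the client-resolver idea, assuming Apollo's local-resolver API; the field and shapes here are invented for illustration, not the real GitLab schema. While the back-end field doesn't exist yet, a local resolver fakes the shape so the front end can be built against realistic data, and a back-end engineer can pull the branch to test against it.

```javascript
// Client-side resolvers: fields marked @client in a query are answered
// locally by these functions instead of by the server.
const resolvers = {
  Pipeline: {
    // Hypothetical field: does any job in this pipeline use `needs`?
    usesNeeds: (pipeline) =>
      (pipeline.jobs || []).some((job) => job.needs?.length > 0),
  },
};

// A query would then ask for `usesNeeds @client` and Apollo would call
// resolvers.Pipeline.usesNeeds with the parent Pipeline object.
```

Once the real endpoint ships, the `@client` directive and the resolver are deleted and the query stays the same.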
So, unsurprisingly, we decided to use feature flags, so the MRs could stay small and we could pair the feature changes with their specs. We could have worked on a feature branch, but because it was going to take a while, we wanted to work against main, against master. That also made the GraphQL changes available to everyone as we worked, so we were still getting feature work done while we were doing this refactor. We paired the feature flags with async loading, though, because this was going to take a while.
We didn't want users loading code that we knew didn't work right, and we did some performance testing on this. This is kind of what it looks like, and it was successful.
It's a little bit faster, and we weren't giving people code they didn't need. But we weren't going to duplicate everything, right? At some point we were going to duplicate components, and at some point we needed to share. The primary driver for the components we chose to duplicate was cases where the components were doing a lot of the data work: making the API calls, formatting the data.
Those should definitely just be separate. But there were also cases where changing structures would break the CSS. The CSS used to create the links in the graph, as we mentioned before, was pretty brittle and very nested, so something as small as changing a div or moving one thing would break the entire graph. That drove duplication too: there was at least one level of component that I tried to share across both implementations, and it couldn't handle it because of the CSS.
This is me tearing my hair out, using `outline` to find where the CSS was highlighting different things, to make sure we could understand what was up. So, in the end, we used this diamond pattern, where you have the shared parent, and that's where we do the async loading to decide which implementation should load.
A
Then
we
have
the
duplicated
elements,
and
then
we
have
the
shared
children
where
it's
simple
enough
and
display
enough
that
we
only
need
to
change
a
little
bit.
Something
nice
here,
of
course,
is
that
by
making
legacy
files,
we
also
made
legacy
unit
test
files,
so
we
didn't
need
to
adapt
the
tests
as
well.
I
think
that
would
have
taken
so
much
more
time
in
this
way.
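The top of the diamond can be sketched roughly like this; the component paths, flag name, and `glFeatures` shape are hypothetical stand-ins, not the actual GitLab source. The shared parent checks the feature flag and async-loads only the variant the user will actually see.

```javascript
// Shared parent of the "diamond": picks one implementation per flag.
const PipelineGraphWrapper = {
  name: 'PipelineGraphWrapper',
  props: { pipeline: { type: Object, required: true } },
  components: {
    // Async components: each bundle is only fetched if it is rendered,
    // so users never download the variant behind the other flag state.
    PipelineGraph: () => import('./graph/graph_component.vue'),
    LegacyPipelineGraph: () => import('./legacy/graph_component.vue'),
  },
  computed: {
    graphComponent() {
      return this.glFeatures.graphqlPipelineDetails
        ? 'pipeline-graph'
        : 'legacy-pipeline-graph';
    },
  },
  template: `<component :is="graphComponent" :pipeline="pipeline" />`,
};
```

Below the wrapper sit the duplicated new/legacy components, which both render the same shared, display-only children at the bottom of the diamond.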
This way, we just have very clear files that we can delete when we're done, and then we know that it's all done. And there's another benefit to not trying to share the GraphQL logic with the REST calls in those components.
It meant that we could focus on sharing our GraphQL code with the new components that needed to share the unwrapping and parsing of data, which are the pipeline graph and the DAG. By not trying to make it do double duty, we could make sure it was in line with its other components and not infecting them with the REST structure.
In the third case, where we have GraphQL and REST in the same component, we dealt with it in two different ways. The first is relying on aliases in the GraphQL queries themselves, because GraphQL type names sometimes need to be different than the REST names: something that was called `status` is a `detailedStatus` in GraphQL, because it can't have the type name `Status`. Here we can just rename it, since every component wants to call it `status`, rather than renaming it everywhere; we can rely on GraphQL. In cases where it was a little more complicated...
When we delete the legacy code, it gives us basically a table of contents of all the places to go delete things, instead of combing through the code searching over and over again, so I thought that was a big win. And one last data-consumption pattern that helped us cut scope (which is funny to say, because this is really big) is the way the action items were implemented. This was before we started this project, but these little items here that let you do things on your jobs are actually entirely self-contained.
They're REST components that just emit an event when they're done. And Filipa's notes, I love it: she wrote this comment about how this should not be done, but it's being done because it makes more sense. So I want to salute her and encourage you all to break the rules when it makes more sense, because it totally saved our butts here. It would have been another week or two's worth of work to try to move all of these to mutations at the same time.
So then, once the graph was replaced and nothing was different except that the links were gone, and we could show a view without the links, we added the higher-order components. Again, the higher-order components are how we keep this graph and our other graph in sync. There are three of them. The first two are just about CSS: the main graph wrapper and the linked graph wrapper. This is sort of what you do with utility CSS, instead of old-style CSS, to keep things in line.
The higher-order component just says these things look the same; but instead of classes that you need to grep for to see where they're used, or that crash into each other, you can just put your utility classes there and say: great, they're shared. The links layer is the third and the meatiest: it conditionally renders the links, calculates the SVGs and all sorts of things, and then shows the graph.
It also takes care of reporting performance to Prometheus, which I will talk about later. This is another pattern we're starting to standardize on; Michael Luna wrote one of these for the top level, where you conditionally render something only when you have it, and the wrapper component is just checking your conditions. And then it draws the links, which are so awesome and the real innovation here.
There was a thought of: let's not set a bunch of Vue getters and setters for data that's not important to Vue. But basically, as soon as something touches Apollo, it's in Vue, and Vue is going to know about it and deal with its reactivity. And once that's the case, the second version, which is what we're doing in the graph, where we're using the declarativeness of Vue to work for us, was a much better option. In general, this pattern of using the links layer, with a higher-order component for it, has been moderately successful.
I think I'm fairly happy with it. One thing I'm not happy with is that the parent component still has to do a lot to deal with, say, graying out the jobs that aren't involved, etc. I have some questions in general, too, about whether we need such a complex UI to show this: people like it, but it might be possible to show the information in another way. So that's something we're still keeping our eye on for iteration.
Another big challenge is that this is actually a recursive component: the pipeline graph has a linked-pipelines column, but a linked-pipelines column contains a pipeline graph. I was thrilled when I found out you can do this in Vue. It is useful when you need child components that can nest to an indefinite depth, but it means that drawing the SVGs is a little complicated.
A
You
need
to
make
sure
that
they're
next
to
each
other
and
not
inside
one
another,
and
you
can't
make
certain
guarantees
about
the
data
right.
You
can
have
jobs
with
the
same
names,
but
jobs
don't
have
ids
job
groups,
don't
have
ids
in
our
data,
so
you
end
up
doing
funny
things
to
make
sure
you're
not
drawing
links
in
the
wrong
place.
The other sort of downside is that, in Vue, you cannot guarantee when a component mounts that its child components have also mounted in the DOM. If you wait for the mounted lifecycle hook, they might not be loaded yet, and we have that problem when drawing the SVG: we need those jobs to be rendered before we can draw the links layer. So basically, in the child component, we emit an event that says: yes, I'm actually done. I checked with some of you folks, because I was a little worried about that.
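That handshake can be sketched roughly as follows (hypothetical names; not the actual implementation): each child column emits an event once its jobs are really in the DOM, and the parent only draws the links layer after every expected child has reported in.

```javascript
// Parent-side bookkeeping for the "are the jobs rendered yet?" handshake.
// Each child column is wired up with something like @rendered="onColumnRendered".
const LinksParentSketch = {
  data: () => ({ readyCount: 0, expectedColumns: 0 }),
  computed: {
    // Only draw the SVG links layer once every column has confirmed.
    canDrawLinks() {
      return this.expectedColumns > 0 && this.readyCount >= this.expectedColumns;
    },
  },
  methods: {
    onColumnRendered() {
      this.readyCount += 1;
    },
  },
};
```

The child's side is just an emit from its own mounted hook once its job nodes exist, which is what makes the link coordinates measurable.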
So then the last stage was updating our specs. When we started, we just defaulted the feature flags in RSpec to off, because we knew we were going to be working incrementally and that no one was going to be able to see it. So rather than keeping the specs in sync the whole time, we just said: RSpec, don't worry, we'll come back at the end.
I think that was successful. A weird note: RSpec defaults flags to on, and Jest defaults flags to off, so if you're working across those two borders, definitely keep that in mind. But once we went through and found the few blocks that needed to be duplicated to support legacy and new, it's great, because again, they have the flag off, so it's really easy to find which ones we need to delete.
I also learned something fun from Laura, which is: if you don't know why your RSpec is failing, don't run it headless; actually let the browser open and look in the network tab. That is how we found a secret, week-long bug. All right: yay, we made things, that was super cool, and now it's time to do the new layout.
The whole reason we did this, and the test of my idea, was: if we did all this work and made it like the DAG, then making the new layout should be simple. And was it? Yes, it was. It was this many lines; it took me 15 minutes. I wrote this many lines of code, because we'd done all the work that led up to it. It was perfect, right? Done? Yes, except for my enemy: polling.
We poll for the pipeline graph because you want to know if your job is succeeding; we poll every 10 seconds by default, and polling complicates this quite a bit. I would say Apollo's help in this case is fragments and normalization.
Every 10 seconds we get new data, but Apollo can make it very difficult, especially for something like groups that don't have IDs, to get the data out of the cache, even though Apollo has normalized it for us. That should be good, but it's a bit of a challenge. In Apollo 3 there's `identify`: you'll be able to pass it a fragment and it'll be able to find the data for you, and that will be great, but we don't use Apollo 3 right now.
A
We
use
apollo
2.,
so
I
ended
up
creating
a
sort
of
homegrown
version
where
we
calculate
the
layers
and
create
a
lookup
table,
and
then
we
use
our
lookup
table
so
we're
not
recalculating
the
layers.
If
you
want
to
see
what
that
is
in
detail,
I
will
post
these
and
you
can
look
at
those
links,
but
essentially
that's
what
we
chose
to
do.
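A rough sketch of that homegrown lookup table, under assumed data shapes (jobs as `{ name, needs: [...] }`; none of this is the real GitLab code): compute the column layout once per pipeline, key it, and reuse it on every poll instead of recalculating layers from the re-fetched data.

```javascript
const layerCache = new Map();

// Place each job in a column one past its deepest dependency.
function calculateLayers(jobs) {
  const depth = new Map();
  const depthOf = (job) => {
    if (depth.has(job.name)) return depth.get(job.name);
    const needs = job.needs.map((n) => jobs.find((j) => j.name === n));
    const d = needs.length ? 1 + Math.max(...needs.map(depthOf)) : 0;
    depth.set(job.name, d);
    return d;
  };
  const layers = [];
  jobs.forEach((job) => {
    const d = depthOf(job);
    (layers[d] = layers[d] || []).push(job.name);
  });
  return layers;
}

function getLayers(pipelineKey, jobs) {
  if (!layerCache.has(pipelineKey)) {
    layerCache.set(pipelineKey, calculateLayers(jobs));
  }
  return layerCache.get(pipelineKey); // later polls hit the lookup table
}
```

The layout of a pipeline doesn't change between polls (only statuses do), which is what makes caching by pipeline key safe.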
If you look at the code behind that, you actually have to use setTimeout to make this work the way you would expect it to work in Vue; you can't use nextTick. Why is that? This is a diagram that may be familiar to many people: essentially, this is the way JavaScript works in your browser.
You have a call stack, it does a bunch of events, and there's a render loop and a callback queue. What we need is to start the loader rendering in the render loop and then do the work, but Vue's nextTick comes at the end of the call stack, before you render, which is often what you want, just not in this case. So if you're trying to do this kind of work, you can use setTimeout, which will put your work into the callback queue, which is where you want it.
That was a bit of a surprise, and there were a couple of other surprises that are worth keeping in mind. The first one is dark mode. This shouldn't be a surprise, but it is sometimes; I think it's still sort of a second-class citizen. So if you are doing UI stuff, make sure you think about dark mode. Simon is great: he just did a sync call with me to look at everything and fixed things really fast, so I highly recommend doing that.
The other surprise, the biggest surprise, was polling again. This was the most destabilizing shock in this whole project; it probably delayed us by a milestone or a milestone and a half. I thought: GraphQL has polling, that's great, so we can cease to use our homegrown version (we won't delete it, because other people use it) and instead use the Vue Apollo GraphQL client's polling. Yay, that'll be great. But it's basically naive polling, and we have big, robust, GitLab-level polling so that we don't accidentally DDoS ourselves. So we took some time to look at the solutions, and it turned out that the ETag approach we use with REST could still work here. So that's great: basically, now we poll Rails, and Rails asks Redis if anything has changed, comparing the ETags, and it can decide whether or not to do work.
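The client's half of that ETag handshake can be sketched like this (a generic illustration, not GitLab's actual polling code; the URL and state shape are made up). Each poll sends the last ETag back as `If-None-Match`; a 304 means nothing changed, so the server skipped the expensive work and the client keeps its cached payload.

```javascript
// Poll an endpoint using ETag-based conditional requests.
async function pollWithEtag(url, state, fetchImpl = fetch) {
  const headers = state.etag ? { 'If-None-Match': state.etag } : {};
  const res = await fetchImpl(url, { headers });
  if (res.status === 304) return state.data; // unchanged: reuse cache
  state.etag = res.headers.get('ETag');
  state.data = await res.json();
  return state.data;
}
```

Called on a timer, this keeps the 10-second cadence while making most polls nearly free on the server side.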
That was a relief, considering how late in the project we hit the "oh no, polling" moment. But to do that, we had to use another GraphQL library. There's a link here to all of our GraphQL polling docs, but you basically have to say "use GET," and then we use a different link library, because the batch library won't support GET, and you need GET for ETags.
I found a fun bug, too. If you're doing this, you might find that polling is working on your local dev machine but not on production; that's because Rails is basically more forgiving than HAProxy and NGINX. Heiner helped me figure this out. I have just saved it here so that you all know it could happen to you.
Finally, the biggest surprise was that we had to rethink the UI at the last minute. It ended up that we took the links out entirely on the stage view, and users were really surprised: "Oh my gosh, my links are gone. Is my graph broken?" And we were like: they don't mean anything! Those links never meant anything; why are you surprised that they're gone? They didn't do anything. But people thought they did, and didn't realize they were fancy borders. I think Sam is still writing a post about that.
So you can talk to him about it too, but it was a really big surprise. And then there's that fun bug I mentioned to you last time: the dropdown. I ran into a case where, if you clicked on something and then hovered over a job, Popper would no longer make the dropdown go away. I was very confused about this.
It turns out that adding the class while Vue was re-rendering the DOM nodes made Popper lose track of that DOM node; for whatever reason, whatever reference it keeps to that node no longer works. So now we just have a wrapper class. When we replace this with a new dropdown it might not be as much of a problem, but you know, if that happens to you, it might be the added classes. All right: so we wrote all the code, and then we had to roll it out.
This is probably my favorite section (maybe I should put it at the end), but I like it because the code I've shown you today was rolled out in two pieces. The new layout was the second rollout; it went to 100% this morning (I'm very excited) and was fairly straightforward. Before that came rolling out the revised graph, which was utterly and absolutely terrifying. On Verify we had some higher-profile reversions at the end of 2020, which was no one's fault (the code is very complex), but we wanted to come up with a plan to maybe not do that again.
If we could avoid it, that is, because it would be nice not to. But we didn't have a metric for whether something worked that was not complaints, and whether complaints should count as a real metric is questionable; they're definitely a lagging indicator. So we wanted to figure out: how do we do this a bit more safely?
So first we made a cautious plan for the rollout, a standard GitLab rollout: UI, the project itself, then .com. But we started with what I was calling the private beta. Since the pipeline graph is used often enough at GitLab, we could just ask people here, which was really helpful: we asked if they would mind using it first and scanning for broken stuff, and we found a solid set of UI bugs. I really appreciated that.
More importantly, we started looking at a rough kind of observability: a way to know things about our running app, a set of real-time indicators, so we could know what working looked like and what wrong looked like. There were three levers we could pull here for these metrics, and none was perfect. Snowplow was set up to work with the front end really well, but it's not real-time, so you can't flip a flag and say: okay, Snowplow, tell me if everything's broken. Prometheus is real-time, but it's not really optimized to work with the front end: you need a back-end endpoint, and using that consistently could cause performance issues, because now you're adding a whole new set of calls to the back end.
Sentry is set up to work with the UI and in real time, so that sounds really good, but it doesn't have source maps and it's very error-focused, so I think it's harder to use Sentry if you just want to find out about something. And in our case it was not actually on: we spent about two weeks (and by "we" I mostly mean José) trying to find out why Sentry wasn't working. So this is a little note about what can happen if you turn something off to investigate it.
We leveraged the ability to provide component names to make up for the missing source maps, so that we could at least have some idea of where an error was occurring. We added calls to Sentry where we expected errors (in the Apollo error handler, stuff like that), but Vue also has this errorCaptured hook that is called when an error is captured, that is, when just some error gets thrown, and this has been really helpful for finding the errors that we didn't expect.
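A minimal sketch of that hook; the component name and `reportToSentry` helper are hypothetical stand-ins for the real reporting call. Any error thrown in a descendant component funnels through here, so unexpected failures still reach Sentry even without an explicit call site.

```javascript
// Top-level component with an errorCaptured hook for unexpected errors.
const GraphRoot = {
  name: 'GraphRoot',
  errorCaptured(err, vm, info) {
    // `info` is the Vue-specific context, e.g. which hook the error hit;
    // the component name stands in for missing source maps.
    reportToSentry(err, { component: vm && vm.$options && vm.$options.name, info });
    return false; // stop the error from propagating further up
  },
};

// Stand-in for the real reporting call; records what it was given.
function reportToSentry(err, context) {
  reportToSentry.calls.push({ message: err.message, ...context });
}
reportToSentry.calls = [];
```

Returning `false` keeps the error from bubbling past this component once it has been reported.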
Likewise, we use a report-failure pattern in PA and CI, which is just a way of centralizing getting an error and displaying it, at the wrapper-level component. It's not super complex, but the nice part about having that pattern is that it meant there was a really obvious place, again, to add our Sentry call. It was nice; it made us feel like we had made good architectural decisions, because we could just slot this in and it worked.
It worked almost too well, I want to say. We discovered a bug because someone clicked on a three-year-old graph once, and it showed up in Sentry: old graphs might not have IIDs, but the GraphQL interface assumes that all graphs will have IIDs, so you cannot get that pipeline's data. Well, pipelines will have IDs; you can't get a pipeline without an ID if you're using a newer endpoint. So there are probably important lessons about interface assumptions here, but Sentry was really helpful; we found really crazy bugs. We also found another one.
"This job is undefined." It took a really long time for us to hunt this one down. We rolled out the new graph and most things were good, but I was getting a thing like "this job is undefined," and I'm like: why? Why is that happening when we click links? It only happened on private repos, so we couldn't see it; we couldn't observe it in the wild. We just knew that it happened, and the Sentry feedback cycle is a little bit slow; that's one of the cons. You have a question, so you add a log to Sentry to put the data you need in there, wait a day-plus for a deploy, and then see what happens. You can make it a little bit better by adding some feature flags and a couple of hypotheses when you roll it out, so you can test a few things, but you still basically have that day-long cycle to see what happened. We even tried reaching out to users.
So, instead of just asking the code to tell us what happened, we reached out to users; they didn't respond. In the end, since the data was coming through the back end empty, we handed it off to some back-end engineers. I think it's a permissions problem where someone can see a pipeline but none of the jobs in it, which is very strange. Was it worth it, spending all this time to find this tiny bug that nobody complained about? I don't know. I think that's an important question to ask when working with Sentry.
If we can do this, then what are our guidelines? What do we do with all of this power that we have? But it was very successful, too. I rolled out something earlier this week, and something was undefined; I was able to see it on Sentry, identify the problem, and merge the fix in like an hour. And again, it's not something that happens all the time, so we might have gone for quite a while without knowing we had this edge-case bug. So it worked; I was very happy.
Observability is not just about errors, as I mentioned before when I was talking about Sentry; it's also about performance. One concern about moving these links from CSS to actively calculated SVG ones was the performance of iterating over the nodes: it's unclear, especially because we don't really know what size graphs we have. So, without the data, we used a tripwire, like what you'll see on the issues page, where basically, if your data is too big, we can't show you the links, to start off with.
Fred has this great YAML generator (when you work on CI, you need CI YAMLs a lot), so he basically tested graphs for me and kept making them bigger until we found out what was too big, and we set the tripwire there. But more than that, I wanted to be able to use observability to answer three important questions. How long did it take to draw those links? How many links were we drawing? And what's the job-to-link ratio? Because we make assumptions about how our pipeline graph is used that are not necessarily true or accurate.
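The three questions above can be sketched as one small collection helper (hypothetical names and shapes; the real reporting path goes through a back-end endpoint to Prometheus): time the link drawing, count the links, and compute the job-to-link ratio.

```javascript
// Measure one links-layer draw and return the three metrics we forward.
function collectLinkMetrics(drawLinks, jobCount) {
  const start = performance.now();
  const links = drawLinks(); // assumed to return the drawn link definitions
  const durationMs = performance.now() - start;
  return {
    durationMs,
    linkCount: links.length,
    jobToLinkRatio: links.length ? jobCount / links.length : jobCount,
  };
}
```

Reported on every draw, these numbers replace guesses about real-world graph sizes with measured distributions.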
A
You know, all of our examples have up to 100 jobs and, say, some links, but then we'll hear things from clients like: some people have 300 downstream pipelines; some people are using matrix jobs, so you can have like 900 jobs and job groups. It can get really big, and we have not necessarily been thinking about that, because we don't know. But we wanted to know those things, and so we did set up a back end endpoint to forward these three performance metrics to Prometheus.
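Those three questions map naturally onto a small measurement helper. This is a hedged sketch; the function, metric names, and the endpoint path in the comment are assumptions rather than GitLab's real instrumentation:

```javascript
// Illustrative sketch of collecting the three metrics above:
// draw duration, link count, and the job-to-link ratio.
function measureLinkDrawing(drawLinks, jobCount) {
  const start = Date.now();
  const links = drawLinks(); // e.g. compute all SVG link paths
  const durationMs = Date.now() - start;
  return {
    durationMs, // how long did it take to draw the links?
    linkCount: links.length, // how many links were we drawing?
    jobToLinkRatio: links.length > 0 ? jobCount / links.length : null,
  };
}

// The resulting object could then be POSTed to a backend endpoint that
// forwards it to Prometheus, e.g. (path is hypothetical):
//   fetch('/-/metrics/pipeline_graph', { method: 'POST', body: JSON.stringify(metrics) });
```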
A
We were able to take the performance work that Dennis did, which works in the performance bar in dev, but have it run in production and send that information to us so that we can know more. What do we know yet? I don't know, because this is pretty new and we haven't really gone through and come up with something actionable, but it's the kind of work we can look for with front-end observability if we keep growing in this direction.
A
Finally, I have been a front-end fixtures skeptic for a very long time, but this project changed my mind about it entirely, at least working with GraphQL. I think using front-end fixtures for anything that's grown slightly large is a hundred percent worth it. Mock data can be really verbose with GraphQL mock resolvers, because you need to add __typename to everything, which can get really big, and fixtures help cover an integration point.
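For a sense of why hand-written mocks get verbose: Apollo's mocking and cache normalization expect a `__typename` on every object in the response. The type and field names below are illustrative, loosely modeled on a pipeline response rather than the exact GitLab schema; a generated fixture would include all of this for free:

```javascript
// A hand-written mock must carry __typename at every level of nesting.
const handWrittenMock = {
  __typename: "Pipeline",
  stages: {
    __typename: "CiStageConnection",
    nodes: [
      {
        __typename: "CiStage",
        name: "build",
        groups: {
          __typename: "CiGroupConnection",
          nodes: [{ __typename: "CiGroup", name: "compile", size: 1 }],
        },
      },
    ],
  },
};
```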
A
We've run into cases where people have changed GraphQL types on the back end and not known what that's going to change on the front end, and I think that, more than worrying about it in Capybara, making a certain level of integration test in our Jest tests by using the fixtures is a really helpful place to just double check that things aren't breaking. So I've totally changed my mind on this. And that's basically where we are today; that is our whole adventure. Still to come: the next thing that's going to happen is deleting the old code.
A
I can't wait. It's going to be like a negative-2,500-line MR. It'll be glorious.
A
Visual testing: this was something I really wanted to get to, to make our rollout safe, and I just didn't have the time. It's kind of hard to figure out how to render all the visual variations that you need in a visual test. I believe we've brought, or are bringing, Storybook to the main GitLab repo for Vue shared components, and I would be interested in looking at what it would look like to create test cases using that. The pipeline graph can have so many different setups, modes, etc.
A
It's impossible to test them all manually, and so I think this is a really important case for looking at how visual testing can help us. Deleting tests: I'm excited to delete tests. There are 1,700 lines of Capybara tests for the pipeline graph, which is like 1,500 lines too many. So looking at what we're unit testing, especially if we're using fixtures, and looking at what visual testing can pick up, to bring down those tests, will, I think, be great. There will be more performance enhancements to come on the links.
A
The links are okay performance-wise, but there's a lot we could do to make them better. More fun ways to understand job dependencies: I'm excited to see where this goes in terms of product. And then I am waiting with bated breath for Apollo 3 and subscriptions, which will take out maybe 60% of the complexity that is still in our code. I have a lot of different takeaways that I'm going to run through really fast; they are mostly things I already said, but they're here.
A
If people want to look at them later: It's worth thinking about how to integrate this type of maintenance into our project planning. I did this and it worked out well, but we should think about how to plan for it. Nested CSS is a trap. I like nested CSS just fine for small projects, but for something the size of GitLab it was really hard to change, and we kind of had to delete a bunch of things just to make change possible, because untangling it was really difficult.
A
Client directives are great and make everyone happy. Using GraphQL made it a lot easier to keep our front ends in different apps in sync and let them share the data, which is the thing they really share.
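As a hedged illustration of what a client directive looks like in a query (the query and field names here are made up, not GitLab's actual ones), a field marked `@client` is resolved from local state or the Apollo cache instead of the server, which is one way different apps can share client-side data:

```javascript
// Illustrative query string: `detailsShown` is a client-only field.
const PIPELINE_QUERY = `
  query getPipeline($iid: ID!) {
    pipeline(iid: $iid) {
      id
      status
      detailsShown @client   # resolved locally, not sent to the server
    }
  }
`;
```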
Use async loading to keep from loading code you don't need, if you're doing something bigger.
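The async-loading advice can be sketched like this; the Vue dynamic-import line in the comment is the usual pattern, and the framework-free helper below is an illustrative stand-in, not GitLab code:

```javascript
// In a Vue component map, async loading is typically a dynamic import:
//   components: { LinksLayer: () => import('./links_layer.vue') }
// A framework-free equivalent that loads once and caches the result:
function lazy(loader) {
  let cached;
  let loaded = false;
  return () => {
    if (!loaded) {
      cached = loader(); // only pay the loading cost on first use
      loaded = true;
    }
    return cached;
  };
}
```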
A
Removing things can make things easier but also confuse people, like: those links didn't even do anything, man. I should have written more of an architecture plan, which we have done since for the pipeline editor, and by "we" I mean Fred wrote a really good plan and I was like, "yes, Fred, that's a really good plan, good job." It might have helped us catch the polling problem sooner, if not other edge cases; I think that was a big mistake. And we need more support for dark mode.
A
Just generally, right? It kind of feels like dark mode is a hobby for one engineer right now, and it would make our product better if we had a little more, I don't know, structural support for it. And finally: use fixtures, they're great. That is my takeaway. The bigger things I'm thinking about now are: what would an ideal cadence for this kind of modernization look like?
A
What's the right balance between homegrown code and library code? Did I make the right choice when I was trying to convince Apollo 2 to work, versus how hard it would have been to just write a lookup table? And finally, what should we do with Sentry and Sentry errors? What would general guidance for all of our other teams look like?
A
This is my secret plug here (it's not a secret plug): we're working on getting an observability working group up and going for front-end observability, where we can collaborate with infrastructure and even product, and I think these are the kinds of questions we would be able to answer. I would be excited about that, because observability has done a lot for us, but I think it could still do a lot more. Anyway, now it's time for the questions you're thinking about. Thank you so much for listening to me speed through this today.
B
A
I will put them up and give everyone access. I didn't want to do it beforehand, because I didn't want you to know about my surprise curses slide. You know, I didn't want to ruin the surprise, but yeah, I'll put it up.
C
My imposter syndrome is at an all-time high, so well done. Congratulations to you and the rest of the Verify team: really good, awesome stuff. And on to my question: you mentioned the upstream/downstream generation, and Fred has answered something about that. Can you go into a bit more detail about what that is?
A
Yes. So I'm like, oh, do I even have a pipeline running so I could show you? But essentially, in a pipeline you can have linked pipelines that you trigger via your CI YAML file.
A
You can have jobs that you trigger, and when you look at the graph there are cute little links to your upstream and downstream jobs, and you can click them and open them up and see that pipeline all in place. But the way it was working with the REST endpoint and the mediator, there were N+1 concerns about looking it up.
A
The way we restructured it with GraphQL, instead of using the old mediator, those linked graphs can now make their own API calls. So you can keep expanding until there are so many jobs in the browser that it will break your browser; if you have enough downstream generations, you can continue to do that until it breaks the browser. So that was a big win for us.
C
Cool, that makes sense, and I think that agrees with what Fred said, which is good; I think I understand it roughly. My next question is: Vue documents that you can do a nextTick call within the mounted hook and that will guarantee that the entire view has rendered. I've never verified this myself, but did you find that that wasn't true?
A
I did find that that wasn't true: calling nextTick in mounted was not sufficient for multiple children, because, and this is the thing I've now realized I sort of skated over, we're using the higher-order components and then passing things through the slot.
A
The nextTick couldn't guarantee it because, technically, the child is like three children down, or two children down: it goes through the links layer and then maybe into another slot. So I asked Natalya; I went to my known Vue expert and was like, "I did this, does this seem bad, or is this the right way?" And she was like, "no, it's fine." I was like, okay.
C
Cool, that makes sense. Thank you. And my third and probably final question for today: can you go into a bit more detail about why you want to delete around 1,500 lines of Capybara specs? Do they really not provide any value?
A
They cause pain because they need to be updated when you move things around: they don't use data test IDs, they use classes, so they're brittle. You know, if your layout changes, it shouldn't break the tests (that is a personal feeling that I have), and that is not the case for a lot of those. And yes, I think they're trying to cover interactions that we can cover elsewhere now, for example in our unit tests.
A
That is a good question. I don't know yet which ones we should keep.
A
I haven't gone through all of them yet. So, the ones to keep: if it looks like what we're testing is that the data is being rendered correctly, that where we're getting our data it's being shown in the pipeline correctly, and we haven't covered that, say, by using fixtures or something where it's more of a real integration test. Anything that could be in a unit test, to me, should be in Jest, not in Capybara.
A
Oh, and that reminds me of another plug for things I like: if, in Jest, you're testing the rendered data instead of using shallowMount and testing props, then that's another reason you can delete that test out of Capybara, because then you're not just testing props, you're testing that it's rendering the correct thing. So it's another reason to not test props if you can avoid it.
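A framework-free sketch of that testing distinction (the component and markup here are made up for illustration): asserting on rendered output proves more than asserting on props:

```javascript
// A stand-in for a tiny component: turns data into markup.
function renderJobName(job) {
  return `<span class="job-name">${job.name}</span>`;
}

// Prop-level check (weaker): only proves { name: 'compile' } was passed along.
// Output-level check (stronger): proves the name really appears in the markup.
const html = renderJobName({ name: "compile" });
```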
A
D
He's busy blowing up the chat, by the looks of it; it's already been answered. I just wanted a link to the SVG rendering, and Fred hooked me up. Thank you.
E
A
Oh, it just doesn't exist yet, so when it does exist, that is what we'll do. We don't have subscriptions at the level for pipelines that we need to do it, but yeah, that's the goal. We found out the polling thing rather late in the project; I had like a panicked week of DMing our staff back end engineers, being like, "please, I don't want this project to end in flames, what can we do, please help." And they did, and so this was a solution until then.
F
Just want to say great work, Sarah and Fred and the rest of the team. I'm so glad the graph is much easier to maintain now.
A
Thanks. Yes, I know you suffered a lot of the pain that helped us get to the place where we could replace it. Peyton did a lot of work before the new graph happened, where people in product and UX would ask, "oh, can't we just change this one tiny thing? Can't we just make this two pixels taller? What could possibly happen if we made it two pixels taller?" And you're like, "the world will end." That's the actual answer.
A
If you make it two pixels taller, the world will end. I know that's absurd, but you know, when I said we sort of snuck into doing this: the wall we had, or the protection we had, was we could say, "remember all those things we said no to before? I know this is going to take a long time, but then we can say yes." And that did help a bit. So Peyton's suffering helped us all. And yeah, I want to say thank you again to Fred.
A
Most of all, we've been working on these links together in partnership this whole time, big stuff and little stuff, me just texting him being like, "does this make sense at all?" He's been super helpful. Andrew maintained like 90% of this. Laura wrote all of our back end; she wrote all of our GraphQL resolvers and was really helpful. Jose is like the king of Sentry.