From YouTube: Session #2: Discussing: Improve maintainability & performance of the FE of Merge Request Diffs
Description
The first of several calls to discuss approach to improve performance and maintainability of the Frontend MR Diffs app.
Epic: https://gitlab.com/groups/gitlab-org/-/epics/2852
Slack: #g_create_source-code-fe
(See Session #1: https://www.youtube.com/watch?v=QaoGn4PkDcU)
A: So, welcome to the second session of discussion on how we can improve the performance and maintainability of the merge request diffs app. We'll get started right away from the points that we left off from last week. Can someone please confirm that you're seeing the agenda? Is it shared okay? Good. So these were the points that we uncovered last week, and this is the agenda at the bottom.
B: Yeah, talking about improving performance: I don't know how it affects maintainability, but every bit of state we hand to Vue, Vue makes reactive. Sometimes that carries a huge initialization cost, but it also carries a long-lasting memory cost from all the watcher functions it needs to create. So what parts of our state tree do not need to be reactive? I know that you can Object.freeze something if you won't touch it. The reason this kind of thing is a hit on maintainability is: how does a developer know?
B: Is this reactive? Is this not reactive? That's a somewhat unanswered question in the GitLab codebase: how do we handle these? I don't know if anyone else has any experience or thoughts on isolating the parts of the state that don't need to change into certain buckets. Even some of those huge files, for example: our individual lines probably don't need to be reactive, because they don't really change unless we replace the whole file.
B: I brought this up because this is an issue the Web IDE has, where there is this huge setup time that hurts the UX, and so there's opportunity there in the Web IDE for keeping control of what needs to be reactive and what doesn't. But I can understand, based on a previous conversation, that this might not be as relevant here.
A: That is something we need to keep in mind: whether we could put things into buckets, like stuff that is never going to change, or whether we just use prefixing. That's also a good approach, so that we know exactly what will be frozen or not, and what is static and will never update. So that's more of an aspect for us to keep in mind when we're reworking this state into whatever it becomes, but I feel like this has been a topic.
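The freezing idea discussed here can be sketched in a few lines. This is a hypothetical reduction, not the actual GitLab store code: the mutation name `SET_DIFF_FILE` and the `highlighted_diff_lines` field are illustrative.

```javascript
// Hypothetical Vuex-style mutation for storing diff file data.
// Freezing the lines array tells Vue not to walk it and attach
// reactive getters/setters, avoiding the per-line watcher cost.
const mutations = {
  SET_DIFF_FILE(state, file) {
    state.currentFile = {
      ...file,
      // The file metadata stays reactive (it can change), but the
      // individual lines never change in place, so we freeze them.
      // Note: Object.freeze is shallow; only the array itself is frozen.
      highlighted_diff_lines: Object.freeze(file.highlighted_diff_lines),
    };
  },
};

// Standalone demonstration outside of Vue:
const state = { currentFile: null };
mutations.SET_DIFF_FILE(state, {
  file_path: 'app.js',
  highlighted_diff_lines: [{ text: 'const a = 1;' }],
});
```

A prefixing convention (for example, naming frozen fields with a `frozen` or `static` prefix) would then signal to developers which parts of the tree are intentionally non-reactive.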
A: Bringing down the cost of building things up and tearing them down is definitely going to be part of the ongoing work, so thanks for raising it. Any more thoughts on the frozen thing? Freezing. "Let it go," right? I'm gonna let it go. I'll write it down below. But Thomas, can you please voice your point, please?
E: Yeah, last I knew, maybe about two months ago, there were some unrelated discussions along the lines of "is this known about?" Someone had left their MR page open, and they did a, well, it wasn't someone on our team, but they ran a perf test in their browser because it had slowed down, and it showed a steady, almost linear increase in memory use over time. And it was just sitting in a background tab.
E: So we have a memory leak somewhere, and I don't know where it is or what it is. But my suspicion is that whatever should not be increasing memory over time probably has to do with re-fetching notes every couple of seconds, or I don't know what else runs on a polling loop. Something is leaking memory linearly over time, and we could probably improve a lot of browser performance just by fixing that.
A: I think that was incredibly noticeable before we enabled batched diffs and the split of inline and parallel, so that was probably beneficial. But I still think we have the problem Thomas identified, because it's still present there to a point. So yeah, please voice your answer to your own question.
E: And my response is: I don't know of any research done directly into this, other than in the effort of doing other things. Like the split diffs, for example: doing the split actually improved performance and memory usage. So this is probably worth revisiting, along with what Paul has talked about, and maybe some other stuff, to actually get some stats on whether we're leaking memory or not. Yeah.
A: So, the answer to your question, Natalia: I think there was some time in the past when we looked specifically at memory leaks, but it was too long ago. It was after the merge request refactor. We did take a look at certain parts of the codebase that were leaking, but since then, say late 2018, we haven't done a deep audit of it. And I feel like as part of this next effort we're going to have to find the quick wins.
A: I would compare this to the improvements the backend has been working on, for specific endpoints that were identified to be taking longer. So we probably need to do a little bit of an audit to see what the worst offenders are here, and we haven't done focused work on that. No, we haven't, and we probably should. Do you have any tips or experience with identifying memory leaks in heavy, big Vue apps?
A: Leaving merge request reviews open, that sort of thing, has definitely felt like it's increasing the weight of the merge request page until it just becomes too heavy, and you refresh the page and everything starts to come back to normal. So we'll definitely have to do a widespread sweep to see if we can find the worst offenders. So maybe we can move on to the next point, Phil, one of my favorite points on the agenda today.
C: So, with the different changes here and around the place, I wish it would be easy to rewrite it all just from the start, but that's kind of a big undertaking, and I'm slightly worried that our feature specs probably don't cover most of the features that we have on the diffs. But I'm just throwing it out there: we spend a lot of time on these little changes.
A: Yeah, my computer was lagging so I couldn't unmute myself. So yeah, the first time this was mentioned we kind of laughed and we kind of discarded it a little bit. Then we paused and thought about it hard, and it's probably something we can think about, taking some iterative approach to this, for sure.
A: So one of the things that I remember is that when we were building batch comments, the merge request reviews, we used a module for the state for the batch comments. So I wonder, could we start rewriting things into modules in the Vuex state? Like what Paul is asking: could we rewrite a slice?
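The module approach mentioned here can be sketched as follows. The module name `batchComments` and its fields are illustrative, mirroring the shape of a namespaced Vuex module rather than GitLab's actual one:

```javascript
// A slice of state carved into its own namespaced Vuex-style module.
// Because a module is just plain objects and functions, it can be
// exercised in isolation, outside of Vue entirely.
const batchCommentsModule = {
  namespaced: true,
  state: () => ({ drafts: [] }),
  mutations: {
    ADD_DRAFT(state, draft) {
      state.drafts.push(draft);
    },
  },
  getters: {
    draftCount: (state) => state.drafts.length,
  },
};

// Standalone usage without a store instance:
const moduleState = batchCommentsModule.state();
batchCommentsModule.mutations.ADD_DRAFT(moduleState, { note: 'nit: rename' });
```

Rewriting one slice at a time this way keeps each migration small and independently testable.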
B: Dennis has done this, and part of it was that there were some key components that we now had a reusable use case for, with the Web IDE's multi-file editor. Our eventual goal is to have multi-file snippets, and so we're talking about: hey, can we repurpose the Web IDE for this? Without really having to give a real answer now.
B: We built these from scratch with the purpose of replacing the old diff elements, and now that it's done, we've technically rewritten it all, but under the scope of reusability, which by definition is going to be decoupled and easier to maintain. So that's the approach; I didn't dive into it. Thanks for typing it out, Thomas. But the question was: is there something about MR diffs, yeah, is there something about merge request diffs that can be reused outside of merge request diffs?
D: So, having just come through the rewrite of the version dropdowns that Paul and I worked on quite a bit, I'm not as strongly into "rewrite everything from scratch" as we were before, having seen how it was possible to untangle one piece in place. I think that's probably the better approach here, yeah, especially since there are a lot of ways we structure the merge request that really lend themselves to doing it.
D: If we start low and work our way up, we might see enough performance benefit that we don't have to completely overhaul the whole thing. And I still think there's a lot of benefit to be had just by fixing the state, which I think is a point a little farther down. If we handle all those things, we don't really need to rewrite everything. I think we'll end up eventually rewriting the whole thing, but in place, as opposed to throwing it away and starting over. Yeah.
A: So I'll put it there in the takeaways, so we can create an issue on it and discuss what would be a good candidate for this. The one thing I would think of, using your example, Justin, about the compare versions: I don't think the performance problems or the memory footprint are coming from that, because it's rarely used.
A: When it is used, it's using a lot of complex code for sure, and the frontend is doing a lot of heavy lifting, but I don't feel like that's a bottleneck. So we should probably try to have a discussion about which of the heavy components we could extract outside of the app and have the app communicate with it.
B: I think the commit view has maybe 80% of the features of the diffs, because you can even leave comments on commits, and a commit shows the differences between things. And that commit view right now, I think, is in Haml. So I think that is a pretty prime candidate for "let's approach diffs from the bottom up," from this commit approach, knowing that, hey, this is the same problem the MR is also trying to solve. That would be interesting.
A: So what you're saying is: instead of just refactoring the commit view in Haml to use the components in the diffs, we could consider rewriting. We would build a new app for the commits, try to build it with the merge request in mind, and then move it there. That would have the benefit of not having that work affect the merge request page until it's ready. All right, I'll write it down. Any other thoughts here?
D: So, testing it manually is pretty straightforward, but I don't know how to automate that. Taking performance snapshots before and after changes is fine, but it's pretty tedious, and I don't think that's what we're talking about. So does anybody have any experience with automating performance testing, or should we rope a QA person into this offline?
A: I can show what we have done, what we're looking for, and what we've learned. One of the things that Rami has set up, which is what you're talking about, Phil, is an automated job that will grab the review app and run some performance comparisons on it. That's currently in the pipeline. If you look at the review performance job, that's what it's doing: it tries to run a script on a merge request page and get the timings of it all.
A: At the time, we felt like you can grab the global timings, like DOM ready and page load, but not much else. It's hard, for example, to grasp the impact on memory as you interact with the page, so it kind of fell short of what we wanted, and we never evolved it much. We can look at it again, but one of the things we saw was that the performance of the review app, since it's virtualized, is never fully guaranteed to have the same resources. It's kind of flaky anyway.
A: So that's one. The other: yes, we've done some tests manually. And the other thing I would add, just to conclude my participation in this topic, is that we have a section in the handbook to track historic performance metrics, and I've just done an update the other day. This is what it does: it basically uses Sitespeed to go and check the speed index of the page. So, for example, the page you'd be looking at would be this merge request.
A
That
is
complex
and,
as
you
see
it's
coming
down
a
lot,
so
in
2018
it
was
journey
7,000
and
then
it
went
down
significantly.
So
this
would
be
the
number
we'll
be
looking
for,
but
it
is
abstracted
into
the
spin
speed
index
of
site
speed.
We
shall
have
to
go
laughter
Tony
with
Google.
This
is
kind
of
like
one
way
of
tracking
the
overall
performance
of
the
page.
It's
not
as
detailed
as
we
need,
though
it
doesn't.
A: Once we go into the reports, it does give you a bunch of details about the page itself, so this would be it. We can get a bunch of information here. So that's something to keep in mind: we could use this to write a tool that would give us reports on a daily basis or something, or alarms based on this. But I haven't worked on that. So, any other thoughts or participation?
E: You know, we have so many things in the frontend that we could use. I don't know what the case is for Sentry versus Snowplow, for example, because we already have Snowplow events that we can send back. I don't know where we would want to do that. I guess maybe Sentry is the place, since it's a frontend-specific framework or whatever.
A: We've done that in the past for specific performance-improvement features. The test we would run, and I've done this with a couple of you for the batched diffs, is where we would get the timings. I don't know where I kept that spreadsheet, but I'll search while I speak. What we do there is very manual and very time-intensive.
A: We'll run a profile on the merge request page on master, then switch to the feature branch, warm up the caches, and run the profile again. Then we compare the timings of scripting and rendering, mostly ignoring the idle periods, and that gives us good confidence about whether we have an impact. The problem, as mentioned, is that this is currently done manually, and it's very hard to automate into our tooling.
A: We could add markers and have the script track when those markers are emitted. And to Paul's excellent point about the observation of an experiment changing its outcome: all they're doing is leveraging the Timing API, which is native to the browser. We can add custom events, if I'm not mistaken, and the idea was to leverage something that won't affect the performance of the page just by testing it, so that the measurement itself doesn't skew what we measure.
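The native markers mentioned here are the browser's User Timing API (`performance.mark` / `performance.measure`), which records entries in the browser's own timeline at negligible cost. The marker names below are illustrative:

```javascript
// Emit custom markers around an expensive phase so an external script
// (or the DevTools timeline) can collect the timing afterwards, without
// the instrumentation itself skewing the page.
performance.mark('diffs-app-start');

// ... the app renders the diff files here ...

performance.mark('diffs-app-rendered');
const measure = performance.measure(
  'diffs-app-render',
  'diffs-app-start',
  'diffs-app-rendered',
);
// measure.duration holds the elapsed milliseconds between the two marks
```

An automated job can then read these entries back via `performance.getEntriesByType('measure')` instead of scraping only DOM-ready and page-load timings.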
D: It's worth noting that I tried using Vue's built-in performance setting, which the docs say not to use in production. But it would be nice to have something we can leave running somehow in actual production code, so we can see how it's truly affecting the site. I saw huge swings between using that, or trying things in dev, versus actually having it out in a branch somewhere. It was a big enough difference.
A: I could go even a step further. I've worked on a couple of projects where, instead of console.log, we would have our own logger, an abstraction layer that we could turn off and on manually. What that meant was that our logger would ship with the code into production, which probably increases the size of the bundles, but it would be silenced in production. We could then go into production, enable it, and be able to debug these things in production.
A: I think it's important to distinguish between the typical console.log usage we use in development, which we definitely don't want in production, and a more deliberate performance logger or something like that, which would only be used for markers like this. Then we could potentially keep that in the code. It wouldn't be a console.log; it would be something that leverages console.log if it's enabled, something like that. But all right, it's a small, minor point, so we'll just take a note of it and discuss it later.
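The logger abstraction described here could look roughly like this. The factory name and the toggle mechanism are made up for illustration; in practice the toggle might be a cookie, a localStorage key, or a window flag:

```javascript
// A thin wrapper over console.log that ships silenced. Enabling it at
// runtime (e.g. from the browser console) turns the markers on without
// redeploying, and the disabled path costs almost nothing.
function createPerfLogger(isEnabled) {
  return {
    log(...args) {
      if (isEnabled()) console.log('[perf]', ...args);
    },
  };
}

let enabled = false;
const perfLog = createPerfLogger(() => enabled);

perfLog.log('this call is silenced');
enabled = true; // flipped manually while debugging in production
perfLog.log('now visible');
```

Keeping the check behind a function call (rather than a build-time constant) is what allows flipping it on in a live session.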
B: It's clear we're still in this elaboration and discovery phase of what performance testing looks like, and I'd emphasize that this has organization-wide benefits. So, in my opinion, there's a high probability that we'll invest in this, because it isn't isolated: across all of frontend we could use help on performance testing. I think any effort we put into figuring that out would be really cool. One thing I would suggest is not doing a full rewrite of everything at all; we've done this because...
B: Yeah, one idea that came to mind, because we're talking about wanting to test not just page load and some rudimentary metrics, but maybe even certain behaviors and performance-related things: something we're severely missing in frontend is frontend integration tests. We have frontend unit tests, we have feature specs, but we don't have pure frontend integration tests, and these could raise warnings if they are slow.
B: It would raise warnings that something is slow here and we're not doing it as fast as we could, just like we currently fail Jest specs if they're too slow, though that threshold is a little ridiculously high right now because we're still cleaning those up. But over in Editor we're kind of going to be trailblazing what frontend integration tests look like for the Web IDE, because a lot of it is a frontend app. So let's just spin it up.
B: Somehow we stub the backend, then we run through some use cases and make sure that the user can do this and that. Figuring out how we then make sure those tests aren't too slow could also give us some of this benefit, especially at the more detailed use-case level, not just "was page load good," right?
B: Yeah, I'm excited. It should be next milestone; that's the goal. The other thought, and we were already talking about this before I put it down: Tim Zallmann has this huge goal of empowering all teams to be looking at their dashboards and metrics and being married to the issue, with that table.
B: The issue is that these reveal problems after they're merged into master, not necessarily before. It's still helpful having that, so we catch it maybe before users do, or before end users or even self-hosted users. But do we have Sentry dashboards yet for the frontend? I know that's something we're working on overall, but is there any work for the MRs with frontend Sentry dashboards?
A: It's not planned, but it's something we're kind of waiting on guidance for, from whoever is dealing with Sentry. I remember Dennis, I think, was addressing that, so we haven't heard any guidance yet, but we'll definitely hook it up once it's there. I think we've done some work on adding things to Snowplow, just for usage metrics, not particularly focused on performance, but we'll definitely keep an eye on that for sure. Cool, thanks, Paul.
A: Everything that touches notes will definitely impact the discussions as well, and there's also a lot of code shared with the issue itself. So even if we're not breaking the discussions tab, we might be breaking the discussion tab on issues. Is that what you're warning about? Yeah.
A: Good, definitely good to know. We were just having a discussion about a particular line of code that has three ways of grabbing the content of the note: one for batch comments, another for merge request comments, and another for issue discussions, and they're all in the merge request actions. So that's definitely going to be something we have to deal with while we're untangling this thing. Does anybody have questions at this point?
D: This might be a little too nitpicky and specific, but it was one of the things I ran into while I was doing the refactor with the dropdowns recently. Hold on, let me find it in the document... where was I? I think we can probably just lump it under the state review discussion, about how the frontend is doing a lot of state management that seems unnecessary, and I think even this case could probably be abstracted into the Vuex layer, sort of.
D: It's the view-model thing that Paul and I talked about last time, where we could actually use this as a kind of manipulation layer before the frontend starts using the data, so that we actually have it in the shape that we want. This start version thing that I ran into is just very unwieldy, but I think it's probably too specific for this particular call. I kind of want to just put it underneath as an example of a particular state management problem that I ran into, and move on.
A: Thanks, Justin.
B: I think this kind of thing has exploded in discussions, because it is the model of what a discussion looks like: is a discussion resolved, and so on. This is a really isolated example, just comparing versions, but this problem gets really blown up to scale when the component level has to do lots of calculations, like "are we resolved or not." With discussions in particular, I think that's riddled with this same situation.
D: 100%. And I think a lot of these things can be resolved using Thomas's idea of flattening how we store the data in the store. I see huge benefit in that, and it's kind of the first thing I personally want to attack: just fixing the way we store state. I think we can actually achieve a lot of performance and maintainability benefits by reducing the duplication that's happening in memory, as far as I know.
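The "flat structure" idea comes down to normalizing entities by id at fetch time instead of nesting them. A minimal sketch, with illustrative field names:

```javascript
// Key every entity by id on fetch, so a component can grab a diff file
// (or a discussion) directly by id instead of walking a nested tree.
function normalizeDiffFiles(files) {
  const byId = {};
  const ids = [];
  for (const file of files) {
    byId[file.id] = file; // one canonical copy per entity
    ids.push(file.id);    // display order preserved separately
  }
  return { byId, ids };
}

const { byId, ids } = normalizeDiffFiles([
  { id: 'a1', path: 'app.js' },
  { id: 'b2', path: 'main.rb' },
]);
// A component needing file 'a1' does byId['a1'], no traversal required.
```

Because each entity exists exactly once, updating a single discussion or line touches one object rather than re-syncing copies scattered through the tree.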
B: Right, and we do this syncing thing where, if we change something, we then have to resync it all. And yeah, that's on top of the amount of garbage we're probably creating in memory, and things that have to get cleaned up too. I think that's a really good point. Well, thanks for that.
A: This brings a question to my mind, where we should definitely have a decision or a design principle. There are two ways of looking at the state management, right? We can keep the repetition and duplication in the state to a minimum and then handle all the other shapes of data that we need through getters.
A: That would decrease the number of bytes we use for the state, but we'd probably have more computation whenever we need to do something, and Vue, I guess, will duplicate the data for us, so it's a bit out of our control if we go through getters. Now, bringing up the topic of getters: how do you all feel about it? Should we minimize the use of getters?
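The trade-off being weighed here, one stored copy plus derivation versus duplicated state, can be shown with a plain-function version of a Vuex-style getter. The data shape is illustrative:

```javascript
// One canonical copy of the raw data in state...
const state = {
  discussions: [
    { id: 1, resolved: true },
    { id: 2, resolved: false },
  ],
};

// ...and derived shapes computed on demand through a getter, paying a
// bit of compute per access instead of holding a second copy in memory.
const getters = {
  unresolvedDiscussions: (s) => s.discussions.filter((d) => !d.resolved),
};

const unresolved = getters.unresolvedDiscussions(state);
```

In real Vuex, getter results are cached against their reactive dependencies, so the recompute cost is only paid when the underlying state actually changes.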
B: I can't see Natalia's face... oh yeah, she's still here, okay. Knowing what you know about the way the discussions are stored: we have a big list of discussions in merge requests on the discussions page, but then in diffs we have a smaller set of discussions. Wouldn't we run into a problem where I have one query that's returning all discussions, and then I'm going to update a single discussion, or add a new one on this line of a diff? Is there an issue with these two pages?
C: It makes sense. With Apollo, we actually have a single source of truth in the Apollo cache, and when we change anything, like when we send a mutation, if it's just an update it updates the base automatically for us, using an ID or any kind of unique identifier we can define. If we're adding an entity or deleting an entity, we call an update function for the mutation and we change these ourselves on the client side. So this part looks exactly like Vuex, where you call your dispatch in an action.
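What the Apollo normalized cache does, reduced to plain JavaScript for illustration (this is the concept, not Apollo's actual implementation): every entity is stored once under a key derived from its typename and id, so any view referencing that entity sees the update.

```javascript
// Entities are keyed by "__typename:id", giving a single source of truth.
function cacheKey(entity) {
  return `${entity.__typename}:${entity.id}`;
}

const cache = {};
function writeEntity(entity) {
  // Merge into the single stored copy; every query referencing this
  // entity now observes the new fields.
  cache[cacheKey(entity)] = { ...cache[cacheKey(entity)], ...entity };
}

writeEntity({ __typename: 'Discussion', id: '7', resolved: false });
// A later mutation response carrying the same id updates that one copy:
writeEntity({ __typename: 'Discussion', id: '7', resolved: true });
```

This is why the "all discussions" list and the per-diff-line subset stay consistent: both read from the same keyed entry rather than holding their own copies.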
A: Yeah. So, given that the two don't differ that much, and that we're definitely considering going towards GraphQL, at this stage I think we're keeping Vuex. Unless we build a new app; then we're going to have to have a conversation about the backend. But as before, like Natalia just mentioned, we already discussed this a little bit last week.
A: We're definitely, or at least potentially, not going to pursue Apollo and GraphQL at this stage, just given the entire complexity of the application we have on our hands. But this topic I just captured here, about transforming the data on fetch rather than later, sounds good to me. And this, together with freezing certain parts of the state, could potentially give us a good benefit: if we know that data is static and will not change, we can deduplicate it.
D: While we're daydreaming here, speaking to the transforming at fetch, which I also totally agree is the better way to go: do we have any feeling about this? I know we mentioned last time an issue with just changing things on the backend. That is kind of a problem, because they have more clients than just ourselves; there are people consuming that API who aren't just gitlab.com or our own personal client.
D: Is there any way around that? I know infrastructure doesn't like the idea of us just building Node servers flippantly, but could we offload this (in Justin's beautiful world, where we can do whatever we want) to a Node server or an endpoint that we write ourselves, which does that transformation, not in the browser, and then consume that endpoint for this particular client?
B: Something to keep in mind is that what we're identifying, by not solving it all the way upstream, is that the data needs to be transformed. We can solve it at the client level, or yeah, we could totally have proxies that do things, and quote-unquote data warehouses, whatever. But the downside is that that compute we pay for; the client's compute we don't pay for. And ideally, with a lot of frontend applications, it's nice when the client gets to handle all this.
B: Hopefully it's a good user experience, but we don't have to pay for those CPU ticks. The problem isn't just isolated to the frontend; we could "solve" frontend performance by offloading it to another problem area. So I would suggest, and I think it goes back to the adage that good algorithms beat fast computers any day: with a logarithmic algorithm versus an exponential one, I could run the logarithmic one on a stinky Raspberry Pi.
E: Can I make a note on what Paul just said, on the point that we pay for things in the backend. It's true, we do pay in dollars for things on the backend, but the MR diffs, the MR stuff, is something we want to remain lovable as a product, and if we're not paying by putting something on the frontend, we are actually still paying a little bit somewhere.
A: So I feel like we can move on to other topics, because that is definitely an unanswered question: how and where we're going to be doing the transformation. But I feel like the overall consensus is that we definitely transform the data; we'll get to the bottom of that later. And please update the notes there if they're not complete, Paul and Thomas. I just wrote your point down there, Justin, and we're ten minutes from the end. Go ahead.
D: The next one is more my style, and this is good timing for it. I want to try and wrap this discussion up with more actionable, baby-step items, and it still does seem like attacking the state management is the first real step we have: untangling that mess. And that might help inform what we do with the rest of the merge request structure.
D: So yeah, I still really like Thomas's flat structure idea. I think it makes things a lot easier to reason about. I don't think we need a bunch of getters to make that work; I think it actually gets us away from a bunch of the getters we have now. If we transform the data into a flat structure when we get it, it's much easier to pull, because then inside of a component, when I want to get a diff by ID, it's already keyed by the ID and I'm just grabbing it.
A: Yeah, these three steps are kind of the overarching topics, so it definitely feels like a potential way forward. Just so you understand, my idea is to grab all the takeaways and shape them into sub-epics or sub-issues out of the topic we have right now. Then we can drill down on each one of those, so we can start scheduling deliverables and have a plan going into the future. So a lot of this still requires further discussion.
A: For example: what are the actual extractions of the state from the components that you're talking about? How are you going to start doing that? I don't think we'll end up with just a couple of deliverables at the end of this; we'll have several of these, I would say more than a dozen for sure, several dozen actually. So out of this, I think we'll definitely be using baby steps.
A: By the time we're done with this, a significant amount of time will have gone by. Just so you understand my expectation for this topic: it's probably going to be the largest one we have this year in Source Code. So yeah, the baby steps make sense. We definitely need to drill down much more and turn each one of these into deliverables on the issues themselves. Does anyone have any more thoughts on the points we just raised?
A: Thanks, Paul. Yeah, we definitely saw the benefit of breaking epics into smaller issues; I think they're easier when we have, like you said, a very well-defined problem, like "rework this, follow this approach." And if we have a sort of map and plan, we can definitely leverage the community for it. Usually I haven't seen a lot of successful community contributions on refactors per se, like deep, complex refactors, but I feel like you have a great point.
A
If
we
prepare
it
well
and
if
we
document
it
well,
it's
a
not
expected
outcome
kind
of
like
the
recent
efforts,
like
the
local
view,
we
right
wait
like
we
have
input
which
files
were
dressing,
was
that
the
strategy
and
then
the
outcome
is
this
like
a
little
recipe?
We
can
definitely
have
the
benefit
of
that.
So
we'll
keep
that
in
mind.
If
you
have
a
big.
A: Good. So there are definitely deeper conversations to bring over to the issues. We're two minutes away from the end. I had one crazy idea that I keep bringing back, so I'll give a little bit of context on this server-side rendering topic. We've looked at the competition, and a lot of the difference we see is that, from the get-go, the perceived performance on their pages is sometimes immediately faster, because a lot of the content comes rendered in from the server. Now, we don't have the same structure; we don't have the same scale.
A
You
don't
have
same
solution,
but
what
I
see
is
that
would
bootstrapping
the
entire
UI
on
the
front
end
alone
and
I
keep
bringing
this
like.
If
Tim
Zalman
has
a
dream
up,
there
I
have
another
dream
which
is
to
have
like
the
app
can
render
from
the
server,
and
then
we
just
hydrate
with
the
interaction
on
top
of
it.
There
are
significant
challenges
to
this
that
we
have
identified
in
the
past,
but
I
wanted
to
since
we're
discussing
this
moment
in
time
like
rebuilding
the
or
just
improving
the
diffs
app.
Does
this
topic?
A: I keep hearing that it's hard to get Rails to render Vue apps, to render a full Vue app, and the topic always brings up "if we had a Node server, we could probably get this quicker." There's something we need to bear in mind, and it's been on my mind ever since I joined GitLab, which is that we're not like other companies with other products.
A
We
have
to
bear
in
mind
that
we
have
a
product
that
is
going
to
be
installed
in
self
hosted
instances
right,
so
the
management
of
the
application
at
the
self
hosted
instance
is
crucial
for
our
business.
So,
whatever
solution
we
need
to
so
whatever
solution
we
come
up
with
have
to
be.
With
that
in
mind,
that's
why
we've
bound
a
little
bit
by
the
number
of
services
we
shipped
and
introducing
a
new
dependency
on
the
stack
is
not
something
we
do
lightly.
A
Reward
at
the
end
that
justifies
the
effort.
We
can
probably
do
some
studies
right.
If
you
know
what
I
mean
like
get
the
infrastructure
to
potentially
study
a
potential
scenario.
How
that
will
look
like?
Is
it
actually
because
we're
we've
been
putting
real
time
WebSockets
because
of
the
scale?
But
we
have
this
real
time
working
group
and
we
have
a
member
here
on
the
call
Natalia.
So
what
that
means
is
that
it's
some
problems
are
just
insurmountable
until
we
have
space
and
justification
to
go,
find
them
so
I
want
you
to
keep
that
in
mind.
A
I,
don't
think
we
can
put
pull
server-side
rendering
within
a
month
for
sure,
but
if,
as
we
go
on
that
journey
on
this
journey,
it's
something
that
we
need
to
bear
in
mind
that
if
we
have
a
good
case,
we
can
get
a
thread
to
explore
this
into
the
future,
which,
if
we
have
to
prepare
the
code
of
the
that
we're
building
right
now
till
later
be
server-side
rendered
something
that
we
can
bear
in
mind.
So
yeah
we're
at
we're
over
time,
I'm,
sorry,
thoughts.
We
can
drop
the
thoughts
in
the
in
the
document.
A
So
I'll
as
a
wrap
up,
thank
you
so
much
for
your
participation
on
the
call.
This
has
been
extremely
useful,
so
I'm
gonna
be
putting
all
of
this
into
issues
in
epics
and
then
eventually,
I'm
I'm,
not
gonna,
schedule
a
call
for
next
week
right
away
because
gonna
be
because
the
beginning
of
the
milestone
and
everything
but
potentially
we'll
have
some
more
of
these
sessions
and
I'll
invite
you
all
again
to
jump
over
at
this.
So
we
can
break
it
down
potentially
more
topical
things
like
one
four.
A
Stick
management
word
for
transforming
whatever,
but
I'll
keep
you
posted
on
what
comes
down
the
pipeline
in
terms
of
scheduling
right
now,
I
just
have
to
thank
you
all
for
joining
the
call
and
once
I
have
issues
and
epics
builds.
I
will
just
flood
you
all
over
slack,
so
you
can
jump
on
it
and
have
a
discussion
because
discussion
is
not
over.
We
still
have
to
drill
down
on
these
topics.
So
yeah,
that's
it.
Thank
you.
Everyone,
any
parting
thoughts
comments.