From YouTube: 2021-03-24 Code Review Weekly Sync (PM+PD+Eng)
A
First up on the list is the 13.11 database work. Is there anything we need to go over, Michelle, or do we just update some issues and we're good?
B
Yeah, I will add a link to the issue that I'm using to track it — let me just add a note to do that, since I can't find it.
B
A lot of issues from the release are definitely at risk, but I really don't want to evaluate for sure until maybe Friday or Monday, until we get a little farther down. But everything is looking really good. As of yesterday we had 11 issues; all of those are assigned now, and some of them are actually getting closed today and tomorrow. So, very positive.
C
So far we are taking a look at some of our ideas and things we can do to unload the pressure on the database side. Of course we have deep rework that we could do, but we're looking at the quicker options. Nothing has come up yet, but the whole team has been briefed: hey, if you see anything that we can work on, jump at it and then we'll just prioritize. So far nothing has come back as worth pursuing.
C
Phil was looking at one particular case with the autocomplete user lookups, but that turned out a little empty, just because the codebase is unfathomably complex, and we haven't found anything meaningful worth pursuing that would affect the deliverables. But if something does, I'll let you know.
C
There might be — I haven't identified specific cases. I have warned them that the backend is being put under this pressure, to prioritize all the work that we can do independently, but I don't have any list of issues that might be at risk yet. We'll have a better clue by Monday; after Michelle provides some feedback, we'll probably know for sure.
A
Yeah, I think I'm fine with that one moving on like that; it seems understandable. I also gave the telemetry folks a heads-up on the metrics dictionary thing they had asked us to do — that probably would not happen in 13.11 because of this — so they're aware of that one as well, since that was an external group that asked. Everything else I've seen seems fine. Cool.
D
I noted that it'll slip because it has a hard backend blocker, but Gary is so close on his backend stuff. If he just has an hour of time, I think he can do it — I'm not sure how busy he is on DB stuff. So it's really close.
B
I'll check into this one, because I looked into it last Friday or so and it seemed like it was just a day or two away from being merged.
A
Yeah, well, I really appreciate you staying on top of all of this, Michelle, and rallying the troops. You've done an awesome job of that, and also of communicating back to everyone else what the impact is.
A
So it is very much appreciated from my side, and hopefully we get through this and the engineers still like us and still want to work on stuff — we'll see. Next one on the list is performance. I'm bringing this up not in relation to the database stuff, but because it came up in the Hacker News release post thread that people were complaining about merge request performance, and it was very specific to large MR performance. Again, we hear the word "large" — I'd say we don't have a good definition for what "large" is yet, and we're trying to get people to narrow down what they find large — but it came up there.
A
It's come up in both of those surveys we recently ran on merge request problems. In the internal survey, GitLab engineers said their number one problem with merge requests was large merge requests, even though we're supposed to have small merge requests. And externally, that's also the number one problem so far in the data we've gotten back.
A
So it's clear that there's an issue here, and it's something that users see across the board. We know that it's both UX and engineering, and we need to work on that. One of the things from the product design side we discussed yesterday:
A
The next thing we were going to go work on was tracking status in an MR. Some of the work that Thomas is doing right now for the viewed pieces is going to lay the foundation to start displaying which files you viewed and to give that information to other people. That way, every time you visited an MR, you would know: you've reviewed these files, you haven't reviewed those files, or these files have changed and these haven't. So that was going to be next up.
A
From the engineering side, we have endpoint issues, but those are monumental and don't always get us where we need to get. I think we clearly need to go focus on this as part of the UX stuff, but also separately, and so I just wanted to toss that out there as a "hey."
A
Oh, and this point also came up in the product key meeting, and there's a timestamp in the recording, so you can see Sid asking about it there. So it is on the radar, and I got asked about it in my one-on-one yesterday. I would just expect it to be top of mind for everyone.
C
Yeah, thanks Kai — linking to that particular time was useful, so we could totally catch that. This has happened throughout time; it will always be an ongoing struggle for us. I think the challenge is balancing showing all the files expanded — which is what users want — against the limitations of what the browser can do. There are also some inefficiencies in the code, particularly on the frontend, that we are aware of, and some of them come from the way Vue itself works.
C
That's part of the work that Phil is doing this milestone: making better use of one of the browser's features, requestIdleCallback, to prevent the browser from blocking. All of those are, like you said, pursuits that might not get us to where we need to be, but we'll need to iterate and follow up on them. The one thing I wanted to bring up here is one of the things that we have been discussing as a potential.
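(For reference, the yielding pattern described above — doing one slice of rendering work per browser idle period instead of blocking the main thread — can be sketched roughly like this. The `chunkFiles` and `renderInIdleSlices` names are illustrative, not the actual GitLab implementation.)

```javascript
// Split a list of files into fixed-size chunks so each chunk can be
// rendered in one idle slice. Pure helper, easy to unit test.
function chunkFiles(files, size) {
  const chunks = [];
  for (let i = 0; i < files.length; i += size) {
    chunks.push(files.slice(i, i + size));
  }
  return chunks;
}

// Fall back to setTimeout when requestIdleCallback is unavailable
// (e.g. Node or older browsers), passing a deadline-like object.
const requestIdle =
  typeof requestIdleCallback === 'function'
    ? requestIdleCallback
    : (cb) => setTimeout(() => cb({ timeRemaining: () => 50 }), 0);

// Render one chunk per idle callback so a long diff list never
// blocks the main thread for the whole render.
function renderInIdleSlices(files, renderFile, chunkSize = 10) {
  const chunks = chunkFiles(files, chunkSize);
  function step() {
    const chunk = chunks.shift();
    if (!chunk) return;
    chunk.forEach(renderFile);
    requestIdle(step);
  }
  requestIdle(step);
}
```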
C
I think this topic overlaps a lot between frontend and UX, so I agree there. But we were just discussing one thing: what's stopping us from doing virtual scrolling on the merge request diffs? The native search question immediately pops up: is this something that we can live without, and if not, how would we even build such a feature? That's what Google Docs does — if you go to the Google Doc of the agenda and do Command-F, it will have not a native search but a Google Docs search on the document.
C
We would have to rebuild this ourselves to search the MR — "find in this MR" — and the search wouldn't be trivial. We'd probably have to build some index on the backend that we could quickly look up instances in. And do we just search code, or do we search code and discussions? There's a bunch of sources of data to cover, which makes it non-trivial.
C
However, if we ever want to get to a place where you can scroll through a 50,000-file MR without breaking a sweat, this will have to be part of the solution, though there are challenges there. So that's one of the things on my mind that we can pick up. The other is making the page static, which benefits no one, but yeah — I feel like there's room to grow iteratively, but we'll probably have to do some more disruptive work like this. Anyway, those are my thoughts.
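(The virtual scrolling idea discussed here mounts only the rows near the viewport and replaces everything else with spacer height — which is exactly why native Ctrl+F stops working. The windowing math itself is small; a sketch, assuming fixed row heights, which real diffs don't have — that's part of the challenge:)

```javascript
// Given the scroll position, compute which rows should actually be in
// the DOM. Everything outside [start, end) is represented only by
// spacer height, so the browser's native find can't see it.
function visibleRange(scrollTop, viewportHeight, rowHeight, rowCount, overscan = 5) {
  const first = Math.floor(scrollTop / rowHeight);
  const visible = Math.ceil(viewportHeight / rowHeight);
  const start = Math.max(0, first - overscan);
  const end = Math.min(rowCount, first + visible + overscan);
  return {
    start,
    end,
    topSpacer: start * rowHeight,
    bottomSpacer: (rowCount - end) * rowHeight,
  };
}
```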
E
Yeah, I agree: there's a lot that we could try to do, and clever things for us to work on. But I don't know — when I look at GitHub, for example, and very large pull requests there, it just loads much faster, and they could be doing something crazy.
E
But yeah, I think most of all it's just that what you receive is already there — it's not lazy loading anything — whereas my feeling as a user is that we not only lazy load a lot of things, but are doing that more and more.
E
And yeah, one of the things I wanted to say about the large merge requests: what we've noticed in the feedback we've been getting in the surveys is that people usually attribute, or characterize, large MRs by having a lot of files or a lot of changed lines. I think it's difficult for users to disassociate the concept of merge requests that are big from merge requests that are slow, right? So it's hard for them to say: hey, is this because it has a lot of commits or a lot of files, or is it just slow?
C
Yeah, that wasn't me, but yeah, that's the one. On the topic of the experience on GitHub: this has come up — Cetus called it out specifically — whenever we bring up the topic of server-side rendering. We investigated, and there's a proof of concept that Phil built that makes it work so that we can server-side render Vue apps, but there are still challenges.
C
What drove us away from loading things immediately is that you now delay the time to first byte — the time at which the page starts being served from the backend — because you have to look up all of that pre-rendered structure of the app, including the data. What they do is pretty impressive. I don't think — and I've said this on calls, and I've gotten some flak for it — I don't think we'll ever get to GitHub's performance levels until we do server-side rendering. I still believe that. I don't think it's an easy accomplishment.
C
The
problem
with
doing
server
side,
rendering
right
now
is
that
it
requires
us
to
ship
node,
including
in
our
stack,
which
is
another
component,
and
this
goes
into
adding
more
resources.
People
that
are
using
self-hosted
instances
will
have
to
have
another
addition
of
resources
in
the
in
the
requirements
of
gitlab.
C
So
there's
a
bunch
of
roadblocks
there,
but
if
we
ever
want
to
get
meaningful
change
like
quality,
like
quantum
leap
quality
of
life
improvements,
I
personally
don't
believe
we
can
get
there
without
server-side
rendering
because
technically,
it's
far
less
costly
on
the
front
than
to
do
hydration
after
this
after
the
content
has
been
searched
from
the
back
end
and
we
just
hook
up
the
view
app
onto
the
page
after
it's
been
shown
to
the
user,
making
it
interactive
yeah,
that's
the
one.
I
want
to
say
seriously
rendering.
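(As a rough illustration of the server-side-rendering-plus-hydration split being described — not the actual Vue SSR proof of concept Phil built — the server sends finished HTML the browser can paint immediately, and the client only attaches behavior to markup that is already on screen. The function names and markup are made up for the sketch.)

```javascript
// Server side: render the diff data straight to an HTML string. The
// browser can paint this before any JavaScript has run.
function renderDiffHtml(files) {
  const items = files
    .map((f) => `<li class="diff-file" data-path="${f.path}">${f.path} (+${f.added} -${f.removed})</li>`)
    .join('');
  return `<ul id="diff-list">${items}</ul>`;
}

// Client side: "hydration" here just means wiring event handlers onto
// the server-rendered nodes instead of re-creating the whole DOM.
function hydrateDiffList(root, onFileClick) {
  root.querySelectorAll('.diff-file').forEach((el) => {
    el.addEventListener('click', () => onFileClick(el.dataset.path));
  });
}
```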
A
Do we know what GitHub is doing? Do we know what their tech stack is and what they're doing?
C
Not specifically — I haven't done a lot of research on their tech stack in particular, but I know it's server-side rendered and I know it's fast, so there's probably a ton of caching happening behind the scenes. I don't know much else.
D
I don't really know what their frontend stack is. I know they're also Rails based, so they have a similar backend stack, but I don't know what their frontend is. I know they use a lot of web components — custom elements — which makes me think they're not using React, because React and those things don't go together very well. So maybe they have some other frontend like us, like Vue, maybe — I honestly don't know what the frontend is.
B
Carrie knows. It is Rails, but the diffs, I think, are using a different language. It's not Go — I can't remember what it is. Carrie knows what it is. Haskell — I think it's Haskell.
A
Is diff generation our biggest bottleneck in the merge request, or is that a lot better than it used to be? Or is it a thing that we should also be thinking about — how we generate the diffs to begin with?
B
Yes and no — I think the answer is "it depends." What we saw with the merge head diff (the new one), for example: the conflicts are really difficult to do, and they take a lot of time. Sometimes it's not like that, though. So from the backend perspective, I think there are a lot of different things going on, and I would love to look into all of them.
C
Yeah, I smell a spike around here, so we can identify what the roadblocks are, because I've always historically heard about how costly it is to render the diffs on the backend. Then there's also the bridge between Rails and Gitaly, and all of that needs to be heavily cached — but since it's such a fast-moving target, the cache becomes stale very quickly. So there are challenges there, but it needs to be thought through if we start pursuing server-side rendering as a goal.
C
All of these things will have to fall into place, and as we peel back the layers we'll start addressing the diffs and all of those things. But the goal should be that we can present things right the first time — the page is served fully assembled. Then again, if we're not able to ship Node with it, all of that work is moot.
C
It's a good segue to Michelle's point about a shared OKR, because I've had discussions with the performance team where they bring up topics like: this merge request from the 10k reference architecture is taking too long. And then, what's the limit? It shouldn't be the same performance expectation as for a small MR.
C
Mr
users,
don't
have
the
same
expectation,
so
metrics
shouldn't
have
the
same
expectation,
it's
kind
of
like
when
we
talked
about
having
2.5
seconds
of
lcpe
across
the
board.
Well,
if
I'm
rendering
a
merge
request
with
5000
files
changed,
is
that
expectation
still
the
same
from
the
user's
perspective,
I'll
theorize,
not,
I
think
a
shadow
kr
was
that
would
definitely
be
helpful
in
in
aligning
all
of
those
pieces
and
starts
doing
the
spikes
and
doing
investigations
to
get
things
in
place.
C
Yeah, I agree: categorize what large MRs are, and then reduce.
A
Having an OKR sort of implies we know how to get there, and I don't think we know how to get there yet — that's sort of what I'm hearing in some of this. We've got ideas and things that we could do, and we know we could make some weird user-experience changes. I say "weird" because, for example, switching to automatic single-file mode is potentially workflow-damaging for a lot of people, but it masks a performance problem.
A
Before we get to a shared OKR: what are the things that we'd want to go do? If we carved out time — and I don't know that we get to carve out time in 13.12, because we've still got some things we want to do — but let's say we carve out time in a future milestone: what would we be carving it out to do? What would be the things that we would want to see?
B
I will take this question, because I don't necessarily think we need to know how we're going to get there for the OKR. I think part of the progress in that OKR is trying to figure out how to get there. So if this were a spike issue or a discovery issue, the outcome could be: "I have no idea what we're going to do, but I do know this."
B
But I don't think that should hold us back. We just need to look into it and set a goal. It prompts us to put this at top of mind.
C
Yeah, I can add that from the frontend we have a couple of, like you said, ideas. Justin has had a theory about rendering a very dumb, static, non-interactive version of the diffs and then, as the user gets close, hydrating those and building the whole interaction layer on top of it.
C
That sounds good in theory, but we've done a couple of experiments, and the hydration is very costly when the user is scrolling past 50 files very quickly and you then have to hydrate all 50. It sounds good in theory, but we haven't done the investigation deeply enough to be confident that it will work. That's where we're at with the frontend: it's an inherently heavy application. There's also Vue 3 on the horizon, which theoretically makes things faster.
C
If we do it properly — I don't have a time frame for that, but it's upcoming. So yeah, it's a couple of ideas; I'm not sure.
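(The static-then-hydrate idea described here usually ends up needing a dedup step: as the user scrolls quickly, the same files keep entering the viewport, and hydrating a file twice just wastes the budget. A sketch of that bookkeeping — the names are illustrative; in the browser an IntersectionObserver would feed `take` with the indices of files entering the viewport:)

```javascript
// Track which files have already been hydrated so fast scrolling never
// hydrates the same file twice.
function makeHydrationTracker() {
  const done = new Set();
  return {
    // Given the file indices currently near the viewport, return only
    // those not yet hydrated, and mark them as done.
    take(nearViewport) {
      const todo = nearViewport.filter((i) => !done.has(i));
      todo.forEach((i) => done.add(i));
      return todo;
    },
  };
}
```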
A
We're bumping up against time. The last question I want to ask: the endpoint issues that we do have — we've got one for discussions and we've got one for... I don't remember what the other one is. Are those useful and valuable, and would they make a meaningful impact on performance if we were to spend time there? Or is thinking about this on an endpoint-by-endpoint basis sort of the wrong way to go?
C
My gut feeling — and it's just my gut feeling — is that the complaints we saw on Hacker News are not about the time it takes to render the page so much as about interaction with the page on large MRs. I think the merge request diffs have the most sophisticated data loading in the whole of GitLab — it's batched — while discussions are still one request, so we could do the batching there; that's what we're trying to do with the discussions. It's mind-bogglingly challenging.
C
The
most
is
when
you
open
an
mr
and
it's
it's
using
one
gigabyte
of
memory
of
your
browser
and
you
try
to
interact
with
things
and
things
get
sluggish.
That's
what
I
think
is
the
most
place,
the
the
place
that
needs
the
most
attention,
the
data
loading
from
the
endpoints.
We
can
make
them
faster
sure,
but
we
have
done
extensive
work
on
that
side
as
well.
C
The
other
part
we
have,
but
it's
challenging
throughout
so
yeah,
it's
necessary
tldr.
It's
necessary,
but
I
don't
think
we
should
focus
all
of
our
attention
there.
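(The batched discussions loading mentioned above could look, in outline, like this — the paging shape and callback names are made up for illustration, not the actual GitLab endpoint:)

```javascript
// Fetch discussions page by page instead of in one giant request, so
// the first batch can render while later batches are still in flight.
// `fetchPage(page)` resolves to an array (empty when exhausted), and
// `onBatch` receives each non-empty batch as it arrives.
async function fetchDiscussionsInBatches(fetchPage, onBatch) {
  let page = 1;
  let batch;
  do {
    batch = await fetchPage(page);
    if (batch.length) onBatch(batch);
    page += 1;
  } while (batch.length);
}
```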
B
I agree with that too. I hypothesize that that discussions issue will probably be a great benefit, but what I wrote down to look at was nothing in particular — rather the top five pain points — because I don't know if their problem is that diffs are responding slowly; I kind of don't think that any of those are the problems. I think it's exactly what Andre said, but I don't know that, so I think we look at it from that perspective.
A
All right, I have one more: if you could start over today — if possible — what if, instead of kicking you into single-file mode when we thought the MR was large, we built a sort of different code review experience for large MRs, where the sidebars were gone, along with all of the other things that happen in a merge request that require data and require us to go render a bunch of things? What if lots of things went away, or what if there's another option there?
E
Yeah — and I think what I was going to say is also: not only if we do it from scratch. We know we have ideas about what the problems are, and we know that maybe we could do this, and we have this idea for batch loading, and this and that.
E
But if we had to build it all from scratch, can we articulate a vision of the ideal state? Then we can iterate towards that ideal state with what we have today, because going ahead and building the ideal thing is probably not the most efficient path. But from what everyone is saying, I don't know if just going along and fixing things one by one — "oh, this is a great idea, let's put this in; oh, this is ready" — gets us there.
C
Can you hear me now? So you lost a very clever sentence. I was saying that, from the technical side, we are considering different things. We would do things differently: considering progressive web applications, leveraging GraphQL, maybe even throwing in some offline-first methodology to make things faster from the get-go.
C
That's part of the research that Thomas is doing — it's related to that — but it's still iteration: not replacing, not starting from scratch. Maybe we'd even throw in a different kind of experience, more like a Web IDE kind of thing. That's what's on our minds to do a little bit differently. And Thomas added in text that the virtual DOM is a memory and performance killer, suggesting no Vue — yeah, we're definitely considering all options, and that would be one of them: getting rid of the virtual DOM.
A
How long would it take to build a new one from scratch? Could you, in a milestone, proof-of-concept an entire new merge request page — say, with two backend engineers and two frontend engineers — that's super basic and shows just diffs, and maybe allows comments on diffs?
D
Frankly, I think that we could do that in a small number of days if we really limited it: we only show inline — no parallel swapping, you know, no side-by-side view, no commenting. The diffs are given to us by the backend — the backend just says "here's the HTML" — so it's kind of just: get all the files, render all the lines the backend gives us.
D
There isn't a ton. Now, the technology decisions — that's the real question: how do we do this the right way? But...
C
What Thomas is describing is probably reworking the diffs app itself — it would still live inside the merge request page, still inside the Changes tab. For starting from scratch, we'd probably start with a different architecture on the page itself to make it fully different, and that would take more than a milestone.
A
Yeah, we're well timed. Keep it in mind — I'd say this is a top-of-mind thing — and then we'll try to figure out a plan for when we can fit something in. But I would be open to it. Okay.