Description
In this session, we discuss ways to improve the speed of Elasticsearch specs. We also discuss approaches to make a script more deterministic, like the test environment.
C
Okay, so I have this dream to move all of the specs that deal with Elasticsearch away from creating and deleting the Elasticsearch indexes every single time. I think it runs around the context, and so anytime you have things at lower levels it gets really expensive, deleting the indexes and recreating them.
C
The search team tests used to be among the top three longest-running spec files. It used to be ours — I don't think it's ours anymore — but, I don't know how long ago this MR was, about a year ago. This is just to show the before and after of these four specs where this was originally introduced, and it made a really significant impact on the times, so I thought maybe it would be fun to, yeah, do that.
C
Yeah, ours are bad, and these in particular are the ones that were changed over. They were testing permissions, almost like permission tables, yeah.
C
So it was just constantly deleting the indexes and recreating them over and over and over again, but I don't see any reason why we have to do that. Yeah.
C
So my computer's freezing — hold on, let me let it try to catch up. Okay, so we have this config that's set up for—
C
So we have this config set up called :elastic — that's the old one — and around every example it does a setup and a teardown. The setup clears some tracking, deletes all the indexes, and creates an empty index for a few things.
C
We have migrations — it marks all those as completed — and then it refreshes Elasticsearch. Then the teardown deletes the indexes and clears the tracking. This is cool, but I think it's adding a lot of time.
C
So we have a second one that was created, which just deletes all of the data in the index.
C
Instead of creating and deleting all the— we have, I think, five or six indexes right now, and it's possibly going to get more as we add more data to Elasticsearch. We used to have one; we've been working on splitting it up into separate ones. That work will eventually be done, so we should have somewhere around seven, but if we decide to add projects or groups or users to the index, those will now be separate indexes.
C
Yeah, I don't think it has a concept like that, but it is a good idea to maybe look into that. I feel like most of the places I've worked at don't have a test suite as comprehensive as GitLab's, which is nice and what I prefer. I like tests, because I don't have to worry that my stuff's gonna—
C
The ones that deal with ours — yes, they are, yeah. So it was a similar— that's—
C
Yeah, this is for integration tests and the unit tests, and probably feature tests. So my—
C
So I did— I remember how I timed this: I think I just timed it locally, but about a year ago the times for these specs, like—
C
This is the before, running the entire file, and this is the after. I think this was on my computer, so take that with a grain of salt — I have one of the 2019s — but I think the numbers were different enough that I felt like it was going to improve in the pipelines.
C
Time, yeah. So your question makes me think there are probably still cases where we may want to clear the indexes. We have some tests around creating the indexes, and so having them not be deleted, and not having a fresh Elasticsearch instance, might cause some problems. But when it's around just mashing data in there and making sure that the search is working the way it's supposed to work, it doesn't matter if the indexes are already there or not, yeah.
C
No — some of our data has parent joins set up, but we are moving away from that, and we're trying to denormalize the data so that anything that has a joined record, which should also be stored in Elasticsearch, is going to have that data copied. That's part of the work that we're doing to split all the data off of one index into multiples. So my thought— my thought—
A
—was— so, coming from working a little bit with MongoDB—
A
And my thought was: okay, we clear out the indexes, but we just insert all brand-new data every time. Are we really clearing— are we even able to really clear things out, just the way Elasticsearch works? I don't know. So I was— I'm just surprised that this would work, I think — but I also just don't know Elasticsearch much at all. I'm learning about Elasticsearch right now, so—
C
Well, that's why I thought it would be fun to do this, because I really wish that we could get more devs at least more comfortable with testing our MRs, because it requires— you have to have it set up in GDK, and then it has this whole other piece of Elasticsearch a lot of people don't know. But the delete— this is the MR that was originally introduced — you can just tell it to delete everything.
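The "tell it to delete everything" cleanup maps onto Elasticsearch's delete-by-query API with a match-all query. A minimal sketch of what such a per-example cleanup could look like — the helper name is an assumption, not GitLab's actual code, and a fake client stands in for a real Elasticsearch client:

```ruby
# Sketch of the cheaper cleanup strategy: instead of deleting and
# recreating every index around each example, wipe only the documents.
# `client` is anything responding to #delete_by_query, e.g. an
# Elasticsearch::Client from the elasticsearch-ruby gem.
def delete_all_documents!(client, indices)
  client.delete_by_query(
    index: indices.join(','),
    body: { query: { match_all: {} } }, # match every document
    conflicts: 'proceed',               # don't abort on version conflicts
    refresh: true                       # make the deletes visible immediately
  )
end

# A tiny fake client records the request the helper issues.
FakeClient = Struct.new(:last_request) do
  def delete_by_query(**args)
    self.last_request = args
  end
end

client = FakeClient.new
delete_all_documents!(client, %w[issues merge_requests])
```

Compared with a delete-and-recreate cycle, this leaves the index settings and mappings untouched, which is exactly why it is so much cheaper per example.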
C
—are almost like data rows — and yes, and I apologize.
C
You're right about the data being— but you can tell Elasticsearch to be strict. Mappings are the way you have your schema defined — it's the closest thing to a schema — and our mappings are locked down: you can't just add new data to an index—
C
That's
not
already
defined
in
like
the
mapping
or
the
schema
got
it
got
it.
I'm
gonna,
throw
an
error
we
have
it
set
to
like
oh
I,
forget
what
the
name
the
term
is,
but
it's
like
it's
locked
down.
You
can't
just
say:
oh
I.
Have
this
Json
document
I'm
gonna.
Add
this
new
field
like
it
doesn't
work
like
that.
You
would
have
to
recreate
the
whole
index
from
scratch
with
the
new
mapping
that
has
that
field
defined
got.
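The setting C can't recall is most likely Elasticsearch's `dynamic` mapping parameter: with `dynamic: strict`, indexing a document with an unmapped field is rejected instead of silently extending the schema. A sketch of what that behavior amounts to — the mapping fields are made up, and the validation function is an illustrative stand-in for the server-side check, not real Elasticsearch code:

```ruby
# A mapping with dynamic: strict -- the closest thing Elasticsearch has
# to a locked-down schema. Unknown fields are rejected, not added.
MAPPING = {
  dynamic: 'strict',
  properties: {
    title: { type: 'text' },
    project_id: { type: 'integer' }
  }
}.freeze

# Illustrative stand-in for the check Elasticsearch performs on indexing:
# any field not declared under `properties` raises instead of widening
# the mapping.
def validate_document!(mapping, doc)
  unknown = doc.keys - mapping[:properties].keys
  if unknown.any?
    raise ArgumentError, "strict_dynamic_mapping_exception: #{unknown.join(', ')}"
  end
  doc
end

validate_document!(MAPPING, { title: 'Speed up specs', project_id: 1 }) # accepted
```

This is why, as C says, adding a new field means recreating the index with a new mapping rather than just writing the field.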
C
It's like— yeah, the same thing with— I don't know how the database works in the tests, but the way that Elasticsearch is set up right now, it would be as if you completely deleted all the tables and recreated them every time you ran a test. Yeah, that's what's happening, and I think that — for not all of them, maybe — I feel like it has the potential to speed up any of the tests.
A
—rather than just setting up an unstructured data bucket. And what do you think is causing— what do you think?
B
This change is not deleting the indexes, so it is probably the creation of the indexes that slows it down. If you look at the two approaches, there's clear tracking, delete indexes, create empty indexes, create the migrations index — and all of that stuff is not happening.
B
Only if there's a test that actually cares about — that is affected by — the previous state of the index not being cleared; and if that is the case, you can still flag those as :elastic. ("Yeah, I think we did flag those as :elastic.") Why did those need it? What are they doing differently than the ones that use the delete by query? Oh—
A
We actually have a— oh, I've seen this— so long ago. We actually have some profiling thing built into—
B
I'm wondering if the index isn't getting set up in the first place. But maybe you could just do only that in this case — but not all of the tearing down — and then they would still all work.
C
This is— let me look at this in the code; it will be easier to trace. Refresh index— refresh index, I think it calls— this is an Elasticsearch term. When things get indexed to Elasticsearch they're not immediately available for search, but it has this refresh process that runs — I think it's usually every 30 seconds — that will flush all of the changes and make them available for search. But in tests—
C
We
call
it
manually
a
lot
of
the
time
to
make
sure
that
any
new
new
documents
like
are
visible
in
searches.
Otherwise
the
tests
would
like
fail,
and
it's
not
really
a
failure.
It's
just
like
the
way
that
elasticsearch
is
working.
C
There's a— oh yeah, see, that's where it's called. There's two methods that we've used: refresh index, and then ensure Elasticsearch index. This one is to make sure that anything that gets queued up for indexing in Redis gets processed by the workers that are called by the two services — the bookkeeping and initial bookkeeping service — and then it does the same refresh that says: okay, anything that was indexed now needs to be available to search. Can—
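The two helpers C describes can be sketched roughly like this; the method names, the client, and the queue-draining step are assumptions standing in for GitLab's actual test support code, and fakes are used to show the call order:

```ruby
# refresh_index!: documents indexed into Elasticsearch only become
# searchable after a refresh; tests trigger one manually instead of
# waiting for the periodic background refresh.
def refresh_index!(client, index)
  client.refresh(index: index)
end

# ensure_elasticsearch_index!: first drain whatever the bookkeeping
# services queued up in Redis so the workers actually index it, then
# refresh so the newly indexed documents are visible to searches.
def ensure_elasticsearch_index!(client, queue, index)
  queue.drain!
  refresh_index!(client, index)
end

# Fakes recording the order of operations the helpers produce.
calls = []
client = Object.new
client.define_singleton_method(:refresh) { |index:| calls << [:refresh, index] }
queue = Object.new
queue.define_singleton_method(:drain!) { calls << [:drain] }

ensure_elasticsearch_index!(client, queue, 'issues')
```

The ordering matters: refreshing before the queue is drained would still leave the newest documents invisible to searches.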
C
—like a— like a mode, like a test mode or something, I guess.
C
Jobs— people just didn't write tests — that's the fastest test, you know. That is awful. I was at— somebody told me that they—
C
—sad. Oh yeah, okay, the profile thing — that's good, I might do that. And then I think I could probably run it twice, just to see where the slowness is and what the impact is.
C
Should we— a question about Elasticsearch being almost either a test double, or maybe some sort of mode where we wouldn't have to do this refresh all the time? I don't know, like—
C
Let me back up, so I can say a little bit: I think what I was wondering, and what you all are asking me questions about, is what should we do when we're testing things that should be creating indexes? A lot of these specs are written with the understanding that it's a new instance of Elasticsearch, so I can open up this spec, just so we can look at one of these really quick.
C
The other one I know had failures was the rake task, which is used to create indexes — it's pretty much used to create and manage the Elasticsearch indexes outside of things. So yeah, this is one of the failures, which basically creates an index if it's not found. I'm wondering if stuff like this might need to have the old route — like the old way. I don't know.
B
More specifically — and also, it looks like it's not just deleting and recreating the indexes; the old one does a lot of other stuff. What would be interesting is which of those things is the one that's actually needed, and which of them are the slowest. Yeah — and have you used a benchmark in Ruby?
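Ruby's stdlib Benchmark module is enough to answer "which of the setup steps is slowest": wrap each step separately and compare. A generic sketch — the step names are placeholders for the real setup phases, and the lambdas here are dummy work, not the actual code:

```ruby
require 'benchmark'

# Time each (hypothetical) setup phase separately, so the report shows
# where the per-example cost actually comes from instead of one lump sum.
steps = {
  'clear tracking' => -> { 1_000.times { [] } },
  'delete indexes' => -> { 10_000.times { [] } },
  'create indexes' => -> { 100_000.times { [] } }
}

# Benchmark.measure yields a Benchmark::Tms; .real is wall-clock seconds.
report = steps.transform_values { |step| Benchmark.measure(&step).real }
report.each { |name, seconds| puts format('%-16s %.4fs', name, seconds) }
```

In a spec suite the same idea works inside the around-hook: measure each phase once, print the report, and decide which phases the cheaper config can drop.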
C
—you're aware where the time is being spent. I just know that as soon as we don't do all these things around it, it's much faster, yeah. But I— yeah, I think it was a good idea to maybe look at renaming, if I'm going to make this change — renaming :elastic to something like, almost like, an empty Elasticsearch instance, or empty cluster, or something like that, and then like a preserved— oh, I hate making it too much like—
C
This is cool. I might— I unfortunately have to go, because I'm in the working group meeting, and I keep missing the Monday meetings because of the holidays we've had, and the Family and Friends Day, yeah.
C
I really appreciate y'all walking through this with me. Even though we didn't do any coding, kind of talking through the benchmarking idea I think is going to be super helpful, because I feel like this is almost like a fun coding project for me. I mean, opportunities like this are kind of not so much something—
C
That's
prioritized
for
my
team
at
the
moment,
but
I
don't
know
I
kind
of
appreciate
watching
you
Paul,
like
kind
of
just
chunk
away
at
things
like
and
eventually
they
get
finished,
but
it's
not
always
like.
Oh
I'm
working
40
hours
on
this.
A
—so helpful for me to get a clearer understanding of what this must be doing. So, cool.
C
Thanks for sharing — I really appreciate it. If I think of anything fun on the MRs I might tag both of you, but it's probably going to be slow going. No—
A
Oh, hey Chad — do you wanna pair on something else? ("Sure.") Okay, I got something cool. ("That's great!")
B
Git— or the— I'm going to use GitLab Flavored Markdown snapshot examples. You want to have, across every run, stable data to run them against — the same group, the same project with the same name, the same URL — so when you snapshot them and people run them on a different environment, on a different machine, they're always going to be identical. There's ways to ignore the changes, but I did—
B
And these aren't even— what is supposed to be under a standard Rails fixture directory — these are something completely random that is not what, you know— But the problem is, it wants to create a repo, and these need to be run in dev mode: you just run it from the command line and it does its thing. I don't necessarily want it to have to use the test environment, yeah.
B
This is a command-line script — and, speaking of test speed, to iterate faster I use fast_spec_helper. So, for example, how fast—
B
There's one test at the end here, which is the only thing that actually does the shell-out to the back end and the shell-out to the front end and has them generate all of their stuff, which is the slow part — it's not insanely slow, but it takes, I don't know, 40 seconds or a minute — and that's all in a subshell; this render-static is just called from a subshell.
B
And you could even, you know, skip it if you want, but—
A
Could you share an example of one of the outputs that could vary across environments and runs and stuff?
B
Anything that— like the last— let's see. Yeah, there's not that many yet, but here's sort of an example, on footnotes. I hacked around this in a different way, by hard-coding the ID to 42 with the app.
A
Really? Like what — like for the ref, like a—
B
Yeah, footnotes— where's the example— here, in this one. Yeah, like a link to the wiki, which has the namespace, the name of the file, the path of the file — all those, yeah. So—
A
My— here's— this is my— so it sounds like you did something with the ID attribute already, and my thought was: maybe we could try tackling this by making all of the data consistent across runs, or we can sanitize our output.
B
I already have a way to do that, which is— for things that I can't do this for, there's this whole approach where you have to define regexes that pull out parts of the HTML output, and it has, you know, a numbered set of replacement regexes. But that's like— it's a pain, and you have to manually curate it, so I want to do that as few times as possible. Ideally I want to have the data be consistent.
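The regex-based sanitizing approach amounts to replacing the variable parts of the rendered HTML with stable placeholders before comparing snapshots. A minimal sketch — the patterns, attribute names, and placeholder tokens are made up for illustration, not taken from the real normalization config:

```ruby
# Each entry pulls a variable fragment out of the rendered HTML and
# replaces it with a stable token, so two runs with different record
# IDs or namespaces still produce byte-identical snapshots.
NORMALIZATIONS = {
  /data-project-id="\d+"/            => 'data-project-id="ID"',
  %r{href="/[\w-]+/[\w-]+/-/wikis/}  => 'href="/NAMESPACE/PROJECT/-/wikis/'
}.freeze

def normalize(html)
  NORMALIZATIONS.reduce(html) do |out, (pattern, replacement)|
    out.gsub(pattern, replacement)
  end
end

# Two runs with different IDs and paths normalize to the same string.
a = normalize('<a data-project-id="42" href="/group-7/proj-9/-/wikis/home">w</a>')
b = normalize('<a data-project-id="99" href="/other/thing/-/wikis/home">w</a>')
```

The trade-off B names is real: every new variable fragment needs another hand-curated pattern, which is why making the input data deterministic is the preferable fix.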
B
Yeah, I even documented it. If you look in here, and you look at normalization — there's a whole approach. These are all the sorts of things that can vary, yeah.
B
You can define regexes, and reuse regexes to do parts of them, and replace refs — but these need to be curated on a per-example basis, right.
A
So I'm saying: maybe we could just do an allow-method-to-receive kind of thing — so, doing this before we build it. We use these methods to build these paths. This—
B
—just create a dummy group and a project — and there's so many dependencies. A project has— it's got to have a repo and a path, and here they're apparently importing real ones, and you've got Sidekiq to stub out, and all of this stuff. So my meta question was: is there a standard way to— I just want to create a dummy group and a project; where would I look to see anything?
B
I mean, "seed" has a specific meaning in Rails — it's to seed the development database — but I'm saying, you know, in a more general sense: I just want to create a group and a project, and create them with the same data every time, and if they already exist, don't create them. Well—
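"Create them with the same data every time, and if they already exist, don't create them" is the find-or-create pattern — in ActiveRecord that would be find_or_create_by!. Outside Rails it can be sketched with a plain in-memory store; everything here (the Project struct, the store, the attribute names) is illustrative, not GitLab code:

```ruby
# Idempotent seeding: repeated runs return the same record instead of
# creating duplicates, which keeps snapshot inputs stable across runs.
Project = Struct.new(:id, :path, :name)

class ProjectStore
  def initialize
    @records = {}
    @next_id = 1
  end

  # Analogue of ActiveRecord's find_or_create_by!(path: ...): look up by
  # the unique path first, and only build a new record on a miss.
  def find_or_create_by!(path:, name:)
    @records[path] ||= begin
      record = Project.new(@next_id, path, name)
      @next_id += 1
      record
    end
  end
end

store  = ProjectStore.new
first  = store.find_or_create_by!(path: 'glfm_project', name: 'GLFM Project')
second = store.find_or_create_by!(path: 'glfm_project', name: 'GLFM Project')
```

Keying on a natural identifier like the path, rather than the auto-assigned ID, is what makes the seed deterministic across machines.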
B
—which I'll probably do, but it's like—
B
It wants to create— that's where it blows up: trying to validate that there's a repo. And I just tried to directly use, you know, the Rails ActiveRecord create commands to create it. The—
B
I'm not— right, so that's the path I was going down. All right, let's use it — that makes sense. And I'm like: is there a standardized, canonical way to do that if I just want a dummy one? Those ones are what I found, but you can see here, it's like the Gitaly client is trying to— yeah, your stuff and—
A
Yeah, it seems possible that there's a lot of baked-in assumptions if you're going that route, you know.
B
So we go back down that route — let's try it. So like, what does this do? How does this feel? Right — so yeah, this is what I was going to work on today. It's like: well, all right, I have my group up here, but if we went back and looked at what this was doing — this conditionally creates a group, and it's like—
B
—what I need? Why can't I just extract this out? Yeah, I stole this whole thing. Maybe that's what I need.
D
Hi — I'd try to perhaps look into the project factory. We also tend to create repositories in factories as well, so I—
D
Yeah, sure — but as far as I understand, you are looking to create a repository, right? And of course you wanted to avoid the create service, yeah.
D
I just wanted to point you at the possibility to look into a specific factory and look at what they are doing, and—
D
They are calling project.create_repository, if it's just about creating a repository without using the service. Well—
A
I can send you the link to the— I just found it, but you may have already done that.
D
I tend to use factories a lot, like when testing in my Rails console, so it's already there — so I just use FactoryBot for that. This—
A
—also implies that you could even use the spying behavior outside the test environment.
D
Yeah, yeah, sure. So for RSpec — I think all the mocking functionality is in a different repo from RSpec core, and I think you can use it outside of RSpec itself. You can just load the library, and it's very likely you need to include some magic module in order to have it enabled globally, so you can do stub or allow-to stuff. But you can totally do it.
B
Those are good ideas, and what Paul and I were discussing before— honestly, it may be better to force this to happen in the Rails test environment, because then I can just blow away the database. And because the other thing — in addition to, like, these are all the things that could be random — is I'm trying to avoid having to do the regexes.
B
So all of these dependencies — the test database is going to be there, but like Sidekiq in the test environment, Elasticsearch, whatever else is needed during the course of this setup and the API calls — I think those are all started, probably, in the test's, you know, spec helper; and not necessarily— or they're just guaranteed to be there by the GDK running.
D
We have non-ActiveRecord factories, so you could, in theory — and in practice — use factories with fast_spec_helper, but for that you need to load FactoryBot manually, and also the models used, which again should not use any kind of Rails code. So if it's just some in-memory models, then you can use it.
D
Right, right — but you can actually try to only require the pieces you really need, and it would still be faster than just requiring Rails and all that comes with Rails, like ActiveSupport and all this stuff. So—
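D's point — require only the pieces you need, skip booting the framework — can be shown with a model that has no Rails dependencies at all. The model below is a made-up stand-in, written to a tempfile only so the example is self-contained; in a real suite it would be a file under lib/ or app/ that a fast helper requires directly:

```ruby
require 'tempfile'

# A framework-free PORO model: loading and exercising it needs only plain
# Ruby, so a spec over it can skip `require 'rails'` entirely.
model_file = Tempfile.new(['work_item', '.rb'])
model_file.write(<<~'RUBY')
  class WorkItem
    attr_reader :title

    def initialize(title)
      @title = title
    end

    # Slug-style reference derived from the title.
    def reference
      "#" + title.downcase.tr(' ', '-')
    end
  end
RUBY
model_file.close

require model_file.path # only this file -- no Rails, no full spec_helper

item = WorkItem.new('Speed Up Specs')
```

The load-time saving comes entirely from what is *not* required; the moment the model reaches for ActiveRecord or ActiveSupport, the fast-helper approach stops working, which is the constraint D describes.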
B
Big picture, here is this: this is the GitLab Flavored Markdown specification, and it sort of serves many purposes — it's testing that our Markdown is always rendered the way it's supposed to be on the front end and the back end, but also there's this characterization, or Golden Master, testing behavior to it.
B
You may have seen some of these, but it's all of the— for every possible Markdown example — and there's going to be, you know, between 500 and a thousand — it renders the static, or back-end, and front-end HTML. But the problem is the API calls, and especially even for different contexts: a single API call can be in a wiki context, it could be in a— like, if it's on here, no—
B
But ideally it's got always exactly the same path, exactly the same name, so the HTML it renders won't be variable — because there is a way here that I can deal with variability that's built in, which is this normalizations approach: for every example that does that, you have to specify a regex, and what the replacement values are for that regex, which is a lot of duplication. You can dry it up by using YAML anchors, but ideally— this is all manually created; we don't want to do any of that.
D
And you want to do this in our spec suite, or, like, manually, or—
B
In order for it to run fast, all of the rendering is done by shelling out: calling, for the back end, this Rails runner, which says, hey, read a file that's got all the Markdown examples they need to render, and iterate over all of them—
B
—however many — 700, 800 — we end up with, render them out and dump that to a file, and then the script picks it up. It's the same approach for the front end too, except we shell out to call yarn jest for all the front-end ones. Okay — so, interestingly, on the front end, Jest is really good at completely setting up an environment, just like webpack would.
D
Okay, so basically you want to get rid of all the normalization by being able to—
B
—something really— it'll be most of these that we're getting rid of. This one— I think the only random value was the footnotes, and I fixed that with an environment variable that I plug in to hard-code the value of the footnote. So it's really these, which come from, you know, the group, the namespace, the file names, the wiki name, and stuff like that — attributes of the records themselves.
D
—recognize it's fine— it also means the ID, so it potentially conflicts with your existing local IDs, because usually, if you set up GDK, for example, you get like 20 projects created, right? Unless you are doing, like, a higher project ratio — it takes an hour to create, or whatever. So—
B
Yeah, okay — to not recreate them, it's like a find-or-create. And as far as the IDs, I have a note about that here. What I was gonna do is set the primary key to an arbitrarily high number — just set it.
B
You know, the next primary key is ten thousand. On a previous project that I worked on, we relied heavily on Rails fixtures — instead of, you know, FactoryBot creating over and over all the time, we relied on Rails fixtures, which most people don't like to do, because they have to curate the YAML files. But what we did is use object mothers to generate the YAML files, and it was a very similar problem, in that you wanted them to always be consistent. So the way we did it — that was MySQL, but you can basically hard-code what you want the next auto-increment for a primary key to be, so I'm assuming there's got to be a way to do that in Postgres. That's what I was planning on doing, to force the ID to be whatever. Okay.
D
So these were very stable — but it was just, like, record IDs as the—
B
Yeah, which is what I'm doing here — I got, yeah, the names for the group, for the path. That's the plan, so I think that's a good idea. I think what I came away with is: I probably should use the test environment, just to have a clean slate; then maybe I don't have to worry about setting the IDs, and—
A
I totally understand. Yes — because many times when I'm in a meeting, and it's like, oh, it's been five minutes, no one's showing up, so I close it out — then someone shows up, like, 20 minutes later.