Description
Rocio Delgado will dig into how GitHub creates a culture of performance to keep delivering a product that is fast and highly available. We'll discuss common pitfalls for database performance degradation, monitoring tools to help identify problems, and common solutions, including MySQL indexing optimization.
About GitHub Universe:
GitHub Universe is a two-day conference dedicated to the creativity and curiosity of the largest software community in the world. Sessions cover topics from team culture to open source software across industries and technologies.
For more information on GitHub Universe, check the website:
https://githubuniverse.com
So, my name is Rocio. Hello! I'm an engineering manager on the platform team at GitHub. I've been working here for about 16 months. I'm from Mexico and now live in Brooklyn. You know, GitHub is a remote company, so a lot of us work from home. I've been really interested in databases and data structures since I was in college; that was the class I most looked forward to, and to this day I work a lot with databases and backend engineering. You can find me on Twitter and on GitHub.
So, databases have been around for a long time. In 1970, the ACM published a paper called "A Relational Model of Data for Large Shared Data Banks" by Edgar F. Codd, an IBM researcher. In it he proposed a relational data model in which the schema of the data is decoupled from physical storage.
This is the principle behind modern databases, and it's just fascinating to me. Since then the market has seen very expensive databases, open source databases, NoSQL, NewSQL, time series databases, graph databases, all sorts. Data is always at the center of application development. At GitHub we said yes to SQL, MySQL specifically, and we use a lot of MySQL as our data storage. I'm going to show why and how we do that.
So the idea that NoSQL and SQL databases are in opposition to each other is not necessarily true; I don't agree with that. It's really not one system fits all. It's whatever works for you, and whether you have enough confidence in that technology to decide to move forward with it. At GitHub we have the operational experience and the team to push the boundaries a little and rely on MySQL as our backend. To give you an idea of the size of the database:
Here are some numbers. You can also see the Octoverse stats that Chris announced earlier, but these are rough numbers of what's stored in our database: we have 16 million users, 125 million issues, 38 million repositories and about 78 million pull requests since 2010. These numbers are a little different from the GitHub Octoverse report, in the sense that Octoverse reports active entities, while these are totals. So this is the size of our database and of the main tables, or data models; you can see this slide is actually generated.
How do we make sure that we keep building features while keeping the site up, fast and reliable? One concept that I'm really interested in is performance. I strongly believe that performance should be thought about early in the development process, just like choosing a database infrastructure, a JavaScript framework or a methodology. Performance cannot be an afterthought; this is the concept of performance as a first class citizen. If you have performance issues, it doesn't matter how great your product is: it's going to affect your morale, your customers and your development cycle. But the real reason I'm interested in this topic is that, in my years of experience, I've seen fires, and a lot of those could have been prevented.
A little information up front prevents them: we need to understand growth and build for the future. Another important thing is infusing a culture of performance in any team. So what are the concepts, or the aspects, that you need to consider to build a culture of performance in any team or organization?
The first one is that it's everybody's responsibility. There isn't just one team or person in your company who has to be the go-to person to talk about performance. It has to be infused through management all the way to engineers, QA and designers: when you're thinking about a feature, performance has to be treated as a first class citizen. The second one is process and tooling, and this is what I'm going to spend more time talking about.
It relates to automation: having the infrastructure and the tooling to support continuous performance monitoring of production and other environments. At GitHub we rely a lot on robots, but not this one. This one you may know already: Hubot. Yes, our GitHub chat bot. It's customizable, it's built in Node.js and CoffeeScript, it's open source, and we use Hubot for pretty much everything in our company. A couple of examples: we use Hubot in Slack, and we can render every graph that is available and share it when you're building a feature, for instance.
So if you're talking about a certain system, or just looking at one (in this case we're looking at browser page load time), pretty much every graph that's available in Graphite we can dump into Slack. It's really good for onboarding people into teams and into the company, and it's also good for sharing knowledge and information. Since, as I mentioned, we are a remote company, it just makes it very easy to have the data available in Slack. Now, relating to the database:
We have about 65 different MySQL commands that run in Hubot, from setting up a new server to actually deleting or dropping tables. One of my favorites is called mysql table-sizes.
So, if I'm an engineer working on a feature and I need to decide whether I'm going to add a new column, or I just want to know more about what the system looks like, I can run it myself.
Not only has our team grown; we also grew our systems and infrastructure, and with that, the need to support them. At some point in the past, an engineer building a feature would do everything, from product design to development of the feature, setting up new servers if needed, and running database migrations, all thanks to Hubot. However, we kept growing, and we now have a lot of databases in different clusters. Running migrations started getting a little complicated, and sometimes people would need to babysit a migration.
We tried different processes in the past, and it was not necessarily scalable as a team. It was not scalable because, well, it was not a job well suited for an engineer doing application development. What happens if the database goes down? That would be pretty bad. Fortunately, our database infrastructure team also grew, and we have a team of awesome database engineers looking after the database infrastructure; they're in the second row right here. They came to the company and said, we need to change the way we do things, and they developed a new online schema migration tool for MySQL called gh-ost: GitHub's Online Schema Transmogrifier, that's the official name, and also our "ghost". They took over the operational management of running migrations, and that was awesome.
gh-ost provides pausability and many operational perks, so with gh-ost we'd really found a top-notch solution to our database migrations problem. However, the process of running a migration was still a little tied to development practices: a developer still needed to open a PR with the migration, and since we use Ruby, that means an ActiveRecord migration. You need to generate the SQL that's going to be executed, so you had to have a local development environment.
That was not something the database infrastructure team needed for their daily work, so we were back to square one, in the sense that we had people doing something that is not necessarily the first thing they should be doing. So we needed to take a step back and develop a process to get around this, and we came back with this migration queue process, which also runs in Hubot. The name is not necessarily important, but here is what we can do with the migration queue.
A developer opens a pull request with the migration. It then gets reviewed by somebody on the platform team, it gets added to a queue, and then it gets scheduled. We have commands to show the state of each migration pull request: once it's reviewed and added to the queue, we can see how many are pending review or how many are waiting to be scheduled. Once it gets scheduled, we have a command that generates the gh-ost command to run the migration, posted directly on the pull request.
So there is no middleman; or rather, the middleman is Hubot and gh-ost. Hubot adds the gh-ost command to the pull request, so whoever is going to run the migration just takes the command and runs it. If you want to know more about gh-ost specifically, there is a talk that Shlomi is going to give later; I don't know the exact schedule, I think we're about 20 minutes behind schedule.
It's the next one after lunch, so go and see that one, it's really interesting. So this is how we generate the gh-ost command: there is no development environment in between, we can also see whether the migration is running and how long it's going to take, and everything is in Hubot. Perfect, we found our process. We have database engineers doing what they do best, we have application engineers doing what we know best, and Hubot and gh-ost do the work. But process alone will not solve all the problems.
Without the right tooling, and understanding and empathy between teams, it just cannot work. We need to put ourselves in each other's shoes and see what it is that we're missing, and the process has to be efficient enough to accommodate everyone's needs. A process should provide clear communication, and all the steps should be very well communicated across every party involved.
It has to foster collaboration, and it should be meant to remove friction between teams, not to add more process or bureaucracy. And a process should be paired with the best tool for the job. In this case gh-ost is the tool we use, Hubot is the tool we use, and MySQL is our database. So it's really not about which tool you use; it's about choosing the right one. If it's a shell script, that's fine! We choose whatever works.
So, speaking of tools, what else do we do to stay on top of performance problems when it comes to our database? We use Peek. This is an open source tool that was developed at GitHub, but it has now moved out of GitHub into its own organization. What Peek does is put a little bar on top of your application, and, as you can see, right now we're reporting SQL timing, Elasticsearch and cache, but you can really put anything you want in it. In this particular example:
If I mouse over the SQL section, I can see all the ActiveRecord objects that are being loaded on this page. So if I'm developing a particular feature, I can see what's being executed, and I can also see the actual query. This is a very good way of finding N+1s, or any performance problems, during development.
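For illustration, here is a toy plain-Ruby sketch (with made-up names and an in-memory hash standing in for the database) of why the N+1 pattern matters: a per-record loop issues one query per parent row, while eager loading batches them into one.

```ruby
# Toy model of a database that counts how many queries it receives.
class FakeDB
  attr_reader :query_count

  def initialize(comments_by_issue)
    @comments_by_issue = comments_by_issue
    @query_count = 0
  end

  # One query per issue id: the N+1 access pattern.
  def comments_for(issue_id)
    @query_count += 1
    @comments_by_issue.fetch(issue_id, [])
  end

  # One batched query for all ids: what eager loading does.
  def comments_for_all(issue_ids)
    @query_count += 1
    issue_ids.flat_map { |id| @comments_by_issue.fetch(id, []) }
  end
end

data = { 1 => ["a"], 2 => ["b", "c"], 3 => [] }
issue_ids = [1, 2, 3]

lazy = FakeDB.new(data)
issue_ids.each { |id| lazy.comments_for(id) } # 3 queries, one per issue

eager = FakeDB.new(data)
eager.comments_for_all(issue_ids)             # 1 query total
```

In Rails, the batched version corresponds to eager loading with `includes(:comments)` instead of touching the association inside a loop.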
Another tool that we use is Haystack. Haystack is our internal exception tracking tool, but we really use it for everything, from exception monitoring to slow queries. This is a little different from reporting slow queries from the database's perspective, because you can customize what a slow query means for your application, and it doesn't need to have anything to do with the database itself.
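The idea of an application-defined slow query can be sketched in a few lines; Haystack is internal, so the threshold, method names and reporting shape below are made up for illustration.

```ruby
# What counts as "slow" for *this* application, independent of any
# database-side slow query log setting.
SLOW_QUERY_THRESHOLD = 0.05 # seconds

reports = []

# Time a block and record it when it exceeds the app-level threshold.
def timed_query(sql, reports)
  start = Process.clock_gettime(Process::CLOCK_MONOTONIC)
  result = yield
  duration = Process.clock_gettime(Process::CLOCK_MONOTONIC) - start
  reports << { sql: sql, duration: duration } if duration > SLOW_QUERY_THRESHOLD
  result
end

timed_query("SELECT 1", reports) { sleep 0.001 }               # fast, not reported
timed_query("SELECT * FROM big_table", reports) { sleep 0.06 } # slow, reported
```

The report would then be shipped to whatever exception tracker the application uses, exactly like any other "needle".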
Granular metrics are important: not just, say, time to first byte or full page request time, but metrics about individual aspects, so you can easily identify which parts of your application are the ones causing problems, whether it's IO, memory, or reads versus writes in the database. It's really important that you collect granular metrics. A couple of examples: we have SQL tables total time; this particular graph shows the top 12 tables that are written to the most.
The second example is counting needles in Haystack. After a deploy, when we are checking whether we introduced any regressions, we can see the individual rates of issues, whether that's slow queries, JavaScript exceptions or slow pull requests. What you count is up to you, but this is really helpful after a deployment, so we can identify whether we introduced any new issues.
The key point about having measures for everything is to remove noise and just look at what's important to you. Although it's really important that you collect metrics on individual aspects, as I mentioned, I came across this latency tip of the day: measure what you need to monitor, rather than just monitoring what you happen to be able to easily measure. You can measure everything, but if it's really not going to be useful to you, then don't do it. So how does it look when we put it all together?
At one point I was looking at Haystack and found a slow query. The slow query report actually shows the SQL that is being executed. I opened an issue just to report it, but I also took a stab at it myself, using another tool called VividCortex. This is a third party tool, it's not ours. So I went and found the similar queries, and I saw that, in fact, it had been spiky for a couple of weeks. One other thing that we have available via Hubot is the ability to copy tables from production into a staging environment. It's really hard to test with data at the sizes I mentioned in a local environment: you cannot replicate it, or perhaps you can, but your local server will not have the capacity even if you can generate that data. So cloning a table into a staging environment is something we have available that is really, really useful.
We can do it ourselves: in Hubot we run the command, Hubot copies the table and notifies us when it's ready. It can take some time, depending on the size of the table. So I tested my idea; I figured what was needed was a new index, which I'm going to talk a little more about later, and realized that yes, the new index makes the query execute 300% more efficiently. So I opened a PR, we deployed it, and I went back to Hubot and ran the graph that I was looking at before, and this is how it looked after the index. Everything I did so far involved no interaction with another human. Then I went back to VividCortex, and the graph there also really looks like an improvement. And in Haystack: no more needles.
If this is not available to you, if it's not possible to copy a table to a different environment, because maybe it's expensive or the tooling is just not there yet, you can use science. What I mean by this is: you can actually add the index to your database and test it with Scientist, which is a Ruby library built for code refactors, but we can sometimes use it for performance monitoring too; not necessarily changing the code entirely, just comparing the paths in terms of performance.
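The pattern Scientist implements can be sketched in plain Ruby. This is a hand-rolled illustration of the idea, not the gem's actual API (the real library wraps the old and new code paths in `use`/`try` blocks and publishes the observations): run both paths, always return the control's result, and record timings and whether the results matched.

```ruby
# Minimal experiment harness: control is the existing code path, candidate is
# the new one. Callers always see the control's result; the candidate is only
# observed. Names here are illustrative.
def experiment(control:, candidate:)
  t0 = Process.clock_gettime(Process::CLOCK_MONOTONIC)
  control_result = control.call
  t1 = Process.clock_gettime(Process::CLOCK_MONOTONIC)
  candidate_result = candidate.call
  t2 = Process.clock_gettime(Process::CLOCK_MONOTONIC)

  {
    result: control_result, # what the caller gets, regardless of the candidate
    matched: control_result == candidate_result,
    control_time: t1 - t0,
    candidate_time: t2 - t1,
  }
end

obs = experiment(
  control:   -> { [1, 2, 3].select { |n| n > 1 } }, # old query path
  candidate: -> { [1, 2, 3].drop(1) }               # new (e.g. indexed) path
)
```

In production you would publish `matched` and the two timings to your metrics system, which is exactly what the built-in Scientist graphs described next visualize.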
Scientist has graphs showing the performance that each of these code paths provides, and it's all built in, so I can just go and look at this graph. My candidate, the new index, is in green, and the control is in blue, and I can see that this index is slightly better than the old ones. So I can ship my PR and remove the experiment. And, as I said, Scientist is open source, so you can use it in your projects.
So, to summarize: a performance culture will not only improve how you think about a feature, it will definitely improve the way you write code, because you will have this mindset. Hopefully you will think about it before building something, so it can be scalable, and it's going to be doubly awesome.
It gives us a lot of confidence to have the operational experience to be able to manage it, and, most importantly, we package it up and ship it in GitHub Enterprise, which has to be very reliable. We're not there to run the same experiments we run on github.com; we have to trust it as a solution. So really those are the main keys. This is not about selling MySQL to you as a solution.
We chose it, and we decided to push the boundaries a little. Most of our data models are structured data; however, we also went a little beyond what a typical usage of MySQL looks like, and we started storing key-value data in MySQL.
This is how the table looks; this is another MySQL command in Hubot that gives us the table definition, so I don't have to ask anybody, and it can also give me the size, which indexes are available, etc. So this is the key-value table, and it's very simple: we only have a key and a value column, a couple of timestamps, and an index on the expiration (expires_at) to be able to prune.
So you can set a TTL as well, and we have a process that is going to clean up that data whenever the time comes. There is also an interface in the code that will not return results, obviously, if it's past the TTL.
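Those semantics can be sketched in plain Ruby, with a Hash standing in for the MySQL table; the class and method names below are made up for illustration, and the background pruning job is only described in a comment.

```ruby
# Toy key-value store with TTL semantics: reads past the expiry return nil,
# mirroring the application-side interface; a background process would prune
# expired rows via the expires_at index.
class KeyValueStore
  Entry = Struct.new(:value, :expires_at)

  def initialize
    @rows = {}
  end

  def set(key, value, ttl: nil)
    expires_at = ttl ? Time.now + ttl : nil
    @rows[key] = Entry.new(value, expires_at)
  end

  def get(key)
    entry = @rows[key]
    return nil if entry.nil?
    return nil if entry.expires_at && Time.now >= entry.expires_at
    entry.value
  end

  # Presence alone can carry the answer, e.g. "has this user dismissed
  # this notice?"
  def exists?(key)
    !get(key).nil?
  end
end

kv = KeyValueStore.new
kv.set("user.dismissed_notice.new-ui.42", "1") # hypothetical key format
kv.set("flash.banner", "hi", ttl: 0)           # already expired
```

The notice-dismissal usage described next maps directly onto the `exists?` check.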
It's very simple, and we have a couple of usages, but one of the simplest is the dismissal of notices. Across the application we have a number of notices for whenever we want to show a call to action to our users. It could be, you know, "check out this new feature", or a message reported to every user. But we also need to make sure that we don't display it multiple times. This is not really something that belongs in a regular table; I could have a notices table, but a simple approach is just to store it in a key-value format.
I could have, as the key, the user and the notice that I'm displaying, and as the value, whether the user dismissed the message or not; or it could just be the presence of the key in the key-value store. This is something you could implement in Redis or any other key-value store; however, we took a stab at it in MySQL. Basically, when we show one of these messages and you click the cross to dismiss it, we store it as a key in MySQL, something like this:
We store user.dismissed_notice, the notice itself, and the user ID. The key can be there or not; we just check for its presence whenever we're going to display the notice. So really, we use KV for things that are going to be read-intensive; we're really good at scaling reads in our clusters.
So, KV data in MySQL: we try to use it for features that are really read-intense but not necessarily write-heavy, and so far it's proven to work. This is a little bit of how we use it right now. We don't have a lot of features on KV in MySQL yet, but you can see we have a lot more reads: the blue line is the reads, the selects over this table, and we really have a very small number of inserts and deletes.
So, speaking about indexes and MySQL and databases: what is a good indexing strategy? This is something I'm really keen on, and sometimes it's not about the tooling but about how you use the tool itself. If you understand really well how, in this case, MySQL works in terms of the storage engine and how indexes are stored behind the scenes, it will really help you a lot to improve performance.
One of the things that we tend to do is focus on the most important queries: the ones that run the most, or the slowest ones, obviously, but in terms of volume, look at the heaviest queries first and try to optimize for them. Taking a stab at all of them at the same time may not be a good strategy because, as I'm going to explain, having too many indexes actually hurts performance. So build an index with a column order that benefits more queries.
When you're analyzing the queries on your database, take a set of similar queries and try to optimize for a couple of them, as opposed to each one individually; that will also help you reduce the number of indexes on a table. For instance, I have a query that is select star from issues where user_id = 2 and repository_id = 2.
So I'm using equality on both columns, but I have another query that looks for a repository with user_id greater than some value, so the inequality condition is what matters in that one. The index that benefits those two queries the most is going to be (repository_id, user_id), with repository_id first because of the equality condition, and then the same index will still be used by both queries. But if I do the opposite, it will only benefit one. With that said, prefer to extend existing indexes rather than adding new ones.
As I said, too many indexes is really bad, at least for MySQL. Maybe Postgres has abilities like index merge, but it's still going to hurt performance if you have too many indexes, so really being keen on that is important. And favor multi-column indexes: a lot of us tend to add an index on every column just because it seems like it's going to be useful, but really it's better to have one multi-column index than one on each column.
The reason why too many indexes hurt performance is that they require space, and obviously the more you have, the bigger the table. The other thing is that when you write to the database, you also need to write to the indexes, so it's going to affect write performance. It even affects reads, because the query analyzer is going to look at all the possibilities, and the more you have, obviously, the longer that's going to take.
Another command that we use for MySQL, one of my favorites, is dead-indexes. Every time we're changing something in one of the tables, we have the ability to look for redundant, non-unique or unused indexes, so you can leave the campground better than you found it: if you're going to add something, maybe you can remove something as well. This is another of those 65 commands that we have, complete with emoji.
So it's really a work in progress. Choosing the right tool for the job is hard, but I encourage you to take a data-driven approach to making decisions. What are we doing next, in terms of databases and data stores, to try to improve the way we build software? Better dashboards, for instance: we can do the same things we have for the database for Elasticsearch, for Redis, and for every other data store that we have available. We're also migrating over to Datadog; this is a work in progress.
I don't have any dashboards so far, but it's looking great, and it also has an integration with Hubot. One other thing that we want to improve is the discoverability of Hubot commands. As I said, a process should clearly communicate its steps, and sometimes it's hard to find everything that's available, because we have so many. So I really encourage you to talk to your team about this, to encourage taking a stab at performance up front, when you're developing a feature, and not leaving it to the end.