From YouTube: Database Office Hours 2020-03-12
A: Hello, everybody. Nice to see all of you new faces on the call, thanks for joining. This is the database office hours call, and we're kicking it off with congratulations to Tiger. He's not on the call today, he's on the other side, in Australia. He became a database maintainer recently, like yesterday, and I'm really glad about that. You can read all about it in the MR. Thanks, Tiger, really appreciate your efforts.
A: I think that happens from time to time, depending on the case. But the reviewer roulette, to my knowledge, doesn't assign you in the first stage if you're a maintainer, because that's really the idea: we involve reviewers early, so while preparing the MR they get a chance to review, and a maintainer is assigned later. But obviously there's nothing keeping you from pinging a maintainer directly, if that makes sense.
A: That kind of brings us to the next topic. We're soliciting feedback for this particular call, and I've tried to summarize what we already have, so maybe we can run through it very quickly. Going forward, I would love it if the group of database maintainers were taking over this call. There is absolutely no reason this is dependent on me anymore. We have a really awesome group of people, I feel, and it takes a bit of time.
A: You know, you've got to prepare the call ahead of time, and going forward it would be great if the maintainers were stepping in here. I think Adam suggested that maybe there is a facilitator for each of the calls, so we have a set person to prepare the next call. I think that's a great suggestion.
A: There was also feedback that the call is pretty early, so maybe we can do it a little bit later. I'm currently looking for a better slot, but I realized that what I proposed on the issue sort of collides with another call, I think. So I looked at it, and there's a spot opening at 3:30 UTC. Perhaps that's something we can do; please leave feedback on the issue.
B: My topic is: I'm checking the 300 or so usage ping queries and digging into the history of all the development done on them in that file, and while fixing little stuff I noticed that a lot of usage ping queries are failing because a shared scope, which is also used somewhere else, was modified. There are two reasons, basically. First is the hyper-growth of GitLab, so the codebase keeps growing. That's the one reason, which is always there.
B: The second reason is sometimes there is a shared scope, and sometimes I see that maintainers or reviewers are suggesting a scope be shared, and then the shared scope later on, in a different MR, gets modified with a quite different query plan, and finally it fails the usage ping. So my general question is: generally, when we review an MR and there's such a scope, how can we really check if that scope is used in other places with different settings?
C: Yeah, sure. So the idea is, you know, we have CI; GitLab has pretty good coverage, and in the CI run we're kind of touching the whole code base and, of course, queries are also executed. During this CI run we could easily record the queries we are actually executing against the database, and later on build a database where we basically have a collection of the queries that we have in the GitLab application, to compare against when a new merge request is created.
C: You know, the tests are running in a randomized order, so we need to have some kind of sanitized version of the query, so we can easily detect the changes and then compare query sets. And as soon as we have this, we can easily — well, not always easily, but in some cases quite easily — turn a query back into a query that we can execute: let's say, replace the parameter with the gitlab-org/gitlab project ID.
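A minimal sketch of what such a normalized query collection could look like. The call doesn't name a tool, so using Postgres's pg_stat_statements extension is an assumption; it already stores statements with literals replaced by $1, $2, ... placeholders:

    -- Enable on the CI database (requires pg_stat_statements in
    -- shared_preload_libraries).
    CREATE EXTENSION IF NOT EXISTS pg_stat_statements;

    -- Each entry is a normalized query: two test runs that differ only in
    -- parameter values map to the same entry, so the query sets of two CI
    -- runs can be diffed.
    SELECT queryid, query, calls
    FROM pg_stat_statements
    ORDER BY calls DESC
    LIMIT 20;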
C: And also, I often find that running queries against gitlab-org is not the best: gitlab-org has maybe a few hundred groups, and we actually have groups that have several thousand subgroups. So maybe it works for gitlab-org, but for some other groups it will miserably fail. So this could also be an improvement for us: to always pick the right IDs and the right data for our queries. Let's say, if you query namespaces, that means you should look for projects from the largest namespaces.
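A sketch of how picking "the right IDs" could work, assuming GitLab's namespaces table with its parent_id column: test against a worst-case group instead of a conveniently small one.

    -- Namespaces with the most direct subgroups; use one of these as the
    -- test subject instead of a small group like gitlab-org.
    SELECT parent_id, COUNT(*) AS subgroup_count
    FROM namespaces
    WHERE parent_id IS NOT NULL
    GROUP BY parent_id
    ORDER BY subgroup_count DESC
    LIMIT 10;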
B: I mean, I don't see any silver bullet here, actually. The main risk is that with an ActiveRecord scope, you can later take it and group by something, count it, you know — it's just a partial query anyway. So I think Adam's suggestion could maybe solve it, but I don't see a very good way to catch such errors, and the tragic result is that, for example, the usage ping on GitLab.com was failing for 14 months due to one query which actually wasn't changed itself.
A: Perhaps we can make a smaller step first, and this will also serve as a way to detect which queries changed. Detecting how the performance changed might be really difficult, but just statically figuring out whether we see new normalized queries, for example after the CI run, might actually be enough here: you would get a report on your MR saying, oh, there is this brand-new query, and those other ones have changed.
F: Maybe they updated a field or something else, but we need to remove all of those services, because we have invalid data. For example, it's like 17% of the services that are invalid on GitLab.com. So I think we have to remove them, but I don't know what the steps are in order to do that.
D: Not really. So, a while back we had an incident in production where we created a lot of invalid services. I think we should have deleted them all, but it might be that there are still some invalid services left from that incident. I'm not sure, and I think we can't really figure it out — well, we might be able to figure it out.
A: I mean, generally I think it's the typical pattern of: you want to introduce a unique constraint, but you have existing data that violates it, right? Then you take a step back and figure out where the invalid data actually came from, or whether it's still coming in; we would deploy an application change that makes sure we're not going to see additional invalid data coming in.
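A sketch of that pattern in SQL. The services table and the (project_id, type) uniqueness key are illustrative assumptions, not the actual cleanup that was shipped:

    -- 1. After the application stops producing duplicates, keep the newest
    --    row per (project_id, type) and delete the older copies.
    DELETE FROM services s
    USING services newer
    WHERE s.project_id = newer.project_id
      AND s.type = newer.type
      AND s.id < newer.id;

    -- 2. Only then can the unique index be added (CONCURRENTLY, outside a
    --    transaction, to avoid locking the table).
    CREATE UNIQUE INDEX CONCURRENTLY index_services_on_project_id_and_type
      ON services (project_id, type);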
D: That makes sense. Do you know who we can ask for help? Because we need someone to help us with production database access. I think we can run some queries in Database Lab, but we can just get the number of rows; we can't really see created_at and updated_at. It would help if we could figure out what's the earliest created_at for these invalid services.
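For example, a triage query along these lines could be run once access is there (the WHERE condition is a placeholder; substitute the real definition of "invalid"):

    -- When did the invalid rows start and stop appearing, and how many exist?
    SELECT MIN(created_at) AS first_seen,
           MAX(created_at) AS last_seen,
           COUNT(*)        AS total
    FROM services
    WHERE properties IS NULL;  -- placeholder predicate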
A: This is something that you can get, right? You would basically create an access request and explain why that helps your work, and after a while — it does take some time to get it — you get production access, and you can safely go to a psql console on a production replica in read-only mode. Then you can do all those kinds of queries to understand the data. I think that's really useful to have in this case. If you don't have that yet, feel free to reach out to me, I guess.
A: Yeah, so we have a production database — there's basically a cluster of database instances that serve the site, the HA cluster — and then we have additional replicas. One of them is what we call the archive replica, which is sort of not participating in that cluster, and you can use that to run queries that take a long time, or where you would risk affecting the site negatively.
D: Yeah, I will add some notes to the agenda about the plan. Just to recap: we try to get access, and then we'll figure out the root cause of these invalid records. Then we will try to eliminate the root cause, and afterwards we'll try to come up with a plan to clean up the invalid data.
A: Perhaps it's also worth making the application pick the right record as long as we have the duplicates in the database — or at least pick them consistently. I think you captured that on the issue already. Today, in Postgres, if you select without any ORDER BY and you pick the first record, it's really just luck that you get the same record every time. There is no consistency with that; it really depends on how the whole query is being executed internally.
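A small illustration of making the pick explicit instead of relying on executor luck, with hypothetical columns:

    -- Without ORDER BY, "the first row" is whatever the plan happens to
    -- produce. DISTINCT ON picks exactly one row per group,
    -- deterministically, given the sort order:
    SELECT DISTINCT ON (project_id, type) *
    FROM services
    ORDER BY project_id, type, id DESC;  -- newest row wins, consistently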
F: At that moment we are taking a decision, right? Do you know what I mean? Like, maybe users on different instances always see a different record, based on different criteria of the execution, and then we make that decision and say: okay, now that record is going to be retrieved with this specific order, so it doesn't matter what you were executing before. And that could be kind of a breaking change for some instances, right?
A: Yeah, I guess that's part of cleaning up the data: making a decision for one of those records and deleting the other ones, and I don't see a way around that, really. Perhaps it's worth understanding what the differences are for the duplicates — maybe there's something specific you can look at in the database: find examples of duplicates and see how they're different. Maybe that helps to decide which one to pick.
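A sketch of that inspection, again with an assumed (project_id, type) key:

    -- Surface the duplicate groups first...
    SELECT project_id, type, COUNT(*) AS copies
    FROM services
    GROUP BY project_id, type
    HAVING COUNT(*) > 1
    ORDER BY copies DESC
    LIMIT 10;
    -- ...then SELECT the rows of a few groups and compare them column by
    -- column to decide which copy should survive.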
C: So, I've seen this a few times. I had a few cases where in a migration we had to create an index to support the query that is going to be migrated — or, you know, each batch — to make it more performant, and usually we say that in the down method we have to remove the index. However, in some cases when we have to roll back, background migrations are already scheduled and running in another process or server — it doesn't really matter, but outside of the current migration process. When you just kick out the index, those running migrations might be affected, simply because the query that we expected to run really quickly — because we expected the index to be there — is now taking a longer time, and they time out. So sometimes I suggest not removing the index within the same migration, but taking care of it at a later step. And of course there are pros and cons.
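A sketch of the two-step approach, with a hypothetical temporary index name:

    -- In the migration that schedules the background jobs:
    CREATE INDEX CONCURRENTLY tmp_index_services_on_id_for_migration
      ON services (id) WHERE properties IS NULL;

    -- Not in that migration's down method, but in a later cleanup
    -- migration, once the background jobs have drained:
    DROP INDEX CONCURRENTLY IF EXISTS tmp_index_services_on_id_for_migration;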
C: One thing that I don't like is that the schema can be a bit inconsistent: when you migrate something and then roll back, you don't get the exact same result. And I think Stan had a suggestion to give the migrations some kind of structure where we can handle this case — simply, when we roll back, we need a way to kind of stop the background migration.
A: I mean, it kind of goes with all the migrations: you want the down migration to be sort of consistent, right? But we're already not consistent, really, because when you run the down for the scheduling migration, you're not really un-scheduling jobs. Maybe that's something we could be doing: really allow you to stop the migration, and then there is also no problem with removing the index or whatever you've done in the up migration — that's kind of consistent with the up migration.
C: Another comment from Stan: of course we could look into how the Sidekiq jobs are stored, and we could just delete the other entries, but that might not be super efficient. I don't know how Sidekiq stores these things, and I'm not sure we can easily query that. So maybe we should have something on the database side that can keep track of things.
A: Yeah, we had a recent data migration, a background migration that was causing a lot of traffic, and that's why we created that issue yesterday: to discuss how we can better handle background migrations. Right now it's really: go to redis-cli, find old jobs and delete them — just a kind of manual way of dealing with that, and we don't have any interface, really. Sometimes it's even hard to tell if a background migration has already stopped, or at least it's not straightforward. So that would be really useful.
C: I wonder if we could start with maybe having a feature flag in the background migration. Of course it's not instant — it's reloaded every 10 seconds or something like that, it's cached at the Rails level as far as I know — but that would allow us, if there is a really bad background migration running, to easily turn it off, and it will just return and do nothing.
B: My idea was that we should also name temporary migrations a bit differently, maybe also with a release date, so that they don't hang around. I mean, while working with a lot of indexes, I would argue that a big portion of the indexes could become unused at some point, due to either having another index which covers them, or removing the functionality without removing the index.
A: On the other hand, if it sticks around for quite a while, then there is also the risk that, you know, other queries rely on it or start using it. That's another problem, I guess. And we do have a lot of indexes that are actually unused — I can perhaps find that issue later; we did a study like two months ago, and there are quite a few that are not really being used.
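One way such a study can be done is from Postgres's index usage statistics; a sketch:

    -- Indexes that have never been scanned since the last stats reset,
    -- largest first (check the primary AND the replicas before dropping).
    SELECT schemaname, relname, indexrelname, idx_scan,
           pg_size_pretty(pg_relation_size(indexrelid)) AS index_size
    FROM pg_stat_user_indexes
    WHERE idx_scan = 0
    ORDER BY pg_relation_size(indexrelid) DESC
    LIMIT 20;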
C: So, to unblock a community contribution, I would like to migrate one column from the users table to another table, and this would require us to migrate about 130,000 records from the users table. I would create an index for it to make it super fast, so iterating over it wouldn't be a big issue, and the question is what to do with the records.
F: Next topic — maybe you have more information, dealing with this all the time. When we modify the db/schema.rb, and someone gets another migration into master, we conflict, so we have to rebase from master all the time in order to fix that problem. So I'd ask if you know some way to avoid that — maybe not updating that line in the schema.rb — or, Andreas already said that maybe we can pick the latest version on conflict.
A: You won't have more conflicts on your feature branch, but I don't see a way of not having that, of working around it at the moment — except this: we're shortly before moving over to structure.sql, handling the schema a bit differently with plain SQL, and it's a bit different there. I think what we do there is we basically dump the whole schema from the database.
F: Maybe this is a bad idea, but I was thinking about that for a bit. I don't know exactly how Rails runs the migrations — maybe it goes to this line of the schema to find the timestamp, and it's going to run all the migrations from back in time. So my question is: what happens if we don't update the schema version based on the timestamp? Like, we would have something fixed as the timestamp, maybe something in the future, and we only modify the actual change in the schema, but not the version.
F: The idea is, say we never modify it: in the schema.rb we could set a timestamp that is in the future, and when we run migrations locally, we never get that line modified, because all my migrations are going to run, but that line is not changing, because it's in the future. Or maybe that isn't allowed, or is it some way of working that is not supposed to be?
A: I'm not particularly sure about this, but I think Rails uses it in a way to say that when you load the schema from this file — you know, when you have an empty environment on your machine and load it from the file — it assumes that all the migrations prior to this version have been run, right? And then, when you have new migrations coming, it knows they have a higher version number and that they have not been run.
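Concretely, Rails tracks this in the schema_migrations table, one row per applied migration timestamp; the version in the schema file is just the highest one:

    -- Which migrations does this database believe have been run?
    SELECT version FROM schema_migrations ORDER BY version DESC LIMIT 5;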
F: My issue is that, for example, I was modifying not only the line in the schema.rb but also the timestamp of every migration that I had, because I wasn't sure; in my feature branches I always try to have my migrations with the latest timestamp.
A: Yeah, so this is not really necessary. You don't have to change the version number in your migration once you've created it; you just have to resolve the conflict with the master branch. But yeah, we're shortly before changing this to structure.sql, so perhaps we can wait for that and see how that solves things for you — or breaks all this stuff, I don't know — but the pattern is likely to change quite soon.
B: You can manipulate the timestamp in your file name to be earlier, I mean, across your own MRs, if you don't want to deal with this. You can basically set your own timestamp, but it's risky, because you have to have consistency back and forth in the database at the timestamp where your particular migration is — it should find everything backwards. I think we should wait for structure.sql.
B: You know, in the usage ping we have like four tables counted using approximate counters. Today I had a chance to dig into it, and all of them are failing on the production Rails console. So I went a bit deeper, and especially the notes table is problematic: it's checking, you know, the approximate count strategies — reltuples, table sampling — and finally the usual count, and it fails. My question is, of course: does it go back to a lack of statistics? ci_builds also fails.
A: So the exact count is the last strategy that we try, and the reason that doesn't usually work is because the tables are too large. Then we have table sample counting, which uses a Postgres feature, TABLESAMPLE: you can specify the size of the sample that you want from the table, and then you sort of extrapolate from this to estimate the size of the full table. And then the third one is based on the Postgres statistics: Postgres keeps statistics internally on the number of tuples.
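A sketch of those last two strategies in plain SQL, using the notes table mentioned on the call:

    -- TABLESAMPLE: count a 1% sample of blocks and extrapolate.
    SELECT count(*) * 100 AS estimated_rows
    FROM notes TABLESAMPLE SYSTEM (1);

    -- Planner statistics: reltuples is Postgres's internal row estimate,
    -- maintained by ANALYZE / autovacuum.
    SELECT reltuples::bigint AS estimated_rows
    FROM pg_class
    WHERE relname = 'notes';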
A: I'm not sure at the moment whether we already understand what causes it, but if we realize that those statistics are missing, this is a production problem and we should fix that. So there is no way that you can — or rather, you shouldn't — attempt to fix this from the Rails code. This is something that should be there at all times.
A: Yeah, that's an operational problem, but typically you always have those statistics, right? So if we don't have them, then this is a problem, and if you realize this, it's important to look into it. For example, what we know happens is: when we have a failover in the database, there's a short period of time where we don't have any statistics at all.
A: So we fail over to a replica, and the replica doesn't have those statistics until they are rebuilt. This happens in the background, but it takes a while to complete, and during that time you don't have them. At all other times they should be there, so if they are not, then this is a problem.
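To check whether that is the situation, something like this should work:

    -- When were statistics last rebuilt for the table?
    SELECT last_analyze, last_autoanalyze
    FROM pg_stat_user_tables
    WHERE relname = 'notes';

    -- Operational fix if they are missing:
    ANALYZE notes;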
B: Over the last three months I was able to check it again and again, and I can assure you — maybe I should open an issue — ci_builds and notes don't have it, and some others usually don't have it either, usually the larger tables. That might be another issue, maybe depending on the configuration and the table size altogether.
B: What I mean is that the approximate count doesn't work. So when I check the fallback, table sampling doesn't work; then when I go deeper, reltuples doesn't work either. In the end, based on your comment that it may be related to the lack of statistics, I assume there's an operational problem there.
A: No, I'm just saying that if you look at those strategies, right, the one that is based on reltuples is basically going to reject you trying to count stuff when there are no statistics. So this is a check in the code saying: either we have those statistics, and I can give you an approximate count, or we don't, and then we go to the next strategy. We try table sampling, for example, and I think table sampling has a similar check, because it also needs that information, actually.
B: First we try table sampling, which is dependent on reltuples; table sampling doesn't work. Then we go to reltuples, which is already the reason table sampling doesn't work, so it also fails for the same reason. And then the third one hits the 15-second timeout — I won't even talk about the regular count. So I think table sampling ultimately suffers from the same problem, and I'm searching for a way to tackle this problem on production.
C: Yeah, just because recently, when I got execution plans from the database, the timings looked quite nice, and it was simply because it was the first query result — and that's usually an uncached query result. I was looking in our documentation and guides, and I couldn't find anything like: hey, how do we collect these plans, how do we format them, and what do we consider cached and uncached results? So the question is: do we have something? I don't know, but if not, I would be happy to maybe create an issue and document it.
C: I mean, it's nice if we can provide cold timings, but I would be more interested in the cached ones, because I would say most of the time the cold one is just irrelevant. I know cold timings are also important, but it's quite rare that the cold timing is actually larger than our statement timeout if the cached timing is really, really low, right? So if we keep it under 200 milliseconds warm — I mean, it's not super accurate, I guess, it's more like a guesstimate.
A: The other thing you can look at is how much data the query actually touched. It could be warm-cached but still touching a lot of data, and then without the cache it is most likely much slower. But if you have a good warm-cache timing and you're not touching a lot of data, then that's probably fine.
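EXPLAIN with the BUFFERS option surfaces exactly that; a sketch (the project ID is just an example value):

    -- "shared hit" blocks came from the buffer cache (warm);
    -- "read" blocks had to come from disk / the OS (cold).
    EXPLAIN (ANALYZE, BUFFERS)
    SELECT COUNT(*) FROM notes WHERE project_id = 12345;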
G: I think it may even be good just to have some notes on, you know, when you use each of these different tools, what differences might occur, or why you might want to use one over the other for certain types of queries or to get certain types of information. I'm sure that could be helpful for some people.
F: The two columns — and whether a WHERE clause is necessary in this case on our index — we cannot say, like, whether they are separate...