From YouTube: Database Office Hours - 2020-04-08
A: Okay, so hi everybody. This is database office hours at the new time on Wednesday. The first point is mine, and it's about check constraints. Just a bit of context: we are trying to add a NOT NULL constraint to an existing table with many records, and that might cause stability issues, because when you add such a constraint Postgres is simply going to scan the table to check whether we have null values, while we are keeping an exclusive lock on the table. So there is no efficient — I mean safe — way to add a new constraint to an existing table. But there is another concept in Postgres, the check constraint: you can actually create the constraint on the table as invalid (NOT VALID) first and validate it later on, and the validation doesn't require extra locking on the table.
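The two-step flow just described, as a minimal SQL sketch (table and constraint names here are hypothetical):

```sql
-- Step 1: add the constraint as NOT VALID. This takes a brief
-- ACCESS EXCLUSIVE lock but does NOT scan the table.
ALTER TABLE users
  ADD CONSTRAINT check_users_name_not_null
  CHECK (name IS NOT NULL) NOT VALID;

-- Step 2: validate later, in a separate transaction. This scans the
-- table, but only holds a SHARE UPDATE EXCLUSIVE lock, so normal
-- reads and writes continue while it runs.
ALTER TABLE users VALIDATE CONSTRAINT check_users_name_not_null;
```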
E: So yeah, this ALTER TABLE requires a lock — it's a very quick lock, but it may be blocked by other processes running another query, so it may time out. What we propose there is to always run it with `with_lock_retries`. And my question — because I'm also adding some helpers right now — I was noticing, for example, that our helper `add_concurrent_foreign_key` does not use `with_lock_retries`, and my question for all of you is: should we update it to also use `with_lock_retries` inside the helper method?
A: Yeah, so at some point we should start updating the helper methods, but we have to be very careful about what exactly we lock with this block. We should look into each method that we execute and check what kind of locks are being acquired implicitly, and how long the operation will take. For example, when we are filling up data after we add a new column, we should not wrap that in the same `with_lock_retries` block together with other statements.
E: So yeah, the other approach would be to require that the `add_concurrent_foreign_key` call is wrapped with `with_lock_retries` inside the migration. So there is a balance: we could say that some migrations, some tables, are safe to use with `with_lock_retries` and not have it inside the method — I don't know. That's why I'm asking for everyone's opinion: do we enforce `with_lock_retries` inside the helper, or do we leave it per case in the migration?
A: So I think we cannot do it on the migration level in this case. What we have now is that `add_concurrent_foreign_key` requires several steps, and one of the steps is the constraint validation, and that can run for several seconds, right? And we don't want that to run in the same transaction as the ADD FOREIGN KEY statement, because the lock will be on the table — the lock will be kept until the transaction finishes. So in this special case we should only specify `with_lock_retries` for the ALTER TABLE statement.
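A hypothetical sketch of the steps the helper performs, with only the lock-sensitive ALTER TABLE inside the retried, short-timeout part (table, column, and constraint names are made up):

```sql
-- Inside the with_lock_retries block: short lock_timeout, retried on failure.
BEGIN;
SET LOCAL lock_timeout = '100ms';
ALTER TABLE issues
  ADD CONSTRAINT fk_issues_project_id
  FOREIGN KEY (project_id) REFERENCES projects (id) NOT VALID;
COMMIT;

-- Outside the block, in its own transaction: this may run for several
-- seconds, so it must not share a transaction with the statement above,
-- or the ACCESS EXCLUSIVE lock would be held for the whole scan.
ALTER TABLE issues VALIDATE CONSTRAINT fk_issues_project_id;
```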
E: Okay, okay. So for everyone to know, I'm currently working on adding helper methods for adding constraints in general, so I'm going to add some helper methods that allow us to add a constraint and validate the constraint. Those will be generic helper methods, and then we can create more specific ones. So we will have to pass the check clause as a string, because it's generic — that's why I was asking. So in this case I will have the lock retries inside the helper method, and okay, we can discuss more in the review.
A: I will link it later in the database Slack channel. So far I've seen a few examples where — I mean, I cannot say it for sure — but it seems like we had several retries, and I'm hoping that it actually prevented statement timeouts, though of course I'm not a hundred percent sure about it. And so far I don't have any suggestion to change the timings. I can maybe create a spreadsheet where we can easily visualize the examples we already have, but apart from this, I think we are pretty safe.
D: I think we would have a reason to change the timings if we repeatedly saw the helper fail, in the sense that it ends up running the statement without any lock timeout — that's the last step it takes if everything else fails. If that happens more often, we should probably look at the timings, right? But in other cases, when it's working fine, I don't see a reason why we would change that.
A: Yes, I might add a bit of improvement to the logging. So when we cannot acquire the lock, it would be nice to check what kind of locks we have on the table, and maybe keep track of the lock ID, if there is something like that, and maybe log that, hey, we failed to acquire the lock five times already and it's always this specific lock ID. Then we can kind of see that, hey, probably there was a long-running transaction somewhere, and that's why we couldn't acquire it.
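One way such logging could gather that information when a retry fails — a sketch querying `pg_locks` joined to `pg_stat_activity` (the table name is hypothetical):

```sql
-- Which sessions hold or wait on locks for the table we failed to lock?
SELECT l.pid, l.locktype, l.mode, l.granted,
       a.state, a.query_start, left(a.query, 60) AS query
FROM pg_locks l
JOIN pg_stat_activity a ON a.pid = l.pid
WHERE l.relation = 'issues'::regclass
ORDER BY a.query_start;
```

A long-running transaction would show up here as an old `query_start` with a granted conflicting lock.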
E: It cannot be optimized more, and the discussion there was that, because this is a daily summary that they want to generate, a possible solution would be that they can cache the results and use them, so that this query runs only once per day or something like that — once per 12 hours or whatever. So my question there was: what's your opinion on that? What do we do with such cases — caching, or do we propose creating a materialized aggregate directly and loading it, or...?
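A materialized aggregate for a daily summary like this could look roughly as follows (all names are hypothetical):

```sql
CREATE MATERIALIZED VIEW daily_event_summary AS
SELECT date_trunc('day', created_at) AS day,
       count(*) AS events
FROM events
GROUP BY 1;

-- A unique index is required for REFRESH ... CONCURRENTLY,
-- which lets readers keep using the view during the refresh.
CREATE UNIQUE INDEX ON daily_event_summary (day);

-- Run once per day (or every 12 hours) from a scheduled job:
REFRESH MATERIALIZED VIEW CONCURRENTLY daily_event_summary;
```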
E: In this case it was just a query that would run whenever the page loaded, so we had a discussion about whether it could be cached or not, or what the options are — because even though that update brings the running time from 10 seconds to below one second, it is still a lot, and we generally strive for below 100 milliseconds. So yeah, I wanted you to check this and think about possible solutions.
E: Let's say that the generic use case is they have an API call or something that takes a lot of time, but it's something that we can cache — whether a solution using caching is acceptable or not. And then my second question there is that, even if we accept such a query, when you check the query in the model, you are not sure that the cache will be used. So there was a discussion there that, okay, we will use it, we will cache the results, and everything will be okay.
A: The gitlab-org group is a very large group — we have several subgroups and projects — but it's quite easy to have a group that is larger than gitlab-org, and that could easily affect the performance of the query. And yeah, one suggestion could be: I noticed that we are querying a date range, so executing a few smaller queries could probably work.
A
It
would
be
safer
against
against
the
database,
but
it
might
increase
the
loading
times
because
it
has
to
execute
more
query.
Is
parsed
results
via
the
result,
object
that
it
might
have
on
some
external.
We
have
on
the
front-end
rendering
side
that
could
be
one
idea
and
the
other
one
is
like
I've
seen
this
pattern
quite
often
that
we
have
a
group.
A
Query
where
we
simply
to
try
and
some
query
on
the
project
IDs-
and
this
is
pretty
much
everywhere,
because
we
have
this
nested
group
relationship
and
and
if
we
stored
the
root
name.
Space
idea
is
the
top
group
on
the
on
the
table.
It
might
be
able
to
improve
the
performance
simply
because
you
have
one
number
of
an
ID
to
to
scan
from
the
index
and
not
multiple,
let's
say
project
I
pleased.
Indeed,
at
all
I
mean
the
it
Network
group
has
I
think
800
projects.
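The difference the denormalization would make, as a sketch (column names and the ID value are hypothetical):

```sql
-- Today: expand the nested group hierarchy into hundreds of project ids
-- first, then probe the index once per id.
SELECT count(*)
FROM events
WHERE project_id IN (
  SELECT id FROM projects WHERE namespace_id IN (SELECT id FROM namespaces /* subtree */)
);

-- With a denormalized root namespace id on the table, a single value
-- is scanned from one index:
SELECT count(*)
FROM events
WHERE root_namespace_id = 9970;
```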
D: That would sort of naturally align with what we were discussing for partitioning — we also consider the root namespace as a partitioning key — and that already means that we would have to add this information to a lot of tables. And what comes with it is that it's a denormalization, and when you start to move projects across top-level namespaces, all of a sudden...
D: I think part of that question, Yanis, also was: do we accept run times like 800 milliseconds, with the argument that, hey, this is going to be cached later? That's obviously a trade-off, right? So if that was a feature that is still behind a feature flag and we control it better, maybe that's not a problem in general.
D: Yeah, it sounds different from the conversation — it sounded like we were exploring that feature anyway. So using that as an argument that we don't spend a lot of time on the design or on optimizations, just to find out if we actually want to use that, or want to do that feature at all — perhaps that's a good way of putting it behind the feature flag.
B: Is the query I'm looking at the one which is aggregating on the date calendar with generate_series? Another approach would be that — in some other projects we were using a generated dates table going back to 2015. So you have a generated dates table with an index on the day, and if the join is the problem there — the slowness — it could maybe decrease the 800-millisecond duration.
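The two variants being compared, roughly (table and column names are hypothetical):

```sql
-- Variant 1: build the calendar on the fly with generate_series.
SELECT d::date AS day, count(e.id)
FROM generate_series('2020-03-01'::date, '2020-03-31'::date,
                     interval '1 day') AS d
LEFT JOIN events e ON e.created_at::date = d::date
GROUP BY 1;

-- Variant 2: a pre-generated dates table with an index on the day,
-- so the join works against an indexed table instead of a function scan.
CREATE TABLE dates (day date PRIMARY KEY);
INSERT INTO dates
SELECT generate_series('2015-01-01'::date, '2030-12-31'::date,
                       interval '1 day');

SELECT d.day, count(e.id)
FROM dates d
LEFT JOIN events e ON e.created_at::date = d.day
WHERE d.day BETWEEN '2020-03-01' AND '2020-03-31'
GROUP BY 1;
```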
E: Yeah, that's a good idea. I tried a couple of iterations using a CTE directly and similar approaches, but the query cannot go below 600 milliseconds, to tell you the truth — so it's not like it can go way, way down, at least from what I can see. If you have ideas... But what I'm discussing in general is: let's say that we have this query — it's not like I want everyone to find the solution now — but it cannot go below 600–700 milliseconds. What are we doing?
B: Our structure.sql diffs are now suffering: they have redundant changes and are more verbose than, you know, schema.rb, and it's becoming harder to keep track of what changed and what didn't. I mean, everything is good — it's just that when you look at it, a create_table, for example, makes a lot of changes right now.
B: Totally. That's why, first of all, there is already work on fixing structure.sql — I was assigned to it. I also updated the review guidelines so that people re-run their down migrations again, just to see that the changes they make are cleaned back, and if they do it per table, to see that all the confusing structure.sql changes are disappearing cleanly in the down method — and that could give them a way to discover random changes.
D: Sometimes it's hard to see which changes actually relate to your migration and which ones don't, right, and then you have to pick the ones that relate to the migration. I thought we should be doing that in CI eventually — I linked an issue for that — because currently we don't have it in CI, and you would be able to add a migration, not pick the right changes to structure.sql, and just push that, and we would merge it because it only gets manual review. And the trouble from that...
D: ...is that when this gets merged, existing systems would deviate from installations that are new, right? The ones that bootstrap from structure.sql are going to have a different schema than the ones that execute the migrations. This is something we should prevent; perhaps we can do that and say...
B: Say you have four fields and you want to add them without changing another field with a comma — so you want to do it in the right place, so that in structure.sql you see only your changes: you have four columns changed, and you didn't change another line by adding a comma. That's logical, okay.
C: Okay, I think I'm better now — is that correct? For some reason my volume was very low. Okay, sorry. Yeah, so I was working on a merge request where exactly the schema is checked out at the point where your branch started from; then it resets the test database to that version of the schema, and then it runs your migrations again. So only the changes you're doing in the migrations are applied to the structure.sql, and then you can do the diff from that.
C: Yeah, I had some trouble because CI is running against the merge result, so we need to make sure — yeah, it's weird, because the structure.sql you're committing is not correct with respect to what the merge result will be. So I'm hoping I'm not running into a situation where the order is different for some reason. Maybe I need to toy with it a little bit more before we make it fail hard, or at least have an easy shutdown switch.
D: I wanted to just drop a link to a conversation that's currently happening for GitLab.com, which may be interesting to follow. Currently we have pretty much everything behind PgBouncer — that's our load-balancing solution — and there are a few exceptions from that, namely the deploy host and a few other things, like some monitoring, and also when you open a console.
D: You don't go through PgBouncer; you go directly to the Postgres instance. And that's starting to become an issue, because we run out of connections — there's a connection limit — and that can actually affect our HA solution, where Patroni, the HA manager, doesn't actually have enough connections anymore and basically panics and triggers failovers. And so we...
G: There are many additional issues there. Sorry for interrupting, but connections are the primary source of problems — connection spikes, of course. If we solve it, it will be better, but still: if you can find this and solve this problem, it will be very helpful. That's actually why I joined — I saw it in the document and decided to join. I have a lot [to say about it].
D: Good timing. So one of the problems that I see — but this is only related to the deploy host — is that we can't run migrations through PgBouncer unless we find a solution to that problem. The issue is that we run PgBouncer in transaction pooling mode, and that means whenever you start a transaction, you stick to the same session — and settings and all that — until you commit, and that allows you to...
G: I commented there as well. When we connect to Postgres using psql, there is a way to — for example, it's also related to other options like application_name. There is a problem where we see in the log that the application name is not known at connection time. Why? Because it's set later using the SET command, right? So first the client connects with an unknown name, and then it sets application_name to something. This is also a problem; it's related.
G: It can also be solved if we connect using psql: there is the PGOPTIONS environment variable. We can use PGOPTIONS and set application_name, statement_timeout, lock_timeout — anything — and it will already be initialized at the libpq level, at the driver level. It will be applied immediately, before the connection is established, and this is very useful: it will be applied to the session.
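For example (the setting names are standard Postgres parameters; the values here are made up):

```sql
-- Shell side: libpq reads PGOPTIONS and sends the settings as startup
-- parameters, so they are in effect before the first query runs:
--
--   PGOPTIONS="-c application_name=migration -c lock_timeout=100ms" psql ...

-- The equivalent done after connecting — which is exactly what leaves
-- the application name unknown at connection time in the server log:
SET application_name = 'migration';
SET lock_timeout = '100ms';
SET statement_timeout = '15s';
```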
G: If you use transaction mode in your workflow with PgBouncer, it should also be applied — but maybe it will not work right, since you stick to the bouncer. So okay, here I should check with PgBouncer before advising to move this solution to Ruby. Sorry — so I'm not sure about PgBouncer. It definitely works when we make a direct connection, but PgBouncer in the middle may lose some of this, right.
G: Because of different connections — did you try PGOPTIONS with -c? I don't remember exactly: PGOPTIONS, and then a string inside it, lowercase, like "-c lock_timeout=..." or statement_timeout or something — it works. I would check PgBouncer as a follow-up. If it works for psql, the next question will be how we can use the same approach for Ruby connections.
G: We always use transactions — if it's... okay, I see what you're saying: if it's like separate statements, okay, right. But we still can do something — okay, I see what you're saying. We can wrap it into separate additional transactions: even if it's a single command, we can hack with the database migration helpers and wrap every step with SET statements.
G: But as I've said, it's not pleasant — less fun — but still possible. Or we can continue working directly [not through PgBouncer], but we will limit connections. We just discussed that on GitLab.com we are limiting the gitlab user, which is the most common for all connections. We will set the limit at the user level, so the ceiling will be lower.
G: Sorry — there is one more option I forgot about. One more option is to use a different user, okay? If you can apply a limit for a different user — we would need to take care of ownership of objects. For example, if you create a table from a different user, maybe we will need to pass ownership in the end. Maybe. But it's possible; we can limit this user.
G: Right — what happened on March 30? I spent many hours investigating it. So what happened: actually a big connection spike. The gitlab database user reached only 213, maybe, but other users consumed a lot, and first Patroni couldn't establish a connection — and this is already a problem, but we have 90 seconds to resolve this problem.
G: The primary was slow in general in master mode; other nodes are not affected in this case. And I checked limits also — I analyzed... it's a separate topic: we don't have good monitoring for per-user analysis, historical data, so I needed to set up my own poor man's scripts to analyze. I analyzed the replicas as well — they are totally fine and they need far fewer connections. Okay.
G: So wait — any guesses what caused these spikes, and how to solve this and get rid of these spikes? Because limiting is a solution, but not a good solution, right? The gitlab user is shared, so these spikes will affect user experience — I mean normal human users on GitLab.com, right? So, any guesses how to solve this problem of spikes?
G: Patroni is just one connection and we're not going to limit it — we didn't see problems with it. So we are limiting only the application user, the gitlab user, to 270 out of 300; we are limiting the gitlab-superuser, which usually has just one connection — I don't know where from — we will put 10; and we are limiting the gitlab-psql user to 20. 20 should be enough: mostly humans use it, when doing the gitlab-psql command. So 20 is enough, as I've already said and explained in the issue.
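Per-user connection limits like these are plain role attributes; a sketch using the numbers and role names as heard in the call (the exact role names are assumptions):

```sql
-- Application user: 270 of the 300 total connections.
ALTER ROLE gitlab CONNECTION LIMIT 270;

-- Superuser role that normally holds a single connection.
ALTER ROLE "gitlab-superuser" CONNECTION LIMIT 10;

-- Human access via the gitlab-psql command.
ALTER ROLE "gitlab-psql" CONNECTION LIMIT 20;

-- Current usage per user can be checked with:
SELECT usename, count(*)
FROM pg_stat_activity
GROUP BY 1
ORDER BY 2 DESC;
```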
G: Sure — yeah, I was doing database reviews many months ago, so maybe we participated in some calls before, but it was long ago — not with all of you, of course. I'm Nik, and I have 15 years of Postgres experience, 20 years of databases in general. So quite a lot.
G: I came from Russia and moved to California five years ago. I was actually briefly an employee of GitLab for two months in 2018 and then moved to contract work, because I love to build startups. I created three social networks in Russia and sold my third one just last autumn — it was fun — and right now I'm building Postgres.ai. It's a very small company; we're building tools which, as I see it, should help to deal with Postgres better.
G: The main idea is: let's verify everything we do on large databases, to have a better understanding before we move to deployment. This is a very basic idea. I practiced it a lot, and I felt how things improved as cloud adoption rose — for example, on RDS it's very easy to spin off a clone and promote it. It will be slow in the very beginning — there are solutions to that, but it's complicated — still, it's already very good. And, for example, when I did partitioning four years ago, on 9.4 I think, with triggers and so on...
G: I did a lot of iterations, and I enjoyed working with clouds, and that's when I thought about what should be automated: we need to have faster iterations, verifying all our steps on full-sized data sets — if you have access, of course. And then I started to work on it. Postgres.ai is a set of automation tools — actually, right now it's already a platform to automate this, like thin provisioning. When you have thin clones, it doesn't suit all tasks.
G: I'm quite excited to see what the future brings us. With GitLab I have a contract and help from time to time, mostly on the infrastructure side, but I'm glad to see that the database group has so many teammates already, because I remember how hard it was to react to a lot of code review requests — database review requests; it was very hard to keep up. So I'm happy to see that so many of you are doing database-related reviews, and of course, if you're using the #database-lab channel, I'm always happy to help.
G: If something is not right here or there — but by the way, do you know that you can do hypothetical indexes there? I promised to describe it in the documentation, and I've described some of it: you can check the hypo command. If you would otherwise need to wait many hours, maybe it's better to do a hypothetical index, check it, and only then build the huge index — because maybe you will not get a good plan. So don't build the big index right away; start with hypo, without actually building anything.
G: A hypothetical index uses the same Postgres statistics — the same ones, full-size as they would be on production. Compared to a real index you will only lack the execution and buffers numbers — the real numbers — but you will see the plan structure and the planned cost, right? Theoretically you can do it on your small instance, on your laptop or something, but you would need — some Japanese developer created an extension to export and import statistics. This is interesting, but it's not a popular approach.
G: So in that case you could import production statistics to your small database without real data, and then hypothetical indexes would work in your case as well. But with Joe you have the whole database, and the hypo command just saves you time — I call it a sneak peek of the plan. So you can check a couple of options for the index, and if you see the plan is good, the cost is fine, the structure is better — then let's build it and verify it completely.
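With the HypoPG extension (which backs this kind of hypothetical-index check), the workflow looks like this; the index definition and query are examples, not ones from the call:

```sql
CREATE EXTENSION IF NOT EXISTS hypopg;

-- Create the index hypothetically: instant, no table scan, no disk space.
SELECT * FROM hypopg_create_index(
  'CREATE INDEX ON issues (project_id, created_at)');

-- The planner now considers it: inspect the plan shape and estimated cost.
EXPLAIN SELECT *
FROM issues
WHERE project_id = 1
ORDER BY created_at DESC
LIMIT 20;

-- Only if the plan looks good, build the real index (the slow part):
CREATE INDEX CONCURRENTLY index_issues_on_project_and_created_at
  ON issues (project_id, created_at);
```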
G: Sure. By the way, I think some of you experienced issues with reset — I guess it has been upgraded and bug fixes applied. I still don't understand under which circumstances you lose the Postgres instance and your session is broken, but at least in that case reset will now work — because recently it didn't. So if you're in a bad situation, you issue reset, and even if the Postgres clone is lost, it will work, and you can at least start from scratch instead of waiting two hours for expiration.
G: I saw two people hit it multiple times, and it seems that some behavior led to it — I need to investigate which one. Of course it's a bug, definitely, but right now you can at least do reset, without pinging me — because even if you ping me, I cannot restart Joe, since it would lose all sessions; it cannot maintain sessions yet. Oh, and Checkup is broken — that's another story.
G: One more highlight: I think in a couple of days I will also upgrade Joe, and you will have an activity command and a terminate command, in case you don't like some previous command — you will be able to see the activity content, actually, with a comment, and then terminate something that went wrong.
G: There are options. We can open an issue on GitLab — anywhere you want, like the infrastructure repository or anywhere. You can also talk to me directly, or in the channel, or you can go to — we have a separate Slack, actually; it's for discussing Database Lab and Joe specifically, like a database community — and I've put a link to it in the document.
G: So there you can meet other people who are using these tools as well and discuss it publicly. Also — we added this — you can put feedback in our repository. We develop everything on GitLab, of course, because we like it much better — and also because it's using Postgres, not MySQL. So we have a repository and you can create an issue there.
G: Actually, we've got a couple of feature requests in the past from GitLab people; they didn't get finalized to be included yet, but maybe they will. So you can open an issue there as well — and actually, when you start a session with Joe, there is a link there; you can just click the link in the welcome words. So, okay, I'm not going to consume the rest of the time for this call. Thank you so much.