From YouTube: Database Office Hours 2020-02-13
D: So then, I think he is part of infrastructure. He asked about a specific feature called the GitLab exporter: is it in use or not? Because there are some failures and missing data on the charts, and I wasn't able to answer the question. Is this still needed, or can we maybe deprecate this feature?
C: I wasn't sure, but when I saw that, I did find a dashboard in Grafana where we use it for PgBouncers, and I recall having a discussion a few months back about those metrics, and that we should actually move them into GitLab, maybe out of the exporter. I'm not sure about the other queries in there; there are quite a few that are unrelated to PgBouncer.
C: And we did have a discussion about that a few months back, where I think it went in the direction that we should probably look into moving those metrics into the GitLab product, so that we can report those PgBouncer metrics from it rather than, you know, trying to run expensive database queries to figure out the status. We would basically push those metrics to Prometheus, but I'm not sure what the state of that is.
B: So the case was that for some projects we were missing entries in the services table, and the projects missing those rows were the ones connected to Kubernetes clusters. The way a project can be connected to a Kubernetes cluster is either through cluster_projects, which is, let's call it, a direct connection, so we have one intermediate table between the clusters table and the projects table; but projects can also be connected to a cluster via their group, and in that case we have another layer in between.
B: So we have namespace, then cluster_groups, then cluster. And there's also the case where a project can access an instance-level cluster, like a cluster shared by the whole GitLab instance. In that case there aren't any connections between those two entries in the database, so we cannot detect it by any query, except that this instance-level cluster exists at all, and that means that all projects within that GitLab instance have access to this cluster.
B: It takes complicated joins to be sure, especially for the group-shared clusters: we need to join projects with groups, with cluster_groups, with clusters and, in the end, with the cluster applications to detect whether the rows in services are missing or not. So that last part was challenging, and there was one attempt to write this migration, which unfortunately was reverted because it failed on staging.
B: The reason this migration failed and was reverted was that on production GitLab.com there is no instance-level cluster, as I mentioned, which means that only a small subset of projects has access to the cluster; but on staging GitLab.com there is an instance-level cluster, and that caused all entries in the projects table to be affected. So we have two scenarios here: one with the group-shared cluster, and that case would most likely be very sparse.
B: We had a chat with Andreas right before this call, and he suggested an approach which looks really promising, because it also worked over the projects table, and it was even more sparse: only three rows were affected, if I remember right, and it went through. I don't know if Andreas could present the solution you used?
C: The two patterns that we have today for background migrations are that you can either schedule jobs that basically work on a range of records, so you have, say, a primary key range and then one job tackles one batch; or the alternative is to schedule jobs where you are rather explicit about the records that you want to tackle. If I remember correctly, the first attempt did the latter: it would schedule jobs for a number of individual project IDs, right? Yes.
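The two scheduling patterns can be sketched as follows. The function and job names here are illustrative, not the actual GitLab helpers.

```python
def schedule_by_range(min_id, max_id, batch_size):
    """Pattern 1: each job gets a primary-key range and scans it itself."""
    jobs = []
    start = min_id
    while start <= max_id:
        end = min(start + batch_size - 1, max_id)
        jobs.append(("BackfillJob", start, end))
        start = end + 1
    return jobs

def schedule_explicit(ids, batch_size):
    """Pattern 2: each job gets the exact record IDs it should fix."""
    return [("BackfillJob", ids[i:i + batch_size])
            for i in range(0, len(ids), batch_size)]

print(schedule_by_range(1, 10, 4))
# With only three affected projects, the explicit variant schedules one tiny job:
print(schedule_explicit([42, 1001, 73552], 100))
```

The range pattern scales to whole tables without enumerating IDs up front; the explicit pattern wins when the affected set is tiny and already known.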
C: Right, okay. So it was similar in the sense that there were only three projects in total that were affected and that we wanted to fix, but it basically meant running a query that would yield those three project IDs.
C: The timeout is still the same, so in background migrations you are also typically limited to 15 seconds per statement at the moment. But the benefit that you get, or the improvement that you have, is that you're looking at a specific range of projects, and in a lot of cases this already helps, because you can use the primary key index, and it's a narrow search compared to, you know, consuming the whole table.
B: My other idea was to have an intermediate or temporary table. Let me share the screen again, so we have the diagram. My idea was to skip checking all the projects in the post-deployment migration and only check this part: pick up only the affected clusters, without going through the projects, because the number of clusters is much, much smaller than the number of projects; select only those clusters into this temporary table, and then iterate over that table and join the projects to the records from it.
B
And
the
idea
was
that
would
somehow
speed
the
task
between
the
two
queries
so
one
to
create
this
table,
and
then
we
also
could
use
this
background
migration
pattern
with
the
ID
range,
because
it
wasn't
very
suitable
to
this
case
when
we
tried
to
filter
all
the
products
with
the
one
query,
but
I
think
really,
the
your
approach
is
more
straightforward
and
we
do
not
create.
We
don't
need
to
create
another
temporary
objects
within
the
database
which
reduce
the
cost
of
this
and
work
needed
to
complete
it.
So
I
think
your
approach
is
better
mm-hm.
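The temporary-table idea discussed above can be sketched in two steps. Again the schema and names are simplified assumptions, not the real GitLab tables; the point is only the shape: materialize the small set of affected clusters once, then join projects against it.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE clusters (id INTEGER PRIMARY KEY, managed INTEGER);
CREATE TABLE projects (id INTEGER PRIMARY KEY, cluster_id INTEGER);
INSERT INTO clusters VALUES (1, 1), (2, 0), (3, 1);
INSERT INTO projects VALUES (10, 1), (11, 2), (12, 3), (13, 3);
""")

# Step 1: one query to materialize only the affected clusters
# (here "managed = 1" stands in for whatever marks a cluster as affected).
db.execute("""
    CREATE TEMP TABLE affected_clusters AS
    SELECT id FROM clusters WHERE managed = 1
""")

# Step 2: iterate over the small temp table and join projects against it.
rows = db.execute("""
    SELECT p.id FROM projects p
    JOIN affected_clusters ac ON ac.id = p.cluster_id
    ORDER BY p.id
""").fetchall()
print(rows)
```

Because the clusters table is tiny compared to projects, step 1 is cheap, and step 2 becomes an indexed join against a small driving set instead of a scan over every project.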
B: The idea was to create a table for, like, one release and then drop it. I considered a materialized view, but I'm not sure about self-hosted instances and which version of PostgreSQL they are using; as far as I know, materialized views exist from 9-point-something up, so the older versions don't have this feature, yeah.
C: Okay, thanks. Has anybody else thoughts on this?
C: Cool, so the next topic was a question, also about a background migration, where we could basically use a specific index that helps this migration. It's not used otherwise, but it is very targeted for what we're doing here in the migration. I think that was the main question: whether that is something we would want to do, or whether there are any downsides to that, because we have to maintain the index until we can remove it, which is likely the next milestone.
C
So
we
would
be
running
a
migration
and
then
we
have
the
index
lang
it
lying
around
without
being
being
used,
and
we
have
to
maintain
that
I
think
that's
a
sort
of
the
dropping
from
it,
but
I
don't
see
any
reason
why
we
wouldn't
want
to
do
that.
If
that
really
helps
the
micro
migration
I
think
it's
a
really
good
way
of
didn't.
C: What I wasn't sure about for larger tables is: we would add the column, we would have a background migration batch-updating or, you know, incrementing values in there, and then ultimately we want to add a unique index. If there is no incoming traffic at that time, that's straightforward, because we run the job once and then you can be sure that there are unique values. But what if there is traffic coming in at the same time?
C: At some point you have to have a point in time where you basically stop that traffic and make sure that the table is totally prefilled, that all those records have their value, and then make sure that the sequence goes on top of that. The sequence has state as well, right? It increments values and it has state, and I think at some point in time you have to say: I'm now reconfiguring the sequence to start from a higher point.
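Reconfiguring the sequence after a backfill could look like the statement built below. The `setval()` syntax is real Postgres, but the table, column, and sequence names are placeholders; this sketch only assembles the SQL string rather than running it against a database.

```python
def bump_sequence_sql(sequence, table, column):
    # setval() moves the sequence's state past any value written by the
    # backfill, so newly inserted rows cannot collide with prefilled ones.
    # The third argument (false) means the next nextval() returns this value.
    return (
        f"SELECT setval('{sequence}', "
        f"(SELECT COALESCE(MAX({column}), 0) + 1 FROM {table}), false);"
    )

print(bump_sequence_sql("my_table_new_id_seq", "my_table", "new_id"))
```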
D: Well, that should be straightforward in Postgres. If you have a unique index, then you can actually add the primary key constraint to the table by specifying the index, and in that case I wouldn't expect it to recheck the table from scratch to ensure uniqueness; it will just use the unique index to tell, okay, this is unique, and simply add the primary key constraint, yeah.
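The DDL shape for attaching a primary key to an existing unique index is shown below. The `ADD CONSTRAINT ... PRIMARY KEY USING INDEX` clause is real Postgres syntax (and avoids a full table re-check, as D says); the table and index names are placeholders, and this sketch only assembles the statement.

```python
def add_pk_using_index_sql(table, constraint, index):
    # Postgres promotes the existing unique index to back the primary key,
    # instead of building a new index and re-validating uniqueness.
    return (
        f"ALTER TABLE {table} "
        f"ADD CONSTRAINT {constraint} PRIMARY KEY USING INDEX {index};"
    )

print(add_pk_using_index_sql("my_table", "my_table_pkey", "my_table_unique_idx"))
```

One caveat worth knowing: the columns still must be NOT NULL, and Postgres may need a brief lock to validate that, so the unique index does the heavy lifting but not quite all of it.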
E: I had used UUIDs in Postgres previously, and we always used v4, which is the random one. There is an extension that supports generating them, and Postgres also has a specific data type for UUID, which is like a packed representation, so it's not as large as a text value holding the string representation.
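The size difference E mentions is easy to see with Python's standard `uuid` module: a UUID is 16 raw bytes (which is what Postgres's `uuid` type stores), while the hyphenated string form is 36 characters.

```python
import uuid

u = uuid.uuid4()     # random (version 4) UUID, as discussed above
print(len(u.bytes))  # 16 bytes in the packed representation
print(len(str(u)))   # 36 characters in the hyphenated text form
```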
D: I can see a problem with UUIDs and our utilities: we have lots of functions for iteration, because we usually rely on the ID, like this each_batch thing, and also for sorting we often don't use the timestamps to sort by created_at, we just sort by ID. I mean, with v4, I don't think it has the capability to actually be sortable in that way, I don't know.
C: That's a good point. There is still an integer representation of the IDs, so you can sort them, but it's not that they have a relationship with when the record was created, like the ID sequence has, right? There's a strong relationship between the ID sequence and the created_at time.
A: A quick one that we touched on at the end last time: a customer tripped over the problem that AWS doesn't run ANALYZE automatically when you do an upgrade on RDS. So, thanks to Paulo in the distribution team, we finished piecing together how it works in Omnibus, so we run it at the end of an upgrade.
C: Cool, thanks for the update from last week. The last point on the agenda, just out of that, I found really interesting: there is an interesting discussion going on, started yesterday, about basically the frequency of database changes and how we test database changes, and in general it's an interesting discussion. I saw you added three points there, yeah?
D: Yeah, I don't really want to go into the details, but you know, our problem is that when we look at database migrations, we kind of try to think about past historical scenarios, how problems happened historically, and try to say: this happened in the past, so don't do this, find another way to do it, for example with database migrations on high-traffic tables. And we basically don't run the migrations on a production-size database before actually hitting canary, as far as I know.
D: And yeah, that would be a nice step forward, but I think that's not enough. We should also find a way to, you know, simulate a real-life scenario, because when the migration runs in production, it's not just the migration: we also get a lot of traffic from the application and other sources, really special-looking scenarios that we might not be able to detect. And it's often that the migration succeeds, but actually the background migrations are the issue: say they are too frequent, or they are just basically processing too much data and might affect the performance of the production system. So it's really difficult to have something that developers can actually use, without also exposing customer data, that kind of mimics the real-world scenario. And yes, it's a big challenge, and I'm curious how we will solve this. I also totally understand the idea to limit the changes.
C: Yeah, I really agree here: you can have two changes and they can also blow up on production, so I think it's not really about the number of changes that you're making. I wanted to add that I think we also don't have a canary environment for staging at the moment, so basically the deploy and how we run the migrations on staging is a bit different than in production.
D: So the idea was to test it not directly on top of the database, but basically by having clients that are doing various web requests to the application, and that basically generates a load similar to what the production system is actually getting. That was the idea, but, you know, to do it properly, it should have a similar configuration, a similar setup to the production system, and we would use a load generator and the usage patterns of customers.
D: It's basically a client just sending requests to the server, doing something, measuring the response times and collecting stats, and of course monitoring on top of the production-like system that we are currently testing. So we can actually compare the performance characteristics to actual real-world data, and that's how we could say: okay, the system we have here is really close to what we have on production.
D: So our idea was to test from the outside, so we can basically catch other issues, not just database performance issues; for example, a developer used the wrong lock level and after 10 minutes things fall apart under normal load. These kinds of infrastructure issues would be caught also. Oh sorry, go ahead.
G: Do I understand the discussion correctly that the original idea is, you know, to limit how many migrations we do and how we test them, whereas we discuss here how we test migrations better? So we don't agree that we should limit adding migrations, right? I mean, when I read the discussion, the comments usually focus on how to better test migrations in a production-like environment, but the original idea is to limit them and maybe have few of them, if I'm not mistaken.
C: No, I understand it in the same way. I'm just not sure what the benefit is from limiting the number of migrations, because, from the last incident, if we picked the one or two migrations that were causing it and we only had those, we would still have had the incident.
C: I think that is a concern that we should have: that we are sometimes adding migrations just to ship a feature, adding columns to tables just to ship the feature, but perhaps we're not having enough discussion about how it's going to look long term. What's it like in three months: are we going to have added, like, ten more columns to this table, or is it really that we should break apart the table already?
C: And that means a bit more work upfront, because you have to create relationships or a lot more tables; that's a bit more effort compared to just adding a column. But down the road we will have less expensive changes, because then we add those columns to an existing small table. And as far as I understand, there is also...
G: Part of the Rails magic is that it makes it very easy for any developer to create migrations and models, and that's also a problem for the database. Rails enables any developer to make migrations very easily, and that ends up also being a curse, in that we end up having a lot of, you know, differently designed tables, indexes, columns, logic.
F: Yeah, I think in part it's also about being mindful when it comes to migrations or modifying our schema, particularly on large tables. Like in the incident: it was just renaming a column on services, which is an old table, and it caused, like, a disaster. I think we just need to be more careful when reviewing them.
F: And I actually reviewed that MR and didn't notice; even on the incident call everyone was, like, scratching their heads, because the problem was very tricky. But, analyzing the problem: they wanted to change a column just because "instance" was a better name, and perhaps that makes sense in the backend code, but perhaps the column didn't necessarily need to be changed. Services is a huge table; renaming, perhaps, is not necessary.
C: We're also at the top of the hour; I just wanted to throw in another issue that came out of the discussion, which I was really surprised by, I've never heard of it before. That is that, you know, Postgres has a column limit, which is not really too surprising: you can only have, I think, 1600 columns in a table, and we shouldn't have tables that wide anyway, so that's not the problem. But what I found really surprising, what I didn't know, was that any columns that you drop also count towards that limit.
C: If you look, for example, at users, that has 49 columns in total, which is really wide, and ten of those have been dropped; and there are 34 dropped columns for application_settings. So it's not that bad, it's not that we're close to hitting the limit, but it's kind of interesting to see, and I think there's no easy fix we found for that.
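Dropped columns survive in Postgres's `pg_attribute` catalog with `attisdropped` set, which is why they still count towards the 1600-column limit. The snippet below builds the catalog query one could run to count them for a table; it's only assembled here, not executed against a live database.

```python
def dropped_columns_sql(table):
    # pg_attribute keeps one row per column, including dropped ones;
    # attisdropped marks the tombstones that still occupy a column slot.
    return (
        "SELECT count(*) FROM pg_attribute "
        f"WHERE attrelid = '{table}'::regclass AND attisdropped;"
    )

print(dropped_columns_sql("users"))
```

A `VACUUM FULL` or table rewrite is, roughly speaking, the only way to reclaim those slots, which matches the observation that there's no easy fix.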
G: Just a question: do you know the design of application_settings? Why was it decided to put every setting in one row, with each setting in its own column, rather than having a key-value application_settings table, where the keys are the setting names and there is a value column? Does anyone remember, does anyone know why one is better than the other, or something?