From YouTube: 2021 06 16 APAC Sharding Group Sync
B
Hi everyone. I think Fabian said he was running late.
B
Shall we start without him? So I'm just wondering how we start on this decomposition for one table.
C
So, like, we have a set of failures, and we just keep merging merge request after merge request until we have no more joins that violate the rule that you can't join between the CI tables and the main database. The first series of merge requests are basically just changing a lot of queries to not use joins anymore.
A
So we need to retain the current data and still use it for as long as it is needed. So now the question is how we can make a distinction between something that should be decomposed and something that should stay. So I had this proposal about create schema: we create schema ci, we alter the main database and move the ci_instance_variables table into this schema, to indicate that this is exactly the table that got migrated, and now we just basically do the next step.
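As a rough sketch, the schema move being proposed might look like the following SQL; the statements are illustrative, not the actual migration:

```sql
-- Hedged sketch of the proposal: create a dedicated "ci" schema and move
-- the already-migrated table into it, marking it as decomposed.
CREATE SCHEMA ci;
ALTER TABLE public.ci_instance_variables SET SCHEMA ci;
-- From now on the table only resolves as ci.ci_instance_variables,
-- unless "ci" is added to the connection's search_path.
```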
A
We create a new ci structure.sql with only these ci_instance_variables, and also we have the created schema ci. The tricky part right now is that we probably need db/ci/migrate to execute in both contexts: the main database and the decomposed database.
A
Pretty much always. And now, in the GDK, whether we use many databases or not is, I think, a matter of the database.yml; but whether the application uses many databases or not to store data is more a matter of, probably, an environment variable, or maybe a feature flag, or anything else that may be useful for us.
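A hypothetical database.yml shape for the GDK setup being described; the connection names and database names are assumptions, not the final layout:

```yaml
# Sketch only: two connections in config/database.yml. Whether the
# application actually writes to the second one would be toggled
# separately (environment variable or feature flag, as discussed).
development:
  main:
    adapter: postgresql
    database: gitlabhq_development
  ci:
    adapter: postgresql
    database: gitlabhq_development_ci
```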
A
If you don't prefix your table name with `ci.`, and this schema is not in the search path for the given database connection, the table will not automatically be resolved, because it's not in the public schema. So, technically, for the development environment...
A
It's pretty great, because we can actually keep the decomposed tables in the main database, but ensure that in development and testing there are no cross joins performed at the SQL level, because of the search path, because our ci schema is not in the search path. So your CI queries will not be resolved into the tables in the ci schema, because ci is not added to the search path; only the public schema is added to the search path by default. So it has...
B
Yeah, I think we can set multiple migration paths. So even if we have, like, a CI migrations folder, we can set it to run in the main DB and the CI DB. So it sounds like Rails already gives us that.
A
Migration paths, exactly what you're saying. I think we can say db:migrate for the main database and db:ci:migrate for the CI database, maybe post-migrate or whatever else we need, and it should just work.
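As a toy illustration of that routing, in plain Ruby; the folder names are hypothetical:

```ruby
# Toy model of Rails' per-database migrations_paths: a folder listed for
# several connections has its migrations run against each of them.
MIGRATIONS_PATHS = {
  "main" => ["db/migrate", "db/post_migrate", "db/ci_migrate"],
  "ci"   => ["db/ci_migrate"],
}.freeze

# Which databases would run a migration living in the given folder?
def databases_running(migration_dir)
  MIGRATIONS_PATHS.select { |_db, dirs| dirs.include?(migration_dir) }.keys
end
```

A migration dropped into `db/ci_migrate` would then run on both `main` and `ci`, matching the db:migrate / db:ci:migrate split described above.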
A
So then the question is how we handle things like background migrations that may execute depending on the context. I guess this is the next step to figure out, because at some point migrations will be executed in the context of very specific databases, and these migrations will not be able to cross-join databases.
D
The only thing that worries me about schemas is, if we ever release them to self-managed instances, anyone can mess with their path and, you know, include that schema in their path, and we have seen crazy stuff happening on self-managed get reported to us. But we can be explicit about that. So we have...
D
We
are
even
at
the
moment,
discussing
being
explicit
on
which
schemas
are
in
the
path.
So
this
is
this.
Can
we
can
work
with
that?
We
can
have
a
very
explicit
directive
that
we
say
those
are
the
the
paths
and
that's
it
and
we
can
even
have
it
so
yeah.
This
works
so.
D
And at the moment we depend on, and this has caused problems (not a lot, but it has caused problems) with some self-managed instances: we depend on the user defined on the postgres side having the correct setup, with only the user and the public schema in its path.
D
But if someone, the administrator of the postgres, goes and changes the path for the user... for example, now we are using the gitlab user for connecting to the database; if they change it, the behavior of GitLab changes. So this is a problem either way. So I think that going the other way around and setting the path explicitly...
D
It's even...
A
Working, yes. I would aim, at some point, at setting a path on the connection as soon as you open the connection, because there are two benefits in this model: we want a different path to be configured for production, more permissive, including in the search path all possible schemas that we want to traverse; but in development we want the decomposed setup to forbid joining. But the question is: is there a usage pattern...
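The Rails postgresql adapter does accept a per-connection `schema_search_path` in database.yml, so the split being described (permissive in production, restrictive in development) could be sketched like this; the values are illustrative:

```yaml
production:
  main:
    adapter: postgresql
    schema_search_path: "public,ci"   # permissive: traverse both schemas
development:
  main:
    adapter: postgresql
    schema_search_path: "public"      # ci tables not resolvable unqualified
```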
D
Other than the public? Yeah, we have seen people doing weird stuff, like moving a GitLab from public to a different name. For example, I don't know, they had a GitLab in public and then they backed it up into gitlab_backup, and then, for whatever reason, they wanted to switch to gitlab_backup and they switched the path to use that. And a problem with the schemas is that you don't know about it. So we have...
D
...half the GitLab tables in another schema. At some point we added some migration helpers that were explicit about public, and migrations were breaking for them, and after going through the problems with them, we realized that they had a split schema (not split-brain, but a split schema): half the tables in one schema, half the tables in another schema. And it started because they moved something, I assume.
D
If we're not explicit, so if we just say `SELECT * FROM ci_instance_variables`, postgres will pick the ci_instance_variables that is earlier in the path, and that's it, because it goes through all the paths; but we don't know what happens there, and this is prone to errors. So I think that we have to fix that either way.
A
We need to figure that out, but we're not going to figure it out easily, because we are not fetching which schemas are available and how many of them there are, and we are not publishing that into our usage ping.
D
So there is another layer of problems there, and there are some self-managed instances that don't want to give us access to their full database. And this is something where, now that we're moving ahead with sharding, I think we have to be more explicit that we own the database, and that we can add databases and schemas.
D
And the problem there is that we are not very explicit on the fact that, no, we own the database; we have to own the database. We want to be able to do what: to add extensions, to add schemas, and with sharding we will also want to add databases. So we cannot let this go any more; we will have to find a solution.
D
Yeah, we can do that; we can create a migration. Most probably the same people will come back. We realized all those problems when we started dynamically adding partitions, etc.; that's the point where we have seen all those problems. There are hundreds of thousands of GitLab instances. It's not like we have thousands of instances that have this problem, but we are in a lucky position that we have so many instances out there that some of them do things out of the ordinary.
D
I personally think that if we do sharding, especially, we will at some point do rebalancing, we will do dynamic things. Not today, not in a year, but sooner or later we need to have access and do whatever we want, for rebalancing, for whatever.
D
Yeah, I agree. So, back to what we were discussing in the beginning: I think that having the migrations target specific schemas or specific databases is the easy part. One thing that we will have to do is to also switch the way we set up the structure SQL, because for new self-managed instances, or locally, we use one file: structure.sql.
D
So if we want, and we will want, to use multiple schemas and then multiple databases, we also have to be explicit about what we are going to do with structure.sql and setting up a new database.
D
Yeah, we have changed it on purpose. We were explicit about using public, and at some point during 13 (I don't remember which minor version of 13) we went back to not being explicit, which gave us some benefits; but now this is a problem for what we are discussing. So we have to go back to being explicit, and not only being explicit about the schema, but being explicit per table, per object, about the schema.
A
Yes, like multiple structures. Well, and actually this works pretty well. I noticed one more aspect; sorry that I'm hijacking this call so heavily.
A
I noticed one extra aspect. I know that structure.sql is not very good for, like, human eyes, but all the partitions are replicated so many times in the structure, where they could actually be written in a much more efficient form that would reduce the main structure.
A
Of course, that's true. For example, we have the product analytics experiments table with 124 partitions; actually that's like 95 percent of the structure.sql, yeah.
D
That annoys me so much, and it's easy: we can move all the partitions outside. And this is only for the static partitions, because the dynamic partitions we create dynamically, using a worker; but for the static partitions, at the moment, we have everything in structure.sql.
D
At some point, sooner or later, we should move it out and have a file that is not meant to be human-readable, and use that. But structure.sql is also very useful as the single source of truth of what we have in the database, so a lot of people visit it, including me.
A
So I'm kind of thinking, exactly like how it looks now: we just create a ci structure.sql, initially manually, with the table that we want to move, and we're just going to create a migration whenever we add one.
A
If you're going to want to access ci_instance_variables, you're going to have to add the migration in the ci migrate folder, and that will be executed both on the main and on the CI database concurrently. And our CI job that checks the DB structures is right now allowed to fail; we should aim at making this a hard failure, because this is the job that actually also checks the consistency of the generated structure.sql, whether it reflects exactly the migrations executed.
A
Can we discover a case where someone puts a ci_instance_variables migration into the main database but doesn't put it into ci migrate? Because then we would end up with the schema being different between main and CI, because the migration was put in the wrong place.
A
Yes, but as soon as we migrate ci_instance_variables, we're going to move the current structure; it's effectively a snapshot now. For this table, you need to add new migrations into ci migrate for them to be executed on both databases. We're not going to create, or maybe we're just going to create, the CI migration in two places, but that seems pretty... not really great.
B
Yes. So, to answer the original question: I think we can detect that someone has changed the CI table. We can do a comparison; CI has access to both databases.
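A hedged pure-Ruby sketch of such a comparison; the dump snippets and the regex are illustrative, and a real check would diff full pg_dump output:

```ruby
# Compare CI table definitions across two structure dumps, the way a CI
# job with access to both databases could flag a migration that was added
# to only one migrations folder.
def ci_table_defs(structure_sql)
  structure_sql.scan(/CREATE TABLE (ci_\w+) \(([^)]*)\)/).to_h
end

main_dump = "CREATE TABLE projects (id bigint);\n" \
            "CREATE TABLE ci_instance_variables (id bigint);"
ci_dump   = "CREATE TABLE ci_instance_variables (id bigint, key text);"

# Definitions present in the main dump that don't match the CI dump:
drift = ci_table_defs(main_dump).to_a - ci_table_defs(ci_dump).to_a
```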
A
If you would have a schema, and we assume that, let's say, a single structure.sql manages a single schema, like public and ci: can we, I don't know... is there some other way that we could, let's say, accommodate the ci structure.sql into the main database and perform these migrations?
A
I'm
not
sure
like.
If
this
is
I'm
thinking
like
it
doesn't
make
sense
because,
like
like,
I
mean
schema
like
defines
like
a
logical
partitioning
of
the
database
right,
so
maybe
maybe
structuresql
doesn't
describe
like
the
database.
Configuration
database
schema,
so
the
database
structure
as
a
whole,
but
rather
describes
a
schema
only.
But
I
don't
think
that.
D
I was asking because I was under the impression, by following the discussion behind the comments by Dylan and Adam, that we had a solution that pretty much works, and that at some point we have an idea of how to be able to drop the tables.
C
Yeah, I don't think we have to drop the tables. I think there are competing concerns here. One is about the application code being coherent, which is to say this structure.sql will have to have the CI tables for as long as we allow any customers to have the CI tables remain where they are, or so long as we allow a grace period before customers have to migrate.
C
I
don't
I
don't
know
if
we
can
actually
delete
the
tables
in
our
production
instance
without
our
production
instance
being
in
a
kind
of
weird
state
where
it
doesn't
match
the
structure.sql
file,
and
that
may
not
be
a
good
idea,
so
we'd,
probably
truncate
the
tables
or,
if
there's
something
more
efficient,
we
could
do
maybe
delete
the
tables
and
recreate
them.
If
that
was
more
efficient,
I
don't
know.
A
Yeah, so we cannot drop tables until all our customers migrate; that's really the clue. We can drop data on production fairly easily as soon as we validate that we switched over, and this can be, as you're saying, truncating or recreating the structure. Probably truncating is more reasonable to do, but that's really the thing.
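The two cleanup options under discussion, in illustrative SQL; neither is a committed plan:

```sql
-- Option discussed for production after the switchover is validated:
TRUNCATE TABLE ci_instance_variables;   -- data gone, table definition stays,
                                        -- so structure.sql still matches
-- Dropping is only possible once all customers have migrated, because the
-- shared structure.sql would otherwise still define the table:
-- DROP TABLE ci_instance_variables;
```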
C
Oh, foreign keys, maybe. Well, we would have removed any foreign keys into the tables that we truncate by that point in time. But I just thought that with truncate you had to clean up the dead tuples, and that'll all go on the WAL stream, and it'll all be these things that postgres has to clean up, as opposed to deleting a table, where postgres just has to unlink something. I might not be correct about all of that, but...
C
I feel like I understand the plan better now. I was a little bit confused, but I get now that the application code is going to start creating these new CI tables, and GitLab, the Rails application, will just know about the two different tables, but we will only ever choose which one to connect to based on the state of a migration.
A
No, you don't really need to do it like that. As Rails processes migrations, in the given context (like the connection executing add_column), it's going to define it, because you define a connection, and under a connection you define a database connection string, and under that you define the migrations that you want to run; and now you iterate over each of these connections, each of these database migrations.
A
So in the migration context you already have the database configured properly for the migration from the given path being executed.
A
So what's going to happen with Rails out of the box is: if you have, let's say, add_ci_column.rb, it's going to be executed twice, with the migration context pointing once to the main database and the second time to the other database. And as long as you use the ActiveRecord migration connection, it will work just out of the box. Now, if you use ActiveRecord::Base.connection, it will fail. So we may, to some extent, fix migration helpers to ensure that they use the correct connection from the migration context.
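A tiny Rails-free toy of that point (the names are invented): each run of the migration gets handed its own connection, so helpers must use it rather than a global one:

```ruby
# Toy model: the "migration context" hands each run its connection. The
# same migration applied under two connections touches two databases,
# which is what the real per-path migration runs do.
class ToyMigration
  def initialize(connection)
    @connection = connection   # per-context connection, not a global
  end

  def add_column(table, column)
    @connection << "ALTER TABLE #{table} ADD COLUMN #{column}"
  end
end

main_db, ci_db = [], []   # each array stands in for one database's DDL log
[main_db, ci_db].each do |conn|
  ToyMigration.new(conn).add_column("ci_instance_variables", "environment_scope")
end
```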
A
Not the legacy connection handler; something that we've been discussing with Thong for his work. I would assume that right now, in 70 or 80 percent of the cases, it will just work out of the box. There are more cases where we need to fix things, which is, like, the migration helpers: we need to make the migration helpers aware of the connection being used.
D
In general, we will have to fix data migrations to pick the correct table. And one minor thing there is that we don't use the application models in data migrations, because if code changes, we want the migrations to still be able to run; so at the moment we replicate and create a local model. So we will have to have a solution to easily create a local model for, for example, ci variables...
D
...that will also have the correct connection details in the data migration. But I think, and Adam can correct me on that, I think that this is something we can do.
B
I might have time to try to hack something together, but if someone else gets to it, I don't mind. Let's discuss the due date tomorrow when Craig is around; this will be a great question as well.