From YouTube: 2021 07 07 APAC Sharding Group Sync
B: I can't remember where I saw the migration plan now. I believe it's something that Dylan and Adam wrote, and it says that we're going to use streaming replication to migrate the data. I'm just wondering: what is that? How does that impact the tables? Do you know if there would be any requirements on the tables?

A: I just linked the plan. As far as I understand, what we will have is a full replication of the main GitLab database at some point, and after we switch over, the CI database tables — the CI models — will start using the new database server. After the migration we will start slowly cutting out the tables we don't need: we don't need a projects table or an issues table there, so that will be an extra step where we eliminate these unnecessary and unneeded tables.

C: The approach you described uses streaming replication to replicate all the data into this new database, and when you do the failover it will be a bit more traumatic, because you're pausing some components and redirecting them. Another approach would be to keep the streaming replication going, at some moment promote the secondary database for CI, and keep replicating there until you catch up with what is being inserted, using logical replication. Then the process can be smoother, or faster to be done. That is one of the things that I understood.

C: Streaming replication will replicate everything, but it only goes from one database to the other. The problem is that at the moment of the failover you have to stop all of your application, or part of it, and then execute the promotion on the other side. Because of how streaming replication works — you have a primary and you're sending the stream to the secondaries, but your secondary doesn't receive writes — you will have to stop for a minute, or for a second, and execute a promotion there.

C: Then you create the logical replication, and before having this dramatic pause of, let's say, a few seconds, what you do is let this sync up with logical replication. When it's already in sync — when you have all the transactions in both databases equally — you execute a failover of the endpoints to write to this database that was already promoted, and this could reduce the impact a bit.

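The promote-then-catch-up approach could be scripted. A minimal sketch of the PostgreSQL statements involved — the publication/subscription names, table list, and connection string here are hypothetical placeholders, not from the plan:

```python
# Sketch: generate the logical-replication statements used to let the
# promoted CI database catch up with writes still landing on the primary.
# Names and the table list are illustrative placeholders.

def catch_up_statements(tables, primary_dsn):
    """Return (on_primary, on_promoted) lists of SQL statements."""
    table_list = ", ".join(tables)
    on_primary = [
        # Publish only the CI tables; other tables are not streamed.
        f"CREATE PUBLICATION ci_catch_up FOR TABLE {table_list};",
    ]
    on_promoted = [
        # copy_data = false: streaming replication already copied the rows,
        # logical replication only needs the writes made after promotion.
        "CREATE SUBSCRIPTION ci_catch_up_sub "
        f"CONNECTION '{primary_dsn}' "
        "PUBLICATION ci_catch_up WITH (copy_data = false);",
    ]
    return on_primary, on_promoted

on_primary, on_promoted = catch_up_statements(
    ["ci_builds", "ci_pipelines"], "host=primary dbname=gitlabhq_production"
)
print(on_primary[0])
# -> CREATE PUBLICATION ci_catch_up FOR TABLE ci_builds, ci_pipelines;
```

`copy_data = false` matters here: the initial copy already happened over streaming replication, so the subscription only has to apply the delta.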
A: Thanks, yeah, just one thing. If I understand correctly: if we do this switchover, actually the whole database will be affected — all writes will be affected — because we have to wait until the replication for all the tables finishes, right?

C: We will have to stop all the — I think here we're trying to migrate CI to a new database, right? You'll have to stop all the traffic from CI. You will have errors during that window. You apply all the replication lag from the streaming — okay, you've applied everything you can — then you promote. This promotion takes only seconds.

C: Then you could restart and point to this new place. What I'm saying here is that we will have an impact of maybe under one minute — which is fine — but we could reduce this to a few seconds if we do it with logical replication. Or at least that's the theory behind it, yeah.

A: Yeah, I understand that, but what I didn't get is this: you say that we cut the CI tables, but actually the other parts of the application still produce data, right? There will be inserts and updates, and those also need to be synchronized, or replicated, to the new server. So by cutting the CI tables you are not going to stop that activity, right?

C: When you stop the stream, you stop all of it — you are right about that. And then what I'm proposing — the proposal — is that with logical replication we only replicate the tables that we need for CI. For example, the issues table, the users table and all of those we don't touch; we don't send them in the new stream. But yes, when you use streaming replication, you replicate everything, that's right.

A: Yeah, okay, I got it. And I was looking into how we can reduce the user impact. So ideally, if we can get this done in a very short period of time — a few seconds — then we can put something in place at the application or at the load balancer so that failed requests can be retried for this specific period.

A: So let's say we can do this in under 10 seconds. I would say that if you get some sort of error from PgBouncer saying the database is not available, we can sleep a little at the application level or at the load balancer level and then retry the request. But to actually start working on this, I would say we need some benchmarking.

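The retry idea can be sketched independently of the stack — a hypothetical wrapper that sleeps and retries when the database briefly reports itself unavailable during the cutover window. The error type, delay, and attempt count are illustrative assumptions:

```python
import time

class DatabaseUnavailable(Exception):
    """Stand-in for the 'database not available' error PgBouncer surfaces."""

def with_cutover_retry(operation, attempts=5, delay_seconds=0.01):
    """Run operation(); on DatabaseUnavailable, sleep briefly and retry.

    If the cutover finishes within attempts * delay_seconds, the caller
    never sees the failure; otherwise the last error is re-raised.
    """
    for attempt in range(attempts):
        try:
            return operation()
        except DatabaseUnavailable:
            if attempt == attempts - 1:
                raise
            time.sleep(delay_seconds)

# Simulate a query that fails twice while the database is paused,
# then succeeds once traffic is resumed.
calls = {"n": 0}
def flaky_query():
    calls["n"] += 1
    if calls["n"] < 3:
        raise DatabaseUnavailable()
    return "ok"

print(with_cutover_retry(flaky_query))  # -> ok, after two retries
```

Whether this lives in the application or at the load balancer, the benchmarking mentioned above would have to confirm the window is really short enough to hide behind a few retries.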
C: Yes, the idea is to benchmark. Let me tell you the whole theory we're talking about here — I wanted to use it for major version upgrades in Postgres as well. The whole theory is to also use what you said: in PgBouncer we can use PAUSE for a few milliseconds, depending on the volume of data we are talking about.

C: In theory we could, let's say in the middle of the night at 3am, when we have a low peak of requests, put the application on pause with the PgBouncer PAUSE. Your latency will increase and transfer to the application, but the requests are still there; then we redirect to the new database and resume. But this needs to be tested.

C: This is after we do the failover. Because when you think about it, after the moment that you did the logical replication, or the stream, and the failover of your application, we need to understand which tables you need and which you don't, and then you can clean them up after a few hours. I think it's even healthier to do this as fast as possible.

C: Because if you are still getting some data from this new database, you could be getting stale, not fresh data, you know.

A: If we continue, you know, to test whether we can do the separation in the application code, we could use what Andreas proposed — schemas, with search_path — but we can actually prevent access from specific connections to these CI tables. So we can make sure that there are no cross connections, cross queries or cross joins between these tables, and we can safely remove the tables without worrying that, oh, you might have a query somewhere that joins against the issues table.

A: Out of curiosity, how would you do this cutover? Would you just write a script that says: okay, now I'm going to PgBouncer, I will either disable or pause the requests, and then run some script to set up your applications? So would you automate the whole process with a script, or just do it manually?

C: I think automating it would be the best way — automate all these steps. In essence, that's what I had in mind for our previous migrations as well. I am not sure if PgBouncer will support everything we are proposing here — I hope so, because if we are talking about just a pause, then this will be something interesting.

D: I realized I jumped in halfway through this conversation, so you may have already figured out some of this, but one thing I've been thinking about with the new CI database is that it sort of doesn't matter that it will have all the new tables that aren't necessarily in the CI schema.

D: I checked to see that if you have rows in the schema_migrations table that aren't migrations available in your migrations directory, that doesn't cause any extra problems. So I don't know if we have any validation that's actually going to run into problems with the fact that there are tables in there that are not in structure.sql. I mean, we don't use structure.sql in production anyway, as far as I understand.

D: Yeah, so the structure.sql file is not read by the application. It doesn't matter when it comes to running migrations: it doesn't matter that there are extra tables there, or extra rows in the schema_migrations table. So it just doesn't matter that we'll be cutting over to a new database that happens to have all of the tables pre-populated that are not relevant to the current schema.

B: We don't dump the schema in production, right?

D: Yeah, I was thinking about this potentially being a problem a while ago, and I've been trying to figure out what problem might actually arise from it. The one problem is that it's kind of like a dirty hack: we're never going to run any of these migrations, because the migrations will have already been run on that database, and we will at some point need to delete a whole bunch of tables from that CI database.

D: But we won't do that via migrations, because our CI structure and everything else in our previous migration history will not know that that database contained any of those CI tables. However, this is why I kind of liked what Andreas was saying: we just keep all the same migrations and the same structure file, because that makes this hack a little bit less of a hack — they actually are the same schema.

B: Yeah, that answers the impact question, because I was asking whether streaming replication means that the two data structures need to be the same — and the answer is yes. Then you're saying the CI structure doesn't get used anyway, so it shouldn't matter, in theory, which is nice, but it is a little bit confusing.

D: Yeah, it'll definitely be confusing, because you'll just have these two databases that are identical — though mostly nobody will be exposed to that. And at some point, when we do a cleanup, they will diverge, but until we need to clean them up, they won't diverge.

D: So we'll just cut over and it'll look like: okay, all the migrations have already run, there's nothing to do. Which, again, is kind of a bonus of actually having the same set of migrations and the same structure file, because it's all more conceptually sound — you did actually run those migrations at some point, yeah. It will only be different if we happen to have migrations that only run on the CI database, and we already don't plan on doing that for some time.

B: Yeah, okay — because the schema_migrations table will be streamed, synced and replicated as well, and then at some point after the cutover we can have different things: different migration folders, different CI structures and whatnot.

D: That will all look a bit odd, but nothing will break, and we'll just have to explain why it is odd that way for a duration.

D: Yeah, the current plan also involves switching to that new database as a read replica for CI as well. Since we can separate reads and writes for the CI connection — we can have separate connections in the Rails configuration for CI reads and CI writes — we could actually have CI reads going to the new database early, and hopefully detect any problems that might arise from those weird schemas.

B: Okay, that's me — does anyone else have other things? Anything to…