From YouTube: 2021 07 19 APAC Sharding Group Sync
A
All right, I can start, since I think I have the first response on here, which is to Craig's question about how we're going to identify and discover time-sensitive fixes, and specifically the things that are going to block us from moving CI tables across.
B
Yeah, so what I'm seeing, based on this merge request I did about, you know, preventing using ActiveRecord::Base, is that we have several cases where we do mixed writes. So when we have a transaction, we have a lot of mixed statements that involve CI tables, and this could be easily fanned out to the teams. But at the moment we don't really have answers on how to actually fix this: do we do two-phase commits, and what do we do if we don't do two-phase commits?
A
Have we come across a specific example? The one in my head would be when we create a pipeline; I imagine that's going to be the canonical example of us doing writes that we want to be transactionally consistent: you create a pipeline, you create environments for the pipeline.
B
Yeah, so I found a few places where we do this, and I'm starting to collect them in this issue; just linking it to the agenda.
B
So basically I'm going through the offenses, inspecting each transaction to see what's going on inside, and making a note of what I'm seeing here. We have a few cases where, yeah, we have these mixed models and updates within a transaction.
A
Yeah, I think the logical way to approach it is pretty similar to what you're saying, which is that we need to provide people with guidance on what they should do; we can't just fan it out right now. And that makes me think of what I was doing last week with the CI and the security stuff: when I went through one after the other, after the other, the theme sort of emerged, and maybe we're just collaborating on that.
C
I think we would simply not do two-phase commit; we would simply forbid that. For me, it's a very easy recipe for having deadlocks all over the place. So my perception is: if a transaction is open on A, we forbid writing to B, and that's probably a general rule; if you open a transaction on A, we simply forbid writing to B within the transaction. This might be a logical thing implemented on the write side: we simply forbid such transactions.
C
That would discover the places where this rule fires, and then we could work on breaking these writes down to be sequential across two different transactions, because I think we're not gonna be able to do two-phase commit properly on the application side. There are always gonna be some rollbacks, and we're gonna end up in a ton of deadlocks as well. So I think: say no to two-phase commit, don't even try to do it.
B
Yeah, I mean, that makes sense to me. We should ensure that we clean up in case we, you know, encounter a failure or something like that, because we won't have strong consistency, right? So we will need some sort of background process, a background job, that cleans up stuff that shouldn't be in the database.
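A minimal sketch of such a background cleanup, assuming a hypothetical `Ci::OrphanedRecordCleanupWorker` and an illustrative orphan check; this is not the actual implementation, just the shape of the idea:

```ruby
require 'sidekiq'

# Hypothetical sweeper: with no cross-database transactions there is no
# strong consistency, so we periodically delete CI rows whose parent row
# on the main database no longer exists.
class Ci::OrphanedRecordCleanupWorker
  include Sidekiq::Worker

  BATCH_SIZE = 1_000

  def perform
    Ci::Pipeline.select(:id, :project_id).find_in_batches(batch_size: BATCH_SIZE) do |batch|
      # Each query goes to its own database; no transaction spans both.
      existing_ids = Project.where(id: batch.map(&:project_id)).pluck(:id).to_set
      orphans = batch.reject { |pipeline| existing_ids.include?(pipeline.project_id) }

      Ci::Pipeline.where(id: orphans.map(&:id)).delete_all if orphans.any?
    end
  end
end
```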
C
Then the question is really how we go about actually forbidding that in the specs and in development. And maybe this is really the first step towards that: use the PoC, where we actually already have two distinct databases, simply implement this rule, and see how many things break; maybe we just see a pattern there.
C
But I think, me thinking about it and also talking with Andreas: we should simply get rid of the two-phase commits. I mean, the issue says "fixing two-phase commit", but in this case fixing means simply not allowing two-phase commits to even take place. That's my general perspective on it; it's just gonna be pretty painful if we would have to do it the same way.
A
Yeah, I think there are two problems that will be left to solve then. One will be how much we care about cleaning up data, and the other one will be how we make things a bit more reliable when things can be applied only partially. So for cleaning up data: making sure that, for the data we are okay with not cleaning up in the event of a failure, what would be the implications for a user? So looking at a specific case and going from there.
A
We
can
actually
live
with
the
implications
to
a
user
here
where
they
get
a
500
and
they
go
and
do
a
task
again
and
they
end
up
creating
some
offered
records
somewhere
else
and
nobody
notices
that
we
don't
care,
but
then
there'll
be,
like
the
other
cases
like.
How
do
we
make
this
reliable?
And
one
thing
that
kind
of
comes
to
mind
is
like
other
places
where
we
try
to
do
http
requests,
we
put
them
to
sidekick
or
other
long-running
tasks.
A
We've
pushed
a
sidekick
and
psychic
has
retries
in
it,
and
if
you
create
a
pipeline
of
sequential
sidekick
jobs,
queuing
other
sidekick
jobs,
you
can
do
things
to
two
different
databases
reliably
in
a
way
I
mean
until
you
exceed
retries,
but
you
know
you
can
have
that
pipeline.
If
it
detects
a
permanent
failure,
further
along
it
can
go,
it
can
know
to
roll
back
previous
work
and
any
failure
in
doing
the
rollback
of
previous
work
would
just
be
a
sidekick
failure.
That
was
retried
again.
So
I
think
you
can
kind
of.
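A rough sketch of that chained-worker pattern; the worker and model names are hypothetical, while the rollback hook uses Sidekiq's real `sidekiq_retries_exhausted` callback:

```ruby
require 'sidekiq'

# Step 1: write to the main database, then hand off to the next step.
class CreatePipelineRecordWorker
  include Sidekiq::Worker

  def perform(project_id)
    pipeline = Ci::Pipeline.create!(project_id: project_id)
    CreateEnvironmentsWorker.perform_async(pipeline.id)
  end
end

# Step 2: write to the other database; on permanent failure, compensate.
class CreateEnvironmentsWorker
  include Sidekiq::Worker
  sidekiq_options retry: 5

  # Runs once all retries are exhausted: roll back step 1's work. The
  # rollback itself is just another Sidekiq job that gets retried.
  sidekiq_retries_exhausted do |job, _exception|
    RollbackPipelineWorker.perform_async(job['args'].first)
  end

  def perform(pipeline_id)
    Environment.create!(pipeline_id: pipeline_id)
  end
end

class RollbackPipelineWorker
  include Sidekiq::Worker

  def perform(pipeline_id)
    Ci::Pipeline.find_by(id: pipeline_id)&.destroy!
  end
end
```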
B
Yeah, a bunch more problems, yeah. And I think that would be quite easy to implement during, you know, the CI run, because the connection actually knows when a transaction is open, and then you can just keep track of which connection opened the transaction.
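A sketch of that test-time check, assuming a subscriber on Rails' `sql.active_record` notification (whose payload carries the issuing connection since Rails 6); the class name and error message are illustrative:

```ruby
# Test-only guard: raise when a write goes to one database while a
# transaction is open on another (a would-be two-phase commit).
class CrossDatabaseModification
  WRITE_SQL = /\A\s*(INSERT|UPDATE|DELETE)/i.freeze

  def self.install!
    ActiveSupport::Notifications.subscribe('sql.active_record') do |_name, _start, _finish, _id, payload|
      connection = payload[:connection]
      next unless connection && payload[:sql].match?(WRITE_SQL)

      # Every connection pool other than the one issuing this write.
      other_connections = ActiveRecord::Base.connection_handler
        .connection_pool_list.map(&:connection) - [connection]

      if other_connections.any?(&:transaction_open?)
        raise "Cross-database write detected: #{payload[:sql].truncate(100)}"
      end
    end
  end
end
```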
A
Okay, so then the next point is: what are the next steps in terms of identifying and finding other issues related to that? We have the things we've actually caught from the PoC; then we have things we know we will catch but haven't caught yet in the PoC.
C
I think we just go through the PoC and all the challenges, create issues for each of them, and then mark the code with a comment referencing the issue that was created. I think this is probably how we need to track it: each of these individual changes that we made so far probably needs to be an issue, and then it needs to be reasoned through whether this is the correct way to do it or not.
A
Yeah, it seems like there's a next step, though, which is making our PoC fail in even more ways than it's failing today. The nested transaction is one; we need to make it start failing in that way to detect those. But then another example is cascading deletes. I don't think our tests will detect all cascading delete problems, and so we need to make our...
C
I think one thing that we could do is change how we evaluate that, because right now we check whether the relations are properly marked on the models and properly configured with the dependent flag. So I'm kind of thinking that we could discover that: require foreign keys within a database, but require proper relationships defined on the models if you do that cross-database.
A
Yeah, I think there might be some tables that don't have models, that would be one thing, like a join table, and we would want cascading deletes on those. But I think that's...
A
You find any foreign key that has a cascading delete or nullify, then you check that it's got the equivalent thing written in a model somewhere, and you spit out an error if it doesn't, and that's the list of things to solve. That may mean we need to create model files to describe those, and then we actually need to implement something that does a good job of cascading deletes, not a really bad job of cascading deletes, which I think we talked about in the past, with Sidekiq or something doing it.
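A sketch of that check as a spec; the model lookup via `classify` is simplified (it won't resolve namespaced models like `Ci::Pipeline`), so treat this as illustrative rather than the real rule:

```ruby
# Every ON DELETE CASCADE / SET NULL foreign key should have a matching
# association with a `dependent:` option on the owning model, so the
# delete behaviour can be replicated once the databases are split.
RSpec.describe 'cross-database cascading deletes' do
  it 'mirrors every cascading foreign key with a dependent option' do
    offenders = []
    connection = ApplicationRecord.connection

    connection.tables.each do |table|
      connection.foreign_keys(table).each do |fk|
        next unless %i[cascade nullify].include?(fk.on_delete)

        # Simplified lookup; join tables without models will be skipped
        # here, which is exactly the gap discussed above.
        parent_model = fk.to_table.classify.safe_constantize
        next if parent_model.nil?

        covered = parent_model.reflect_on_all_associations.any? do |assoc|
          assoc.options[:dependent].present? && assoc.plural_name == table
        end

        offenders << "#{fk.to_table} -> #{table}" unless covered
      end
    end

    expect(offenders).to be_empty,
      "foreign keys without a model-level dependent option: #{offenders.join(', ')}"
  end
end
```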
C
Can we look at it, because it's pretty much a topological sort to some extent. Probably what you could do is discover: if you have dependent: :destroy on this relation, will it actually remove all the sub-objects related to that one via the foreign keys already defined in the current database? Because we're still gonna have a bunch of the foreign keys; we're gonna have the foreign keys from the pipeline down, basically, so those will not be removed.
C
So
it's
very
likely
that
even
if
we
do
depend
on
destroy
it's
very
likely,
gonna
cover
95
of
the
cases.
The
only
the
left.
Five
is
like
how
we
just
discovered
those.
A
I'm sure we have issues that describe this, but they're probably out of date in terms of what we want to accomplish. Maybe there are just a couple more issues we need to create: problems to solve that are not yet done in the PoC.
C
I will update these issues as well; let's see how it goes then. I think we need to update the two-phase commits issue, we need to update the foreign keys one, and we need to find out all the issues from the current PoC.
A
This is logical because it's using a join table and that join table is a similar data set size to the thing we're joining to; and/or this is logical and it's not using a limit or sort-by clause somewhere else that kind of causes all sorts of problems, right? I think we need to go through every disable_joins and check them one at a time. How many do we have? Is there a lot? Not many?
C
Yes, so disable_joins currently just takes true or false. I think what we really want to do: we pretty much now know which disable_joins we need to use because of the PoC, so we can actually go ahead and start setting these flags in the application.
C
But then we probably need to extend this disable_joins logic, let's say with the feature flag system, to be able to toggle it on and off in production, to validate that it doesn't cause a performance issue as well, because your comment about it being slow is very valid, and I think we need some way to validate that too.
A
Yeah, so maybe we just have, you know, all of the disable_joins feature-flagged automatically, based on the name of the model and the association, and we have to actually go ahead and enable those one at a time. Then, once we do that, we can change each one to say that it's no longer feature-flagged once we've completed it, and that's kind of our to-do list.
C
Automated generation of the feature flags is pretty tricky, but I'm not sure how many of these disable_joins we have; probably not that many. So maybe we could do something like, I don't know, grouping a few disable_joins under a single feature flag and rolling that out. How would we have to hook into Rails for that?
A
That
disable
joins
thing
right
because
that
that
doesn't
support
it
in
if
we
backboard
the
disable
joint
feature,
it's
like
either
on
or
off.
We
can't
feature
not
enabled
question
mark,
because
that
would
be
run
at
ruby
load
time.
Only
one
time.
A
What about doing it dynamically with feature flags, where we use a concatenation of the model and the relation name in that monkey patch, so we don't have to do that? I mean, I suppose it's pretty fine for us to just go through every single one of them and add that; probably not a big deal. But either way, that requires us to change the disable_joins backport to support this lambda syntax, if it doesn't already support lambda syntax.
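For reference, the lambda form under discussion might look like this on a through-association (Rails 7's `disable_joins` accepts a callable evaluated per query, which is what the backport would need to preserve; the flag name here is hypothetical):

```ruby
class Project < ApplicationRecord
  has_many :pipelines, class_name: 'Ci::Pipeline'

  # `disable_joins: true` is fixed at Ruby load time; a lambda is
  # evaluated on each query, so a feature flag can toggle it at runtime.
  has_many :builds,
           through: :pipelines,
           disable_joins: -> { Feature.enabled?(:disable_joins_project_builds) }
end
```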
C
There is no disable_joins already; we need to backport it anyway.
A
Yeah, yeah, so we could backport it and also make it support the lambda syntax, or we could backport it and also include our own monkey patch for Feature, like our own feature flag checking, so that it's more implicit and we don't have to write the code; that could be based on using the actor feature of feature flags. Or we could just give it lambda support, which is what you just suggested.
A
I think that's good. We've got the feature flag YAML files that allow us to link to an issue and give people all the context; it'll be much more obvious what's going on when people look at that. So I think that's fine. Unless there's too many of them, we might just write a script to generate a ton of those lambda syntaxes, but yeah.
A
Okay, that makes sense. I think the lambda syntax is being explicit, and just manually going and adding all of those in and coming up with our own grouping that we think is logical seems pretty easy. Okay, so the next step to do is: for all of the disable_joins we have, change them from true to the lambda syntax with Feature.enabled?, and then create the relevant issues and YAML files for all of those feature flags.
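By GitLab's convention each such flag gets a definition file under `config/feature_flags/`; a hypothetical entry for one of these disable_joins flags could look like this (name, group, and the blank values are all placeholders):

```yaml
# config/feature_flags/development/disable_joins_project_builds.yml
# Hypothetical flag definition; fill in the real MR and rollout issue URLs.
---
name: disable_joins_project_builds
introduced_by_url: # MR that switches this association to the lambda syntax
rollout_issue_url: # issue giving people the full context
milestone: # e.g. '14.2'
type: development
group: group::sharding
default_enabled: false
```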
C
Or do we plan to switch to Rails 7.0? But it's still in beta, I think, right?
A
I don't know; that's probably far more work for us to keep up to date than trying to push towards better versions of Rails. But I don't know.
C
So I think the dynamic toggle is kind of required for us to roll that out, because the other way would be, I don't know, adding an if statement, having different definitions, and evaluating that at load time. But I think, from now on, as soon as we get these disable_joins in place, I don't think we should be supporting the load-time approach.
A
Okay. So I think you're summarizing the previous discussions anyway, but with regards to Craig wondering about having a label so that we can farm out the issues and say these ones are blockers and these ones can wait: do we think that's necessary, or are we just gonna go, like, we're not even gonna bother creating issues for things that can wait? Do they make it impossible for us to split the database, or do we not care?
C
Yes, but is it a blocker in the sense that, yes, we need these things to be done? I'm kind of thinking more like a stream: is it a blocker on all other work that we may be planning to do? Because a lot of these things out of the PoC are clearly blockers for the production rollout, but they are self-sufficient and can be solved by someone else without being blocked.
C
We just provide them with a description of the problem and the solution, how to solve it; but then they are contained, they are not really blocking any other work, and they are not being blocked by any other work. So I'm more looking at the intermediate things out of that list: what is essential for us to focus on for the main path, versus what are all of these side things that need to be done?
A
We need to get this done first before we can even work on the important stuff, and you want to separate that from what can just be done concurrently with all the rest of our work, what other teams do concurrently alongside us. I think I would not want to say that those things weren't blockers; I think it's important for us to keep calling those things blockers, to communicate to the rest of the company that they're blocking our ability to deliver stability to GitLab.com's database.
A
So
if
anybody
has
an
issue
that
says
you
can't
join
between
these
tables
and
these
tables,
maybe
we
can.
The
shouting
team
can
keep
working
in
the
meantime,
but
we
can't
actually
fix
the
core
problem
that
we're
trying
to
solve
so
they're
blockers,
and
we
should
like
be
communicating
to
other
teams,
they're
all
blockers
and
not
and
not
mix
the
messaging
and
say.
Oh,
these
are
the
blockers,
and
these
are
the
kind
of
things
that
we
can
do
we're
not
waiting
on
right.
A
Now
that
that's
the
only
thing,
I
think
isn't,
I
think
that's
the
important
thing
to
communicate
with
other
teams.
I
think
anything
that
is
actually
blocking
us
from
getting
out
like
doing
anything
right
now,
like
we're
just
going
to
be
the
ones
doing
that
work
right
in
practice.
C
Yes. I'm kind of thinking about the PoC and how much more time we want to invest into it. I'm looking at the latest pipeline now, and so far I've seen there were still some broken specs, and the majority were migrations.
A
I don't know; that's kind of more up to how you want to use it. But if you want to just keep using the PoC as your kind of scratch pad for identifying the problems and then distributing the problems to individual team members, then continuing to identify classes of problems, having team members work on those classes of problems, and figuring out the best way to tackle those separately from the PoC, that seems to work for me. But, oh, actually, we have 800 failures. That's pretty...
A
Yeah
they're,
all
I
added
skips
to
all
of
the
security
scans
once
but
yeah
so,
and
that
affected
a
few
different
tests
so
like,
even
though
we
probably
have
a
few
comments
in
our
code,
maybe
we
have
like
you
know
tens
of
comments
in
the
test
that
say
skip
or
not
comments
but
skip
statements
in
in
artifact.
A
That
is
cool
that
we
can
see
the
skip
test
there.
This
is
a
super
useful
way
for
us
to
see
that
we're
working
this
down
so
yeah
we
have,
we
can
solve
two
problems,
we
can
solve
failed
tests
and
then,
once
we've
run
out
of
failed
tests,
we
can
solve
skip
tests,
so
we
can
keep
doing
both
concurrently,
I
think
like
for
all
the
skips.
We
should
at
least
start
adding
issues
to
them.
To
just
start.
The
conversation.
C
So I think my proposal right now, as for the PoC: I would simply cross out the spec migrations, because we know that those are broken, EE the same, and the library code for the DB, we know that's gonna be broken as well, because I'm kind of more focused right now on the functional tests of the actual application running than on these fundamental blocks that we need to fix anyway.
C
So
I
would
probably
focus
on
getting
unit
and
integration
to
be
as
green
as
possible.
So
I
just
they
probably
go
through
those
and
like
also
like
mark
them,
and
probably
this
would
be
like.
I
guess
the
moment
where
we
kind
of
conclude
the
poc
that
we
cross
out
migration
and
db
library
code
and
then,
like
all
others,
are
either
skipped
or
fixed,
and
we
have
created
issues
for
that.
C
And then my suggestion for the PoC would be, I guess, trying the two-phase commit problem: see how many cases of it there are if you implement the rule in the application, and see on the PoC in how many cases it would generate red tests.
C
Because I'm not sure; I think the PoC right now, in its current form, is pretty comprehensive in showing all of these aspects. Now the question is, related, Dylan, to your comment about the performance...
A
Yeah, so we can probably find a way to run GPT against this branch, but the application needs to work. And, for example, if migrations don't work properly, or, you know, just a set of features don't work, then the performance... like the security scans: they're all marked as skipped because the features don't work.
A
If
we
have
too
many
failures
in
gpt,
we
might
just
find
that
that's
not
particularly
meaningful
to
us
yet
to
detect
we're
not
at
that
stage.
Yet.
C
Okay, so what you're saying is: it would probably be better to wait till we fix all of these skipped ones, to make them green to some extent, and then we could actually run this branch, because it would have, yeah, a functioning application.