From YouTube: 2021 08 02 APAC Sharding Group Sync
A
Yeah, I'm working on a merge request where we try to detect transactions and, you know, cross-database modifications within transactions. So if you have a transaction open, regardless of the database, and you make changes to, let's say, the issues table and the CI table, then we should be able to detect this and optionally raise an error. I'm also adding some utility functions to disable these checks, so we can actually roll this out incrementally. I'm working on master for now, and yeah.
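The check described above can be sketched roughly like this; the class name, table-to-schema mapping, and method names here are all hypothetical stand-ins, not GitLab's actual implementation:

```ruby
# Toy sketch of detecting cross-database modifications inside one
# transaction: record which schema each written table belongs to, and raise
# as soon as two different databases have been modified.
class CrossDatabaseModificationDetector
  # Hypothetical mapping of tables to their database.
  GITLAB_SCHEMAS = { 'issues' => :gitlab_main, 'ci_builds' => :gitlab_ci }.freeze

  def initialize
    @modified_schemas = []
    @enabled = true
  end

  # Utility to disable the check, so it can be rolled out incrementally.
  def with_check_disabled
    @enabled = false
    yield
  ensure
    @enabled = true
  end

  def record_write(table)
    schema = GITLAB_SCHEMAS.fetch(table, :unknown)
    @modified_schemas |= [schema] # set-union: track each schema once
    if @enabled && @modified_schemas.size > 1
      raise "Cross-database modification detected: #{@modified_schemas}"
    end
  end
end
```

Writing to `issues` and then `ci_builds` in the same "transaction" raises; wrapping the writes in `with_check_disabled` suppresses the error, mirroring the opt-out utility functions mentioned above.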
B
It got a little bit confusing, because now we're looking to use Prometheus labels to distinguish between different databases, as opposed to the key of the metric itself. So this is now making things a little bit divergent and confusing, because with log keys there is no way to distinguish different log keys in JSON from each other, other than just using a different string key, but Prometheus has something slightly more sophisticated.

B
You can have the same metric, but you can add labels, and this is a way to distinguish between different metrics within the same name. So now things are probably going to diverge a bit, and I was just working on that request and it's now a bit fiddly.

B
Okay, you're changing the role with your changes? I'm not up to date on that.

C
I'm just kind of curious how your MR looks, because maybe what makes sense is to merge things how they are today, and when we work on, I don't know, getting rid of some of the metrics, that would simply fix that as well.

B
Yeah, I linked it there. My guess is it's not necessarily going to overlap with the multi-database support for the load balancing stuff. The tricky thing is we have legacy in the log keys we use and in the Prometheus metric names, so I feel like we're stuck with sticking to the same keys, the same naming structure for log keys for all the database counts, and we're stuck with sticking with the same Prometheus metric names, because people have built tooling around these things.

B
This is a confusing thing: the role was used to generate the name of Prometheus metrics in the past, but they don't want to keep going down this road of generating more and more Prometheus metric names. Or at least that was the feedback I got so far about generating longer and longer Prometheus metric names by interpolating more strings into them, which in this case would be the config name or the database name. They don't want to do that, so instead we'll use labels, which is actually a good use of Prometheus labels.
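To illustrate the difference in plain Ruby (a toy sketch, not the real Prometheus client library; the metric name is made up):

```ruby
# Interpolating the database name into the metric name mints a brand-new
# metric per database:
def interpolated_metric_name(base, db)
  "#{base}_#{db}"
end

# A labeled metric keeps one name and distinguishes databases by a label
# value, which is what Prometheus labels are for.
class LabeledCounter
  attr_reader :name, :values

  def initialize(name)
    @name = name
    @values = Hash.new(0) # one series per label set, same metric name
  end

  def increment(labels)
    @values[labels] += 1
  end
end

counter = LabeledCounter.new(:gitlab_database_connections_total)
counter.increment(db: 'main')
counter.increment(db: 'ci')
counter.increment(db: 'ci')
```

With interpolation every new database grows the set of metric names; with labels, dashboards and alerts keep querying one metric and filter or aggregate by the `db` label.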
B
Yeah, that's what I'm planning on doing, and that's the easiest option for now. But the code starts to look pretty weird and hacky now. But anyway.

C
Because, for these keys, I mean, we changed these key names many times already, so I would not be so worried about retaining the current key names, maybe except db_count. But for all the other ones you could figure out a better pattern, because we are not super dependent on the keys so far, except maybe some Elasticsearch watchers that we have.

B
Right, the security watchers that are on there. There are also links into, you know, people build dashboards in Kibana, and there are links to those dashboards from Grafana dashboards, for quick debugging with more details. Yeah, it is less problematic than the Grafana one, because the Prometheus metrics will be powering alerts and Grafana dashboards, and those are all complex, version-controlled things, where things in Elasticsearch and Kibana are probably much less critical. But it's still a road to go down for me to change naming conventions there.

A
I'll find it, thank you. So the current plan is that we will create a gitlab_ci schema, and a gitlab_shared one as well. We used to be implicit about whether you use public or not, so the idea is to be explicit about using public and gitlab_ci, and then the CI tables that we will move to the new database will move to that schema as well. So the biggest and first roadblock is in production: to validate which schemas are actually used in production.

C
I'm kind of curious why you need to prepare the CI database with the schema search path.

A
So there's the search path. This is just for development, because in the main database we can either have public and gitlab_ci, or just public (put aside shared). But if we hide public, it's useless to hide public without having a CI database.

C
I was thinking that maybe what we could do is run, as a regular migration, a CREATE SCHEMA on the main database, and maybe add some dummy table to it, to just validate that the schema search path is properly set on any new connections that we will be opening. Because right now, what your merge request does kind of requires us to actually... the structures? Well, okay, so.
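That migration idea might look something like the following sketch; the schema name matches the plan above, but the class and dummy table names are made up for illustration, and the statements are kept as plain strings so they can be inspected (a real Rails migration would `execute` them):

```ruby
# Sketch of a regular migration that creates the gitlab_ci schema plus a
# dummy table inside it, so a spec can verify that new connections resolve
# tables through the schema search path.
class AddGitlabCiSchemaWithDummyTable
  def up_statements
    [
      'CREATE SCHEMA IF NOT EXISTS gitlab_ci',
      'CREATE TABLE IF NOT EXISTS gitlab_ci._test_gitlab_ci_table (id bigint)',
      # Being explicit rather than implicit about public:
      'SET search_path TO gitlab_ci, public'
    ]
  end
end
```

If `SELECT * FROM _test_gitlab_ci_table` works on a fresh connection without schema-qualifying the name, the search path is set as expected.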
A
Oh no, so yeah, I'm doing that as well, in a new merge request. So I haven't started.

C
Okay, so what you were doing before is probably testing the search path, but this MR is to unify the structure.sql, I mean, ci/structure.sql, into a single structure.sql, right? Yes.

C
Okay, I'm going to comment later throughout the day, so I will have my feedback for tomorrow. And there are some MRs open that, I think, we slightly changed, for example, how we deal with the before_all and the db_cleaner stuff. So maybe we would merge these things of mine before yours.

C
So I kind of have more comments... I would merge mine before, but I'm kind of context switching between those changes and these CI structure changes. So actually, last week I started creating a draft branch locally for all of this before_all and db_cleaner stuff, to get it into a form that could be merged, and I think it's valid for all the future stuff, and I also added Marginalia comments.
C
So this design simply will not work with CI, but this is something that we can translate there, and while looking at that I will also create an issue on how we could approach it differently. Because actually this kind of self-solved the db_role everywhere: in all places we have access to the connection, the connection has access to the pool, and the pool has access to the db_config.

C
If you want to retain the db_role, you could split the string and have the main connection and the role, basically, or the specification and role, in a form that you access from the connection pool's db_config.name, simply.

B
But I'm wondering... my bias is to move further away from using string-concatenated names to interpret meaning. One of the things you're saying there is that we're going to have a config name called main_replica, and that is how we're going to know that it's the main database and that it's the replica: by the fact that it's main underscore something. We have structured meaning today in roles, and that structured meaning exists because we wrote the load balancer code and it understands what a primary and a replica are, and then we're going to destructure that into string concatenation.
C
Yes, but I think the current one will not work, because the current design is very connection-based, and right now it's doing very heavy tracking of the connection. So that's the interesting part, because this is one of the ways. Another way is that you could create a db_config with replica set to true as a flag on the db_config, because this is the internal behavior of that.

C
But what I'm saying is that today, in the load balancer class, there is a hash, or two hashes, where we mark, for a given connection, what the purpose of that connection is. And this simply will not work, because you would have to traverse all of the load balancer connections to find...

C
...if your particular connection (I lost you for a second), if this particular connection is the connection... what the purpose of that connection is. So I'm thinking rather that the problem with the current design is the trick that we access the global object to do that, and that it's required to track internal state, which is a problem as well. So that's not really a great thing to do.
B
We have limits in... we're using a bunch of Rails classes right now, like their connection objects and stuff, monkey-patching things, but I don't see conceptually why we can't have the connection know what its role is: why a connection shouldn't be able to know that it is a read-write connection, or a primary, or a replica, whatever language you want to use. Right now the code looks like it says "db role for connection" here.

B
I don't know, this might be something you're saying, that this goes through .main, but there's no reason that this code here can't get a db role from a payload connection in a way that doesn't use a static thing, so it doesn't use a .main or class methods. It could use instance methods on this to interpret the role of that connection. I don't know how it works now; I'm guessing you're saying it calls .main or something.
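The instance-method idea could look roughly like this; the wrapper class and helper names are hypothetical, standing in for the load balancer's connection objects:

```ruby
# Sketch: instead of a global hash in the load balancer mapping
# connection => role, the connection (or a thin wrapper around it) carries
# its own role, assigned when it is checked out.
class WrappedConnection
  attr_reader :role

  def initialize(role)
    @role = role # :primary or :replica, known at checkout time
  end

  def primary?
    role == :primary
  end
end

# Call sites then ask the connection itself rather than a static lookup:
def db_role_for_connection(connection)
  connection.role
end
```

The call site keeps the same shape, but the role comes from the object in hand instead of a class method or a global, mutable registry keyed by connection.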
C
Yes, I think the interface may be very similar to what you have now, but this information will be accessed differently. So I'm probably going to open just an MR for that; it's going to be much easier for you to see how it works. But very likely...

C
I need to look at how you arranged these kinds of things, but it's very likely that we might retain a pattern where nothing changes in, for example, this method invocation, because just the implementation of this method changes: it could be fetching data directly from the connection instead of from the global hash tracking something tied to the singleton.

B
I think we talked about that topic. Kamil, you should ping me on a merge request if you have it, if you want to get more concrete about that, because it may well affect what I'm working on. It's going to take me a couple of days to get the design right of what I'm working on anyway. So, Thong, you're up next: the chain of merge requests.
A
Yeah, so I just want to highlight a few things that we need, and I think a lot of other MRs will need as well. So I see that Kamil is going to open MRs to cover that. Thank you, yeah.

C
Yes, I was also wondering about opening the one for the structure.sql, but I just wonder how to proceed with that because, if you started working on that as well, I can just move on to something different. But if you, for example, want to focus on the schema, I can try to open an MR for that, and we can work on the MR together.

C
If you would go with this common structure and common migration paths, the better option would be to configure them to point to exactly the same directories, simply because we don't have a problem with enums. And I was actually thinking about pushing an MR to validate whether we can actually, I don't know, provision another CI database. Maybe disable that for now, but see.

C
Actually, that's the tricky part, because we have the CI instance variables, so we could retain exactly the same structure but run all the CI tests using this separate database with the CI instance variables. So technically this could be a very easy way to check that some very small portion of CI, decomposed into another database, is working in the tests. So I was actually thinking about trying that.

C
Because I would also want to get rid of the overlay that is made for Geo, and kind of go over all the connection classes to actually make changes on the connection process, and the same for the db_cleaner. Because right now even Geo overrides before_all, because before_all is kind of not very well suited for multiple databases. But I think there is a slightly better design that we can support.
C
Yes, that part is working, but he's also pretty occupied, and he discovered that... With disable_joins I only picked like two backports, but he discovered that there were some problems with the polymorphic relations.

C
So I'm wondering whether this is probably one of the items that we would benefit from having merged quickly, because we could then focus on fixing the polymorphic relations from the PoC. But second, this would actually, I think, be relevant for this PoC MR that you reviewed about discovering cross-joins.

C
I didn't work on this MR further, because there is no point until we have this in place: right now there are too many failures. The lack of disable_joins generates a bunch of failures today on these merge requests of mine being open, and until we mark these with disable_joins (I don't even know if that fits with the feature flag) to indicate that they should be like this...

C
...these relations going through should be executed as separate queries. It's going to be too many issues to create on this crossover MR of mine. So we could...

B
...handle has_many :through explicitly. Like, maybe all has_many :through queries look exactly the same, and we can just go: okay, has_many :through queries don't trigger this exception and therefore we don't need to use that. Or we could say has_many :through monkey-patches itself so that it disables the checker.
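For context, the option under discussion is Rails 7's `disable_joins: true` on `has_many :through` associations (which GitLab backported): instead of one SQL join across databases, the association runs separate per-database queries and combines the results in the application. A toy model of that semantics in plain Ruby, with made-up table data:

```ruby
# Two in-memory "databases"; a cross-database SQL join between them is not
# possible, which is exactly the situation disable_joins addresses.
MAIN_DB = { projects: [{ id: 1 }] }
CI_DB   = { ci_builds: [{ id: 10, project_id: 1 }, { id: 11, project_id: 2 }] }

# Roughly what an association with disable_joins does: two separate
# lookups joined in application code, no SQL join.
def builds_for_project(project_id)
  project = MAIN_DB[:projects].find { |p| p[:id] == project_id } # query 1: main
  return [] unless project

  CI_DB[:ci_builds].select { |b| b[:project_id] == project[:id] } # query 2: ci
end
```

This is also why the performance worry raised later in the meeting applies: the second query's results are filtered, sorted, and limited in memory rather than in the database.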
C
Yes, but there is the difference that you don't execute that on the caller, you are executing this as a callee, so you don't really know if you use has_many with joins at the time of the execution.

C
So you would somehow have to hack into the implementation to indicate how you access the data, because this check is very, very low level. It's not even Arel; it's not even an ActiveRecord relation yet. Basically has_many :through converts...

B
Yeah, I think it would be possible with some heavy monkey-patching of has_many :through, to change what has_many :through does so that it whitelists itself, but that's probably not a good approach, and I agree with you there. If we get disable_joins done and we... well, the thing I'm concerned about there is that even if we fix the disable_joins thing, we're still maybe 20 MRs away from having all the disable_joins.

B
Yeah, that's fine. I guess I'm kind of reluctant to say that disable_joins is the solution to these problems by default. I actually saw several examples where we just removed has_many :through relationships already, because they were never used. has_many :through is optional; you're not expressing the structure of your database when you write has_many :through. People sometimes add them preemptively, going "it's nice to describe how this thing is related to this other thing through another thing", but then they never call those methods.
C
That's okay! That's another tricky aspect because, one of the things that I saw, and there is also that, is that if you don't have has_many relations, it's very likely that people will use the .where clause fairly often, and it's actually happening in some cases.

B
Well, I don't know. Sort of conceptually, when you're reviewing code, you should be able to say whether you think using a scope with .where looks cleaner than a has_many :through, and the reviewer should be able to say: okay, that is actually a has_many :through that you could be writing there. But I'm not convinced that keeping around has_many :throughs just to avoid people ever writing .where is worth it, because it doesn't actually work anyway: people don't see all the has_many :throughs.

B
Like, when you look at an ActiveRecord model with 40 has_manys and 60 validations in it, you don't necessarily know that it actually contains what you need to get at, and so you just go ahead and write a scope with .where anyway, because that's how your brain thinks about how to access that data.
B
Yeah, anyway, I'm still concerned that if we add the disable_joins thing, we can't actually solve all the disable_joins right away. But maybe you're right: with the feature flag, we add them all disabled by default and then we just don't turn on the feature flags.

C
I mean, we still need to enable the feature flag at some point, but there is the potential risk associated with performance, so we need to actually validate that. So probably we need to turn them on feature flag by feature flag, and also check that it actually doesn't harm performance later.

B
What I feel a little bit uncomfortable about is saying that we would have feature flags in the code that we have not even confirmed are probably safe. Like merging a feature flag to the code for a join query where, if we just looked at it, we'd go: oh, there's absolutely no way that's going to be performant based on the usage. The usage is doing a sort, and now that has to happen in memory, and it's going to load a million records as opposed to 20.

B
If we don't evaluate each one of those things individually, I feel uncomfortable merging a feature flag that has that hidden behind it, because there is no code review process for enabling the feature flag. You're saying: okay, it's up to the developer to test if it's performant. But they don't have the time of a database reviewer going: okay, let's look at the specific queries and usages to figure this out. You want those things to be vetted at a thorough level.
C
For this one I expect, for example, only one... okay. So in random projects this case is probably something to validate, but probably up to 100 of the records that you manually assigned. So I think as soon as we have the MR it's going to be much easier to see it and discuss whether this...

B
Yeah, but doing all of them in one go seems like a bit much for us to go through, asking "is this valid?". That's why I kind of like the idea of saying we push those all off to the relevant teams, for them to justify that their has_many :through is valid and safe, or they have to write their queries differently and denormalize and remodel their data.

B
That's the same way we've been doing it with the join queries, and many of them will come back with: okay, yeah, I've validated this, here's why I think this has_many :through is safe. And then we can merge that.

B
Yeah, so teams have already started doing that. I mean, somebody from Secure was already showing me merge requests with disable_joins, because they branched off of the other disable_joins thing, and then we looked at it and they were unused, so we removed them.

B
Okay, well, maybe they were finding things that you didn't find in the PoC, but okay. And you're also saying that the test suites... yeah, okay. So, I mean, unused is okay. Maybe unused is a distraction from this conversation and I won't bring that up, but yeah, I think it's a good way to go, to distribute disable_joins to the teams as another solution for solving their join queries.
B
Well, now we're just talking about how we get to the point where people are presented with all of the join queries to solve. We had the SQL parsing thing that you implemented. You say that it's not a good idea to do that before we have disable_joins implemented, and I'm still unsure if that disable_joins... really, it feels like it's too many steps; we're still waiting to get that SQL parser out to people.

C
So this is my thinking on why we need to get disable_joins in first: because these are the things that we already know cause the problems, and otherwise you would have to add, I don't know, a skip for everything the parser surfaces.

B
I think it would be good if we did, because that still aligns with the same goal we just talked about. If somebody has to address the has_many :throughs, they have to do them one at a time, systematically, and go: okay, is this a safe has_many :through for disable_joins? And because it's thousands of tests doesn't necessarily mean it's thousands of lines of has_many :throughs. We really shouldn't have that many has_many :throughs that are going back and forth between CI and main.

C
Yes, but for the PoC, the approach I took was that I wrote a script that goes through all relations and ensures that disable_joins is configured appropriately for all the ones that would simply need it. So the PoC can actually define which disable_joins are needed today by the code because, some of them, as this script may show, may be not used, as you said. But the PoC is ensuring that this specification is simply correct from the functional point of view.
C
So if you use this thing, it will simply work, because it understands that this data is here and that data is there, so this relation needs to have disable_joins configured. And, for example, I actually found the disable_joins flag initially not so straightforward in terms of declaring where it should be used. That's why I tested it on many levels and wrote a script to traverse all the relationships, to ensure that it's actually configured.

C
I think it's adding extra work. I think you can achieve exactly the same with the lambda in disable_joins and a feature flag. If you want to ensure that things continue working locally, I think the feature flag can accept probably something like, I don't know, an extra parameter with the URL or whatever, or the feature flag already has a URL.

C
I think there is a challenge: we need to figure out a process that works for the majority of them because, as you said, for a part of them it's basically too much effort to go one by one through them. So we need to figure out a way to do it in a much more efficient form, to not spend four months on actually getting this done. So there is a valid question about the process.

C
So the question is how we do it, like automatically, or how we validate it, or how we enable it in bulk, how we validate the performance of that, what metrics we look at. Because right now, if we create another 40 issues, we'll simply not be able to do it.
B
It doesn't need to be 40 issues, though; a lot of them will be grouped. But yeah, I mean, grouped into merge requests and whatever else. My preference would be to distribute them and allow the teams to group solutions as they please. 40 is not huge when we distribute it to eight other teams, and they can look at the ones that relate to them, group them if they're equivalent, and have the review process.

A
So, if I jump in here, what's the issue? Is it that enabling a feature, asking each group to enable a feature like this, is problematic because there's no MR?

B
Yeah, basically Kamil's proposal is that we use disable_joins everywhere, but behind a feature flag. So we just add 40 feature flags to the code, and then we say: okay, you now have disable_joins, and they're all enabled by default in CI but disabled by default in production, because they're a feature flag.

B
Yeah, but there will be code review if the engineer themselves adds it, for one thing. But if we add it for all 40 and don't look at our own code to figure out if they're sensible, then we still have feature flags shipped to production that potentially have seriously bad performance. The performance might be so bad that you're loading a million CI jobs for a project when you are planning on sorting those CI jobs by something else. That is the kind of...
C
I want to see a failed test, for me to actually understand what I am fixing. So this will generate a failed test if you enable that, and we have some kind of control. And I fully agree with you, Dylan, that there is a tricky part in how we could roll that out. I'm wondering how we can actually (and I will post the cross-join discovery soon) get people to take leverage of that, because when we add cross-join discovery, it's also going to discover all of these joins that disable_joins covers as being in use and having to be fixed.

C
Simply, that's really my concern. And for me, Feature.enabled?, even maybe if we mark... even if we, I don't know, stub Feature.enabled? and look at this very particular feature flag name to, I don't know, inject into pg_query or anything, that will simply work.

C
I will be happy with that. Maybe it's not Feature.enabled?, but maybe something following the idea of the allow-cross-joins call, an execution that returns true, but that's probably also tricky to do. But maybe this is how we approach it. The question is how we find the disable_joins that are simply okay to be set to true, and the ones that are simply unlikely to be true and have to be reworked, because there is like this...

C
I think there are plenty of them that we need to fix, and in probably 90% of the cases it's probably fine to set them to true, because of the quantity. So how can we make a decision quickly and move on? On top of that, I think with the feature flag we can probably work around it by reworking this pg_query part: if we have disable_joins with a lambda like this, we could simply use this syntax.

C
It's probably going to be lazily evaluated as part of the execution, so it would toggle the flag on the pg_query analyzer for this. But then we are still left with the problem of the 40 issues and how we fix all of these disable_joins without spending a few months on that. So we would probably solve the problem of the cross-joins, but we still have the problem of the disable_joins, though probably only a minority of the cases are not simply fine to set to true.
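The lambda idea sketched below: the condition is captured in a lambda and only evaluated when the query actually runs, so toggling the flag changes behavior without a redeploy. The flag name and `feature_enabled?` helper are stand-ins for the real `Feature.enabled?` check:

```ruby
# Stand-in for a feature flag backend; real code would call Feature.enabled?.
ENABLED_FLAGS = []

def feature_enabled?(name)
  ENABLED_FLAGS.include?(name)
end

# Something like disable_joins: -> { ... } defers the decision to query
# time instead of fixing it at class-load time.
disable_joins = -> { feature_enabled?(:ci_builds_disable_joins) }

before = disable_joins.call  # flag off: joins stay enabled
ENABLED_FLAGS << :ci_builds_disable_joins
after = disable_joins.call   # flag on: joins get disabled, lazily
```

The same lazy hook is where the pg_query analyzer toggle mentioned above could live: the lambda runs at execution time, exactly when the analyzer inspects the query.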
B
But I'm wondering... well, I agree with everything he said. What I feel is: if we're already monkey-patching into the has_many :through stuff, we can add the issue URL syntax I talked about, rather than disable_joins: true. You don't actually allow people to disable joins without code review; we just have allow-cross-joins inline on all of the has_many :throughs. They have to have the issue URL, and that way we tackle it in the same way all cross-joins are tackled in GitLab: as an issue assigned to a team.

B
No... I don't know. Based on all the ideas that we just discussed, I haven't heard a simpler one. Well, I don't know, I mean, "simple" is just the wrong word for sure, but I haven't heard an option that gets us to the endpoint quicker. Feature flags, I don't think, get us to the endpoint quicker, for example. Like, we can do something...

B
Yeah, okay, I'm fine with that. It's lazily evaluated, and what you just described would be a lambda returning false or something, I think, rather than checking a feature flag.

B
Yeah, let's get all 40 of them classified like that and then passing, and then we'll go ahead and create issues for them in batches. I'm happy to go through 40 things and go: this belongs to this team, or this team. And just create quick issues and label them, and they'll just go into the same bucket of issues that we already have for teams to solve: you have a join query to solve.
C
I'm still worried by the time it takes to do it, because you're still talking about, I don't know, let's say maybe not 40, maybe 20 MRs hiding disable_joins, that have to go through all of their reviews and go through their process, and it's going to take a lot of time, simply. So I think we need to figure out a way to make it more robust since, in a lot of the cases, this is going to be probably the only reasonable solution.

C
So maybe my proposal for that: the first group is probably the ones where we can actually set disable_joins to true right now, yep, and we have the first group handled. And for the has_manys, we can probably write a query that is going to find the biggest number of entries returned by the disable_joins, and we can probably run this on top of production and see if we are going over the boundary of a healthy number of records.
B
Yeah. Based on my understanding of has_many :through, I don't understand how you can look at a has_many and say this is going to return 30 records, unless the join table only has 30 rows in it. Any table that has more than 30 rows has the possibility of returning more than 30 records; it depends on how they use that has_many.

C
So I'm going to think if there's some way to generate these queries because, maybe, if we can identify the cases where, because of the access pattern, we simply just don't have a possibility of accessing that many rows, and just set disable_joins on those has_many :throughs, then we could actually focus our attention on the ones that are really problematic.

C
To be honest, my solution for the problem would be: I would enable the feature flag, roll back the feature flag as soon as I see anything being broken, and just keep it running, and would just continue adding more feature flags, with a general understanding of how big the expected quantity of entries is. And this is how to verify that, but really raise the awareness.

C
This is the aspect that we are changing, and this is probably... because, except for probably maybe two cases, I don't think that these disable_joins require very extensive validation. One thing that I would be worried about is the number of projects assigned to the runner.
C
...we came up with, as long as we kind of focus on what does matter, not finding out each individual one. So yeah, I'm happy with at least the first suggestion, because this seems pretty straightforward and reduces the amount of the future work that we have today.

B
Yeah, that's why I would say disable_joins is almost like an anti-pattern. If you were doing code review and you saw somebody doing a join query in memory in Ruby (they were loading from one table and then joining it against another table in memory and then applying limits), this is part of our interview.

B
One of the things we do in our interview is: if you saw that code, you should say no, that's not a good idea, code that loads everything from the database and then sorts and limits it in memory. So that's what disable_joins does. We probably should see it as a red flag in code review, something that has to be justified by... yeah, yeah. But in this case we know it's...