From YouTube: 2021 07 21 EMEA Sharding Group Sync
B
Need to get the conversation going, so: discussion on whether to use schemas or not, and it looks like Tong- has already added a comment about experimenting with table names.
B
So, thoughts on schema or not schema.
C
I think, to sort of answer that question, we have to take a step back to our test suite and development environments: how are we going to enforce that?
C
How do we enforce that a given table we have isolated cannot use, say, cross-boundary joins, that joins are not allowed cross-shard or cross-database? Because this immediately locks us into the next step, but basically at a code level. Let's say, for this case, we move the users table. How would we enforce that somebody doesn't add, for example, new foreign keys, or join that table onto something that may not be available? Because I think if you start there, then the choice of using different schemas or a different database or whatever it is, that's basically a solution to this question, so I think we should start there. I can answer part of that: for foreign keys, we could disallow foreign keys from being added in migrations for certain tables.
D
I mean, you would have to drop or rename tables to make them invalid, or parse the executed SQL to detect it, if you are using, I guess, a blocklist of tables. But I think this is the same challenge: doing these things on dev and test kind of breaks how you even migrate that, because for the tests, on each test run, we open transactions. So probably we can do these DDL changes within a transaction.
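The test-suite idea above leans on DDL being transactional in PostgreSQL, so a table can be hidden inside a test's transaction and come back on rollback. A minimal sketch, where `ci_builds` and `gitlab_ci` are placeholder names, not our actual layout:

```sql
BEGIN;
-- hide the table from its usual location for the duration of the test
ALTER TABLE ci_builds SET SCHEMA gitlab_ci;
-- ... run the test's queries here; unqualified references to ci_builds
-- stop resolving (assuming gitlab_ci is not on the search_path) ...
ROLLBACK;  -- DDL is transactional in PostgreSQL, so the table is back where it was
```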
D
They
will
be
roll
backed,
but
it's
not
so
straightforward.
If
you
want
to
kind
of
replicate
the
same
for
the
development
it
did
running
of
the
db
migrate,
if
people
will
migrate,
schemas
would
also
to
structure
sql.
So
it
will
kind
of
end
up
very
often
at
least
in
from
my
head,
like
in
the
state
that
develop
from
the
development
database.
D
The
structure
sql
would
not
be
equal
to
what
is
committed
in
the
repo,
because
you
would
have
to
make
some
changes
to
the
structure
to
kind
of
enforce
that
forbidden
constructs
cannot
be
used
and
this
would
be
either
drop
tables
or
rename
them,
or
do
something
different
that
I'm
not
aware
of.
Currently,
that's
that's
really,
my
my
perspective
on
the
alternatives.
D
If we don't do schemas, there is a workaround, which is drop or rename. If you drop or rename, then even for the foreign keys you can discover that you are foreign-keying to a table in a different context, based on the name or comment of the table, and we could probably discover a lot of these things. But you kind of need to reduce, or change, your view of the database, effectively.
C
We also need to do that transparently, as in: if we make this change, at least in my opinion, we shouldn't require, you know, 300-something developers to now suddenly do some extra steps. And it should also be something that's transparent to production, because whatever approach we take, whether it's schemas, databases, or something else, it's going to be a while before we deploy it.
C
My concern there is mostly that we haven't really tested this in any capacity with, you know, PgBouncer or the load balancer, etc. And theoretically you can cheat: if we have a setup where we move certain tables to a different schema, and we also do this in the development environment, somebody could essentially cheat the system by just changing their search path, and then they can join all they want, as far as I understand at least.
D
Yes. Like, we use that for the partitions: we use fully qualified names, we prefix the table name with the schema, and this is how we address a partition, static or dynamic. We don't set a search path, and this is how we create and analyze partitions today, right? So this is the simplest way to cheat.
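To make the escape hatch concrete, a sketch of both sides (the schema and table names here are illustrative assumptions):

```sql
-- with isolation based on search_path, only unqualified names are restricted
SET search_path TO gitlab_ci;
SELECT count(*) FROM ci_builds;           -- resolves inside gitlab_ci

-- a fully qualified name, as we already use for partitions, bypasses it entirely
SELECT count(*) FROM gitlab_main.users;   -- the simplest way to cheat

-- or simply widen the path again and join across the boundary
SET search_path TO gitlab_main, gitlab_ci;
```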
C
Right, so it is possible. I don't expect people would do this deliberately, and I think when they do, you know, we basically slap them on the fingers. But the sort of core concern that I have is: I prefer having a solution where it's physically impossible, and not just unlikely, simply because the harder guarantees are better.
C
Something like that, I think, would be very useful, because then we could essentially do it at the model level. We could say: hey, this model can't join, you just can't do it, and I think we're basically done. Of course you still have foreign keys, but at least the joining part is fixed, as it becomes a separate query. Code that requires the join will break, which means you have to fix it; it shows up in the tests. It's very clear.
C
Instead of this sort of behind-the-scenes system, where you're not really aware of it until you run into something. Whereas I personally prefer this explicit thing somewhere that says you can't do this, like it just physically won't work. That said, I don't know if there's a way to backport that stuff to our version of Rails.
D
Because the GitHub blog post mentioned the disable-joins feature, and we actually backported that, and we are using it in the PoC.
C
We are, yeah. I think it might have been that one; I can't find it. My history is full of GitHub Copilot stuff, so it's somewhere in the pile, but somebody shared it in the sharding channel a few days ago. I think it might have been Jariff.
D
But what it offers is fairly limited, and it works only in specific cases. Even if you use this syntax, you can still cross-join: if you use, let's say, .joins, for example, you can still cross it. So it doesn't really limit things; it kind of changes the behavior when you are using it with a very particular syntax. If you, for example, use .includes with disable_joins, it will still be joined; it will not use separate queries for that. You need to explicitly use .preload. So this is more like a handy feature to guide you that some of the syntax is not allowed, but it's not a security feature that forbids it.
D
They started adding some additional patches on the eager loading to prevent that, but it still doesn't cover our cases. It's more like a hint, but not a security feature that forbids it.
C
What we could do in theory: we have sort of two options. One, we take a set of tables and we say these now live in the ci schema, and we add a migration that does that. As far as I understand, this is sort of an instant change; you can do it in production, no problem.
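A sketch of what that migration could boil down to at the SQL level (names are assumptions); `ALTER TABLE ... SET SCHEMA` only rewrites the catalog entry, not the data, which is why it is effectively instant:

```sql
CREATE SCHEMA IF NOT EXISTS gitlab_ci;
-- catalog-only change: no rows are rewritten, so this is near-instant
ALTER TABLE ci_builds SET SCHEMA gitlab_ci;
```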
C
I do think for that we should somehow test how PgBouncer works with this, with its connection pooling, and how the load balancer reacts. I think the load balancer will probably be fine; with PgBouncer, I'm not sure if you set the search path globally or per connection. I saw some stuff being discussed about it, but basically that may require that, prior to us deploying such a migration, we first have to change PgBouncer to include, let's say, not just the ci search path but maybe a couple of others, just ahead of time.
D
So from what we tested and from what I saw, Patrick, and correct me if I'm wrong: we don't have a very reliable way of doing this on the PgBouncer server. Even though there are commands, they are executed on a best-effort basis. But what we can do is alter the role, or alter the database, and set a default search path, so basically attach our schema search path to either the role or the database.
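What that role- or database-level default could look like (role, database, and schema names are assumptions); note that these settings only apply to new sessions, which is why existing pooled connections would have to be cycled:

```sql
-- default search_path for every new session opened by this role
ALTER ROLE gitlab_ci_user SET search_path = gitlab_ci, public;

-- or a default for every new connection to the database
ALTER DATABASE gitlabhq_production SET search_path = gitlab_main, public;
```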
D
So
technically,
from
what
I
understood
from
patrick.
This
would
require
that
we
have
to
cycle
all
the
outgoing
connections
from
the
pg
bouncer
to
ensure
that
they
fetch,
like
a
new,
a
new,
a
new
thing.
Out
of
that,
I
think
what
I
learned
as
well,
that,
like
this
race
configuration
database,
yms,
schema
search
path,
is
simply
no
go
because
it's
very
unpredictable
and
it
simply
will
fail
on
the
pg
balancer
and
we
cannot
guarantee
anything
but
from
what
I
kind
of
see
right
now.
D
Actually, Andreas proposed a very good idea: at some point, as an intermediate step, we could open two separate connections to the same logical database, knowing all the implications for performance, and we could have these different connections configured by default, by role, to have a different schema path.
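Andreas's intermediate step could be sketched with Rails' multi-database configuration: two entries pointing at the same physical database, each connecting as a role whose default search_path was set up ahead of time (all names below are assumptions; per the PgBouncer discussion, the path hangs off the role rather than schema_search_path):

```yaml
production:
  main:
    database: gitlabhq_production
    username: gitlab_main   # role with: ALTER ROLE gitlab_main SET search_path = gitlab_main
  ci:
    database: gitlabhq_production
    username: gitlab_ci     # role with: ALTER ROLE gitlab_ci SET search_path = gitlab_ci
```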
A
Right. I think you would then never want to be using an explicit search path or fully qualified table names. So basically, if you only rely on the search path being correct, then you can only work with ci tables, if you have it configured that way, or with the main schema if you have it configured the other way. But if you never use explicit schema names, then there is less risk of cheating across schemas.
A
If you have those two separate connections, and assuming we can set that up in production, we can even switch things much more incrementally to using the separate ci connection, even with a feature flag, even for our group only. We could say that yeah, okay, now we're using the ci connection, and we can see if a certain code path actually works if it doesn't see any of the other tables that are not inside the ci schema.
A
That is something where I see a huge benefit, compared to having to wait until we actually migrate data physically to a separate database, and only then realizing that yeah, there are some code paths that still don't work that we still haven't caught. Because then this is sort of a big-bang migration, where the rollback is very expensive, if it is possible at all. Whereas when we use schemas, we can really, if we want, do very small steps.
A
That entails a PgBouncer being set up, and it also means that, since we're talking about connections to the same database, those additional connections are going to go to the same single Postgres cluster that we have today, and that is most likely an additional overhead. That is an unknown, I think, that we would have to check.
C
Yeah, I think the benefit there would be that those connections would, I would imagine, mostly sit around idle unless we're actively using them, so I think that should be fine. So, put it this way: I am still a little uncertain, but that is not so much because of schemas, but more because I kind of look at this and wonder why there is not a better way. I'm basically a little disappointed in the sort of state of, I guess, Rails and Postgres.
C
Probably because there are not a lot of people who run into this anyway. Given that the Rails disable-joins solution that I had hoped would be helpful is not as helpful, and given that writing RuboCop rules or whatever is probably going to be a total pain and not detect everything, it seems like schemas might be the best approach.
C
I think the, you know, sort of big asterisk there is: we just have to make sure it works with PgBouncer before we commit a lot of time to this, because it's a bit of a waste if we, you know, do all these migrations, set that up, and then we deploy through our production environments and, oh no, PgBouncer just doesn't work with this, sorry. In that case, I think we can't do this thing.
A
It can be any name, and we've had that before, where people were configuring whatever schema they thought of; it can be gitlab or something. But due to the fact that we're not even explicit about using schema names, it still works. You just can't assume that public is the default schema.
A
I wasn't quite sure; I think I caught somewhere that having a separate database is basically an optional step, even for self-managed. So only if they want to do that would they have a separate database. And in that case I can even imagine that, if you have schemas and you logically divide your particular tables into those schemas, then for those installations that don't have a separate database, you would be able to just keep having one connection pool.
A
Basically one that has a very permissive search path where you can actually see all those tables, and basically nothing would change for those installations. They would keep having one database; there is no migration needed whatsoever, which is probably quite difficult in a self-managed setup, to do the data migration to a separate database with no downtime.
A
So in that case we wouldn't need to do anything about self-managed, and they can opt in to having a separate database. Is the plan to offer a separate database as an opt-in solution for self-managed, or did I get that wrong?
D
About self-managed: we didn't yet define exactly how we'll go about that, but one of the suggestions from Sid, and the expectation, is that we run self-managed exactly the same as we run production. So this may imply separate logical databases on the same database server, but at least I would expect that we want to end up with a solution where we use separate connections anyway.
D
And we should not run it all in a single big transaction, because then what happens is: on GitLab.com we start running transactions across two different databases, but if we keep in mind our permissive search path, all of these transactions will basically be joined together. So on-premise versus GitLab.com will behave slightly differently, because GitLab.com will run this sequentially, while on-premise we basically run it together at the database level.
D
So
I
think
the
way
how
we
end
up
on
running
on
on-premise
is
that
we
truly
have
to
embrace
that
even
on
the
on-premise.
Even
if
this
is
same
database,
it
may
be
same
logical
database
but
different
search
path.
Whatever
we
choose,
we
keep
running
two
separate
connections
to
ensure
that
it
behaves
exactly
the
same
as
github.com
as
for
the
all
too
faced
commit
stuff
that
we
are
kind
of
gonna
be
forbidding.
D
This
would
be
like
my
my
like
understanding
about
the
seed
expectations.
It's
like
it's
less
relevant
about.
If
these
are
like
the
data
are
together
or
not
together.
If
they
are
separate,
logical
databases
or
separate
servers,
there
is
like
this
structural
choice
from
the
application
that
it
runs
across
two
different
connections.
So
this
is
the
mechanic.
I
think
that
we
should
retain
to
ensure
that
it
behaves
exactly
the
same
as
github.com.
A
Okay, that makes sense. And regarding the data migration into a separate database, is that something that you're also aiming to ship to self-managed, so they can also do that?

We did not yet discuss that.
B
All right, on to number two, Fabian.
E
Oh yeah, thank you for responding to my issue request.
E
I perceive a certain level of frustration in the responses, and I can relate. The context for this request is the discussion on decomposition and the concerns raised by Sid regarding this particular MR for normalization, which we opened on Monday. I have so far, I think, failed to convey that these are not concerns, and I've tried to represent the opinion of the team faithfully.
E
That's where I'm coming from; this is kind of where we are at. And my interpretation of where we are at with regards to decomposition, unrelated to this normalizing of data in this particular MR, is that, first, I don't think decomposition definitively closes the door on future sharding strategies; I think that is not the case. And secondly, normalization, and the denormalization for sharding, will be a large part of the work that has to happen, and this particular MR, being one of many, does not add significant overhead.
C
Yeah, basically, this sort of normalization, that's been said before, but yeah, the answer depends on what we normalize and how. In general, we will always have shared tables to some extent, and having shared tables itself is fine, unless you want everything fully isolated.
E
But I think this is sort of my point: I don't think we are clear yet on what we're going to build in the next two or three years; that is sort of TBD. You know, we don't have a definite answer as to what that would entail, but the situation is such that what we're doing right now is not closing the door on many of these futures. Because we have shared tables, we will have to deal with them; that is a part of, you know,
E
You know, the work that will need to happen, and so I think that is important. There's no one-way door that has been closed, such that with this one MR all of a sudden we're no longer able to shard in the future, or it is a significant increase of burden, because it's one of many. And I also agree with you, and I think that was not really made apparent so far, that we reduced a lot of data, and if you have less data, it becomes easier to handle that on some level.
E
Yes, I agree with you. And then the second thing is that the decomposition effort, taking all of the ci tables and putting them in a different database, is not making it impossible to shard by whatever strategy later on, as far as I understand. Of course we'll have two databases and we will need to deal with it, but it's not that we can't.
D
I would even say that if you decompose, it's much easier to shard, because you are starting with a smaller data set, much more contained; you can figure out a better sharding strategy, and you're actually sharding a small portion, or maybe a big portion, of the decomposed part of the application without affecting others. So it's much easier to optimize and do it iteratively, instead of doing an all-or-nothing, effectively, with everything.
A
Can I add something here? What we're doing with the normalization is basically a lookup table, and that's a good strategy in a relational model in any case, so it's good practice where we are right now. But even if we go into sharding, nothing keeps you from having lookup tables in sharding. You just want to make sure that you have the right data on your shard, and it doesn't have to be a shared global
A
Table, right? So that means you lose some benefits, because you potentially have more duplicated data, but you can still reap the benefits of that normalization in a sharded environment. So, yeah.
E
And I think that's a good point, and if you can add that to the context, it would help me, because, you know, it's important to have this sort of track record. But I think there's also, for me, and I think I raised this, a fundamental question in the longer-term architecture of GitLab, like Yorick said, where
E
Maybe we want fully isolated tenants at some point, and if that is the case, it will break, you know, certain functionality that we have right now, or make it very, very difficult to do, yeah, like GitLab Pods. But that's a call based on our, you know, desire for single tenancy and whatnot, and nothing that we do right now, as I interpret it, and you have to tell me if this is wrong, will make it impossible to deliver such a thing.
E
You know, like all of the open-source projects, right? They just want their own little world, and that's fine. You know, then, okay, we're going to break that functionality and that's how it's going to be. But we don't know if that's the case; that's a longer-term question.
D
Yes, but you are raising the conflicting point, I think: is it really that we are talking about the tenants, so making it efficient around tenants, or do we talk about the regions, which means something like running in Europe versus the US? Because those are kind of two different problems to solve.
E
That's correct; that was my point. I think they're fundamentally different problem spaces with different requirements.
D
Yes, but I think there are also different solutions, because from the cost perspective, a region is going to be significantly larger instances, whereas for a tenant you really want to have the cost of a single tenant as small as possible. Even from the investment perspective, you're going to be investing different amounts of money in running those.
E
Yeah, but that's where we are. What I will try to do is summarize your points in this issue, in like three sentences, and share that; I think that's the best I can do. I know it's a distraction on some level.
B
I think what that doc is describing is trying to solve multiple problems with one solution. That's where we are, and that's where I think we have a different opinion, because we believe the problems should be isolated and solved by different solutions.
B
At the micro level, I mean the technical level, I've typed my suggestion there about sharding itself: draw a picture of how it looks if we shard by top-level namespace in the future, and also draw a rough picture, or tell the story, that we have not decided our sharding direction yet, so there are many possibilities with two-way-door decisions. That may help explain the sharding solution. But I think the discussion is at a much higher level: we're trying to solve all the problems with one solution.
E
To be perfectly frank, I think I've not personally found a good way of saying this in such a way that results in Sid agreeing, and I'm trying to find a way to do that in order for us to be able to move on with what we're doing. And I think this issue was our idea yesterday for how to get some of that data written down, and that already, I think, is very helpful. I think this picture is another idea for doing it.
C
Do we actually have people willing to pay a considerable amount of money for this? Because, as far as I understand, the people that use gitlab.com don't want their stuff isolated; as I said, they use gitlab.com because other projects are there. And the people that want to have their data isolated, like, say, a government: first of all, they're not going to use the SaaS anyway, because that's probably against all their regulations, and if they are, it's probably going to be something like Amazon's.
C
People who talk to other people, you know. I honest to God don't see the use case for signing up for gitlab.com and then saying: I want all my data in the EU, but I don't care about my logs and these things GitLab has decided not to localize, and I still want to be able to interact with non-EU projects, and I still want to have non-EU people interact with my projects. Because you get this sort of muddy gray area where whatever regulations they are perhaps trying to follow are, like, questionable.
E
We have no detailed understanding of these requirements for this target segment, and we should do exactly what you just suggested and actually figure that out before we do anything, because I cannot answer these questions, and if I cannot answer these questions, then how are you supposed to know what to implement?
E
Quite frankly, that is my stance. So I think, you know, there is a significant piece of discovery to be done for regions, and maybe even more broadly for tenant isolation, because I think that will have a very big impact on the architectural decisions about where we want to take GitLab. And once we have clarity on that, it will, I think, help us determine the right kind of, you know, solutions for these requirements.
E
They can look very different. And so that is my stance, and luckily I think we're not blocked on anything, because some of these efforts can be parallelized, right? You can do the decomposition; it actually buys us a significant amount of time, you know, with database scalability and whatnot. And yeah, I think we are in full agreement on that.
E
Do we do tenants, and will that solve our regions problems and our sharding problems and our scalability problems? Or do we do GitLab Pods, and will that solve all of our problems? Because I think we have not done all of the work necessary, from a customer perspective, to determine this, because there are trade-offs. I don't think there is a single solution that will magically solve all of these problems at once, without any trade-offs to consider.
E
Yeah, and I think we also know, and please again correct me if I'm wrong here, that the ongoing decomposition work, when it completes, will provide us significant headroom for future growth. I think that is, in my mind, why we also chose this: because it is something that we can deliver in a relatively, let's say, short amount of time, which then buys us a lot of the time needed to make all of these considerations and investigate them in an appropriate way.
E
I just need to convince Sid and others that that is the case, quite frankly, if that makes sense.
E
And so, to make the full circle, and then we can close: this is where this issue request is coming from, because I'm in the position where I'm not a developer, right? I can point at your expertise for answering these questions; it's not in my remit to say "Fabian has read many books on relational databases and things like that", right? That's not my point.
E
Yes, I agree with you. Anyway, that's the context, and I'm really glad and grateful for all of your support, so please don't think otherwise, and I hope I'm not causing more frustration than necessary.