Description
This Proof of Concept implements a cluster-wide set of tables (users, application settings, ...) and a pod-local set of tables (groups, projects, ...). It shows how some of the interactions behave when a significant amount of data is split between Pods.
I would like to show you a quick proof of concept that tries to model Pods strictly following the proposal. The main purpose of this proof of concept is to model application behavior in the proposed design.
The main aspect of this design is that all of the information about projects and groups is stored locally to a pod and is never served outside of that pod, while information about users and application settings is stored cluster-wide. So, for example, user information is effectively decomposed and used across all pods.
As you can see, we have successfully logged into GitLab. Currently there are only two pods: Pod 0 and Pod 1. You can dynamically change the Pod that you are interacting with using the performance bar dropdown.
Let's first take a look at Pod 0. Pod 0 is where the personal namespace for the root user was created, and it is also where our database fixtures were run. This is why you actually see on this dashboard all of the projects that are available on Pod 0; this is an implication of the design.
If we switch to Pod 1 here, you will see a single project that was created exclusively on this pod. There is a limitation that a new group needs to be unique across all pods. So, just because we created a new group and its project on Pod 1, that group can no longer exist on Pod 0; this is an implication of the design.
All of the information accessible on the user dashboard is scoped to the projects available only on a given pod. For example, you can see here 63 open issues from various projects. However, when we switch to Pod 1, we actually see zero, and this is expected, because on Pod 1 there is only a single project with a single issue, and this issue is not assigned to anyone. This has implications for all items that the user interacts with on a given pod.
Currently it shows a 404, and this is the expected behavior, because in the context of Pod 1 there is no directly accessible information about this project; it is only accessible in the context of Pod 0. This is also the reason why, when we tried to reference an issue from this project in the context of Pod 1, we were unable to do so, simply because we were working in the context of Pod 1. Now let's switch back to Pod 0.
This proposal, sorry, this proof of concept uses PostgreSQL schemas to provide a separation between pods while still allowing cross-joins between cluster-wide tables and pod-local tables. The schema public is our cluster-wide schema and there is a number of tables in it. Let me show you an example: ci_instance_variables is actually a cluster-wide table, the same as application settings, the same as CI runners, the same as the various deploy tokens, GPG keys and, as mentioned before, users. This is why we are able to continue interacting with another pod even though we logged in on Pod 0: all of the session details can be shared between each of these pods, and the application can successfully authorize whether the user is allowed to perform those operations.
You may ask what is actually part of a pod. There are as many schemas as there are pods, and the pods are numbered with a monotonically increasing number. If we look at the schema for Pod 0, these are all the tables that are available in the context of Pod 0. Probably the most prominent tables here are, of course, namespaces, which holds our groups and personal namespaces, and all of the tables related to projects.
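To make the split concrete, here is a minimal sketch of how such a layout could be created with PostgreSQL schemas. It is illustrative only, not the actual PoC scripts; the database name gitlab_poc, the schema names gitlab_pod_0 and gitlab_pod_1, and the trimmed-down table definitions are assumptions, and it uses the Ruby pg gem.

require "pg"

conn = PG.connect(dbname: "gitlab_poc")

# Cluster-wide tables live in the default "public" schema.
conn.exec(<<~SQL)
  CREATE TABLE IF NOT EXISTS public.users (
    id bigserial PRIMARY KEY,
    username text NOT NULL
  );
SQL

# Each pod gets its own schema; pod-local tables (namespaces, projects, ...)
# are created once per pod schema. Schema names here are hypothetical.
%w[gitlab_pod_0 gitlab_pod_1].each do |schema|
  conn.exec("CREATE SCHEMA IF NOT EXISTS #{schema}")
  conn.exec(<<~SQL)
    CREATE TABLE IF NOT EXISTS #{schema}.projects (
      id bigserial PRIMARY KEY,
      creator_id bigint,  -- points at the cluster-wide public.users table
      name text NOT NULL
    );
  SQL
end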
The next question can be how we actually identify which tables should be pod-local tables and which should be cluster-wide tables. There is a set of different scripts to perform that task. The main one is the script that classifies the pods database: it iterates over all of the foreign keys of each table and outputs the result into a YAML file. Each entry in that file then carries various pieces of information that try to model the affinity between tables. So, as you can see here, users is a cluster-wide table that is referenced by a number of other tables; the users table is currently referenced, I think, by around 200 different columns. So we need to somehow make this classification script aware whether a given reference is related or external. Related means that the referencing table should live in the same database; external means it may point outside of it.
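As a rough illustration of the kind of classification described here, the sketch below walks the foreign keys in information_schema and marks each reference as related or external depending on whether it points at a cluster-wide table. The cluster-wide table list, the output file name table_classification.yml, and the database name are assumptions; this is not the actual PoC script.

require "pg"
require "yaml"

# Assumed, incomplete list of cluster-wide tables for the purpose of this sketch.
CLUSTER_WIDE = %w[users application_settings ci_runners ci_instance_variables].freeze

conn = PG.connect(dbname: "gitlab_poc")

# Collect every foreign key: which table and column reference which table.
rows = conn.exec(<<~SQL)
  SELECT tc.table_name,
         kcu.column_name,
         ccu.table_name AS referenced_table
  FROM information_schema.table_constraints tc
  JOIN information_schema.key_column_usage kcu
    ON kcu.constraint_name = tc.constraint_name
  JOIN information_schema.constraint_column_usage ccu
    ON ccu.constraint_name = tc.constraint_name
  WHERE tc.constraint_type = 'FOREIGN KEY';
SQL

classification = Hash.new { |hash, table| hash[table] = [] }
rows.each do |row|
  # A reference to a cluster-wide table (for example users) is "external":
  # the referenced row may live outside the pod. Everything else is "related"
  # and should stay in the same pod-local schema as the referencing table.
  kind = CLUSTER_WIDE.include?(row["referenced_table"]) ? "external" : "related"
  classification[row["table_name"]] << {
    "column"     => row["column_name"],
    "references" => row["referenced_table"],
    "kind"       => kind
  }
end

File.write("table_classification.yml", classification.to_yaml)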
Once we have this information, there is another script that creates the pods database. It uses PostgreSQL schemas to create this virtual barrier of visibility between the different schemas. We then effectively iterate over each of these tables, creating them in the database many times, once per pod schema, and modifying each sequence to indicate that the sequence is not owned by anyone, which means that the sequence is effectively cluster-wide.
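A minimal sketch of that sequence tweak, reusing the hypothetical schema and table names from the earlier sketch (again, not the actual PoC script):

require "pg"

conn = PG.connect(dbname: "gitlab_poc")

%w[gitlab_pod_0 gitlab_pod_1].each do |schema|
  # bigserial left each id sequence owned by its table; detaching the ownership
  # (OWNED BY NONE) is the "not owned by anyone" step described above, so the
  # sequence can be treated as cluster-wide rather than tied to one pod-local table.
  conn.exec("ALTER SEQUENCE #{schema}.projects_id_seq OWNED BY NONE")
end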
This actually allows us to have most of the application working, because the application, when it is running, is configured with a schema search path. This schema search path allows us to use the cluster-wide tables and to cross-join them with the pod-local tables, but only for a single pod at a time. This is how we can dynamically switch between different pods and model all of these interactions with minimal effort.
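For illustration, this is roughly what the search_path mechanic looks like at the SQL level, again using the hypothetical schema and table names from the earlier sketches:

require "pg"

conn = PG.connect(dbname: "gitlab_poc")

# Scope the connection to one pod: its schema first, then public for
# the cluster-wide tables.
def with_pod(conn, pod_schema)
  conn.exec("SET search_path TO #{pod_schema}, public")
  yield
ensure
  conn.exec("SET search_path TO public")
end

with_pod(conn, "gitlab_pod_1") do
  # "projects" resolves from gitlab_pod_1, "users" from public, so a plain
  # join works without schema prefixes, but only for this one pod at a time.
  conn.exec(<<~SQL).each { |row| p row }
    SELECT projects.name, users.username
    FROM projects
    JOIN users ON users.id = projects.creator_id;
  SQL
end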
This schema search path is first configured statically on boot-up, in database.yml, but it is also configured dynamically based on the user choosing Pod 0 or Pod 1. This information, whether Pod 0 or Pod 1, is sent via a cookie named selected pod. The selected pod cookie is then picked up by Rack middleware to temporarily configure the database connection, but it is also passed through to Sidekiq workers so that it can be resolved properly for the jobs being executed, and this is exactly what you can see here.
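The following is a minimal sketch of how such a cookie could drive the pod selection for web requests and be propagated to Sidekiq jobs. The cookie name selected_pod, the job payload key, and the CurrentPod helper are assumptions for illustration, not necessarily what the PoC uses.

require "rack"

# Hypothetical helper that remembers the pod chosen for the current thread;
# in the PoC this is presumably also where the search_path would be switched.
module CurrentPod
  def self.use(pod)
    previous = Thread.current[:selected_pod]
    Thread.current[:selected_pod] = pod
    yield
  ensure
    Thread.current[:selected_pod] = previous
  end

  def self.current
    Thread.current[:selected_pod] || "0"
  end
end

# Rack middleware: read the cookie and scope the whole request to that pod.
class PodSelector
  def initialize(app)
    @app = app
  end

  def call(env)
    request = Rack::Request.new(env)
    pod = request.cookies.fetch("selected_pod", "0")
    CurrentPod.use(pod) { @app.call(env) }
  end
end

# Sidekiq client middleware: stamp every enqueued job with the pod that
# scheduled it, so the worker side can restore the same scope.
class PodClientMiddleware
  def call(_worker_class, job, _queue, _redis_pool)
    job["selected_pod"] = CurrentPod.current
    yield
  end
end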
If we look at Sidekiq, this is an entry for a Sidekiq job, and it carries something like selected pod: 0. It means that we scheduled this Sidekiq job while we were running on Pod 0.
This issue had a reference to a project that was part of Pod 0, but we were unable to resolve this project, because it is not available in the context of Pod 1. So maybe one of the next steps is to try to figure out whether this kind of workflow, where you reference things across different pods, is feasible and how it would look.