From YouTube: Pods Brainstorming 2023-01-09 Dylan and Kamil
Description
application_settings schema change: https://gitlab.com/gitlab-org/gitlab/-/merge_requests/108452
schema validation tooling changes: https://gitlab.com/gitlab-org/gitlab/-/merge_requests/108462
B: Kamil and I are discussing the first steps for our Pods Proposal 2, which is to work on decomposing application settings and sharing them across two pods, as well as figuring out what we need to change in GitLab to support the workload of creating a group on the second pod. Kamil already has a merge request for the application settings part.
A: So, I'm not sure it really benefits us to talk about this particular MR, because it's more just about finding this class of errors and fixing them, and we seem to be aligned on that.
A: It's just more about getting this to a working state. Should we go back to our discussion on Proposal 2 that we had in the issue? Maybe.
A: I have the Q1 planning. Let me pull it up.
B: Everything... this one? Yeah, I've got it now. All right. What was the latest... I mean, one thing, when I looked at your merge request. Okay, so let me share that. When I looked at your merge request, which I'll open again just for the sake of clarity, I think the one disagreement we had was... No, this is not the right one. Obviously you just linked to another thing.
B: When I looked at this, you had gitlab_main_cluster as the GitLab schema, and I think the only thing we were disagreeing on was whether it needs to be gitlab_main_cluster or gitlab_cluster. When I look at this merge request, I can see that it is really just a naming difference, and what we were disagreeing about really shouldn't matter too much, so I'm happy to go with main cluster.
B: We could figure that step out later, so I don't think it's necessarily productive for us to debate the sharing-connections-between-two-pods thing versus actual decomposition, because once we get to making a plan for production we can make a decision on that. But I also don't think we'll end up with a gitlab_ci cluster, because we haven't really talked about any tables that really need to be shared between the two pods, except for maybe runners or instance variables, and I think those things can also be resolved in separate ways once we get to the specifics.
B: So, you know, instance variables might be debatable: whether they need to be shared between the pod instances, at least in the first production iteration. We could maybe say instance variables aren't shared, because we're not even using instance variables on gitlab.com anyway, and then runners we can sort out as well.
A: I think the only difference we have is that, like you said, you have focused on the feature side, where you said we have gitlab_main, then we have gitlab_users, gitlab_projects, and whatever else there may be. I'm kind of saying that we have to migrate things out of gitlab_main into a schema...
A: ...that very clearly indicates that it is pod-local. In the model I'm describing right now, gitlab_main is very wide in its definition, but over time you would be migrating it into two types. Basically, every table would be classified: it would be either gitlab_main_pod or gitlab_main_cluster, to indicate where the table is actually located in terms of data affinity.
A: So I think this is the main difference between what you are describing and this, because your case assumes gitlab_main is effectively pod-local and everything else is in those feature-specific tables, and I'm not sure that's how we're going to get there. The outcome is probably going to be the same, but to approach it iteratively...
A: ...we're going to have to go through intermediate schema names to indicate what something is: gitlab_main, which is undefined about the affinity; gitlab_main_pod, which is defined to be pod-local; and gitlab_main_cluster, which is defined to be cluster-wide and can then be renamed to whatever. So I think this is one structural difference between what you are describing and what I'm envisioning.
B: What are you trying to accomplish by doing that? Because the only thing I can think of is that you're creating a sort of third state where things are undefined. Rather than it being two schemas, gitlab_main and gitlab_cluster, you're saying there are going to be three possible values: gitlab_main_pod, gitlab_main_cluster, and gitlab_main, and that last one actually implies that something is undefined. For something to be undefined means our tooling sees it as a special case.
A: Yes. Actually, I have a way simpler way to handle that, in a way that makes our tooling not have to change at all, and this is the other MR that I opened. I was thinking about the approach: so far we have gitlab_main and gitlab_ci and so on. You should be looking at the last commit. Sorry, I see it got rebased; I don't know if you...
A: The name "inherits"... I'm not sure it's the right word, but the way I was seeing it is that when your query uses gitlab_main and gitlab_main_cluster, it means the query is not undefined anymore, because it is defined: the primary context of the execution is gitlab_main_cluster. But if your query uses gitlab_main, gitlab_main_cluster, and gitlab_main_pod...
A: ...that's a clearly conflicting sequence: you cannot cross-join between cluster and pod. And the reason for doing it this way is...
A: The idea is that it should allow you to use gitlab_main in every place where your query uses gitlab_main_pod, and conversely, and similarly for the cluster, but it should not allow you to use it where you might be accessing both pod and cluster context. So I thought of this as an iterative way of solving table affinity, because you can target a single table at a time, instead of having to find all tables in a batch that should be typed together and allow-list all the cross-joins in the codebase that are permitted, or whatever else.
A: It's a hypothesis that it's easier; I don't know if it is. The hypothesis in my head is that if we start with just marking users and groups, it should give us a very small list of violations that we can fix, focusing on interactions exactly between these two tables, but not yet between users and something adjacent, let's say personal access tokens, because that's not relevant at this point.
A: For us, at this point, I think it would be important just to focus on interactions between the users and groups tables. The idea behind that is simply to allow us to be very selective and have a very small list of violations, because I was thinking that if we record the tables being touched by the group create service, we could focus only on classifying those tables and fixing violations for those tables, and nothing else yet.
A: Technically, this would shrink the problem to something smaller, and then we could iteratively work on other tables over time to define their affinity. So this is my hypothesis for what should make it easier, but I did not yet get to trying this out, so I may be completely wrong.
A: So if you look at the table schemas out of the query (I mean, it's not complete yet) and you have a query that queries gitlab_main and gitlab_main_cluster, it's going to just return gitlab_main_cluster, because gitlab_main_cluster is the most specific schema, out of those present, that is defined.
A: This is the wrong example. Maybe just do namespaces joined to projects, for example. Okay, that's going to be more like pod-local, but we assume namespaces is going to be gitlab_main_pod.
B: Okay, yes, I think I get that, but what I'm confused by is how that's different from what you said earlier. I was trying to summarize what you said before: if you have a query that uses gitlab_main_cluster and gitlab_main_pod, that's bad, that's an error. If you have a query that uses gitlab_main and gitlab_main_cluster, that's fine. Isn't this thing here just restating that, but phrasing the consequences differently?
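The rule being restated here can be sketched in a few lines. This is a hypothetical illustration, not GitLab's actual gitlab-schema tooling; the schema names are the ones discussed above, and the function name is invented.

```python
# Hypothetical sketch of the cross-join rule discussed above: a query may mix
# the undefined gitlab_main with one defined schema, but mixing the two
# defined schemas (pod-local and cluster-wide) is a conflict.
UNDEFINED = "gitlab_main"          # affinity not yet classified
POD = "gitlab_main_pod"            # defined: pod-local
CLUSTER = "gitlab_main_cluster"    # defined: cluster-wide

def query_schemas_allowed(schemas):
    """Return True if the set of schemas used by one query is allowed."""
    defined = {s for s in schemas if s in (POD, CLUSTER)}
    # Cross-joining pod-local and cluster-wide tables is always an error.
    return len(defined) <= 1

assert query_schemas_allowed({UNDEFINED, CLUSTER})           # fine
assert query_schemas_allowed({UNDEFINED, POD})               # fine
assert not query_schemas_allowed({POD, CLUSTER})             # conflict
assert not query_schemas_allowed({UNDEFINED, POD, CLUSTER})  # conflict
```

The point of the three-valued model is exactly this: the undefined gitlab_main stays compatible with either defined schema, so tables can be classified one at a time.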
A: Yes, it's exactly the same; the second case is basically showing it by example: this query will resolve to this result.
B: Resolved for what, though? The terminology you're using, like "the only relevant information is gitlab_main_pod", doesn't translate to how I understand what our tooling does, which is that it raises errors for the list of schemas that violate. That's why this terminology threw me. So are you talking about a...
B: Right, the foreign key check also would have asked: is it okay to have a foreign key between these two tables? And it gives back gitlab_main_pod. Okay, so that's fine, it's just one schema. But that's not really how the foreign key test works, I think. Whatever, maybe that's just an implementation detail.

A: It works exactly like that.
A: The only case where it's not yet implemented in this manner, I just realized, is cross-modification. You can see how I did it for this particular case of the table schema, but this is something to fix. I just went with the simplest implementation today, not fully tested in all cases.
A: It's basically a union of all observed schemas. So what I'm now doing in the case of the table schema would have to be added to the cross-modification check as well, with the same way of seeing it: if you see gitlab_main_pod, the gitlab_main information is no longer relevant.
B: Okay, right, you can't just keep building an array, because at some point, if you just called this with projects, you'll get back gitlab_main, and therefore your array will now contain a gitlab_main, which you don't want later on. You want that gitlab_main to be overwritten by gitlab_main_pod.
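The "most specific schema wins" resolution both speakers are circling can be sketched like this. Again a hypothetical illustration, assuming the schema names above; the real tooling's data structures may differ.

```python
# Hypothetical sketch of the resolution discussed above: instead of
# accumulating every observed schema, the undefined gitlab_main is
# superseded ("overwritten") once a more specific schema is observed.
SPECIFICITY = {
    "gitlab_main": 0,          # undefined affinity
    "gitlab_main_pod": 1,      # defined
    "gitlab_main_cluster": 1,  # defined
}

def resolve(observed):
    """Collapse observed schemas, dropping gitlab_main when any defined one exists."""
    if any(SPECIFICITY[s] > 0 for s in observed):
        return {s for s in observed if SPECIFICITY[s] > 0}
    return set(observed)

assert resolve(["gitlab_main", "gitlab_main_cluster"]) == {"gitlab_main_cluster"}
assert resolve(["gitlab_main"]) == {"gitlab_main"}
# A pod/cluster mix keeps both entries, so the conflict stays visible:
assert resolve(["gitlab_main_pod", "gitlab_main_cluster"]) == {
    "gitlab_main_pod", "gitlab_main_cluster",
}
```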
A: So I think the outcome is what you're saying: you want to focus on doing significantly more at a single time, and it's still compatible. What is being done here is just, let's say, much more granular than what you are proposing, and everything else is the same. It's just a naming convention, but naming is a concern, so yeah.
B: I think your approach maybe has an interesting... I mean, I'm skeptical, but there's maybe a middle state where we end up defining enough tables and we go, okay, actually that's a good-enough pod, and we deploy the second pod, a second pod that has some features working and some not. I'm skeptical that we'll get there, but I think that is the theoretical benefit that most people want to accomplish; we all want that.
B: But we don't know what that looks like.
A: I don't know if that last statement is true, or is going to be true. I kind of think more that before you can deploy, you still need to classify most of them; the tricky part is how you get to the critical mass where you can confidently classify the rest. So that's my suggestion.
A: So in the end I think we should aim to have everything classified; it's just that the road to classifying a significant part of the tables is very bumpy, so yeah.
B: Yeah, I think basically it will be a matter of how we learn this actually plays out: whether or not it ends up being a simpler model for getting to the full list of tables. Pretty much the only advantage of the way I was thinking about it, which I still think is an advantage, is that it's more of a path that we've already gone down before.
B: So it's a strategy we've already used before with CI, but you're rightly pointing out that the classification problem here is probably more difficult than it was with CI. With CI, I think we had all of the tables except for about three of them classified correctly right off the bat, and the only different ones were things like taggings, where we had to learn that it goes in CI, and two other things I can't remember.
B: The debated ones were environments and deployments, but we kept them in main. So, given that you're right that that was probably a simpler decision than this will be, I think it's fair to say you are implementing, and this is updating our tooling to handle this special case, and it's fair to say you've thought of a kind of simple way to do it that's probably going to end up being under 100 lines of code changes to our tooling. Under 50 is the guarantee.
B: Yeah, so I think it's fair to say you are updating the tooling, because you said that we didn't need to, but we do; it's just a simple update, and I think that's fine, and now that you've explained what it will be, I'm okay with it.
B: Yeah, remember, look at this: it's one update versus two updates; it's an extra change on top of one. But okay, cool. Then, if we've talked about that (I can try to find a better place to actually write this down), other than that, what would be the next logical step? It kind of has something to do with users, which I think you already defined pretty well, and maybe I don't have any questions about it except: put users and namespaces into a well-defined, different schema and see what happens.
A: Okay, but no, I think what you could do differently is put namespaces into gitlab_main_pod, because technically it can break with the existing application settings, but if we ship application settings, that's still going to be before this tooling is put in place. So, right.
B: That's because you've already done application settings: you've already removed the loose foreign key and there were no joins, so the only possibility left would be cross-modifications that could possibly break. But yeah, we could start with namespaces. We will have the same problem with users, though, right? Because, yeah, users and that, yeah.
A: Or maybe even allow-list everything (foreign keys, cross-joins, and cross-modifications) and then figure out how many of them there are, and see how much this blows up, because I'm kind of worried that... I don't know, I'm worried that this is going to be long, and I don't know how long it's going to be. That's my worry.
B: I don't really get why that'll be a problem. You have these foreign keys that can be swapped out, but it's pretty simple, right? The fix to that problem is simple. I'm not even sure it's necessarily bad database design the way we have it; I'm sort of okay with the database design we have there. I don't really get what the problem is, other than fixing this.
B: Yeah, well, I mean, we talked early on about how some parts of application settings may need to have, you know, an application settings table that has a row per pod, and this may be the first example of where that's necessary.
A: I think this is one of the things that shows how it should be solved, because technically, if you just accept the lack of those entries, it's still going to make the application work; it's just going to make it work slightly differently, because some administrators will not have access to that part, or some templates will not be seen or used.
B: Yeah, I think it's a pretty good product question to start with and say: look, we started very small with application settings, the very simplest table; it has almost no joins and very few foreign keys. Now we have a team that owns this feature and a different team that owns that feature, and we can go to them and say: what do you think should happen in the pods space?
B: Yeah, I mean, I think we'd have to put some constraints on it and say: look, we're definitely not okay with all the pods being able to access this directly, the template project directly, but we're okay with you rewriting the feature so that it makes requests across the APIs when needed, or something more elaborate. But it's a much more difficult feature to maintain. Would you implement it that way, or would you just say, go ahead, have one per pod, and just implement it...
B: ...that way? When do you think it's reasonable to actually start asking that question, since we've already discovered this now?
B: Yeah, I think, well, I mean, that's what we're just talking about: if we're trying to de-risk the product aspect of it, then that's the kind of question we need to start asking. And I think you're maybe also thinking we should build up a massive number of examples before we start asking the questions, because then it will show the magnitude of the problem to be solved.
B: So let's say you end up finding, with users and namespaces, like 200 examples. Then we can describe two of them and say: look, let's work on and focus on just these two, figure out how much work that would be, and multiply it by a hundred, and that is at least the scale of the problem we have to solve. Maybe that's partly how you were thinking about how we get from where we are today to understanding the scope of this work.
A: We could probably describe that pretty well: what is happening, what would be the problem, what would be the barrier, and then simply ask in each case: can we simply ignore this problem?
A: What will happen if we completely ignore the problem? Will the application break? And tell us if this is fine, because I think not all of these problems have to be solved. At least for this one that we have here, I'm not sure it's essential to solve, but I think it's essential to be aware that it is a problem. I'm just kind of curious how many such patterns we're going to have, because technically, at some point, we should see predictable patterns in these problems.
A: I mean, I'm not saying that we shouldn't do it; I feel like it's probably a completely valid approach to do it once we go and figure out these problems, because then we engage a very targeted group of people specifically, and it also feeds our knowledge of how these things react to this kind of change and what their ideas to solve it would be: would it be to dismiss it, or to fix it as intra-cluster or pod-local? Because then, once we have this easy problem solved, we'd probably have a guidebook for other types of features: hey, this is how we solved it for the existing features, or this is the design that we are implementing.
A: Either we make the application work without it, or we solve it in a kind of templated way across the board, and basically we end up with, I don't know, let's say two or three different paths for how it can be solved, and maybe we can fit all of them into that.
A: Sorry, I'm kind of saying random stuff, because this is also a very random problem, and I don't know how people from these teams would react to it, or how they would like to solve it.
B: I'd prefer that, okay: in parallel to the development effort, we actually start taking those identified things to product managers and get them to describe what they think the impact would be for the feature and whether or not they'd find it acceptable, and to the development teams in case there's something else we missed. At the same time, the development work continues, which probably, meaningfully, means we have to put users and namespaces into a separate, well-defined schema, and then, basically, maybe we can, in parallel...
B: ...have multiple of us going through and allow-listing hundreds of examples. Or one of us could do all of the cross-join analysis and another all of the foreign keys, or something like that, and start adding them to allow-lists, then start fixing some, and once we've fixed a few, start delegating the other fixes.
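The allow-list workflow described here (record known violations so they don't fail, flag anything new, and shrink the list as fixes land) can be sketched minimally. This is an illustrative sketch, not the real tooling; the tuple shape of a violation is an assumption.

```python
# Hypothetical sketch of the allow-list workflow: known violations are
# recorded debt, anything new is a hard error until classified or listed.
def check(violation, allow_list):
    """Return 'allowed' for known debt, 'error' for anything unlisted."""
    return "allowed" if violation in allow_list else "error"

# Seed the list from a one-time analysis pass (cross-joins, FKs, ...).
allow_list = {
    ("users", "namespaces", "cross-join"),  # known, to be fixed later
}

assert check(("users", "namespaces", "cross-join"), allow_list) == "allowed"
assert check(("users", "projects", "cross-join"), allow_list) == "error"

# Fixing a violation means removing its entry, so the list only shrinks.
allow_list.discard(("users", "namespaces", "cross-join"))
assert check(("users", "namespaces", "cross-join"), allow_list) == "error"
```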
B: But yeah, I think that's what needs to be happening for us to be most efficient: we need to be constantly, in parallel, trying to highlight the scale of the problem and the kinds of problems we're having, so that we can de-risk things early and go: actually, there's an element of this plan that is problematic and we need to adapt to it early, and not implement a whole bunch of things. There's just too much work otherwise.
B: Yeah, so Nick already put that in the thread, but we already have CI decomposition in the list of top priorities for all teams in the company, so we're changing that: we're trying to rename that section of the prioritization handbook to say Pods, and that will kind of help us have the conversation with the rest of the company about whether Pods is actually still a top priority.
A: Okay, so this is something I actually have as well: another parallel line of thinking about the approaches we are taking to database-level sharing of data.
A: These are just my random thoughts. I find the current approach fairly problematic because of, let's say, stability, and the fact that you can very easily break things. But I also don't know what the best way to move forward is. Let's say the lowest-risk options here, things we could actually do, are:
A: First, you have a cluster-wide database to which each pod can write. Second, you have a cluster-wide database to which only a single pod can write, and every other pod has read-only access, but in order to apply changes it has to go through a well-defined RPC. And this is probably very similar to an approach number three, where you have a dedicated service that gives you very stable access to the cluster-wide data, which is not at the database level.
B: Okay, I was trying to follow you and also to find a place to write this down. So the first approach would be direct DB access...
B: ...read-write, as described above, which implies risk of corruption from another pod.
B: It implies risk of corruption and instability from another pod. The second would be a well-defined API for sharing, and that was kind of where...
A: The second approach I was thinking of would be async DB access. I was thinking of it in a way where each pod can read, but to perform writes you need to do a well-defined RPC.
A: I don't know exactly what it would look like. It could even be, let's say, a separate service, or it could be a responsibility of pod one (let's use pod one), or a separate service that is the only one with write access, and every other write has to go through it, via a single... let's say a use case would be user logins, or the user activity timestamp.
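The "async DB access" idea (direct reads from every pod, writes funneled through one well-defined RPC owner) can be sketched with the user-activity example just mentioned. Everything here, including the class and method names, is a hypothetical illustration of the shape of the design, not GitLab code.

```python
# Hypothetical sketch: any pod may read the cluster-wide store directly,
# but writes go through an RPC owned by a single writer (a dedicated
# service, or "pod one" in the discussion above).
import time

class ClusterWideStore:
    def __init__(self):
        self._data = {}  # e.g. user-id -> last activity timestamp

    def read(self, key):
        """Direct read; safe for every pod, no corruption risk."""
        return self._data.get(key)

    def rpc_record_activity(self, user_id):
        """Well-defined RPC; only the single writer exposes this."""
        self._data[user_id] = time.time()

store = ClusterWideStore()
store.rpc_record_activity("user-1")     # another pod asks the writer via RPC
assert store.read("user-1") is not None  # ...and reads the result directly
assert store.read("user-2") is None
```

The trade-off B raises next is visible in the sketch: reads keep their existing query paths, but every write path has to be rewritten against the RPC surface.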
B: Yeah, my instinct is that that mixed approach is probably more complicated than we would like. But I don't know; we kind of went into the pods architecture with the earlier questions being: would shared things be in shared services, or would we just share direct access to a database? And this one is somewhere in the middle of the two: we'll share direct access to the database for reads, but we'll extract shared services for writes.
B: I think maybe you're trying to balance the corruption problem, by saying okay to the shared services, while simplifying the amount of code changes by still allowing all of the existing SELECT queries to work. I guess that is the advantage of it; it just does seem like a pretty strange architecture that I've never seen before, because...