Description
Discussion about Geo's potential future features.
A
So, just to say it again: thinking about an active-active setup would be interesting, and so would things like having a very simple solution which is very robust for disaster recovery. These are things which just came to my mind when thinking about an ideal solution for gitlab.com. And then, looking at Geo, there's the question of how much Geo is the tool for that.
What does it do? What can it do already right now, and what needs to be done to make it work for the rest of the things that we want to do? That's still not really clear to me, and I'm still thinking about it.
A
What also comes to mind while brainstorming is that I mostly see a need for orchestration missing from Geo, because if you want to manage several sites, and things like failing over, and maybe even active-active setups with just parts of the data being served from one site and other parts from another site, we would need some kind of orchestration to manage all of this.
A
That orchestration could be inside of Geo or outside of Geo, I'm not sure, but these are things that need to play together, at least. And taking a step back and thinking about these things would, I think, still be interesting if you want to talk about the far future of Geo- but that's the far future, of course, yeah.
B
Actually, I am interested in that, because some of the things that you mentioned are maybe not as far off as other things, for various reasons. The way I think about it as well is: for me, Geo is not necessarily a static thing that will not be able to change, right?
B
This is the product that we're building in order to fulfill those requirements, and it works in a specific way right now, for specific things. But if we actually establish that, let's say, the lovable state for disaster recovery incorporates some of the things that you just said, then, in my mind, maybe that is also something that we just need to build, and that may require some change in how the product works, ultimately, at least for us. I'm not opposed to that at all.
B
If I'm sort of thinking about the far future, I'm just also acknowledging that some things- let's say we work on Postgres- I think we're not going to change that anytime soon, and so there are certain constraints that you operate in that make some things harder than others. But yeah.
A
I mean, looking at it, I think Postgres isn't the hardest thing to do. I mean, it just works. The problematic things come when you want to have something like an active-active setup- like having a second database instance somewhere and wanting to use it to also write data there. That's just not working.
A
I think the main problem right now, if you want to have something like an active-active setup where, let's say, part of the traffic is served from one site and another part from another site, would actually really be database writes, because we can currently only have one active primary on one site, and that means that on the far-away site the database latencies would be very bad, and I'm not sure if that would work.
B
I think the short answer is no. I think this is something that we know from Geo- that this is something folks would like, for various reasons- but Postgres only being writable on a single site is a bit of a bummer. There are probably some ways around it with logical replication and whatnot, but that's more-
B
Yeah, that is a thing that is more difficult. But before I offer my opinion: John, Skarbek, what are your thoughts?
C
Well, Henry nailed the primary one that I would love to see. You know, having two active sites, I think, is beneficial all over the place. I'm not actually sure what else we're missing from Geo outside of that at this moment in time. I was looking through the list of limitations, and I saw that some of those are either planned for, or it sounds like some of those are just limitations we can't handle at this moment in time.
C
It'd be interesting if we could eliminate some of those limitations that we've got documented. I'm not really sure at the moment.
C
So potentially a good thing to have documented would be how to plan for Geo in a customer workspace, because I don't think we've got that specifically documented. There are so many different ways to build out infrastructure that it would be impossible for us to document them all, but it would be nice if there's at least somewhere a primer that says: hey, if you want to consider Geo,
C
look at this documentation for what you may need or require for a secondary site. That way, prior to us building infrastructure, we are aware that we need to think about this, and we can think about it as we plan the infrastructure that a customer may be building.
B
Take, say, the DragonFly BSD file system- and, you know, then weird things happen, because the world is very exciting- and I think that's all cool. But if you can make a strong recommendation and say: you know, this is a way to do this; this is how we at GitLab deploy the infrastructure; this works well; we have some reference documentation-
B
At
least
you
can
make
a
you
can
guide
people
towards
a
way
that
you
can
support
better
right.
Some
people
may
still
do
something
for
various
reasons,
and
that's
okay,
but
I
I'd
say
the
vast
majority
will
look
at
this
and
say:
okay
if
they
recommend
this
right
and
they
can
support
it
really.
Well,
you
know
why
not.
A
So we have thousands of customers, and also people doing crazy things, people doing malicious things, and all of it at very high scale, and that makes gitlab.com a different beast from customers who use their own instances internally, because they don't need to open them up to everybody in the outside world. And so we will always see different traffic mixes than customers who use it internally will see, and they can adjust more easily to their own traffic patterns. Because we see all kinds of traffic patterns, gitlab.com will always be special in a way, needing more considerations and adjustments in infrastructure which are not needed by normal customers, even if they are at a very big size.
B
Well- and I think, because of the scale, one of the few assumptions that I've made- and I think this is partially related to what you said, Henry- is that one of the reasons why Geo as-is is not usable on production is simply cost. Even if you just said: hey, let's just double our infrastructure and make it a Geo thing- and let's say that works; there are many questions about this-
B
that's not really feasible, because, you know- yeah, how would we justify that cost? And so a couple of concerns that I have, where I'm not really sure how to go about them yet, are around how to minimize the cost of a secondary site. Even if you say it is active for some things, you won't likely want to double your infrastructure spend, and so, yeah.
A
I
think
that
I
think
this
is
this.
I
think
this
was
discussed
very
well
and
the
initial
discussions
around
how
to
achieve
disaster
recovery
and
then
point
in
time,
recovery,
goals
and
things.
I
think
the
decision
to
maybe
have
a
minimal
disaster
recovery
site
just
for
sinking
data
and
then,
if
necessary,
spinning
it
up
to
the
needed
size
if
you
need
to
fail
over,
which
would
mean
a
downtime
for
a
few
hours
for
sure.
But
what
would
help
us
do
to
adjust
after
failover?
A
I
think
that's
making
total
sense,
and
that
would
also
still
allow
something
like
what
I
would
like
to
see.
Maybe
this
idea
of
having
one
percent
of
traffic
being
served
from
one
side
and
99
from
the
other
side,
because
this
one
percent
is
a
nice
canary
of
does
the
other
side
work
at
all,
so
that
we
can
be
sure
that
we
can
fail
over
and
everything
will
work.
A
That's
just
an
idea
we
don't
need
to,
but
we
could
try
to
achieve
this.
Maybe
maybe
it
would
even
work
with
low
database
latencies,
because
if
we
loosely
see
reads
from
local
replicas
there,
it
would
be
okay
and
for
the
rights
being
slow,
maybe
for
some
selected
customers,
which
we
say
that
will
be
the
one
person
being
served
from
there.
That
would
be
still
okay,
so
we
could
maybe
think
about
this.
B
So, in my limited understanding of Kubernetes, it gives you the ability to scale certain pods up and down based on demand a lot more easily than you would be able to do if you were hosting this on virtual machines or metal or some such. So my thought would be: because the Geo secondary doesn't really do anything- let's say it doesn't serve any traffic- what it does is it needs to have enough capacity to replicate data within the recovery time objective.
B
Let's say it gets as much capacity as you can- I don't know exactly what that means- in order to sustain replication, and that would already significantly reduce the cost of idle, you know, application servers hanging around. So that's my number one thought.
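The sizing intuition described here- a scaled-down secondary that only needs enough capacity to catch up within the recovery time objective- reduces to back-of-the-envelope arithmetic. A minimal sketch; the numbers are made up for illustration and are not actual gitlab.com figures:

```python
def min_replication_throughput(backlog_bytes: int, rto_seconds: int) -> float:
    """Minimum sustained throughput (bytes/s) a scaled-down secondary
    needs so that a backlog of un-replicated data can be caught up
    within the recovery time objective (RTO)."""
    if rto_seconds <= 0:
        raise ValueError("RTO must be positive")
    return backlog_bytes / rto_seconds

# Illustrative only: 1 TiB of backlog against a 4-hour RTO.
rate = min_replication_throughput(1 * 1024**4, 4 * 3600)
print(f"{rate / 1024**2:.0f} MiB/s sustained")
```

The point of the sketch is that the secondary's steady-state capacity is driven by the replication rate, not by serving traffic, which is why it can be much smaller than the primary until a failover is declared.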
B
Beyond the servers themselves, it's the storage cost for the SSDs in Gitaly, and we need to deal with that. But that's not really the issue- or rather, the issue is, quite frankly, all of the unpaid tiers that have generated a ton of Git data that is hanging around. A lot of it will not be active either, and as far as I know, so far we just have it on SSDs, and we don't really have a good way yet to say: hey,
B
This
thing
is
inactive
for
like
60
days
or
something
like
that,
offload
it
on
hard
drives
or
into
object,
storage
or
any
anything
like
that.
I
don't
think
you
can
do
that
easily,
but
I
think
that
may
be
so
a.
I
think.
That's
a
really
important
product
improvement
for
for
in
general,
because
it
will
also
reduce
the
cost.
B
the spend on the primary, in my mind. But the question is: how can we minimize that storage cost on the secondary? And I had some ideas there. So the bluntest thing I could think of is: if we need disaster recovery capabilities, we will likely not want to offer any SLOs to unpaid tiers. We're not going to say: you're not paying for our service, and we guarantee this is going to be restored within 8 hours or 24 hours or whatever it's going to be.
B
I think, you know, we need to not lose the data, and that's kind of it. For paid tiers, that's where this becomes the sales-relevant factor, where you say: if you are on Silver, we give you an SLO of uptime and also of being able to recover from a disaster within X hours. And so my initial thought was: what if we had an ability in Geo to just not replicate anything for unpaid tiers to the secondary?
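The blunt idea proposed here- replicating only data tied to paid tiers- could be sketched as a simple predicate. Everything below is hypothetical: the tier names, the `Project` shape, and the open-source-program flag (which the discussion raises as an exception) are not an actual Geo API.

```python
from dataclasses import dataclass

# Hypothetical paid tiers whose data would be synced to the secondary.
REPLICATED_TIERS = {"bronze", "silver", "gold"}

@dataclass
class Project:
    name: str
    license_tier: str          # e.g. "free", "silver", "gold"
    oss_program: bool = False  # open-source projects on a granted license

def should_replicate(project: Project) -> bool:
    """Decide whether a project's data is synced to the DR secondary."""
    # Granted open-source licenses don't pay, but skipping them would be
    # dangerous, so treat them like paid tiers.
    if project.oss_program:
        return True
    return project.license_tier in REPLICATED_TIERS
```

A filter like this only reduces secondary storage; as noted below, the unreplicated data still needs a separate (slower, cheaper) restore path such as backups.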
C
I think the immediately dangerous part that comes to mind is that we support open source projects, and for certain scenarios we provide maybe a specific license for those projects- they simply don't pay for it. I don't know if that impacts your thoughts here, but it would be kind of dangerous to not sync an open source project like that, sure.
B
Like
I'm,
I
know
like
I'm
saying
there
is
probably
a
lot
of
danger
in
some
of
those
decisions
right,
so
you
need
to
be
careful,
and
this
is
the,
as
I
said
very
blunt
but
then
like
let's
say,
for
example,
we
would
only
replicate
data
that
is
tied
to
specific
licensed
tiers.
That
doesn't
mean
we
don't
need
a
second
plan
for
restoring
the
other
data
in
case
your
data
center
goes
up
in
flames
right
now.
I
assume
we
have
backups
and
we
store
a
lot
of
the
data.
B
Somehow,
right,
I
don't
know
how
that
works
right
at
all,
but
I
was
thinking
about
a
sort
of
high-level
scenario
where,
if
you
have
a
secondary,
you
have
the
ability
to
select
by
license,
let's
say
just
for
gold
for
now
right
and
you
have
to
recover
from
a
disaster,
and
so
you
promote
the
secondary,
and
if
you
are
a
gold
customer,
you
will
be
able
to
log
in-
and
you
know,
continue
doing
your
stuff,
that's
fine,
but
everybody
else
will
not
be
permitted
to
log
to
log
in
at
that
moment,
until
we
manually
restore
that
data
from
a
backup.
A
This totally makes sense, also in other ways, because for the cost efficiency efforts and performance considerations we have, we would also like to be able to put certain customers which have certain licenses on certain Gitaly nodes with different performance characteristics. So we want to, for instance, move less-used repositories, and maybe non-paying customers' repositories, over to HDD storage, and then maybe paying customers over to SSD storage, and highly used repositories too, maybe. I think there's already an effort and an issue for that.
A
But
we
would
know
okay,
this
disk
image
needs
to
be
replicated
over
to
the
disaster
recovery
site,
so
that
wouldn't
even
be
a
a
very
fine-grained.
A
selection
on
geo
be
necessary.
We could just take full disk snapshots or disk images, or- I don't know- replicated or multi-region disks; I think there's something like that, but I never really looked into the performance criteria and things. So that comes back down to my first point: as an infrastructure and site reliability engineer,
A
I would like to see simplicity, to make things robust and manageable. Just being able to select which Gitaly node customers land on would make it easy for us to say: these Gitaly nodes need to be replicated over to the disaster recovery site; everything else gets- I don't know- backups via different means, which are cheaper but take longer to restore in case of disaster recovery, yeah.
A
So it would be good to be able to select on this point. But looking at the simplicity point again: what I see right now is that gitlab.com needs to replicate the database and the Gitaly nodes, right? I think everything else is in object storage, so it doesn't matter to us whether we are on one site or the other for disaster recovery, because we have it in object storage anyway. That's right- there's no need to replicate anything via Geo.
A
And
but
what
we
also
could
do
is
just
you
know,
sync
disk
images
or
use
something
like
a
stream
of
data
from
one
disk
to
another,
one
which
you
can
use
with
selfies,
for
instance,
to
replicate
data
over
to
another
site.
And
then
you
are
done
right.
Yeah.
B
Sorry
to
interrupt,
but
I
I
had
some
discussions
with
james
ramsey
like
a
while
ago.
In
my
mind,
ideally,
you
know
at
some
point:
jio
would
not
necessarily
have
to
rely
on
its
own
git
replication
mechanism
anymore
either
right.
Maybe
that
is
something
that
italy
can
do
right
where
you
have
that
at
that
level,
and
we,
you
know,
and
italy
manages
that,
because
gateway,
I
think,
is
better
at
managing
git
data.
B
Ideally,
and
so
I
would
really
like
to
see
that,
because
it
would
delegate
that
complexity
away
from
from
us
as
well
right-
and
we
wouldn't
have
to
necessarily
do
that.
A
Yeah
but
but
then
it
comes
to
the
point.
What
is
you
doing,
then,
if
git
is
replicated
via
gitly
and
and
database
application?
More
or
less
is
really,
I
don't
know
doesn't
need
much
attention
because
you
set
it
up
once
and
then
it
works
right.
What
what
is
you
doing,
then?
In
this
context?
For
gitlab.com
I
mean.
B
Actually,
like
take
a
step
back
here
right,
I
think
I'm
hearing
this
assumption,
and
maybe
that's
a
misunderstanding
on
my
thing-
that
geo
needs
to
do
some
something
fancy
right
in
order
to
be
to
be
valuable
but
in
my
mind,
right.
B
I think we are perfectly happy about this. I think there are many things that we need to support; for example, not every customer relies on object storage, so we need to have all of this replication logic for folks that don't do that, because we can't rely on object storage to do the replication. Some others may have other issues. But I think, for me, on a higher level-
B
that would make me very happy. I think that's actually very desirable. But I think that is what we need to keep in mind: we don't want to- or at least I don't want to- build a unique set of scripts that works only for gitlab.com; I would like to be able to offer something to customers to use, so that it is integrated into our product. But yeah.
A
Yeah, totally. And I see that Geo is absolutely great for people who have, let's say, single-instance or small Geo node setups, and don't have everything synced via object storage, for instance, and don't care or don't want to put much effort into setting up things like disk mirroring to other sites via cloud providers and things like that. Because then Geo is just: you turn it on and it works. But the more complicated or the bigger your installation gets, the more different
A
It
will
look
like
you
have
different
cluster
setups.
Maybe
let's
start
with
reference
architectures,
and
we
can
maybe
build
geo
to
work
for
those.
A
For instance, you want to, you know, bring up your primary database, or switch over to the other primary. You want to take care that- I don't know- Patroni or something else quickly takes care of switching over to the other side. Maybe you need to set up some routing things, load balancing things; you want to change some settings for where we are sending emails from, or stuff like that. Maybe you think about these things.
A
So it's more about orchestration in a very abstract way, because there are hundreds of ways these things could be managed by customers, and you can't foresee them all. That looks to me more like you have some kind of plug-in structure: here's your callback for switching the database; here's your callback for switching your email provider; here's your callback for doing something with load balancers. And Geo would just call these provided callbacks, but wouldn't know what is really happening in the backend, because you can't foresee all of that.
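The plug-in structure described here- where the orchestrator calls operator-provided callbacks without knowing what happens behind them- might look roughly like this sketch. All hook names and example commands are invented for illustration; nothing here is an actual Geo API.

```python
from typing import Callable, Dict

class FailoverHooks:
    """Registry of operator-provided callbacks. The orchestrator only
    knows the hook names, not what the callbacks actually do."""

    def __init__(self) -> None:
        self._hooks: Dict[str, Callable[[], None]] = {}

    def register(self, name: str, fn: Callable[[], None]) -> None:
        self._hooks[name] = fn

    def run(self, name: str) -> None:
        if name not in self._hooks:
            raise RuntimeError(f"no callback registered for {name!r}")
        self._hooks[name]()

# A customer plugs in whatever fits their environment:
hooks = FailoverHooks()
hooks.register("promote_database", lambda: print("run Patroni failover"))
hooks.register("switch_email_provider", lambda: print("point SMTP at DR site"))
hooks.register("update_load_balancer", lambda: print("repoint LB backends"))
```

The design choice is the one A names: the orchestrator owns the sequence, the operator owns the environment-specific actions, so no single vendor has to foresee every infrastructure variant.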
B
I really like that idea. Because- just to give you a bit of context that may help you- we have this issue at the moment, not only for Geo, that we don't have orchestration.
B
So if you want to upgrade a 50k reference architecture, you have to log into every single node type, you know, upgrade the package, do that in a specific order- it gets really, really messy. It essentially sucks. Okay, that's one problem. For the failover, it's essentially the same thing: you need to change the configuration on every single node so that GitLab, you know, recognizes this new state. We had a little POC on this, where what we could kind of do is say:
B
okay, maybe we have a different, cluster-level configuration file that Geo can manage, which propagates into gitlab.rb, and we can say: okay, you are currently a secondary node. You change that configuration- maybe manually, maybe have that in code, or you could utilize Consul at some point to do that for you, or you just have a simple rake task that does it for you as well.
B
So our idea of an iteration was to say: if we are able, for example, to change the effective configuration of that node automatically, through some means- I haven't figured that one out yet- without forcing people to, you know, update their gitlab.rb manually, then we have a tiny iteration on the way to orchestration. And then there's what Geo kind of does- as you said, we don't quite know exactly anymore, you know, what it means to update your database, like make your database writable.
B
And then you have maybe something like a service discovery framework- like Consul or whatever we ship in the reference architecture- and we say: this is how you would be able to promote with a single command. And then we delegate as much as we can to the individual components. Maybe for gitlab.com that would be different, but that principle kind of holds, and that's where I would like to go, because I think otherwise we are kind of stuck- and so I like that direction.
B
I think it's actually a really interesting idea to say: hey, you know, we have our own replication engine- you can monitor this in the admin interface- or, you know, you choose object storage and you just kind of swap out the backend, but we know how to handle this and it's fine. I think that's a good thought.
A
I mean, I think we want the same thing here, but I'm just trying to think about it- if I think about staging, for instance, or the staging infrastructure.
B
I mean, like, currently the Geo team- we have some more basic problems with our replication engine. We are, like, 90 percent of the way to where we want to be, so you can expect us to spend another six months on that, you know, because the last 10 percent are the hardest.
B
But I've started to think on a high level about these orchestration efforts and what they mean. You know, we've made some inroads, but that is maybe a thing to really figure out, because my issue at the moment there is that, you know, you use Chef, others use something else, and so we can make some opinionated choices for, for example, the reference architectures, and recommend something.
A
Look at staging- let's suppose we have something set up as a secondary staging site, which maybe is a scaled-down version of staging, but with the same kind of topology. And let's say we want or need to fail over- like we really have an outage. I think the first thing that Geo or anything would need to do then is, you know, noticing there's an outage, noticing:
A
"I want to come up as the primary site now." And then the steps would be something like: okay, I need to promote my database cluster; I need to maybe set up some fencing to prevent the other side from still coming up unplanned; then reconfiguring all of the nodes in the cluster in some way to "okay, I'm now on the primary site"- like, for instance, all the web, API, Redis and so on nodes need to be-
A
currently, right now, doing a reconfigure, I guess, so that they get it- which takes some time. And then maybe some special things for your cloud provider, like load balancing.
A
For the infrastructure- and that looks like we have these callbacks: you need a promote-your-database callback; you need a fencing callback- taking care that the other side doesn't come up again, yeah- one which
A
turns the configuration of all your nodes in the cluster over and reconfigures them; and then maybe other special infrastructure stuff that is needed in your cloud to make things work, like load balancing, DNS- I don't know- things like that. These are the main things that need to be triggered, and they are very generic if you want to support everybody, but I guess they are needed by most people who have any kind of GitLab cluster, and Geo could maybe orchestrate them if it knows about them.
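The steps just listed (fencing, database promotion, node reconfiguration, cloud-specific routing) are order-sensitive, and enforcing that order is most of what an orchestrator would add. A minimal sketch- every step name is an assumption, not an actual Geo command:

```python
# Order matters: fence the old primary before promoting, promote the
# database before reconfiguring application nodes, fix routing last.
FAILOVER_STEPS = [
    "fence_old_primary",      # make sure the other site cannot come back up
    "promote_database",       # e.g. promote the Postgres cluster
    "reconfigure_app_nodes",  # web/API/Redis nodes learn the new role
    "update_dns_and_lb",      # cloud-provider routing, load balancers
]

def run_failover(actions: dict) -> list:
    """Run each operator-provided action in order; stop at the first
    failure so a human can inspect the half-finished state rather than
    plough on blindly."""
    completed = []
    for step in FAILOVER_STEPS:
        actions[step]()  # raises on failure, halting the sequence
        completed.append(step)
    return completed
```

Stopping on first failure rather than continuing is a deliberate choice here, matching the later point in the discussion that disaster scenarios are too unpredictable to automate end to end.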
If
you
want
to
support
everybody,
but
I
guess
are
needed
by
by
most
people
who
have
any
kind
of
gitlab
cluster
right
and
which
geo
could
maybe
orchestrate
if
it
knows.
B
And I think that's exactly- let me
B
just share my screen with you. This is really helpful, by the way, and I don't mean to be critical- I really enjoy these types of conversations, because they flush out a lot of questions. And yeah, so-
B
if I can- yeah, promoting a secondary should be simple, and this is the current slide- this is actually also in the Mural, so I could have opened it there as well. So the idea here is really: what you talked about is the promote-secondary command, rather than the planned failover, where we have to do some other things, because you don't want to lose any data.
B
But essentially what I would like to see here is to be able to run a single promotion command- a single thing, you know, on a node, still triggered by a sysadmin- that would be able to propagate all of those individual changes. And this is the item here that we've started working on, which is currently paused, but where we've had a POC.
B
So the idea here was to say: the first iteration for us would be to have a single command that you could execute on any node- any node type; could be Redis, could be the database node- that is able to understand the configuration of that node and alter it in such a way that it moves from a secondary to a primary. On the database node, that would mean, you know, doing the things necessary to promote your Patroni cluster; on a Redis node-
B
this was merged already by Douglas, and it's kind of going in this direction, where-
B
no, it's hard to see. Essentially, there is a file that allows you to specify this type of configuration, where you can say "I am a Geo primary node" or "I am a Geo secondary node", and that would propagate into specific configuration in the gitlab.rb file. And then, when you execute it, it would actually change that specific file, update it, reconfigure, and voilà.
B
Maybe- I don't know- you can't do the Sidekiq nodes before the web application nodes, or vice versa, so there is some complexity in it. But that's kind of the vision that I have for Geo on that level, because my assumption as a sysadmin or an SRE, at that point when a disaster happens, is that ideally you don't want a 10- or 20-page runbook with very detailed configuration changes on every single node.
B
You
want
to
be
able
to
say,
I
have
determined.
This
is
a
disaster
right,
which
is
sometimes
maybe
even
hard
to
determine.
If
you
know
this
is
now
the
place
where
you
want
to
do
it,
and
then
you
want
to
be
able
to
go
and
have
as
few
commands
as
possible
to
do
with
the
reconfiguration
needed.
That's
kind
of
my
my
assumption.
C
I think one thing to keep in mind here, when it comes to Geo needing to make those necessary calls to initiate the switchover, is that if your primary is down- like, hard down- you need to figure out what needs to happen in that case. It will be easy to promote the secondary, but what do you do with the primary site if Geo cannot talk to it to say, "hey, downgrade yourself to a secondary"? Because otherwise it comes back up as a primary once a sysadmin fixes it.
B
No,
I
think
this
is
like
this
is
why
I'm
personally,
I'm
very
hesitant
to
automate
these
kinds
of
decisions.
That
may
be
the
end
state
right
where
it's
really
lovable
and
we
know
exactly
how
to
handle
this,
but
I
like
for
situations
like
this,
where
you
can
have
a
split
brain
or
whatnot.
B
I
personally
think
that
in
many
instances
you
want
these
actions
to
be
manually
triggered
because
a
team
of
people
ideally
understands
what
the
options
are
right
and
until
the
point
where
you
are
so
confident
that
you're
going
to
do
these
things
automatically
right.
That
is
a
long
road
and
you
know
I
that
would
be
awesome,
but
I
can
foresee
some
things
I
mean
you're
talking
about
disaster
right.
So
who
knows?
What's
what's
happened
right?
B
So
at
that
point
like
I
I
personally,
I
would
be
humble
and
say
you
know
just
saying
we
have
thought
of
everything
right,
and
this
is
going
to
be
absolutely
fine.
It's
most
likely
a
lie
so.
B
I think there is a desire overall to be maybe a little bit more strict as to what functionality Geo delegates to other components, and to say: hey, you know, Postgres takes care of its Postgres replication; Geo manages, you know, the change of state, let's say; depending on where you are- maybe in the far future- Gitaly or your cloud provider figures out how to replicate Git data; we are able to,
B
You,
know
change
where,
where
right
operations
can
happen
right
these
kinds
of
things,
and
if
you,
if
you
think
that
to
the
end,
then
geo
becomes
at
that
point
a
ideally
right,
thin
layer
that
allows
you
to
change
configuration
of
your
gitlab
instance
easily
right
in
order
to
to
facilitate
a
failover
that
is
kind
of
what
it
is.
You
know
once
you've
you
have
reached
the
scale
where
you're
not
actually
replicating
data
between
you
know
regular,
regular
files,
but
you
have
specific
providers
for
all
of
those
things.
B
I
think
that's
actually
not
necessarily
a
bad
case
to
be
in
also
for
efficiency.
I'm
just
sorry,
I'm
vocalizing
this
out
loud
right,
because
I
I
personally
think
that
it
becomes
very
hard
with
a
team
of
you
know
eight
folks
to
do
all
of
this
yourself,
right
so
being
able
to
say
italy
team
figure,
please
figure
out
right
how
we
can
move,
get
data
around
efficiently
right
so
that
it's
cost
efficient.
You
know
that's
good
right
or
you
can
rely
on
other
technology
like
posters
to
do
that.
A
Kind
of
what
we're
doing,
I
think
I
would
summarize
it
like
this.
The
bigger
and
more
complex
your
instance
is
becoming.
The
less
geo
should
be
concerned
with
how
to
replicate
things,
because
normally
you
would
have
something
in
place
already,
which
is
very
custom,
and
so
it
comes
down
that
geo
is
becoming
more
and
more
orchestration
thing
instead
of
a
synchronization
thing.
A
I
think
the
very
hard
thing
to
follow
with
for
jio
already
is
that
you
provide
means
to
synchronize
all
data
via
gu,
which
is
a
lot
of
effort,
because
the
product
is
changing
a
lot
all
the
time,
but
this
enables
smaller
customers
very
nicely
to
set
something
up
without
taking
care,
and
geo
is
doing
that
all
for
them,
but
for
bigger
instances,
it's
the
other
way
around
right.
You,
you
can't
foresee
how
things
need
to
be
synced,
so
you
need
to
more
or
less
take
what
is
there
and
then
orchestrate
it?
Maybe.
B
Yeah, I think actually that's a great- maybe even a handbook page- to have sort of these inverse arrows: the degree of automation versus the degree to which Geo actually replicates data- even though we're talking here about really, really large instances.
B
But
it's
it's
kind
of
true
and
I
also
actually
agree.
I
don't
know
if
you're
familiar
like
we've
created,
essentially
a
self-service
framework
to
allow
other
teams
to
add
replication
support
to
their
to
their
features,
because
we
also
learned
over
the
last
year,
essentially
that
you
know
there's
so
many
new
things
that
people
do.
We
will
never
be
in
a
position
to
do
all
of
that
for
them
right.
B
So
we
need
to
make
it
easy
for
folks
to
just
say
here:
okay,
great,
you
know,
geo
is
now
supported
that
covers
90
of
our
customer
base.
B
Thanks- I think that was very helpful for me, and I may have a better idea now of how to approach some of these challenges. For example, rather than thinking about how Geo can select what to replicate for storage, I can talk to the Gitaly team and say: this is maybe a key requirement for disaster recovery on gitlab.com; you know, can we support you in, for example, deciding where things are stored and how they are stored? Because that makes things easier.
A
I
think
the
next
interesting
question
now
is:
what
does
that
mean
for
our
working
group
because
or
for
the
goals
of
our
working
group?
Because
now
the
goal
we
are
working
on
is
to
see
how
we
can
make
geofit
for
staging,
but
how
we
are
working
on
that
is,
I
don't
know
with
our
discussion
right
now.
We
figure
out
that
geo
should
do
less
and
less,
and
we
should
more
look
into.
A
how we do orchestration and the setup of infrastructure for this use case. And we need to make sure we don't block ourselves by looking too much into adjusting Geo to the current setup of staging, because we need to abstract that away.
B
I think that's true, but also- so, two thoughts I have. If we, for example, now set up a multi-node instance in staging- that was, I think, the next step- that is relatively minimal, and we can still replicate some data.
B
We
are
then,
at
that
point
I
think
we'll
we'll
get
feedback
on
what
kind
of
orchestration
would
be
required
right
or
what
is
very
tedious
to
actually
do
at
that
point,
and
I
think
that
may
be
a
good
feedback
point
saying
like
hey,
you
know
at
the
moment
I
need
to
like
edit
this
configuration
here
and
there
manually.
This
is
really
tedious.
You
know
we
can.
We
can
do
that
and
then
I
think
this
is
maybe
the
questions
like
rather
than
just
orchestrating
this
in
the
infrastructure.
Specific
way
of
doing
things
really
talk
about.
B
And I think for staging, a lot of these cost considerations and some of the Gitaly things are not as relevant, because it's not that much data. So I think for some of this we're still okay. And- I don't know- I like to make really small steps, and even those small steps take a long time. So I think we're not out of runway yet in what we can do, is my impression.
B
And then we just- I don't know- get a couple of Geo engineers on it for a week, and the problems will be gone. That's generally how things work out. So, Skarbek, any other closing words from you? I don't know.
B
Happy Monday! Thanks, I thought that was very useful. I'll try to condense some of this as well into an issue and write it up, and say: okay, this is some of the thoughts we had. I really appreciate you taking the time. Yeah, thanks for setting
C
us up. Cool, bye bye.