From YouTube: 2020-11-18 Disaster Recovery working group
A: So yeah, okay, let's start.

B: Yeah, so welcome to the second meeting of the disaster working group... the disaster recovery working group, sorry.
B: Yeah, it wasn't meant like that. So yeah, I think it doesn't make sense to read through everything that's been done and what's happening next in this meeting. If you go through the different links that have been provided and what's been done already, you can see that a lot of questions have been raised, and also answered already, in different issues.
B: Thanks to Fabian for answering a lot of questions. What we all did this week, I think, was learn a lot: learning about Geo, and, the other way around, learning about how our infrastructure looks in staging. What I tried to do is compile a list of the findings I have so far.
B: There's this link here that you can maybe look into, and I would like to go through the list of questions, gaps, and things today, just to clarify them first. Later I have some points written down for making decisions on what to do next, because there are several options for what we can do, and we should make decisions so we can really start working on this.
B: First of all, the goal of this working group was to see how we can use Geo for disaster recovery, and we also wanted to find possible gaps by doing a failover of staging. I already found some gaps during my research, which I listed here: there's missing support for some things that we would like to have. One thing, for instance, would be support for a Patroni cluster on the secondary.
B: This is being worked on, it seems, and we could probably work around it manually; that's my understanding so far, if we want to do the failover in staging. One issue, if we want to go to production later, is that there's currently no support for repo rebalancing, which we often do in production, like moving repos to different Gitaly shards.
B: This would need to be implemented. In staging I think we don't do this, so we can still work on staging, checking with Geo for that. And then there are certain data types which are not supported in Geo right now, which are listed here, where we need to see what we can do about that: either implement this in Geo or find other ways to sync that data, if we really want to have a disaster recovery solution. So that's what I found so far.
B: One other interesting feature that I would like to see in Geo in the future would be the ability to spin up infrastructure automatically on demand, because right now, in every solution that we know, we would need to do this manually in case of a failover, if you want to have a warm standby site which isn't fully populated with servers, for cost reasons, of course.
B: This is a difference between the Geo setup and staging right now: there we take it from Omnibus and don't have Patroni at all. So we need to see how we can get support, either in staging or production, for installing Patroni with Omnibus Postgres (but that's not a short-term thing, I think), or how we can work around this manually for this first iteration of testing. I think this can be done manually; we don't need Geo to fully and automatically manage the Patroni cluster right now.
B: The second thing is, in Geo we have a single-node setting; we have a single-node Geo installation. That means failing over to it could be problematic, because I'm not sure it would be good enough to serve all of staging's traffic. I mean, for doing some advanced tests and switching back, that could work, but if we really want to run it for a longer time, then we should probably have a multi-node setup, and that would also be more realistic for what we are aiming for.
B: Just switching over to a single instance wouldn't be that realistic a test.
A: So I would consider this probably to be the biggest blocker right now. The database stuff, as far as I know, Geo supports external databases if nothing else, right? So you're right, there are things around the current setup. But actually being able to take all the traffic we have, even in staging, after the failover: that I would consider to be the biggest blocker right now, something we would probably want to solve before we do a failover.
B: Yes, I agree with this. The next blocker would be that currently we only sync selectively; we would, I think, want to sync all of staging. Right now we just sync certain repositories or groups, right, Fabian? Correct?

D: [inaudible]
B: I think we want to switch to fully syncing staging if we want to test this, so that would need to be done, but that shouldn't be hard. It probably also needs a bigger cluster, though. And then another big blocker, I think, is that if we want to do the full cycle, failing over and failing back, we would need to set up staging as a Geo primary.
B: That would mean we need to set it up to run the tracking DB and the log cursor on staging, and we need to figure out how we can set this up and, of course, test it properly. That would also mean we deviate from how we set up production. So there's some work to be done to really do the full cycle.
B: So I think those are, from my point of view at least, the biggest things to work on, but I can come back to this maybe later in the document, because I have some options for how we can maybe get there iteratively, and some open questions. Just for my understanding of how Geo works, I listed here, for instance, how we would sync Redis, but it seems that we don't need to do anything about this, because it would just work, right?
C: So I see the answers. Yes: essentially, every Geo secondary gets its own Redis; it's just a database, there is no syncing. I can answer these if you would like. So Redis is not synced, but I think that's fine. Praefect, or Gitaly Cluster: essentially, Geo secondaries get their own Gitaly instance, whatever that looks like, but it could be a Gitaly Cluster setup. Geo will replicate the data and talk to Gitaly, so that is not handled by Praefect. Which brings up the next question: can we use Praefect for Gitaly syncing and disaster recovery?
B: It's interesting, because in the issue that I linked, the initiative for cost-saving on Gitaly and implementing Praefect, from Andrew Thomas, I think they proposed that we could have one Gitaly node in a different region and have it synced by Praefect, right? And I think we should circle back to the Gitaly team to ask them whether this is possible, and whether this is a good idea. Yes.
C: I did that today, and the answer is, I think: currently this is not something that they recommend, as far as I understand it. I'm not saying it's not possible, but I linked to the Slack thread internally as well, because I was surprised to see this. It would be cool if it worked, but I don't think it does at the moment. That's my understanding right now; maybe I'm wrong.
C: Okay, well, the answer is, I think, in my view (that's maybe also something up to the team): anything that is in GCS buckets should use cross-region replication. We do that only at the bucket level; I think that's fine. So Geo is essentially responsible, at that point, for the Git data, building on what I just said earlier. And obviously you have the sort of warm standby capacity where you actually have a working GitLab instance at that point.
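For context on that bucket point: in GCS, cross-region replication is a property of the bucket's location type, so a multi-region or dual-region bucket is geo-redundant without any extra syncing machinery. A minimal sketch (the bucket name and location are assumptions, not the real staging buckets):

```python
from google.cloud import storage

# Create a multi-region bucket: objects are replicated across regions
# automatically, which is the cross-region replication discussed above.
client = storage.Client()
bucket = client.create_bucket("staging-gitlab-uploads-dr", location="EU")
print(bucket.location_type)  # "multi-region"
```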
C: So Marin is right: we support external databases. I don't even know if the Patroni version that you use on staging is the same as in Omnibus, and whether they are compatible, right? So, no: I think, in the long term, using Omnibus is maybe preferred in general, but I think for now you can probably use an external Patroni cluster, and Patroni has standby cluster functionality.
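For reference, a minimal sketch of that standby cluster functionality: the secondary site's Patroni cluster is bootstrapped to follow the primary site's leader instead of electing its own writable leader. The standby_cluster keys are from the Patroni docs; the host name is a made-up placeholder:

```python
import yaml

# bootstrap.dcs.standby_cluster tells Patroni to run this whole cluster
# as a replica of a remote leader (the primary site), per the Patroni docs.
standby_site_config = {
    "bootstrap": {
        "dcs": {
            "standby_cluster": {
                "host": "patroni-leader.stg.example.internal",  # placeholder
                "port": 5432,
                "create_replica_methods": ["basebackup"],
            }
        }
    }
}
print(yaml.safe_dump(standby_site_config))
```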
C
Repo
balancing
is
correct.
I
don't
think
we
do
that
then
missing
support
for
think
for
syncing
data
group
wikis
is
likely
going
to
be
a
concern
of
the
group
that
actually
implemented
that
feature
with
the
plan
of
shipping
that
in
the
next
two
to
three
releases.
But
who
knows,
I
think,
that's
that's
a
little
bit
up
for
grabs
version.
Snippets
is
going
to
happen
in
13.7.
We
have
no
plans
right
now
for
elasticsearch
integration.
We've
talked
a
little
bit
about
it,
so
there's
no
distinct
timeline
on
it.
C
As
far
as
I
know,
for
dr
purposes,
you
can
re-index
your
elastic
search
if
you
really
wanted.
So
there
is
no
real.
That's
a
data
loss
right.
It's
just
time
that
you
lose,
which
is
important.
Gitlab
pages
is
blocked
on
a
large
re-architecture
of
gitlab
pages,
which
is
also
likely
going
to
mean
that
gitlab
pages
supports
object.
Storage.
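On that re-indexing point: the Elasticsearch index is not replicated by Geo, but it can be rebuilt on the promoted site with the standard indexing Rake task, so the cost is time rather than data. A hedged sketch:

```python
import subprocess

# Rebuild the Elasticsearch index from the database on the promoted node.
# gitlab:elastic:index is the standard GitLab indexing task; on a large
# instance this is slow, which is exactly the "time you lose" above.
subprocess.run(["gitlab-rake", "gitlab:elastic:index"], check=True)
```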
C
You
can
regenerate
gitlab
pages
by
rerunning
ci
pipelines,
so
there
is
no.
So
you
know,
as
in
it
doesn't
doesn't
support
geo,
but
there
won't
be
any
data
loss
as
far
as
I'm
aware
right.
So
there
are
some
definite
gaps
here,
but
at
the
moment
for
at
least
the
last
two
items
I
I
don't
actually
know
what
the
timeline
is
and
then
the
last
comment
is
no
support
for
spinning
up
infrastructure
on
demand.
That's
absolutely
correct.
C
E
C
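For the Pages point, a hedged sketch of what regeneration could look like: re-trigger each project's pipeline on the promoted site through the standard pipeline-creation API. The host, token, and default ref are placeholder assumptions:

```python
import requests

GITLAB_API = "https://staging.example.com/api/v4"  # placeholder host
HEADERS = {"PRIVATE-TOKEN": "<access-token>"}      # placeholder token

def rerun_pages_pipeline(project_id: int, ref: str = "master") -> None:
    # Creating a fresh pipeline re-runs the pages job, which rebuilds
    # and redeploys the project's Pages site on this instance.
    r = requests.post(f"{GITLAB_API}/projects/{project_id}/pipeline",
                      headers=HEADERS, params={"ref": ref})
    r.raise_for_status()
```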
C: I mean, Geo is essentially a configuration of GitLab, so whatever works for reference architectures in general, let's say, with some additional bits, works for Geo as well. So if you choose to provision your stuff with Terraform, that's fine. I think we want to have more of a unified front on what we recommend to customers, and say: use the GitLab Environment Toolkit for this, which is Terraform and Ansible based. I've seen everything from customers, from no automation at all...
C: ...to, you know, Chef, Ansible, Puppet, I don't know, so it really depends. And for staging and production, my understanding is that you chose to use Chef, right? So that would be something that would need to happen.
A: The problem right now is that we are in a hybrid setup: we have part of the fleet running on Kubernetes and parts on VMs, which makes this not as clear-cut as "let's just use Chef to push this forward". Not to mention that we are actually deploying with a Terraform and Ansible combo; that's the deployer project. So, not as clear-cut.
C: I think, from the Geo perspective (and I mean, that was, I think, controversial for various customers), we do have Helm charts for Geo as well.
C: So I don't think there's anything in particular that would make it not feasible to run Geo on Kubernetes, if that is desired; that's maybe important in the longer term. But this spinning up of infrastructure, or configuring it in general, and then, let's say in another iteration, also saying "this is the minimal set of Geo nodes that we actually need", and then, when we fail over, maybe spinning things up on demand to save on cost earlier: that functionality does not exist within Geo at all.
B: I also wouldn't couple these concerns together, because it's such a big effort to get it right for everybody, in every kind of infrastructure. But having maybe a hook, like a callback, for spinning up infrastructure at a certain stage of a failover: that would maybe be a good thing, so that customers could plug in their own stuff there.
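A purely hypothetical sketch of that hook idea (no such callback exists in Geo; the stage name and the Terraform variables are invented for illustration):

```python
import subprocess

def on_failover_stage(stage: str) -> None:
    # Customer-provided callback, invoked by a (hypothetical) failover
    # orchestration at well-defined stages.
    if stage == "before_promotion":
        # Scale the warm standby site up to full capacity before it
        # starts taking real traffic.
        subprocess.run(
            ["terraform", "apply", "-auto-approve", "-var", "node_count=12"],
            cwd="environments/standby-site",  # placeholder workspace
            check=True,
        )
```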
B: Okay, thanks for this great summary. What I see here, at least from my side, is that for doing a failover test in staging, I don't see a blocker which makes it impossible, as in gaps that we can't overcome in the near term. Even if we don't sync all of the data, we can maybe find ways to do the failover test without, you know, losing data on the staging site.
B: Okay. Then, as we now understand, there are several things that need to be done to do a failover in staging, and of course the biggest goal would be to do the full circle: a full failover test, running it for a while with everything synced, and then failing back, right? But this is a lot of effort, and a lot of testing is needed to do it, and so there are options for how we can test parts of it.
B: Maybe iteratively getting closer to this goal, which I would suggest we consider doing: maybe all of them, or a part of them, and see how far we can get within a certain amount of time. The very first, simple thing that we could do right now is to just go and promote Geo, at this time, because that would just stop the replication and bring the database up, and we could just test on this one node how it works. That is simple.
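A hedged sketch of that simple first option, following the GitLab 13.x Geo docs for a single-node secondary (run on the secondary itself):

```python
import subprocess

# 1. Sanity-check Geo replication health before cutting over.
subprocess.run(["gitlab-rake", "gitlab:geo:check"], check=True)

# 2. Promote the single-node secondary: this stops replication and brings
#    the node up as a standalone primary, as described above.
subprocess.run(["gitlab-ctl", "promote-to-primary-node"], check=True)
```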
B: The next thing that we could do, of course, is set up a multi-node Geo cluster for realistic testing, so that we can maybe sync everything from staging.
B: That would be more realistic and would bring us to a state where we really can run better tests, like with all the traffic, maybe. It would also help us do things without breaking staging, because if we don't, you know, fail back, then staging can still stay fine; we just won't see the data that we changed during testing back in staging. But I mean, that's fine for testing, and we would see how failover works.
B: Another option would be to have two new clusters, like a primary and a secondary cluster, for testing failover and failback. That would also help us test without destroying staging first if we do something wrong. And then, of course, the last step would be to really do the full thing, the full circle: failing over to a Geo cluster and then failing back to staging, hoping that we really get all the data synced back in between and everything works out right. That would be the end goal.
B: I guess it's just a question of how many gaps we find in between to accomplish this, but I think if we go about it iteratively, there are higher chances of getting things going than by aiming at the last step here in one go, because I think we need to learn some things first.
B: That would be the thing that I propose right now, but I'm hoping for opinions on that.
B: I think the critical thing, if we fail over and want to fail back, is to make sure that this fully works, right? And if we do this for the first time in staging, we want staging to work after that; we don't want to have inconsistencies. That's why I think maybe we should first have two test clusters where we test the failover and failback cycle, and be sure how it works and how we set up primary and secondary clusters for...
A
Failing
over
and
then
what
would
we
gain
from
by
doing
that,
because
if,
if
I
understand
that
option
correctly,
that
would
mean
we
would
be
doing
basically
what
our
customers
are
doing.
Right
now
have
two
clusters
that
they
are
failing
failing
over
in
between,
and
we
kind
of
know
the
answer
to
that
correct
me
if
I'm
misunderstanding,
that
yeah.
B: Probably that's then a learning step, for me at least, for how all these things have to be set up to work.
B: Fine, but okay: our customers are doing this. How often are we doing this, like the people involved in this working group here? How often?
C: I think the answer is: not often enough, because it is highly manual, right? Which is another thing: to actually automate this and have it in CI, for example, as part of Quality, for a 3k reference architecture cluster. That's also an ongoing thing. But I think what this is really saying is: before we roll out changes in staging, you need to understand how these things actually work together, and you can do that and throw it away if things blow up, right?
A: I get that, but then my question, I guess, should be: when you say set up multi-node primary and secondary test clusters to do this test, what kind of setup do you have in mind? Are we talking about the classic reference architectures that we have, with VMs and Omnibus? Or are you talking about, I don't know, Helm charts? Or are you talking about a copy of what we have right now in staging? Because all of those have their own advantages and disadvantages.
B: That's a good question, because we need to spin up a bigger Geo cluster in staging anyway, one which we want to fail over to, and so I thought we need to dimension this one: to know how many nodes we need and how big it needs to be. And if we know that, then maybe it's not much harder to have two of them.
B: If we know how to configure them and set them up anyway, as a first step, bringing up two new Geo clusters isn't much more work than bringing up one, just for testing. Then we test the full cycle of failing over and failing back, and then prepare to do this for staging itself.
D: I feel like I see two paths forward here. One is, we could go down your first option of just testing staging as it is, as it stands today, just to learn the process and how that impacts our current architecture, how we have everything set up inside of our infrastructure. And for point three here: maybe we use our reference architecture, spin something up in an automated fashion, and do those failovers on a regular cadence.
C: But that can be a parallel effort, right? I think that is something that Quality, for example, is actually aiming to do, because that is, in general, a good idea.
A: Right, but that is a long path. It is a very, very long path, and this working group can exist to do that, sure. But I wonder whether we are better off using our time to do something different there; not different, but something that resembles more of our infrastructure, rather...
C: ...than staging. Even though I'm signing up people here for work who are technically not on my team. But no, I think there are two parallel things that are valuable independently from each other and can help inform each other. But I do think, just my two cents: the proposal that I see from Henry here, to me, is essentially that you start small; you try to understand how the smallest thing works.
C: Then you increase the complexity to multi-node, and you again try to configure it and understand how it behaves, trying to eliminate as much risk as possible of impacting staging, because that's critical, right? You kind of do that, you proceed to a test in a bubble, and then, when you are most confident that you actually understand how the failover and failback process works, then you do it for real. Because I think what we want to avoid is a situation where we spin something up...
C: ...and these are the issues we've already uncovered, until we arrive at this full loop, which I do believe should be the desirable end state; that is what we work towards. But I personally wouldn't want the situation where, you know, that value will only be realized in three months or two months, and until then we have nothing that lets us say we've reached a goal that actually works. I think that's always hard.
A: And I know we're at time, but Henry, a question for you. If you want to approach it that way, would you set up just yet another cluster on the side? Would that be an option: set up a cluster on the side, connect the current Geo secondary instance to it, and then have them sync between one another?
A: Maybe at some point add staging as another setup in that cluster, and then you kind of introduce staging into that cluster and see magic happen.
B: That would actually be the best idea, yeah. Because if we promote the new Geo cluster and then sync it with a second Geo cluster, we can have this data set of staging inside of it. We can test back and forth, yeah, and we don't need to shut down staging, or hold traffic away from it, or anything like that. Yeah.
B: That would be a very good testing opportunity, because we would need to change staging anyway, later, to work with Geo. And I think we should come up with a very good setup for Geo that's automatable, and then go on changing the architecture of staging and how it's set up with Postgres, Patroni, and Geo. Yeah.
B: I think if we can agree on that, that we try to accomplish this until Thanksgiving, between those two clusters with the staging data set, but not with actual staging: that would be, I think, a very good solution, because it takes out a lot of the pressure and criticality, but gives us all the results that we want to have.
A: So how about, for testing purposes, how about we connect the third cluster, the new one that we create, to the secondary that is currently there, which has current data? Sync those two between each other, do all the failovers we need, and only then introduce staging, as it is right now, into that pairing, and...
C
Do
the
failovers
in
between
you
see
the
connection
between
our
small
secondary
and
the
main
primary
right
now,
because
it
has
a
copy
of
the
data.
You
know
yeah
at
this
moment
and
then
you,
you,
link
up
these
things
and
you
have
set
you
kind
of.
You
have
no
connection
to
the
original
anymore,
but
we
have
a
copy
of
of
stage.
C: Yeah, sounds good to me, but I'm the product manager. So, you know, I think maybe Douglas and Akriti and Nick should think about that, and whether there are any concerns with that approach.
F: Yeah, yeah. I think we should continue that discussion, because it seems like there are some implications; I'm not sure exactly if you can just, you know, sync a secondary from another secondary without having promoted it first, and what that implies.
C: Correct. I think the staging environment at the moment is already configured, to some extent, to serve the secondary, right? I'm just assuming there must be some configuration so that this actually works. So, in essence, staging is currently already acting as a primary. I don't know the details of that, but it must be the case, because otherwise these two instances couldn't talk to each other.
A: And let's put some timelines on that. Say, maybe we take tomorrow to actually sync up on all of that, and say this coming Friday we make a decision. We can do it in the Slack channel or, if necessary, jump on a quick call to decide where to go next. So, basically, one day of investigation between everyone, so Geo plus Henry; Skarbek can also help out here. And Friday...
A: All right. Given that we are over time, I will upload this call after I've finished my next call, which I'm already late to, and share it in Slack, in here. We didn't take too many notes, but hopefully the recording will be sufficient. Let's keep in touch through Slack.