From YouTube: Distribution / Geo: sync for Helm chart support of Geo
Description
Jason & Gabriel have a sync meeting about adding Geo support to GitLab's cloud-native Helm chart
Issue: https://gitlab.com/charts/gitlab/issues/8
Meeting notes document: `Distribution / Geo sync 2019-03-12`
Covered:
- component additions to the charts
- new requirements for stateful data
- needs for MVC
Pardon me, I was just pulling this up. I was doing a review for one of the auto deploy app guides and got distracted there. Okay, so this issue has actually been open for around a year at this point. Over a year, actually; at this point it's almost a year and a half. It was originally opened way back at the beginning of the charts, when we decided we should be able to do this at feature parity at some point, but that was long ago, before it became a priority of any kind.
The issue has been around for a little while as we've tried to figure out how to do this. What we have discovered, after a couple of discussions across a couple of summits (hilariously), is that the minimum requirement is that we need to actually build a container based on the Geo log cursor, which is good, because it's already in the Rails code base. So we basically just need another container that operates with a slightly modified config and a different entry point, so fairly straightforward.
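As an illustration of that extra container, a minimal sketch might look like the following. This assumes the existing Rails image can simply be reused with a different entry point; the image tag, command, and mount paths are placeholders, not the chart's actual values.

```yaml
# Sketch only: reuse the Rails image, override the entrypoint to run the Geo
# log cursor, and mount the extra Geo database config discussed below.
containers:
  - name: geo-logcursor
    image: "registry.gitlab.com/gitlab-org/build/cng/gitlab-rails-ee:<tag>"  # same Rails image as the other Rails pods
    command: ["/scripts/geo-logcursor"]   # hypothetical wrapper that starts the log cursor process
    volumeMounts:
      - name: rails-config
        mountPath: /srv/gitlab/config/database_geo.yml
        subPath: database_geo.yml         # the separate Geo connection described next
```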
The needs of that container, however, have been a little complicated. We need a separate database_geo.yml that defines the Postgres connection for Geo to actually behave. That is not necessarily a separate instance of a database, so it's not necessarily a different location, but just a different actual database to connect to. So that would have to be, one, available, and two, well documented, so that people know they have to create the second database when they're using an external provider. There is also the requirement of having TLS, which we have had support for.
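For those following along, a hedged example of what such a database_geo.yml could contain. The host, credentials, and exact layout here are illustrative; the firm points from the discussion are only that it is a second database (gitlabhq_geo_production) which may live on the same Postgres server.

```yaml
# Illustrative only: the Geo tracking connection is a *different database*,
# not necessarily a different Postgres instance.
production:
  adapter: postgresql
  encoding: unicode
  database: gitlabhq_geo_production   # separate from gitlabhq_production
  host: geo-postgres.example.com      # may be the same host as the main database
  port: 5432
  username: gitlab_geo
  password: "<sourced from a Kubernetes Secret>"
```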
A
The
current
directions
for
installation
of
Geo
actually
dictate
manual
first
replication,
which
is
something
that
we
may
want
to
look
into
if
it's
feasible
to
do
that
with
the
charts
and
what
the
steps
will
actually
be.
So
that,
in
my
opinion,
is
definitely
gonna,
be
a
breakdown
item.
We're
going
to
document
how
to
create
the
databases
as
well
as
how
to
do
the
first
replication
I
have
not
read
them
in
detail
recently,
so
Gabriel.
If
you
can
point
out
exactly
what
we
would
need
to
do,.
Sure. So currently, with the chart, we specifically acknowledge that if you're going to use the charts, you really should not use the built-in Postgres chart in production; anything that would require WAL archiving or resiliency is not what the current chart's Postgres is intended to do. What you have to do instead, if you're going to use, say, Google Cloud SQL, is connect to it as an external database.
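A sketch of what pointing the chart at an external Postgres such as Cloud SQL looks like in values; the key names below are approximate and should be checked against the chart's documentation.

```yaml
# Approximate sketch: disable the bundled Postgres and connect externally.
postgresql:
  install: false                  # don't deploy the in-chart Postgres
global:
  psql:
    host: 10.0.0.5                # external instance (e.g. Cloud SQL private IP)
    port: 5432
    database: gitlabhq_production
    username: gitlab
    password:
      secret: gitlab-postgres     # Kubernetes Secret holding the password
      key: psql-password
```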
The same goes for the second database instance that you need for Geo, which we usually call the Geo tracking database. It's a regular Postgres instance, but the only difference is that it requires a foreign data wrapper to be configured. We use some conventions for how you name the connection and so on, but it has to point to this secondary replicated database, the read-only one. So that's the main change that needs to happen for the secondary database, and that's how it works on the Omnibus side.
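To make the shape of that concrete, a hedged sketch of the two connections a secondary ends up with; the key names are illustrative conventions, not the chart's actual values.

```yaml
# Illustrative only: a Geo secondary talks to two Postgres databases.
psql:                 # main connection: the streaming-replicated, read-only copy of the primary
  host: replica-postgres.europe.example.com
  database: gitlabhq_production
geo:
  psql:               # Geo tracking database: a regular, writable Postgres database
    host: tracking-postgres.europe.example.com
    database: gitlabhq_geo_production
    # Inside this database, a postgres_fdw foreign server has to point back at
    # the read-only replica above so Geo can join its tracking state against
    # the replicated GitLab data.
```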
I expected that would be the case, because otherwise you'd have Sidekiq jobs coming from the wrong one. That would be unfortunate, to say the least; the application would not know what's going on anymore. Okay, and then the last item, which is a big one that we'll actually have to flesh out, is that to make use of Geo, all of the secrets and keys have to be replicated between all the nodes. Is that 100% true?
Right, okay, so that's good, that pretty much hasn't changed. The current documentation for how it's done on the Omnibus side is: you spin up one of them, grab the secrets file, and then manually replicate it across all of the nodes. That is a little bit different when it comes to how the charts work, because we don't have a secrets file; you actually have to go "give me all of these secret contents" so that you can go replicate those secrets over on the other side. And we don't actually have it extremely well documented how you do that same secret migration if you're going to run a pair of an Omnibus install and a chart install, so that's kind of related in two ways: it's a separate issue, but also one that's going to be required for this use case.
That ties into the investigation of whether we can include Consul, because there are some pluses and minuses to that. Right now, for Consul to be able to update the contents of a secret, it has to have the rights to update those secrets, so it needs an additional RBAC role. That's doable, especially if it's namespace-based.
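For reference, a namespace-scoped RBAC Role of the kind being described might look like this; the role and namespace names are illustrative, and whether Consul (or anything else) ends up holding it is exactly the open question here.

```yaml
# Hedged sketch: grant a secret-updating agent write access to Secrets in the
# release's namespace only.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: gitlab-secret-writer
  namespace: gitlab
rules:
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["get", "list", "watch", "update", "patch"]
```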
The problem that comes into play is that right now there is no specific way to tell it "hey, the secrets have been updated, all the things that use them, go restart." So for rolling all of the things that would have to use that, we may actually have to build it into the operator to do it correctly. Our operator within the charts does not behave as an instance-managing item. There are multiple patterns in operator framework usage; a lot of people have a full instance operator.
You say "I want an instance of this," and it goes and creates you an instance: it creates everything and then individually manages it. Instead of having a chart, you have a chart that deploys the operator, and then the operator actually deploys the thing you want. Ours operates in such a way that you have an operator that has codified system-operator knowledge, meaning that it will actually ensure that your system runs and rolls all of the things at the right periods of time.
We may want to look into the operator actually being required to keep an eye on the secrets, so that when Consul would then turn around and change the secrets, the operator would go "oh, the secrets have changed, roll all of these pods, because they know about this secret." If you did it in Omnibus, Consul would replicate across all of the nodes and update the secrets file, and then it would know it needs to make the call into the shell scripts
that then restart all of the things. We need to put that same kind of behavior in place, because our Consul instance that's replicating content should not be responsible for rolling pods; otherwise, any time we add a new pod, or alter which one does or does not need, say, Redis, we'd have to change that, as opposed to relying on the codified operator knowledge, which is what the operator does. It replaces that functional framework of how the Omnibus behaves when you do that. Right.
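One common Helm-level pattern worth contrasting here is the checksum annotation, which only helps when the secret changes as part of a `helm upgrade`; it does nothing when an agent such as Consul rewrites the Secret out of band, which is why an operator watching the secrets is being discussed. The template path below is illustrative.

```yaml
# Common Helm pattern (not necessarily what the GitLab chart does): hash the
# secret template into a pod annotation so a changed secret triggers a rollout
# on the next `helm upgrade`.
spec:
  template:
    metadata:
      annotations:
        checksum/rails-secrets: '{{ include (print $.Template.BasePath "/rails-secret.yaml") . | sha256sum }}'
```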
They need to have API access. I think right now it's the secondary that needs to call the primary and not the other way around. I need to double-check that; it used to not be the case in the past, but I think today it's just the secondary that accesses the primary. They need that to do some sort of validation during login.
I wanted to make sure there wasn't anything outside of the normal exposure that needed to be added, so no? No? Okay, great, then I don't have to worry about that. For those of you following along that may not know exactly where I'm reading from: I had made a comment where I was concerned about requiring exposure of the in-chart psql, because doing a TCP port through an ingress load balancer is not ideal, to say the least.
Okay, I think we have most of the basic breakout understood. So let's come back through here to our actual points. We've covered them, but let's actually say what needs to be done, task-wise, so we can start doing issue creation. TLS, okay. The Geo log cursor container based on Rails: that's a simple issue, I believe, but it's highly dependent on a bunch of other things.
Okay, as long as I can make clear documentation on the name, something that makes sense by default. For example, you have gitlabhq_production, but you can always change whatever the database name is; it's a property, you just set it. So that's just a matter of noting that one, yeah.
Okay, so that's a future item. The minimum requirement for that is to actually document what things need to be replicated from the secrets, the Rails secrets, into the application secrets on the other side. Obviously they're going to have separate Postgres credentials and separate Redis credentials, but the necessary bits that are Rails secrets, you're going to need to know how to synchronize those between the two.
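As a hedged illustration of what "the Rails secrets" means in chart terms, the keys below are the usual contents of the Rails secrets Secret from memory and may not be exhaustive; the Secret name is illustrative. The point from the discussion is only that these values must be identical on every Geo node, while Postgres and Redis credentials stay per-site.

```yaml
# Illustrative only: the shared Rails secrets that have to match across nodes.
apiVersion: v1
kind: Secret
metadata:
  name: gitlab-rails-secret
stringData:
  secrets.yml: |
    production:
      secret_key_base: "<same value on every Geo node>"
      otp_key_base: "<same value on every Geo node>"
      db_key_base: "<same value on every Geo node>"   # encrypts DB columns; must match or replicated data can't be decrypted
      openid_connect_signing_key: "<same value on every Geo node>"
```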
Okay, I just remembered as well that we implemented the git proxy on the secondary for that access. For example, when you do a git push to the secondary server, it proxies that to the primary. I think this is done through an HTTP proxy in the configuration, but I need to double-check that. So there's one more thing there, yeah.
You know, the thing is, the tracking database is where we store state on a secondary node, and we still need to join that with what's actually in GitLab. For example, we need to know if a project still exists, or things like that; this kind of stuff is based on the list of projects, or whether we're still missing a repository that needs to be replicated on the other side. So the two databases need to talk to each other.
Wait, wait, did we just make the TLS obsolete? But yeah, the tracking database is still needed. The reason why is that the replicated one can't have any written data; it's just read-only. So we need to write state in another place, and that's why we have this secondary tracking database. So, for example, whenever we sync a new resource, we store its latest state, checksum, and a bunch of other metadata. But wait, we don't have the list of... sorry, I don't know if we have the list of users attached to the instance.
It can be added in the future. Okay, just anything that would help other people that are trying to follow along with what exactly is going on, who aren't familiar with either my side or your side, so they aren't left wondering what in the world we were talking about. I figure if I'm a little confused, they might be confused.
Let's go back a little bit. You have, let's say, your primary GitLab in the US, and you want to have a secondary Geo node in Europe. So you're running your first cluster in the US, and it's a regular GitLab installation, and in Europe we're going to run this Geo secondary node. Let's say it's actually clustered, so the RDS in the US will replicate to Europe; in Europe you have a read-only version of what you have in the US, but that's where it's different.
That's how you can actually browse GitLab from the database in Europe, directly in Europe. But all the synchronization information that you need to keep track of, to be sure that you have everything, that you know all the repositories and everything that has actually changed, we need to store that information inside Europe in another instance, because on that replicated instance you can't do any write operation, since it's replicated using streaming replication.
Most of this state information, the starting point, is the Geo log cursor. It will iterate over something similar to a WAL file that we implemented inside the database itself, so it reads all those change events that we need to replicate. The cursor will schedule things in Sidekiq on the secondary node, and Sidekiq will run through all the workers, downloading and updating things, and that job, or that bunch of jobs, will store the state inside the secondary's tracking database. We actually need to hold this state there.
That way we can provide information to the users there, in the web interface. So we can say: you have 2000 repositories on your primary, and we have replicated 50% of them; 2% failed because of something, and we show you why it failed, and the other 49% are pending, something like that. So we can show this whole information to a user, and we need to store state to be able to do that.
If you're in a colocation with some sort of appliance or something like that, you probably want Geo to replicate it even if it's in object storage. I'm not exactly sure how we are handling that today; I think if you are using object storage, we just don't replicate it. That said, there are situations where you actually want to do that kind of replication. So that's a problem if we are not doing that yet; they'd probably need like a selector switch or something. Okay.
A lot of people who deploy into one of the public clouds will have the ability to do replication between the various zones, right; GCS, S3, DigitalOcean Spaces all have this functionality, but in some cases you actually have to turn that on specifically. So we want to say, you know, if you're going to use S3 with EKS, then you want to make sure that you're replicating across the continents for these buckets. Otherwise you're going to have the implication of: you have local access to anything in Git, but anything else,
LFS, uploads, traces, is still going to be slower, because you now have to go across the ocean either way, right. So we're going to want to make sure we document that. It's not a hard requirement right now, because we can easily PoC this on, you know, a cloud provider, but we should also take into consideration those people using on-site storage, or just doing a full proof of concept with MinIO, where those two nodes have no way of replicating.
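A hypothetical sketch of the kind of "selector switch" mentioned above; these keys are not existing chart values, they only illustrate the idea of declaring whether the object storage backend replicates itself or needs Geo to do it.

```yaml
# Hypothetical values, for illustration only.
global:
  geo:
    objectStorage:
      replication: provider   # backend replicates itself (S3/GCS cross-region replication)
      # replication: geo      # backend cannot replicate (e.g. a single MinIO per site),
      #                       # so Geo would have to copy blobs between nodes
```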
And the next item I see is repositories and wikis and Git content. This does not surprise me at all because, like we said, this is over HTTPS, so it does the proxying for you. It still comes down to that git push proxy and understanding exactly how that works and what's involved there, so we will need better clarification on that one. Whether that's just an external link so we know how it works, that's fine. But, like Gabriel and I have already said, it's not a hard requirement for the MVC.
Okay, so we need the Geo log cursor container. We need the documentation of the gitlabhq_geo_production database, and documentation of how to do the initial replication. So, when I say "initial replication", what am I talking about, for those in the crowd?
There are two steps. First, you need to replicate the database, so it's basically dump and restore, and there are a few things that you need to do before that, like you need to enable it so we start queueing the events that will be replicated on the other side. So after you have your queued events and you do the replication of the database, GitLab on the secondary node will start to backfill everything that is not database-related: all the repositories, all the assets, LFS objects, attachments.
Okay, and I did tack in one more thing here, because I realized we didn't actually cover it. We should actually document the fact that we need to be able to configure all of this once we have everything else in place. We have to be able to tell it whether it's a primary or a secondary node; that's kind of required.
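A hypothetical sketch of what that role configuration could look like in values; the key names are illustrative, since the actual configuration surface is exactly what still has to be designed and documented.

```yaml
# Hypothetical values, for illustration only.
global:
  geo:
    enabled: true
    role: secondary        # or "primary"
    nodeName: europe-1     # how this site identifies itself to the primary
```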
That's a question I don't actually know the answer to, and it might be something that's required for monitoring. Do we do anything with Prometheus on that at all from the Omnibus side? I'm not sure.
Okay, so a quick summary for those following along on the video: we've come down to the following list of tasks. We need to create a Geo log cursor container. It needs to have database access to the gitlabhq_geo_production database, and only that, and then it also needs to have access to the instance-local Redis. The database will need to be created, which is gitlabhq_geo_production, a.k.a.
the Geo tracking database, which is separate from the primary gitlabhq_production, and this should be within the same region as the cluster. We need to document that. We have to worry about the initial replication, especially if you're on-prem; we have a few notes in the document about exactly what that needs, but currently there is at least one manual step. We have to document, and possibly in the future script, the replication of secrets between clusters, specifically the Rails secrets. And I believe that's it, correct? Just the Rails secrets.
Okay, now, with that in mind, let's go ahead with the next steps. Obviously, the first next step is to take all of those things and break them down into actual actionable issues, within the charts primarily. Maybe the documentation of the FDW would be over on the Geo side, and I think the git proxy definitely needs to be over on the Geo side, but for all of the rest of that, the task would be to go and create the necessary issues now.
It would be like a daemon or something like that, and that daemon does the split between going to Gitaly on a local instance, or going to the Git interface on the other cluster as a regular user via HTTP. What would that be like inside Kubernetes? Are you seeing any type of concern, or something that would complicate that?
I don't see anything immediately, because we're talking basically about git push over HTTPS, correct? Okay, if we're talking about git push over HTTPS, I see no real implication. For whatever does the controlling of what goes where, we do need to figure out how the git proxy works, but if we have to add another daemon in place, that's technically feasible, and it would be related to the Unicorn and Workhorse APIs, so I don't really see a problem there.
If we somehow had to put something in place on the SSH path, that would get complicated pretty quickly, but that's not too much of a concern. Adding in the actual Geo log cursor container should be relatively straightforward and should be something we could manage to do in three days, and that's with, you know, framing, testing, and bolting it in. So I don't see anything here that should be horribly hard to do.
It's basically making sure that we can actually replicate what we need inside of the chart, and that requires a better understanding of exactly how all of these things interplay. As somebody on the team comes in and actually starts doing the physical work, we'll get a better picture over time of how long that's actually going to end up taking. Do I think this is feasible by the 12.0 milestone? Yes, by a long shot; it's quite possible that we can get everything in by the 12.0 milestone.