From YouTube: Geo on GitLab.com (Geo + Delivery Conversation)
B: Well, I can attempt an answer. I see the problem that you have with that document, but I still think that is actually the infrastructure-side documentation that I am aware of. From a Geo perspective, at this moment in time we don't offer anything for GitLab.com because it's not available. But if it was available, I think the initial thought is that it would allow for complete streaming replication of the Postgres database, and it would essentially create a read-only secondary.
C: Just to add to that, I think the two most important things that Geo will offer GitLab.com are, as Fabian says, the faster recovery times. At the moment, some of the backups that are taken and that we then restore from are, I believe, something like three to six, if not twelve, hours behind, and that has the potential for either data loss or some corruption, so I think faster recovery time is a huge thing. The second thing is that we're selling Geo to some very big customers, and when they hear we're not running it on GitLab.com ourselves, they ask a whole bunch of questions. I think by running this ourselves, we're also giving customers confidence that this is a solution that works, is useful, and provides value.
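A worked illustration of the recovery-point gap being described, as a minimal sketch; the backup-age and replication-lag figures below are just the rough numbers mentioned above used for arithmetic, not measured values:

```python
# Sketch: compare worst-case data-loss windows (RPO) for backup-based
# restore vs a continuously replicated secondary, using rough figures.
BACKUP_AGE_HOURS = (3, 6, 12)      # "three to six, if not twelve" hours behind
REPLICATION_LAG_SECONDS = 60       # illustrative lag for a warm secondary

for age in BACKUP_AGE_HOURS:
    print(f"restore from a {age}h-old backup: up to {age * 3600:,} s of writes lost")

print(f"fail over to a replicated secondary: up to ~{REPLICATION_LAG_SECONDS} s of writes lost")
```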
B: That's the dot-com interest, but also, from the product angle, having Geo enabled on a really large-scale instance gives confidence to our customers. It is also, I think, aligned with our dogfooding value: we probably have a larger number of scalability issues in Geo that we have not discovered yet, because it is currently not running at GitLab.com scale. A recent example that indicates that is that we had some relatively slow database queries that made it very hard to sync.
D: I just thought I'd raise my hand; I've never used the raised-hand option in Zoom, but apparently it's there. Just to complement that: at the moment there are two different DR databases currently running, and one of them is delayed, I believe by a few hours. The reason for that is to give us a period of time to recover lost data, whether it's accidental, an incident, those types of things. And then there's another one that is more up-to-date, but it is designed to be used for analytics.
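A minimal sketch of how the apply delay on such a delayed standby could be checked, assuming a psycopg2 connection to the replica and standard PostgreSQL recovery functions; the DSN and any thresholds here are illustrative, not the actual GitLab.com setup:

```python
# Sketch: report how far behind a delayed PostgreSQL standby is.
# Assumes psycopg2 is installed and the DSN points at the delayed replica.
import psycopg2

DSN = "host=delayed-replica.example dbname=gitlabhq_production user=readonly"  # illustrative

def replica_delay_seconds(dsn: str) -> float:
    with psycopg2.connect(dsn) as conn, conn.cursor() as cur:
        # pg_last_xact_replay_timestamp() is the commit time of the last
        # transaction replayed on this standby; the difference to now()
        # approximates the configured apply delay plus any extra lag.
        cur.execute(
            "SELECT EXTRACT(EPOCH FROM (now() - pg_last_xact_replay_timestamp()))"
        )
        return float(cur.fetchone()[0])

if __name__ == "__main__":
    delay = replica_delay_seconds(DSN)
    print(f"standby is ~{delay / 3600:.1f} hours behind the primary")
```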
D: As far as I'm aware, it's not actually being used in a DR mode; it's used for running queries and things like that. So as well as offering more accurate, or at least faster, replication, we would also be replicating the non-database content. I think that's the kicker for GitLab.com: not only will we be replicating database content, but, in the early stages for selected projects, all the Git data, all the LFS objects, and all the other data types we support.
D: So it's not just the database content; it's also the project data, everything related to the project. And just one other thing on streaming replication, going back to Fabian's point: whilst that is our preferred and the optimal way to implement Geo in the context of GitLab.com, we won't be able to use streaming replication, at least at this current stage, because it is not configured for the existing DR databases. So we would use the write-ahead log archive instead.
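One way to see which of the two modes a standby is actually using, as a minimal sketch: a streaming standby has an active WAL receiver connected to the primary, while an archive-recovery standby restores WAL segments and has none. The query against `pg_stat_wal_receiver` is standard PostgreSQL; the connection details are illustrative:

```python
# Sketch: distinguish a streaming standby from one replaying archived WAL.
# Assumes psycopg2 and a read-only connection to the standby (DSN illustrative).
import psycopg2

DSN = "host=geo-secondary.example dbname=gitlabhq_production user=readonly"

def replication_mode(dsn: str) -> str:
    with psycopg2.connect(dsn) as conn, conn.cursor() as cur:
        cur.execute("SELECT pg_is_in_recovery()")
        if not cur.fetchone()[0]:
            return "not a standby"
        # A row in pg_stat_wal_receiver means a streaming WAL receiver is
        # connected to the primary; otherwise WAL is coming from the archive.
        cur.execute("SELECT count(*) FROM pg_stat_wal_receiver")
        return "streaming replication" if cur.fetchone()[0] else "archive (WAL file) recovery"

if __name__ == "__main__":
    print(replication_mode(DSN))
```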
C: That's exactly what I was going to say: it's a buffer in case something goes wrong. They can pull it straight out of the delayed replica without having to go back and recover from backup. So they've asked us to leave those alone and not to include them in our work at all; we need to do something completely separate from those.
C: There is a line of investigation to understand whether there is really no way we could do streaming replication, because from the reading that I've done, it seems that the write-ahead logs were a recommendation from the only POC that was built at the time; there was never a proper investigation into what the impact of streaming would be.
C: You know, not to have multi-node, not to have massive clusters of anything. I think at the moment the recommendation is two machines, because one is not going to be enough, and then take those learnings through to what needs to happen in production. They've said that we don't have to be too concerned with overrunning on cost there if we're going to keep it minimal.
D: Yeah, at this stage, with the way Geo is currently configured, we need to replicate the entire database. Even though, in the early stages of this on staging, we plan to do what's called selective sync, where we can select just a few projects, unfortunately, whilst selective sync will only sync the project-related objects, it still replicates the entire database. So yeah.
A: The second thing that worries me is: if we cannot do selective sync at all, I'm kind of concerned about how we are going to be able to minimize the scope, or rather minimize the impact, of whatever happens. If we have a bug in the code, do we need to replicate everything? Are we going to do this over and over again, all the time? What comes to mind is Elasticsearch: the problems that we had enabling Elasticsearch at the size of GitLab.com, and the only way the work was resolved, or rather unblocked, was that we were able to narrow down what we wanted to do and test while we were fixing the bugs. And I'm talking about the DB only here, which is not even the largest part of this whole problem, the whole system, so I would kind of recommend that we see what we can do to, like, plan ahead.
C: I think that's why we were looking to start this on staging. I don't expect that it's going to be straightforward to get it onto staging, and I think it's already going to show us a number of things that are challenging for that dataset in itself, and then I think GitLab.com is going to be on another level from the challenges that we're going to find on staging.
A: We didn't have any way to actually replicate even a part of real-world traffic, and just connecting to production in this case is not even an option, because this is production. So we might need to find a way to simulate some of these things, something that simulates some of the real-world production impact on the database, if we want to have some confidence. That's going to take some extra time for sure, and it's not only going to be Geo work; it's going to be other work as well.
A: For example, we have an abuser inserting hundreds of rows, you know, creating new projects, doing all of that stuff; when I say hundreds, I mean thousands. Then we also have CI usage that continuously affects the database. We have workday-start and workday-end cycles as well, which also increase the load on the database. And you have the scale of the whole cluster affecting each other: delays in replication, network issues, and so on.
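As a rough illustration of how one slice of that real-world write traffic could be simulated against staging, a minimal sketch using the standard GitLab projects API; the host, token, rate, and project naming are illustrative assumptions, not an agreed plan:

```python
# Sketch: generate a steady stream of project creations against a staging
# instance to approximate one kind of real-world write load.
# Assumes the `requests` library and a token with API scope.
import time
import requests

STAGING_API = "https://staging.gitlab.example/api/v4"  # illustrative
TOKEN = "glpat-REDACTED"                                # illustrative
CREATES_PER_MINUTE = 30                                 # illustrative rate

def create_project(index: int) -> int:
    resp = requests.post(
        f"{STAGING_API}/projects",
        headers={"PRIVATE-TOKEN": TOKEN},
        data={"name": f"load-sim-{index}", "visibility": "private"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["id"]

if __name__ == "__main__":
    for i in range(1000):
        print("created project", create_project(i))
        time.sleep(60 / CREATES_PER_MINUTE)
```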
A: None of that can we easily see in staging, because there is basically nothing on it, not to mention that I'm not even sure what our state of monitoring is on staging. We have some alerts, we have some monitoring on it, we have some logging on it, but how good that is, and how much data it is going to give you while you're doing this, is an open question. I'm not trying to say we're not going to do any of that; I'm just managing expectations, yeah.
C: When I think about what the concerns are there, it sounds like a big concern is that the amount of activity that happens on GitLab.com is not represented anywhere else, and the only place we're going to see that is on GitLab.com itself. But I think that, with the way Geo is architected, everything has to go through the database: everything is replicated through to a secondary, and there is already a replication mechanism in production.

That's what we need to figure out for putting this on staging. Because of that, the Geo installation running on a secondary is going to be pulling from that archived write-ahead log storage and then making requests back to production to pull across any other data. And if we find, for whatever reason, that the secondary is actually causing problems, we can disable it; we'll make sure, through staging, that we have the right monitoring in place to just disable it.
C: Yeah, I think it's mainly that I don't think the stress on the database is going to be enough to cause problems, because it's a mechanism that's already used. But I take your point that when we put this on production it's a complete step change, because it's going to be the biggest traffic that has ever gone through the Geo system. I mean, even when Geo was used to do the GCP migration, that was a year ago, and GitLab was in a completely different state a year ago.
A: The only reason why I'm mentioning this to you, Rachel, is that, you know, it took three years to move forward with Elasticsearch, especially because of these types of issues: the amount of data that passes through. A two-terabyte database is not humongous, there are bigger databases in the world, but what is going to be crucial for this whole effort is having a way to do smaller chunks, so that we don't get completely stuck. Because if we start this and then in the middle we realize too much data is passing through the system and we cannot limit it, we'd be looking at, really, six months, and you'd be climbing the walls, basically, if we then have to take six months for Geo to redo parts of the architecture because it cannot accommodate that. I would rather try to think ahead a bit and see what kind of options we have, and whether any work that you're doing right now in Geo can take that into account now.
C: For sure, and I think the only thing that springs to mind might be the way that we process events. But again, this is processing events on a secondary; that's separate from the production system. Even if we turn the secondary off, and we're still filling up the database on that secondary, still filling up all of the store, still just building up a queue of work that needs to be backfilled, I don't see that having a massive impact on the production side.
D: No, I just didn't want to cut you off. I think one thing we can do, and just to reiterate what you're saying, is that we'll be replicating a third-tier-style database, which does not create any back-traffic. So we won't be impacting production by creating the additional Postgres cluster; it's the secondary's Sidekiq jobs that will hit production that are likely to be the cause of any elevated load.
A: I'm just going to have to cut you off right now: there is no such thing as a canary cluster. Canary uses the production database, canary uses production Redis, canary uses production Sidekiq. The only thing that canary actually does have is web workers and API nodes, and not even all of those, really, so canary is just production with an asterisk, yeah.
D: So you could point the secondary at the canary endpoints, which means the requests go through the canary application and API server infrastructure, but ultimately they will be serviced by the same production database cluster and Redis and things like that. But it will mean, I guess, that we can reduce the number of HTTP requests coming into the public fleet.
A: That is already stretched very thin with the past two weeks of events; all of the traffic that is hitting it is basically public, and we've been struggling to see what we can do about it. Or rather, should we even continue using canary as we are using it right now, given all the limitations I already told you about: Sidekiq is not using it, we need to be careful what kind of code we deploy, and so on and so on. So it's going to take more work there.
B: I just wanted to step back from the technical details to give you my one-minute summary of what I would like to see. I personally expect a lot of inefficiencies in Geo to surface throughout this, and that's kind of the point as well: I would like to see them, in order to (a) fix them and also to inform product development with that in mind, because I think DR will become more important, or is already quite important, to our customers. On the technical level, I think what you've surfaced is: how can we do that without impacting the production system in a negative way? Right now I don't necessarily have an answer. It would be great if we had a proper staging environment and a really nice canary; I don't think we do, right, but we have to figure out a way to work within the system. That's sort of my perspective on it.
C: So I think when it comes to looking at disaster recovery for compliance reasons, and I mean disaster recovery just because it's good practice and good governance anyway, aside from the compliance side of things, one of the huge things they always ask when we're looking at this from a compliance perspective is: how are you improving your disaster recovery? What have you done to improve it in the last quarter? And even though Geo is not, in its current state, a disaster recovery tool that we can actually use and say, yes, you push the button and you've got everything, what we can say is that there are continual improvements happening all the time to the disaster recovery capabilities. Every quarter that goes past, and in every report that we file about disaster recovery, we're able to say: in the previous quarter we couldn't replicate these data types; now it's included in an automated system. We've been able to remove the manual piece of this work entirely from the process; it's now automated, and we can have the same story repeating itself every quarter. I think that's a very compelling story from a compliance perspective; it covers quite a lot of ground when going through any type of audit, and in terms of duplicated effort, I think once it's done, it's done.
D: And I think one of the benefits of Geo is that it was originally designed to operate over spread-out networks. A lot of people say, why don't you just use something like rsync to replicate files and things like that, but Geo actually has smarts built in to deal with latency and retrying, and so it is designed to work across completely different parts of the world.
D: So selective sync is what we would use. You can use selective sync either at a group level or at a project-specific level, so we would ideally choose a small project to begin with, and we also have additional configuration options for how many workers we would have per task. We would set those very low, maybe one worker; there are four different tunables, I can't remember them off the top of my head, so we would set those very low and sync a very small project. And I guess the amount of work that the secondary would do back against the primary would depend on how busy that project is.
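As a minimal sketch of what "selective sync plus very low worker counts" could look like, assuming the Geo Nodes admin API and the parameter names shown here; the node ID, namespace IDs, and capacity values are illustrative, and the exact field names should be checked against the current Geo Nodes API documentation rather than taken from this sketch:

```python
# Sketch: restrict a Geo secondary to a few namespaces and throttle its
# background workers via the (assumed) Geo Nodes admin API.
import requests

API = "https://staging.gitlab.example/api/v4"  # illustrative
TOKEN = "glpat-REDACTED"                        # illustrative admin token
GEO_NODE_ID = 2                                 # illustrative secondary node ID

settings = {
    # Only sync projects under these namespaces (illustrative IDs).
    "selective_sync_type": "namespaces",
    "selective_sync_namespace_ids": [42],
    # Keep per-task worker capacities very low to limit load on the primary.
    "repos_max_capacity": 1,
    "files_max_capacity": 1,
    "verification_max_capacity": 1,
}

resp = requests.put(
    f"{API}/geo_nodes/{GEO_NODE_ID}",
    headers={"PRIVATE-TOKEN": TOKEN},
    json=settings,
    timeout=30,
)
resp.raise_for_status()
print(resp.json())
```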
A: Wait, okay, yeah, so the other question, and I know we are going over time, but I hope you have five more minutes to cover just this one: paint me a picture of this running in production infrastructure. I'm interested from the perspective of infrastructure cost. So let's say staging is our playground right now, ideal case: what do we expect to run on top of what we're currently running on staging if we have Geo fully enabled, like all data sources, database, and so on?
B: Right, if the goal here is to have Geo fully enabled so that you can fail over fast in case of a proper disaster on .com, then I think by and large you would mirror the infrastructure. So at that point, if that is the expectation, you would have, I think, very similar infrastructure for Geo to what you have for .com, which then obviously doubles the cost.
C: Then that's where we have to work with infrastructure to decide how Geo actually fits into the disaster recovery capabilities of the system. I know immediately that doubling the cost of the production infrastructure is never going to be approved, and I think what would be more effective is to say that, in terms of a complete failover playground, we use staging as the failover playground, to be able to prove out what we're releasing to customers.

You can do failovers, you can spin things up and spin things down, promote and demote, and all those good things; we use staging for all of that. But on the production side, we work more closely with infrastructure to be able to say: in order to meet the recovery time objectives for these data types, we recommend that you do this with Geo, and we're going to have these types of modes. If we want to be able to restore repositories in under two hours, this is the process that we would need to follow: activate this, go here, this is where your data is, this is how you pull it back. We would be very specific about the use of Geo for production, so I think we have to be very careful in how we tailor what Geo offers the production instance, and it needs to be quite separate from what's going to happen on the staging environment.
C: I mean, at the moment, what I'm looking for is: we just need to get it installed and running, and then we can have all of those discussions about, in order to get these benefits, this is what we're going to have to do next. I'm concerned that if we're thinking all the way to the end of the project, we're never actually going to get to the point of installing it. I understand.
A: What do we expect to get out of that experiment with the databases, and how much actual work do you think it will be from the Geo side? I'm going to contribute, from the infrastructure side, an understanding of how much work it will be there. Once we have that, I would like to go to... well, I'm not going to ask for it, we're just going to start working on it, but I am going to inform people of what we are trying to achieve and start the discussions on what we want to see.
C: Well, I think in terms of next steps for this: Ash is going to continue working on getting this deployed to staging, and we've also got some time from Devon at the moment, and some other SREs that Jerry has very kindly offered the use of to us, so Ash and Devon are going to be spending some time pairing together to get further with deploying to staging. I understand that what you're also going to need from us is what the expectations are once we have this in staging: what we expect to achieve by having this done. I can go back and update the epic that we have for getting this to staging, to state: these are the benefits we're hoping to see, and this is what we're going to use it for. I think this discussion today has been quite useful in helping me verbalize some of that, and I can take what I've said in this recording and put it onto the epic itself; I'll do that sometime this week. And then I understand the next steps from there: you also want to know more about how much, or how many machines, we're going to be using, so that there's a bit of a cost estimate done there. I think that once Ash and Devon are a little bit further along with where they are at the moment, because discussions are now happening about what nodes, what machine types and sizes, we should be using and for what purpose, once we have a clearer indication of that, I can add it to that epic as well. Yeah.