From YouTube: Geo Roadmap overview using GitLab Plan
Description
Fabian Zimmer, Product Manager for GitLab Geo, provides an overview of the group's roadmap for the next months.
Roadmap: https://gitlab.com/groups/gitlab-org/-/roadmap?scope=all&utf8=%E2%9C%93&state=opened&label_name[]=group%3A%3Ageo
Geo utilises GitLab's Plan features and epics to manage its work. Learn more about epics: https://docs.gitlab.com/ee/user/group/epics/index.html
Hello, everyone. This is Fabian, the product manager for GitLab Geo. I just spent a couple of hours cleaning up our roadmap view, which uses GitLab's Plan features, and since we use epics to track our work, I thought this would be a good opportunity to give a brief overview of what you can expect in the next months. It's a bit broader, so I'll record it.
Yeah, so this is the new view of the Geo roadmap. We've had some improvements from the Plan team recently on this: it now supports sub-epics, which is pretty sweet, and associated milestones, so I'm trying to use this. It's not perfect, but we iterate a lot, so please bear with me. I'll try to highlight a few things that maybe aren't accomplished yet. So there are a few items here.
If you look at the Geo direction pages, they describe our strategy or direction for the next months, and they tie into increases in maturity, that is, what we believe the maturity of these categories in Geo is at the moment. The two I want to talk about are Geo replication and disaster recovery, and on another level there are improvements to the administrator experience. Those were previously grouped under Geo replication, but they kind of span both categories, so I've spun them out, which is maybe also a little concerning.
So this is going to end at the end of July, and following that you'd have an increase in maturity; ideally there's something else to discuss to reach complete, and this is tracked here. There's also potential work for increasing the maturity of Geo replication from viable to complete. That being said, I promised subtopics, so let's talk about some. The administrator experience is pretty simple. There are a few other things here that are hidden, because we're not actively working on them at the moment.
The most important is really improving the user experience of systems administrators that are maintaining Geo installations, and we are working on this quite actively. We've shipped a number of iterations in previous milestones, starting with 12.9, then 12.10, 13.0 and, very soon, 13.1. So this is an ongoing effort, and at the moment we anticipate that most of the things for this iteration will be in a good state.
Now, speaking about disaster recovery and viable maturity: Geo can be used to fail over to a Geo secondary in a disaster situation, when the primary data center is not available. You can also use it for a planned failover, potentially to migrate to different infrastructure. At the moment the disaster recovery offering is at the minimal stage, and there are a few reasons for that, so let me expand this here.
You can see there are a few subtopics, and I'll talk about three of them in a little bit more detail. I'll start with the self-service framework. One of the issues with Geo at this moment in time is that GitLab, as a sort of complete solution for the development, operations and security lifecycle, generates a number of data types or resources, for example Git repositories, and Geo needs to replicate that data from a primary to one secondary or several secondaries.
A
We
don't
replicate
all
of
those
data
types
right
now
and
that's
quite
an
issue
for
disaster
recovery
situations,
because,
if
it
isn't
replicated,
it
won't
be
available
on
your
secondary
immediately,
which
means
you
need
to
restore
it
from
a
backup
which
then
actually
increases
your
time
until
your
sort
of
secondary
site
is
fully
available.
So
I've
talked
about
that
in
the
past.
One
of
the
main
things
that
we
would
like
to
improve
here
is
to
make
it
incredibly
easy
to
support
new
data
types.
If another group in GitLab decides to implement a feature and they're generating new data, then it should be very simple to add Geo support on top of that, which ultimately results in essentially a hundred percent coverage between what Geo is replicating and what is actually being generated by GitLab. At the moment this is very difficult, and it is a main focus of our work right now.
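The self-service framework idea above can be sketched roughly like this: each data type registers a small replicator, and one sync loop discovers them all, so adding Geo support for a new data type is just one more class. This is an illustrative Python sketch, not GitLab's actual Ruby implementation; every class and function name here is made up.

```python
# Sketch of a "self-service" replication framework: each data type
# registers a replicator, and the sync loop discovers them all.
REPLICATORS = []

def register(cls):
    """Class decorator: add a replicator to the global registry."""
    REPLICATORS.append(cls)
    return cls

class Replicator:
    data_type = "abstract"

    def pending(self, primary, secondary):
        # Records present on the primary but missing on the secondary.
        return [r for r in primary.get(self.data_type, [])
                if r not in secondary.get(self.data_type, [])]

    def sync(self, primary, secondary):
        for record in self.pending(primary, secondary):
            secondary.setdefault(self.data_type, []).append(record)

@register
class PackageFileReplicator(Replicator):
    data_type = "package_files"

@register
class TerraformStateReplicator(Replicator):
    data_type = "terraform_states"

def sync_all(primary, secondary):
    # Supporting a new data type is just one more @register class.
    for replicator_cls in REPLICATORS:
        replicator_cls().sync(primary, secondary)

primary = {"package_files": ["pkg-1.tgz"], "terraform_states": ["prod.tfstate"]}
secondary = {}
sync_all(primary, secondary)
```

The point of the pattern is the registry: the sync loop never needs to change when another group adds a data type, which is what makes "a hundred percent coverage" tractable.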
So there's the ability in GitLab to essentially act as a package registry: you can serve package files for, for instance, Maven or NPM, various languages, and so you want to replicate those to a secondary. Here, by 13.1, we've estimated that we want to actually ship package file replication. You can see there's quite a bit here still to close; we'll see how it goes by the end of the month and whether we can even converge that, but that's sort of where we are at.
Our frontend engineer has done a really great job working on this. There's a bunch of technical debt that we need to clean up, and we will do that over the next couple of releases as well. Again, I hope that this will actually be finished by August, maybe a little bit earlier. But this is one of these things that should actually be helpful for a lot of other data types, independent of what we're replicating. Then, further down, this is the second most important thing that we need to add.
This is not needed, in my view, for viable maturity for disaster recovery, but I can't easily split out these things and move them to complete maturity, so I'll have to figure something out. We believe that we'll actually do this from August to probably the end of the quarter, so look over here, and that should conclude the entire self-service framework, the backend implementation, something that is going to happen in 13.2, which is very exciting.
Given that we are almost done with package file replication, we can now actually support replication for the three remaining data types that are not currently supported: Terraform state, vulnerabilities, and external merge request diffs. We will aim to include them in 13.2. This is only replication, not verification yet, but it should be manyfold faster than the initial implementation within the self-service framework, and you can measure the framework by that. I really hope we can ship this.
Another thing I moved into viability for DR is an effort to really improve the scalability of Geo by simplifying backfill operations and ultimately removing some of the technology that we've added, the foreign data wrapper from PostgreSQL. This is actually also quite relevant because, for some of our main customers, it means that we need to be able to scale up Geo quickly. Ideally, we can backfill data fast, so scalability and overall performance are quite important, so I've looped it in here.
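The backfill concern can be pictured with a minimal sketch: rather than one huge cross-database query, records get registered for sync in fixed-size batches, which is what lets a large secondary catch up incrementally. Purely illustrative Python; the `backfill` function and its parameters are invented for this example.

```python
# Illustrative batched backfill: walk record IDs in fixed-size chunks so a
# large dataset can be registered for replication incrementally, instead of
# in one big query against the primary.
def backfill(record_ids, batch_size=3):
    """Yield batches of IDs to register for replication."""
    batch = []
    for rid in sorted(record_ids):
        batch.append(rid)
        if len(batch) == batch_size:
            yield batch
            batch = []
    if batch:
        yield batch  # final partial batch

batches = list(backfill(range(1, 8), batch_size=3))
# batches == [[1, 2, 3], [4, 5, 6], [7]]
```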
It is a little bit similar to this epic here, in that it may not really tie that closely to the viable maturity, but I'll have a discussion with Nick about whether we want to include it here or not. We are pretty close to wrapping that up, actually, thanks to some amazing work from one of our staff engineers, Douglas, so by the end of 13.2 I think we'll be done with that. So far, I've only really talked about replicating and verifying data and making that easier, which is really important.
But I haven't really talked much about the failover process in general and the features that are required for that, and this is the other angle that we are addressing for viable maturity. There are a few things here in flight. The first one is that replication should be easy to pause and resume, so you should be able to say: I don't want to replicate any data right now to a secondary.
That can be helpful, for example, during an upgrade where you want to decouple your secondary from your primary, and we're almost done with this. A few things are very close to merging, so I hope that this actually gets done, but we will see. This is very close.
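Pause and resume can be sketched as a sync worker that checks a flag between items, so an administrator can stop replication cleanly before an upgrade and pick up where it left off afterwards. Illustrative Python only; the real feature is controlled through GitLab itself, and every name below is made up.

```python
# Sketch of pausable replication: the worker drains a queue of items to
# sync, but checks a pause flag between items so it can stop cleanly and
# resume later without losing its place.
class ReplicationWorker:
    def __init__(self):
        self.paused = False
        self.synced = []

    def pause(self):
        self.paused = True

    def resume(self):
        self.paused = False

    def run(self, queue):
        """Sync queued items, stopping as soon as the worker is paused."""
        while queue and not self.paused:
            self.synced.append(queue.pop(0))

worker = ReplicationWorker()
queue = ["repo-a", "repo-b", "repo-c"]
worker.pause()
worker.run(queue)   # paused: nothing is synced, the queue is untouched
worker.resume()
worker.run(queue)   # resumed: the remaining items sync
```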
We're also going to start working on simplifying the planned failover documentation. We have a few ideas on how to do this; interviews with systems administrators have indicated that the documentation is something that we can really improve, so we will work on that. And lastly, or not lastly actually, there's a little bit of discovery left, specifically about alternatives to replicating data with rsync, which I think we can cross off pretty soon. Then here, this is maintenance mode, a sort of read-only mode.
We've kicked that forward quite a few times, because we worked on other bits first, like pausing and resuming, but this is now broken down to such an extent that we're quite confident we can iterate quickly on it. It is scheduled for 13.2, which is the next release, but because there's likely going to be significant testing, I estimate that this will only really ship after we've tested it on staging; we've had some experience with that.
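At its core, maintenance mode is an instance-wide write barrier: reads keep working while writes are rejected, which is exactly what you want while a planned failover is in progress. A minimal sketch of that behaviour, with invented names:

```python
# Sketch of maintenance mode as a read-only switch: one flag rejects all
# write operations across the instance while reads continue to work.
class Instance:
    def __init__(self):
        self.maintenance = False
        self.data = {}

    def read(self, key):
        return self.data.get(key)

    def write(self, key, value):
        if self.maintenance:
            raise RuntimeError("instance is in maintenance mode (read-only)")
        self.data[key] = value

gitlab = Instance()
gitlab.write("issue-1", "open")
gitlab.maintenance = True
# From here on, writes raise an error while reads still succeed.
```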
So that is maintenance mode. And then, here, this is really one of the most important things that we will need to focus on in the next months: we started automating pre-flight checks for planned failovers, all the operations a systems administrator should actually perform before a failover.
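Automated pre-flight checks can be pictured as a set of named predicates that must all pass before a failover is allowed to proceed. The checks below are invented examples for illustration, not GitLab's actual check list.

```python
# Sketch of automated pre-flight checks: each check is a named predicate
# over the current Geo state, and the failover is only allowed when every
# check passes; failures are reported by name.
def run_preflight(checks, state):
    """Return (ok, failures) for a dict of named check functions."""
    failures = [name for name, check in checks.items() if not check(state)]
    return (not failures, failures)

checks = {
    "replication_complete": lambda s: s["pending_items"] == 0,
    "verification_complete": lambda s: s["failed_verifications"] == 0,
    "secondary_healthy": lambda s: s["secondary_up"],
}

state = {"pending_items": 0, "failed_verifications": 2, "secondary_up": True}
ok, failures = run_preflight(checks, state)
# ok is False; failures == ["verification_complete"]
```

Encoding the checklist this way is what makes it automatable: the same list the documentation describes in prose becomes something a tool can run and report on before a stressful failover.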
We may not necessarily be able to continue with that in 13.2, but the work that is going to start here is this: GitLab Geo, for DR purposes, is not often deployed as a single node; it's essentially a multi-server Geo secondary, and at the moment, in order to promote such a secondary to a primary, there is quite an involved process.
So the process works as it does right now, but this is really about making it simpler and easier to administer in a situation that is potentially stressful. This is hopefully something that we can show is possible with a POC internally and then, you know, push forward in specific, small iterations. So these are the things that are currently happening, as you can see.
Some of them are taking us into the end of July, beginning of August, and following that, obviously, the further out we are, the higher the uncertainty. You'll get some more detailed information about 13.1; we have some good ideas for 13.2, and some good ideas about 13.3, and after that it becomes a little bit more uncertain. But there are a few things that I would like to highlight that are up next.
Once we've completed most of these changes here and we feel confident enough to change the maturity to viable, we then need to continue and actually push it to complete. Two things will certainly need to happen. One is to move the existing file blobs, so the LFS files, uploads and CI artifacts, to the existing self-service framework, which would then allow them to benefit, in combination with what I talked about over here.
The other is Patroni, which is shipping in GitLab with 13.1; depending on our capacity, we will need to test and verify how we can use that on the secondary. After that, there would be plenty of yet-to-be-defined groupings of improvements to things like promotion, demotion and failover, so this is a sort of holding tank for those items. This is the current estimate, with high uncertainty, but it would be great if we could complete this by the end of this financial year.
That's the end of January. Then, for Geo replication, which is something that we're not actively working on right now, there are a few things that we know we definitely need to do, and two of these items here are the so-called secondary mimicry and automatically choosing the right node for the best user experience.
What this means on a high level is that currently, when a regular GitLab user accesses a secondary, it is only sort of transparent or fully automatic when using Git with the location-aware Git URL.
In other instances, a user will need to understand to a certain degree that they're interacting with a secondary, which may not actually always result in a better user experience. One example here: GitLab's web interface is read-only on a secondary, and so many users use the primary to work in the interface, because the secondary can't be edited.
So, to give you an example: you pull from a secondary via Git, and then you push to the secondary, but it's the same URL as for your primary, and Geo and the load balancer figure out what is fastest. Or you're trying to access the web interface, and if there are operations you can't perform there, maybe you get proxied automatically to the primary. The same for the Docker registry: maybe when you want to pull from a Docker registry you use the local node, and when you want to push it proxies or redirects you to the primary.
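That proxying idea reduces to a simple routing rule: read requests are served by the nearest node, write requests go to the primary, all behind one URL. A minimal sketch with invented node names; the real routing decision would of course involve more than the HTTP method.

```python
# Sketch of secondary mimicry as a routing rule: reads are served locally
# for speed, writes are transparently sent to the primary, and the user
# only ever sees a single URL.
READ_METHODS = {"GET", "HEAD"}

def route(method, local_node, primary_node):
    """Pick the node that should handle a request."""
    if method in READ_METHODS:
        return local_node      # e.g. git fetch, docker pull
    return primary_node        # e.g. git push, docker push

assert route("GET", "secondary-eu", "primary-us") == "secondary-eu"
assert route("POST", "secondary-eu", "primary-us") == "primary-us"
```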
So this is the work here that we need to do, and I think it will increase the usability of Geo tremendously for Geo replication purposes. And then something that always comes up, and that we would really like to tackle, is making Geo easier to install. We've invested some time to understand the installation process; it's quite cumbersome and manual, and as a sort of side effect of some of the work that we're going to do on simplifying failover, we may be able to simplify the installation as well, because the problems are often similar.
We need to communicate between all the servers in a cluster, and so maybe we'll benefit here. So, as you can tell, these items are relatively high level and have a high degree of uncertainty, but this is what will likely come next. So yeah, that's the Geo roadmap at the moment. As you can see, there is a lot of work going on in the team right now. We are not tackling, or not addressing I should say, backups at this moment in time, for capacity reasons, other than bug fixes.
So I didn't talk about that very much, but this is kind of the state of play at the moment. A couple of qualifiers here: I needed to choose the timeframes manually. This has to do with the fact that sometimes there are quite a few issues in an epic, and they can ship independently and add value, but I tend to not add issues to specific milestones far in the future. So we'll have milestones specifically on the issues that we intend to do in 13.2, and maybe 13.3, but not in 13.4.
That's just because we work with a Kanban-style process, and we are able to shift around some of those things based on the demands, or the requests and requirements, that we get from our customers. To give you an example, if we were to hear that something is really important for various reasons, it may be something that we are able to accelerate, and then I need to shift things around. So the epics here are really little time blocks that need manual updating.