From YouTube: GitLab 14.9 Kickoff - Enablement:Geo
Description
14.9 Outlook: https://gitlab.com/gitlab-org/geo-team/discussions/-/issues/5029
A: So this is a quite interesting feature. Object storage is often used by customers to store files that are generated by GitLab, and those data need to be replicated from a primary to a secondary site. Many customers also rely on cloud vendors, who have object storage replication mechanisms between different regions, but sometimes that's not available: maybe a region doesn't support it, the vendor doesn't support it, or you're an on-premise customer using an object storage implementation without a cloud vendor. Then you have to rely on a different replication mechanism between storage buckets, and GitLab Geo allows you to replicate data that is stored in an S3-compatible object storage bucket between different sites. This is currently in beta, and we're continuing to work towards general availability.
There are essentially three things that need to happen for us to be satisfied that it meets our quality standards. The first is being able to verify files that are stored in object storage; this is important to avoid data integrity issues.
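Verification here essentially means comparing checksums of a file as stored on the primary and the secondary. A minimal, self-contained sketch of that idea (an illustration of the concept only, not GitLab's actual implementation):

```ruby
require 'digest'

# Fingerprint a blob, the way a Geo-style verifier might checksum
# a file stored in object storage.
def checksum(data)
  Digest::SHA256.hexdigest(data)
end

# Compare the checksums of the primary's copy and the replicated
# copy; a mismatch signals a data integrity issue.
def verified?(primary_data, secondary_data)
  checksum(primary_data) == checksum(secondary_data)
end

verified?("artifact-bytes", "artifact-bytes")  # => true
verified?("artifact-bytes", "corrupted-bytes") # => false
```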
B: We are making some changes in Omnibus to ensure that it properly supports multiple databases. That will unblock us to move the Geo tracking database configuration so it is better integrated with Omnibus, and then get rid of some of the legacy configuration and code around that. Ultimately, while this is very much a behind-the-scenes change, it will make this feature a lot easier to maintain going forward.
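For context, the Geo tracking database is configured on a secondary site through Omnibus today with keys along these lines (illustrative values only; the exact attribute names may change as this work lands, so check the current Geo documentation):

```ruby
# /etc/gitlab/gitlab.rb on a Geo secondary (illustrative sketch)

# Run the tracking database service bundled with Omnibus.
geo_postgresql['enable'] = true

# Hypothetical example values; replace with your site's settings.
geo_secondary['db_host'] = '/var/opt/gitlab/geo-postgresql'
geo_secondary['db_port'] = 5431
```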
A: Yeah, thanks for highlighting that, I think that's a great point. Making things easier to configure and adhering to the newest standards in Rails makes it easier for our engineers to do their jobs. That then manifests in easier configuration for customers, but also in higher velocity. I'm really glad we're addressing that.
A: The next thing is moving CI job artifact replication to our self-service framework. As a reminder, the Geo self-service framework is our new standard for replicating and verifying data. By moving job artifacts from our old way of handling them to the self-service framework, we reduce the technical complexity of Geo overall. It also enables verification for this data type, making sure there are no data integrity issues because something failed in replicating it from A to B. That's ongoing; we are pretty close to shipping this.
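The framework's core idea is that each data type only declares itself, while the common replication and verification logic is inherited. A highly simplified, self-contained sketch of that pattern (GitLab's real classes live under `Gitlab::Geo::Replicator`; the names and behaviour below are illustrative stand-ins):

```ruby
# Shared behaviour every replicator inherits: syncing a payload and
# verifying the result. Simplified stand-in for the real framework.
class Replicator
  def self.registry
    @registry ||= {}
  end

  # Look up which replicator handles a given data type.
  def self.for(data_type)
    registry.fetch(data_type)
  end

  # Subclasses call this in their class body to register themselves.
  def self.replicates(data_type)
    Replicator.registry[data_type] = self
  end

  def sync(payload)
    # The real framework downloads the blob; here we just copy it.
    @replica = payload.dup
  end

  def verified?(payload)
    @replica == payload
  end
end

# A new data type only has to declare itself; replication and
# verification come for free from the base class.
class JobArtifactReplicator < Replicator
  replicates :job_artifact
end

replicator = Replicator.for(:job_artifact).new
replicator.sync("artifact bytes")
replicator.verified?("artifact bytes") # => true
```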
B: Yeah, we are also on the tail end of another epic. Ian and Nick Westberry, our quality engineering counterpart, have done a great job updating the Geo QA tests to pass. They hadn't gotten a lot of attention for a while, and a lot of them were failing for various reasons, so they went through and got all the QA tests passing. They were then also able to leverage the GitLab Environment Toolkit to spin up Geo environments using our reference architectures inside of pipelines, test zero-downtime upgrades, and run the QA tests against that.
B: So this is a level of testing that we dreamed about as recently as a year or two ago. Being able to run the Geo QA tests against environments that have been spun up using GET inside of our pipelines, either as part of a scheduled pipeline or triggered in the Omnibus project using the latest build, is now possible. This has really improved the efficiency, quality, and thoroughness of our testing. Something we often had to set aside an entire day for, recording a demo to show that you could do a zero-downtime upgrade on Geo and then run the full suite of QA tests against it, we can now run in a regular, automated fashion.
B: So the work that we're doing in 14.9 is to document that the GET Geo pipeline is available and how and when to use it. We also want to automate the testing of Geo failover. Today we can test zero-downtime upgrades, and we can do the end-to-end tests to make sure replication and some of the basic Geo functionality is working.
B: Now we want to automate testing of a Geo failover on our reference architectures, which is a critical operation that customers rely on to work. We want to make sure that any changes we make to the code base, or to how GitLab is deployed, do not break our capability to fail over, so we're really excited about that. We also just want to make sure that our own scheduled pipelines are more robust, so that we're automatically alerted anytime there's an issue and we're responding to any potential issues that might affect customers.
A: Yeah, thanks for sharing this, I'm super excited about it. For our customers, this really means higher quality: fewer bugs that go unnoticed during the development life cycle. That's particularly relevant for business-critical operations like failovers, so I'm super excited to see this, and we hope to wrap that up in the next release.
A: The next item I'll be talking about is that we shipped our unified URL and proxying feature in GitLab 14.6, and we are continuing to iterate on and improve it. There are a few key improvements that we would like to make; one of those is to allow secondary proxying with separate URLs.
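With the unified URL setup shipped in 14.6, both sites are configured with the same external URL and distinguished by their Geo site name, roughly like this in `/etc/gitlab/gitlab.rb` (illustrative values only; see the Geo secondary proxying documentation for the authoritative steps):

```ruby
# Same external URL on the primary and the secondary (unified URL);
# requests to a secondary are proxied to the primary where needed.
external_url 'https://gitlab.example.com'

# Each site still gets a unique Geo site name (example value).
gitlab_rails['geo_node_name'] = 'secondary-site'
```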
A
So
there's
some
work
going
on
there
and
we're
quite
excited
to
see
this
continue
because
you
know
there's
a
new
feature.
There
are
still
a
few
things
that
we
can
improve
and
that's
going
to
happen
and
continue
to
happen
in
the
upcoming
release,
14.9
and
then
also
the
next
one
after
that,
but
yeah
that's
exciting.
I
think
this
is
iteration
action,
so
it's
it's
really
fun.
A: And lastly, one thing I would like to highlight is something we are a little bit in the discovery phase of, so we're starting to pick this up: improving the overall Geo metrics and also exposing a potential dashboard for systems administrators that includes key metrics for Geo, specifically around replication and verification performance and replication lag. Geo is often tied to recovery point objectives and recovery time objectives, and having better insight into how far the Geo secondary site is behind in terms of replication will really help systems administrators understand if their Geo system is working.
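Replication lag here is essentially the gap between the newest event produced on the primary and the newest event the secondary has processed. A self-contained sketch of that calculation (illustrative only; in practice the timestamps come from Geo's status endpoints and Prometheus metrics):

```ruby
require 'time'

# Seconds the secondary is behind: the time between the last event
# recorded on the primary and the last event the secondary applied.
def replication_lag_seconds(primary_last_event_at, secondary_cursor_at)
  (primary_last_event_at - secondary_cursor_at).round
end

primary   = Time.parse('2022-02-22 10:00:30 UTC')
secondary = Time.parse('2022-02-22 10:00:00 UTC')

replication_lag_seconds(primary, secondary) # => 30
```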
A: There are also a number of UX research initiatives and improvements that we are working on, for example the jobs-to-be-done definitions for backup and restore, which are proceeding, which is really great. We'd like to spend more time improving our backup and restore in the future.
A: I'm not going to go into all those details; check out the planning issue if you're interested. But yeah, I think we're very excited for all of the work that's ongoing. See you at the next one. Bye-bye, thanks!