From YouTube: GitLab 12.10 Kickoff - Create:Gitaly
Description
See what else is in store for the Source Code group in 12.10 in our planning issue: https://gitlab.com/gitlab-org/create-stage/-/issues/12653
Hi, I'm James Ramsay, group product manager here at GitLab, and this is the Gitaly group 12.10 kickoff call. I'm really excited to share with you the key features that we'll be working on in the coming release.
One of the key things we've been working on for quite a while is adding high availability to Gitaly. Gitaly is how GitLab stores Git repositories; it allows us to scale them horizontally for performance and is really critical to the GitLab architecture and how the application works.
But unfortunately, today it isn't straightforward to make the Gitaly component highly available, which means that an outage of a Gitaly node would prevent developers from accessing repositories and prevent teams from being able to deploy their projects into a production environment, because you wouldn't be able to access the source code. So that's why we've been really focusing on high availability.
It's really critical and important to us and to our customers, and we're nearing a very exciting milestone in 12.10. The milestone we're aiming for is to release a beta version of high availability.
In order to do that, there are a couple of things we need to overcome. This is a diagram of roughly how things work, focusing primarily on Gitaly and what we've built. Today, if you're not using the alpha high availability, your connections, when you're requesting information from a repository, go directly to a Gitaly node. You might have multiple Gitaly shards, but the change we've been working to make is to support having multiple replicas of that primary Gitaly node. In order to direct your requests to the correct location, we've implemented a proxy router, Praefect, and what that also does is, every time a write operation comes in, it gets replicated to all replicas. One of the key things we're going to be doing in 12.10 is implementing this database so that the replication queue is stored in a reliable storage location. Currently, it's in memory.
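To make the idea concrete, here is a minimal sketch of what a persisted replication queue could look like in the Praefect PostgreSQL database. This is an illustrative assumption, not the actual schema: the table and column names are made up, and the real implementation may differ.

    -- Hypothetical sketch of a persisted replication queue (not the real schema).
    CREATE TABLE replication_queue (
      id              BIGSERIAL PRIMARY KEY,
      repository_path TEXT        NOT NULL,                -- repository the write applies to
      source_storage  TEXT        NOT NULL,                -- Gitaly node that accepted the write
      target_storage  TEXT        NOT NULL,                -- replica that still needs the change
      change_type     TEXT        NOT NULL,                -- e.g. 'update' or 'delete'
      state           TEXT        NOT NULL DEFAULT 'ready',-- ready, in_progress, completed, failed
      created_at      TIMESTAMPTZ NOT NULL DEFAULT now()
    );

Because each pending job would be a row in the database rather than an entry in memory, a restart of the router would no longer lose the queue, and any router node could pick up outstanding jobs.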
Secondly, the other thing we need to do is make it possible to support multiple Praefect proxy routers, so that we can have high availability of that component. If we have multiple routers directing Git requests to the Gitaly nodes, it's important that they direct the requests to the same Gitaly node. That's important for failover: when a node goes down, all the Praefect nodes need to consistently agree on which node is now the primary.
In the previous release we went to some effort evaluating using Consul for this, but we've decided that for this first iteration we'll reuse the Praefect database and do a very simple implementation. In the future we're considering coming back to add support for Consul, and also support for a more cloud native approach that would work well inside Kubernetes. So this is the first issue I mentioned: we need to move the replication queue to the database. Work is going well on that; it's in progress.
You can see that there are a number of merge requests already merged and more in progress. And this is the key issue that we've just created based on research and investigations: we've already begun a spike into using SQL for leader election, and we're beginning to do some refactoring so that we can use SQL and leave the door open for other kinds of technology, like Consul.
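As a rough sketch of the SQL-based election direction: every Praefect node tries to record, or refresh, the primary for a shard in the shared database, and the database guarantees they all converge on one answer. The table and query below are assumptions for illustration only, not Praefect's actual implementation.

    -- Hypothetical sketch of lease-based primary election in PostgreSQL.
    CREATE TABLE shard_primary (
      shard_name      TEXT PRIMARY KEY,
      primary_storage TEXT NOT NULL,        -- Gitaly node currently considered primary
      lease_expiry    TIMESTAMPTZ NOT NULL  -- must be refreshed before it expires
    );

    -- A Praefect node proposes a primary; the update only wins if the current
    -- lease has expired or it is re-confirming the same primary.
    INSERT INTO shard_primary (shard_name, primary_storage, lease_expiry)
    VALUES ('default', 'gitaly-1', now() + INTERVAL '10 seconds')
    ON CONFLICT (shard_name) DO UPDATE
      SET primary_storage = EXCLUDED.primary_storage,
          lease_expiry    = EXCLUDED.lease_expiry
      WHERE shard_primary.lease_expiry < now()
         OR shard_primary.primary_storage = EXCLUDED.primary_storage;

Because every router runs the same statement against the same database, they all read back the same primary, which is what matters for consistent routing and failover.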
And finally, the other key part is observing when data loss occurs in a failover.
So when a failover occurs, if there are unreplicated changes, we need to provide a way for an administrator to understand what isn't up to date and what data is trapped on the server that has crashed, or crashed unrecoverably, so that they can communicate that to their users and take action accordingly.
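For a sense of what that could look like for an administrator, the kind of check in mind is something along these lines; the command name and flags below are illustrative assumptions, not a finished interface.

    # Hypothetical invocation: ask Praefect which repositories have
    # unreplicated changes after a failover (exact tooling may differ).
    sudo /opt/gitlab/embedded/bin/praefect -config /var/opt/gitlab/praefect/config.toml \
      dataloss -virtual-storage default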
Those are the three key items that are necessary for us to take Gitaly HA to beta in 12.10, and they are our top priority, but we're also tackling a couple of other, parallel and related, efforts that are important to high availability.
Improving the Gitaly data migration tools is really important, because we expect customers to roll out HA incrementally, at least our early adopters. We don't expect them to just cut over from their current configuration to a new one; we expect them, and would like to partner with them, to instead move a couple of projects onto a high availability configuration and then progressively migrate. We have APIs for that, but they need some improvements, and we're currently working on those. That will also help other customers that aren't necessarily using high availability yet, but who need to re-shard repositories on large GitLab instances.
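As an illustration of the existing mechanism, an administrator can change a project's repository storage through the projects API, which schedules the repository to be moved. Treat the exact parameter and behaviour as an assumption for your GitLab version; the project ID, token, and storage name below are placeholders.

    # Hypothetical example: move project 42 to a storage shard named 'praefect'
    # (admin token required; parameter support varies by GitLab version).
    curl --request PUT --header "PRIVATE-TOKEN: <admin-token>" \
      "https://gitlab.example.com/api/v4/projects/42?repository_storage=praefect"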
And then the other piece of work that's happening in parallel, which is really important, is continuing to investigate transactional writes. The system I've described, involving a replication queue, is an eventually consistent design.
The transactional approach is lower level, at the Git level, writing a hook directly into Git rather than into the application, and we hope to learn more from that. We've already got a merge request open where we're doing some exploration, and that's here: this three-phase commit ref update experiment. I'm looking forward to seeing where that goes, and it's going to really inform our roadmap over the coming months, because once we have that, eventual consistency is really what will become our recovery mechanism.
So if a transaction fails, what we'll do is put a job onto the replication queue and say: hey, someone needs to go and recover this, or GitLab, go work out how to recover this repo, update it from the agreed correct version and replace the broken version. So the replication queue won't be thrown in the bin; it will take on a different but important purpose within our architecture. And then finally, and I'm really excited about this.
It's unrelated to high availability, but GitLab has been helping with the work on partial clone in Git, and we want to make partial clone for large files, using the blob size filter, available by default for everyone who uses GitLab. So we'll be working to enable that by shipping the latest version of Git, which includes performance improvements on the server side, and we'll be shipping a patch to add finer-grained control so that we can turn on just the blob size filter.
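Concretely, the blob size filter lets a client clone without downloading any blob over a chosen size up front; the large files are fetched on demand when they're actually needed. For example (repository URL is a placeholder):

    # Clone without downloading blobs larger than 1 MB up front;
    # large files are fetched lazily when checked out or otherwise needed.
    git clone --filter=blob:limit=1m https://gitlab.com/<group>/<project>.git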
People who are running the latest version, 12.10, will be able to use partial clone out of the box with the blob size filter. That's really exciting. We look forward to feedback from customers about how this feature works, and we look forward to iterating on it, both in GitLab and in the Git project itself, to make it work really well with large files, a bit like Git LFS.
If you want to learn more about this feature, there is a blog post on about.gitlab.com about how partial clone fetches only the files you need, and I recommend checking it out. It summarizes the work that's been done and the state of it today, and also provides a couple of links to issues where we'd love to hear more from you, our customers, about how we can make this feature even more useful.
So that's what we're working on in 12.10. It's a lot of things, and all of them are very exciting.