From YouTube: GitLab 13.0 Kickoff - Create:Gitaly
But I guess I'll summarize the state of what we hope to launch. The primary thing we're introducing to GitLab, compared to today, is the Praefect proxy router, which receives requests from the application and from users asking for Git data and then directs them to the appropriate node within a cluster of Gitaly nodes. Today you would have one Gitaly node for each shard; with high availability, you can have a cluster of nodes, so if one node goes down, another node will take over as primary and start servicing requests.
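To make the routing idea concrete, here is a minimal Go sketch of the kind of logic a proxy like Praefect performs: it holds the Gitaly nodes of one cluster, forwards requests to the current primary, and promotes a healthy secondary when the primary is down. The type names, node names, and failover rule are illustrative assumptions, not the actual Praefect implementation.

```go
package main

import (
	"errors"
	"fmt"
)

// Node represents one Gitaly node in a cluster.
type Node struct {
	Storage string // e.g. "gitaly-1"
	Address string // e.g. "tcp://gitaly-1.internal:8075"
	Healthy bool
}

// Cluster is a set of Gitaly nodes behind one proxy/router.
type Cluster struct {
	Primary int // index of the current primary
	Nodes   []Node
}

// Route returns the node that should serve a request, failing over
// to a healthy secondary if the primary is down.
func (c *Cluster) Route() (*Node, error) {
	if c.Nodes[c.Primary].Healthy {
		return &c.Nodes[c.Primary], nil
	}
	// Failover: promote the first healthy secondary to primary.
	for i := range c.Nodes {
		if c.Nodes[i].Healthy {
			c.Primary = i
			return &c.Nodes[i], nil
		}
	}
	return nil, errors.New("no healthy Gitaly node available")
}

func main() {
	cluster := &Cluster{
		Primary: 0,
		Nodes: []Node{
			{Storage: "gitaly-1", Address: "tcp://gitaly-1.internal:8075", Healthy: false}, // primary is down
			{Storage: "gitaly-2", Address: "tcp://gitaly-2.internal:8075", Healthy: true},
			{Storage: "gitaly-3", Address: "tcp://gitaly-3.internal:8075", Healthy: true},
		},
	}
	node, err := cluster.Route()
	if err != nil {
		panic(err)
	}
	fmt.Printf("routing request to %s (%s)\n", node.Storage, node.Address)
}
```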
So that means if you've got a backlog of replication jobs that haven't been completed and the primary goes down, there will be data loss when failover occurs. This is expected in this initial, minimal version of high availability, because in this first iteration we're favoring availability: if the primary has gone down and is not able to recover, the alternative would be to lock the replicas as read-only for that period, which essentially halts most work. It prevents bug fixes being merged to your production code and prevents all kinds of development work. So in most instances that would be considered an outage, because access to up-to-date data has been lost, which is why we want to favor availability for this first iteration.
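As a rough sketch of why a replication backlog implies data loss on failover: in an eventually consistent setup, a write is acknowledged once the primary has it, and copies to the secondaries are queued as replication jobs, so any job still pending when the primary dies describes data only the primary had. The Go below is illustrative pseudologic under those assumptions, not Praefect's actual replication queue.

```go
package main

import "fmt"

// ReplicationJob records that a change on the primary still needs
// to be copied to a secondary node.
type ReplicationJob struct {
	Repo   string
	Target string // secondary storage name
}

type Queue struct{ pending []ReplicationJob }

// Write acknowledges as soon as the primary has the change and
// enqueues replication to each secondary (eventual consistency).
func (q *Queue) Write(repo string, secondaries []string) {
	for _, s := range secondaries {
		q.pending = append(q.pending, ReplicationJob{Repo: repo, Target: s})
	}
	fmt.Printf("write to %s acknowledged; %d replication jobs queued\n", repo, len(secondaries))
}

// Failover shows what is lost if the primary dies before the queue
// drains: every still-pending job is a change the secondaries never received.
func (q *Queue) Failover() {
	fmt.Printf("failover: %d unreplicated changes are lost\n", len(q.pending))
	q.pending = nil
}

func main() {
	q := &Queue{}
	q.Write("group/project.git", []string{"gitaly-2", "gitaly-3"})
	q.Failover() // primary went down before replication completed
}
```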
This brings me to the second item I'd like to share with you, which we're also working on in the 13.0 release: continuing our investigations and experiments with strong consistency.
So what we're hoping to launch as generally available is our eventually consistent high availability solution, but we are already working on strong consistency. That means that when you write a commit or create a branch in a Git repository and GitLab returns a success code saying "I've created this branch" or "I've created this commit", that commit is already replicated onto multiple Gitaly nodes at that point in time. So as soon as the write is accepted, if a failover occurred there would not be data loss. This is favoring consistency over availability: we slow down the writes by making each one a consistent transaction, but when the write has occurred, it has occurred in multiple places, and that is really what we're aiming for. We've already done two investigations into two different approaches, and we've settled on what we think is a good minimal first iteration for exploring this in a more realistic implementation: we're going to be using pre-receive hooks to do it.
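A minimal sketch of the pre-receive-hook idea: each Gitaly node about to apply a push computes a vote (for example, a digest of the proposed ref updates), and the write only proceeds once a quorum of nodes have cast the same vote, so the change exists on multiple nodes by the time GitLab reports success. The voting function and majority quorum rule here are assumptions for illustration, not Gitaly's actual transaction API.

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// vote is what one node's pre-receive hook submits: a digest of the
// ref updates it intends to apply ("oldrev newrev refname" lines).
func vote(refUpdates string) string {
	sum := sha256.Sum256([]byte(refUpdates))
	return hex.EncodeToString(sum[:])
}

// reachedQuorum decides whether enough nodes agreed on the same change.
// A simple majority is assumed here; the real rule may differ.
func reachedQuorum(votes []string, clusterSize int) bool {
	counts := map[string]int{}
	for _, v := range votes {
		counts[v]++
		if counts[v] > clusterSize/2 {
			return true
		}
	}
	return false
}

func main() {
	// Example ref update for a push creating a new branch.
	update := "0000000000000000000000000000000000000000 1111111111111111111111111111111111111111 refs/heads/feature\n"

	// Three nodes run the pre-receive hook for the same push and vote.
	votes := []string{vote(update), vote(update), vote(update)}

	if reachedQuorum(votes, 3) {
		fmt.Println("quorum reached: accept the push; it exists on multiple nodes")
	} else {
		fmt.Println("no quorum: reject the push, nothing is half-written")
	}
}
```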
Then there's the monorepo case: in large organizations where you've got thousands of engineers working on the same repository, often a single server can become saturated. So even if you had a dedicated shard just for that primary repository, it's possible to saturate it, because no matter how big your server is, it will have finite CPU and memory. The idea of horizontally distributing reads across up-to-date replicas means that load will be spread better across the available computing resources of the cluster, rather than all landing on the primary.
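To illustrate the read-distribution idea: once a replica is known to be up to date, a read-only request (clones, fetches, CI checkouts) can be served by any such replica instead of always hitting the primary. The sketch below simply picks a random up-to-date node; the actual selection logic in Praefect may differ.

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
)

type Replica struct {
	Storage  string
	UpToDate bool // replica has all acknowledged writes for this repository
}

// pickReadNode spreads read traffic across every replica that is
// up to date, rather than sending all reads to the primary.
func pickReadNode(replicas []Replica) (string, error) {
	var candidates []string
	for _, r := range replicas {
		if r.UpToDate {
			candidates = append(candidates, r.Storage)
		}
	}
	if len(candidates) == 0 {
		return "", errors.New("no up-to-date replica available")
	}
	return candidates[rand.Intn(len(candidates))], nil
}

func main() {
	replicas := []Replica{
		{Storage: "gitaly-1", UpToDate: true},  // primary
		{Storage: "gitaly-2", UpToDate: true},  // caught-up secondary
		{Storage: "gitaly-3", UpToDate: false}, // still replicating
	}
	node, err := pickReadNode(replicas)
	if err != nil {
		panic(err)
	}
	fmt.Println("serving read from", node)
}
```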
So, in combination with strong consistency, which means that as soon as the write completes it's on multiple replicas, this improvement, a minimal iteration to explore distributing the reads, shows the two major areas of improvement we're looking at for high availability: strong consistency and horizontally distributing read load. Importantly, by tackling both of these, we will eliminate the need for NFS in the various configurations where our customers use NFS to replicate data between nodes. That wraps it up for high availability and the various prongs of our work as we seek to address it, and it brings me to managing large GitLab instances.
So again, looking at this diagram: as a GitLab instance grows very large, the shards can become unbalanced, and that's a problem because you want them to be balanced in terms of storage utilization, and ideally also in terms of resource utilization. This is something we want to optimize, particularly for GitLab.com, to make sure we're delivering great performance to all our customers, but it's also important simply so we can make sure there's available storage on the servers and they don't become full.
At the moment we do this through a manual rebalancing process via the API, which allows you to move repositories from one shard to another, but we're hoping to automate it. One of the easiest ways to automate rebalancing is to do it when a repository is created: you have to choose a shard to put the new repository on, and at the moment we have a very simple round-robin approach where we just choose any of the enabled shards.
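For the repository-creation case, a round-robin shard picker is simple to sketch: keep a counter and hand out each enabled shard in turn, so new repositories spread evenly across shards over time. This is an illustrative version of the approach described, not GitLab's actual storage-selection code.

```go
package main

import (
	"fmt"
	"sync"
)

// RoundRobinPicker cycles through the enabled shards so that
// consecutive new repositories land on different shards.
type RoundRobinPicker struct {
	mu     sync.Mutex
	next   int
	shards []string
}

func (p *RoundRobinPicker) Pick() string {
	p.mu.Lock()
	defer p.mu.Unlock()
	shard := p.shards[p.next%len(p.shards)]
	p.next++
	return shard
}

func main() {
	picker := &RoundRobinPicker{shards: []string{"storage-1", "storage-2", "storage-3"}}

	// Five new repositories get spread across the three enabled shards.
	for i := 1; i <= 5; i++ {
		fmt.Printf("repository %d -> %s\n", i, picker.Pick())
	}
}
```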