From YouTube: 2020-03-20: High availability Gitaly demo
Description
Development branch of a persistent job queue in PostgreSQL, allowing one Praefect to go down while another picks up the unhandled work.
Of course, they'll run migrations on it as a preparation step. I have checked whether it works at all, and yes, it looks like it works.
What we have so far is the table with the jobs itself. It's what we actually stored in memory in the past implementation. And the details, the actual details of the job, are now stored as JSON.
Unfortunately it's not formatted properly here, but what we have is all those fields we saw before in memory; now we have them in SQL storage, so most of them are dates. We also have the relative path for the project, where it is stored, and we have source and target storages: from what storage the replication must come and to which it must go. And we also have metadata; it was added by me recently. For now we have only the ID here, as it's the only parameter we have so far. What else, yeah, well.
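The job details column could hold something like this, a minimal sketch in Go, assuming hypothetical struct and field names rather than the actual Praefect types:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // ReplicationJob is an illustrative job payload; field names are assumptions.
    type ReplicationJob struct {
        Change            string `json:"change"`              // e.g. "update"
        RelativePath      string `json:"relative_path"`       // repository path on disk
        SourceNodeStorage string `json:"source_node_storage"` // where the data comes from
        TargetNodeStorage string `json:"target_node_storage"` // where it must be replicated to
    }

    func main() {
        job := ReplicationJob{
            Change:            "update",
            RelativePath:      "@hashed/ab/cd/abcd.git",
            SourceNodeStorage: "praefect-internal-1",
            TargetNodeStorage: "praefect-internal-2",
        }
        // Storing this as JSON keeps the row human-readable when inspecting the table.
        b, _ := json.Marshal(job)
        fmt.Println(string(b))
    }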
So if we generate it internally in Praefect, there must be some prefix for that. I don't remember what, but yeah, we have that for it. I think Paul can say more about that, yeah.
Okay, let's move on to other fields. As always, it has an ID, which is just a number and unique. Of course we have a state; it shows what the state of this replication job is. The initial state is ready, and when the job is ready it can be consumed and replication can happen.
So if we have, let's say, two replication jobs for the same storage but the repositories are different, they can run in parallel if we, let's say, have two Praefects. And if we have two jobs or more for the same storage and for the same repository, this column allows us to run those replication jobs only sequentially.
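As a rough sketch of what such a table could look like, in Go with embedded SQL; this is illustrative only, not the actual Praefect schema, and all table and column names here are assumptions. The lock identifier is derived from the target storage and repository so jobs for the same pair are serialized:

    package datastore

    import (
        "context"
        "database/sql"
    )

    // createQueueSketch creates an illustrative job queue table.
    func createQueueSketch(ctx context.Context, db *sql.DB) error {
        _, err := db.ExecContext(ctx, `
            CREATE TABLE IF NOT EXISTS replication_jobs (
                id          BIGSERIAL PRIMARY KEY,          -- unique, auto-incrementing
                state       TEXT NOT NULL DEFAULT 'ready',  -- ready -> in_progress -> completed/failed
                job         JSONB NOT NULL,                 -- the job details, stored as JSON
                lock_id     TEXT NOT NULL,                  -- e.g. "<target storage>|<relative path>"
                meta        JSONB,                          -- e.g. the ID added recently
                created_at  TIMESTAMPTZ NOT NULL DEFAULT now(),
                updated_at  TIMESTAMPTZ NOT NULL DEFAULT now()
            )`)
        return err
    }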
It links both the Praefect and the actual job. Now it is empty, because we have no running jobs. Once a new job is created, the process will create a lock if it's not in this table yet, and when the job is consumed for processing, a row in the job lock table will be created. So no other Praefect will be able to run any other jobs for this storage and this repository in the meantime.
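A minimal sketch of that idea in Go, with assumed table and column names rather than the real implementation: one lock row per storage-and-repository pair, flipped to acquired while a job for that pair is being processed, so a second Praefect skips it.

    package datastore

    import (
        "context"
        "database/sql"
        "errors"
    )

    // ErrAlreadyLocked is returned when another Praefect is already replicating
    // the same repository on the same storage.
    var ErrAlreadyLocked = errors.New("storage/repository is locked by another job")

    // acquireJobLock tries to mark the (storage, relative_path) pair as busy.
    func acquireJobLock(ctx context.Context, db *sql.DB, storage, relativePath string) error {
        // Create the lock row the first time a job for this pair shows up
        // (assumes a unique index on (storage, relative_path)).
        _, err := db.ExecContext(ctx, `
            INSERT INTO job_locks (storage, relative_path, acquired)
            VALUES ($1, $2, FALSE)
            ON CONFLICT (storage, relative_path) DO NOTHING`, storage, relativePath)
        if err != nil {
            return err
        }

        // Flip it to acquired only if nobody else holds it.
        res, err := db.ExecContext(ctx, `
            UPDATE job_locks SET acquired = TRUE
            WHERE storage = $1 AND relative_path = $2 AND NOT acquired`, storage, relativePath)
        if err != nil {
            return err
        }
        n, err := res.RowsAffected()
        if err != nil {
            return err
        }
        if n == 0 {
            return ErrAlreadyLocked
        }
        return nil
    }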
Yeah, I remember reading articles about this, and they also said that if you do something that takes a long time while holding the lock, that's probably also not a good thing. You want to have shorter queries on the database, where you update a field to say "I'm holding the lock", and you update it again when you're done, yeah.
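A sketch of that pattern in Go, assuming the hypothetical table and column names from above: each statement stays short, and the slow replication work happens between them, outside any transaction holding a lock.

    package datastore

    import (
        "context"
        "database/sql"
    )

    // processJob marks a job as in progress, runs the slow replication outside of
    // any database transaction, then records the result with a second short update.
    func processJob(ctx context.Context, db *sql.DB, jobID int64, replicate func(context.Context) error) error {
        // Short statement: "I'm working on this job".
        if _, err := db.ExecContext(ctx, `
            UPDATE replication_jobs SET state = 'in_progress', updated_at = now()
            WHERE id = $1 AND state = 'ready'`, jobID); err != nil {
            return err
        }

        // The long-running part (gRPC calls to Gitaly, network IO) happens here,
        // with no database lock held.
        state := "completed"
        if err := replicate(ctx); err != nil {
            state = "failed"
        }

        // Short statement: record the outcome and release the job.
        _, err := db.ExecContext(ctx, `
            UPDATE replication_jobs SET state = $2, updated_at = now()
            WHERE id = $1`, jobID, state)
        return err
    }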
From this blog post I'm getting the impression that the moment you are talking to a different system, where you're doing IO or you're doing network calls, you're introducing an amount of uncertainty in how long it's going to take, so it's probably not appropriate to hold a lock at the Postgres level. Yeah.
You're not supposed to do that; very bad things can happen if you do that. We actually have a validation in Gitaly, when it boots, that none of the storages defined in its config file are nested, and the only reason this is not tripping that validation is because it's two independent Gitalys. But this is not... this is not the same layout, yeah.
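A rough sketch in Go of what such a check can look like; this illustrates the idea only and is not Gitaly's actual validation code, and the function name is an assumption.

    package config

    import (
        "fmt"
        "path/filepath"
        "strings"
    )

    // validateNotNested returns an error if any storage path is nested inside another.
    func validateNotNested(storagePaths []string) error {
        for i, a := range storagePaths {
            for j, b := range storagePaths {
                if i == j {
                    continue
                }
                ca, cb := filepath.Clean(a), filepath.Clean(b)
                // A trailing separator avoids treating /data/storage10 as nested in /data/storage1.
                if ca == cb || strings.HasPrefix(cb+string(filepath.Separator), ca+string(filepath.Separator)) {
                    return fmt.Errorf("storage paths %q and %q are nested", a, b)
                }
            }
        }
        return nil
    }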
It's in the same file system. Yes, no, it's racy, yeah, that's why you don't want this. I think what we could do is... I mean, not a lot of people are using multiple Gitalys in GDK anyway, so we just have to update the code that generates these paths and create repositories.1 or something at the top level, to get around it. Yeah.
We were actually talking about... sorry, I'm gonna hijack this for a second, because we're having several helpful conversations with several people about regexes for relative paths, and this is an example of something that is not the normal format for hashed storage, but that is a valid path in Gitaly, and things work correctly, because Gitaly doesn't care that the path looks like this.
So if we would ever... if we want to have very strict rules about what the relative paths in Gitaly are supposed to look like, then this is the type of thing you run into when you go into that rabbit hole. I'm not saying we can't go into that rabbit hole; I just want to point out that it might be deep.
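For reference, a sketch in Go of what a strict hashed-storage pattern might look like; the exact pattern is an assumption, and the point above is precisely that real, valid Gitaly paths do not have to match it.

    package praefect

    import "regexp"

    // hashedStoragePath is an illustrative pattern for the common hashed-storage
    // layout: @hashed/<2 hex>/<2 hex>/<64 hex>.git. Valid repositories can still
    // live at paths that do not match this, which is why a strict rule is risky.
    var hashedStoragePath = regexp.MustCompile(`^@hashed/[0-9a-f]{2}/[0-9a-f]{2}/[0-9a-f]{64}\.git$`)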
So then every time Prometheus scrapes, it just gets the current number. So I'm not sure if we need to have this right at the start, because it depends a bit on how this looks when we merge it. But if we merge something where SQL is in there, but it's still sitting next to the old in-memory store, then we can have a separate merge request where we add this interface. But I think a custom probe... I think you can have a custom Prometheus probe, and that might be the most natural way to do it.
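A sketch of that approach in Go, with an assumed metric name and the hypothetical table from earlier: a gauge whose value is computed at scrape time by querying the queue, so every scrape just reports the current number.

    package praefect

    import (
        "database/sql"

        "github.com/prometheus/client_golang/prometheus"
    )

    // queueDepthGauge reports the number of jobs still waiting to be processed.
    // The callback runs on every Prometheus scrape, so the value is always current.
    func queueDepthGauge(db *sql.DB) prometheus.GaugeFunc {
        return prometheus.NewGaugeFunc(prometheus.GaugeOpts{
            Name: "praefect_replication_queue_depth", // illustrative metric name
            Help: "Number of replication jobs in the ready state.",
        }, func() float64 {
            var n int64
            // Errors are ignored for brevity; a real probe should handle them.
            _ = db.QueryRow(`SELECT COUNT(*) FROM replication_jobs WHERE state = 'ready'`).Scan(&n)
            return float64(n)
        })
    }

The returned gauge could then be registered with prometheus.MustRegister and exposed on the existing metrics endpoint.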
Would it be possible for us to demo that if a Praefect comes online after it had downtime, it will resume with the job queue? So basically, let's say we have a way to queue jobs, and while it's handling that we kill Praefect and boot it again, and we see the change of status. Is that possible, or am I now stretching what can be done in this demo?
Having a rapid backoff might not be bad, because, anecdotally, I have the feeling that we may have some patterns where we create a lot of jobs in a row. And if the replication, if the backoff, runs a little later, then you might actually grab a bunch of jobs that got created in the meantime. Of course, we don't want to do it too much, but...
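A minimal sketch of that kind of polling loop in Go, illustrative only: after an empty poll the pause grows, but it is capped, and it resets as soon as work is found, so a burst of jobs created in a row is drained quickly.

    package praefect

    import (
        "context"
        "time"
    )

    // pollWithBackoff calls dequeue repeatedly. After an empty poll it waits,
    // doubling the pause up to a cap; as soon as work is found the pause resets.
    func pollWithBackoff(ctx context.Context, dequeue func(ctx context.Context) (processed int, err error)) error {
        const (
            minPause = 100 * time.Millisecond
            maxPause = 5 * time.Second
        )
        pause := minPause
        for {
            n, err := dequeue(ctx)
            if err != nil {
                return err
            }
            if n > 0 {
                pause = minPause // found work: poll again almost immediately
                continue
            }
            select {
            case <-ctx.Done():
                return ctx.Err()
            case <-time.After(pause):
            }
            if pause *= 2; pause > maxPause {
                pause = maxPause
            }
        }
    }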
If, let's say, we have multiple Praefects running and there are a bunch of replication jobs in the queue, will they be processed in the proper order and so on? Whether the locking works correctly and, overall, how Praefect can handle a lot of updates or any other kind of replication jobs. So I think it's only the start of it.
One is that the states are strings; they could be enums with ints and that would be smaller, but I don't know if we care right now. Also, the jobs, storing them as JSON is maybe a bit big. We could compress them or we could use protobuf, but we just saw that it's really nice for debugging that you can just type JSON strings, so it's probably appropriate for where we are right now to store them as JSON.
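If we ever wanted the smaller representation, a sketch of it in Go could look like this (illustrative names; on the database side it could be a Postgres ENUM or a SMALLINT, and the readable plain-string choice is what we have today):

    package datastore

    // JobState as a small integer instead of a string; values are illustrative.
    type JobState int8

    const (
        JobStateReady JobState = iota
        JobStateInProgress
        JobStateCompleted
        JobStateFailed
    )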
I think the assumption I made when I laid out the configuration of this stuff is that we have our own database. So all the GitLab... all the GitLab data is going to be in a database called gitlabhq_something, and we have praefect_something, so within that it's sort of implied that we're in the Praefect namespace.
Thanks, thanks for demoing, Pablo. I think it was really good to see. Also how the tests on the database work and, for example, the attempts, that it counts down instead of up, for example; that was very insightful, at least for me. Yeah, and I guess a good weekend to everyone, and stay safe while you stay indoors, and see you all next week, bye.