From YouTube: 20200520: Gitaly Cluster demo
All right, so before this I went ahead and just went through the docs to set everything up, and I did a little test project, so I think things should be working. Let me log in to Grafana... oh, actually, I didn't go through the Grafana steps. Let me do that real quick.
I think there are probably plans to do a bit of refactoring. I don't think we'll do a straight rename, because we probably want to keep a Praefect file around for, like, config documentation. This is like a setup guide. But yeah, I've been talking with Evan Read about refactoring this further. There's been a lot of changes to the docs all over GitLab recently, so I was just giving it a minute to settle down before I started iterating again.
Right, so do we want to demo it? I think the horizontal reads thing got merged, right? Anything new we want to try today?
Yeah, let's take a look at read distribution, that'd be cool. And then, once we've done that, I'd like to spend some more time looking at data loss. In particular, it'd be great to validate that we can see metrics and such being generated for non-repository... like, non-project-repository data: wikis, designs, snippets at a project level, personal snippets. I think they should all be covered, but our demos haven't really covered those other kinds of repos.
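For reference, read distribution in GitLab 13.0 was gated behind a feature flag; this is a minimal sketch of turning it on, assuming an Omnibus install and the flag name from the docs of that era (`gitaly_distributed_reads`):

```shell
# Enable Gitaly Cluster read distribution via the Rails runner.
# Flag name is per the GitLab 13.0 docs; verify against your version.
sudo gitlab-rails runner "Feature.enable(:gitaly_distributed_reads)"

# Check the current state of the flag.
sudo gitlab-rails runner "puts Feature.enabled?(:gitaly_distributed_reads)"
```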
All right, so I guess... do we want to use the www-gitlab-com one as our test project, or a smaller one?
Okay, all right, in that case...
I think it'd be useful. Imagine, like, a Sentry error in the application or something, so we start seeing some errors and we're like, what is going on here? And we discover that it's routing... like, maybe what's going on is it's routing requests to a stale node because read distribution is going wrong. We'd have no way of isolating and detecting where the Git error is coming from.
Distributed reads: enabled. Okay, all right, well, I think we should move on. James, you wanted to see some more failure... loss scenarios, I mean data loss scenarios? Yeah, so I think what would be good is to turn one of the Gitaly nodes off and then create some snippets at the project and personal level, upload a design, push a wiki change, and so on. That's creating data loss across multiple different repositories that are part of the same project. Then we see what the output of the dataloss command is and whether they're all individually reported.
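The scenario just described can be sketched with the Praefect `dataloss` subcommand from the 13.x tooling; the paths and the `default` virtual storage name below are Omnibus defaults and are assumptions about this particular setup:

```shell
# Take one Gitaly node out (run on that node) so new writes lose a replica.
sudo gitlab-ctl stop gitaly

# ...create project/personal snippets, a design, a wiki page in the UI...

# On the Praefect node, list repositories with outdated replicas.
sudo /opt/gitlab/embedded/bin/praefect \
  -config /var/opt/gitlab/praefect/config.toml \
  dataloss -virtual-storage default
```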
Yeah, there's an issue for 13.1 to remove the job count, because the job count is misleading and, like, it doesn't really provide the information you might think it does. And there's also inaccuracies in the methodology used, where, like, if there are dead jobs and then a successful replication, it still counts the dead jobs even though it's... oh.
There's an improvement scheduled for 13.1, which is to list which nodes the jobs have failed on: whether it's failed on all nodes, or, in this case, it should only be, let's think, gitaly-2. Because right now it doesn't tell an admin where the data loss is. If you're looking at it at a system level, there actually hasn't been data loss yet, because there is a replica; there are, like, two copies of this repo, two copies of those changes.
But there's one node that's behind, missing this data, right? So the output of this is misleading in multiple regards. The other aspect, which there's another issue about, is the fact that, as an administrator, it's useful to know that there are four distinct repos that are missing data, but it's very unhelpful not knowing which they are. It might matter a lot if the five repos missing data are the repos that everyone's working on, like your gitlab-org, but if they were other repos it might matter less.
But the first thing that an administrator is going to want to do is copy and paste the output of this, put it in an email, and send it to their boss, like: okay, we've got 20 repos that lost data, and these are the ones we're going to work on recovering. Currently an admin would have to go and convert all these hashed IDs to useful IDs to communicate it to anyone who's not a sysadmin. So there's another issue about that as well.
Oh no, that's why hashed storage is there. The issue is just that these are not human readable, right? So if I need to go figure out which GitLab project is behind it, it takes a lot of steps to figure that out. I think what James is saying is either we make an API call or we just display that front and center.
So... I don't think any of those things will exist, because we created a new wiki, we created new designs; they're all just gonna be gone, right? Okay, well, what we can do is... how do we quickly run the recovery command? We bring up gitaly-1, run the reconcile command, and see if it recovers everything.
This is how all of our subcommands work right now: we just rely on the user providing all the right connection information in the config file, the Praefect config file. That's not always going to work, so maybe we should have an override where you can say, use this IP address instead of what's in the config file, right?
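As things stand, the only knob is which config file the tool reads; the connectivity-check subcommand is an example (a sketch, Omnibus default paths assumed):

```shell
# All Gitaly addresses come from inside the config file passed here;
# there is no per-invocation address override today.
sudo /opt/gitlab/embedded/bin/praefect \
  -config /var/opt/gitlab/praefect/config.toml \
  dial-nodes
```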
I want to be respectful of everyone's time, so I think we should quickly review the agenda and assign action items. I've been a little lax in being clear about who should do what by when. So let's go through the agenda. The first thing we noticed was that the dashboard should add total requests, so the error rate has context. Paul, can you create an issue and try to address that in time for the next demo?
Distributed reads: I will investigate the status of the distributed reads docs, follow up, and make sure that's addressed as a priority, so that we don't leave something half-finished. Same with the metrics: make sure that metrics are addressed before Pablo moves on to what's next. That's one. The distributed reads one, that was Pablo.