From YouTube: 20200527: Gitaly Cluster demo
Description
Looking at read distribution and observability
Yeah, actually, Paul, I was at Bolsa Chica yesterday. The wetlands? No, the beach. Are you with the...
I have friends who are out there. Okay.
So yesterday my daughter shoved my son and he ended up splitting his eyebrow open. Oh.
In theory, every Gitaly node should be emitting Prometheus metrics about the number of read ops. So, just looking at the read rate, we should be able to see the reads coming in to the different nodes if they're getting round-robined around. So rather than looking at Prometheus directly, or this special Praefect dashboard, we can just look at the per-node dashboard that's inside of the Praefect cluster.
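The per-node read chart described here could be sketched as a Prometheus query over the standard go-grpc-prometheus counter that Gitaly exposes; this is a sketch, and the exact metric and label names are assumptions that may differ per scrape config:

```promql
# Sketch: RPC rate per Gitaly node over the last 5 minutes.
# grpc_server_handled_total is the standard go-grpc-prometheus
# counter; "instance" comes from the Prometheus scrape config.
sum by (instance) (
  rate(grpc_server_handled_total[5m])
)
```

If reads are being round-robined, this rate should be roughly balanced across instances rather than concentrated on one node.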
So I think we should be able to find a way to do it. I think we saw a read-only mode, but maybe we can poke around that a bit more, and yeah, I'd like to look at the Gitaly dashboard.
A little bit more. There is a Gitaly dashboard that ships with Omnibus that just shows standard CPU usage and the like. This...
There's the read-only option for marking things read-only; there's the read distribution option. Are there any other options that are missing? The reason I ask is that's a lot of different feature interactions, and I think we should probably make it a priority to start reviewing those and removing the ones that are safe to remove, because that's just a lot fewer code paths to steer.
Yeah, it's an issue with regression testing. I created an issue about this, because we're only testing certain combinations of features, which is a problem because we don't know that something conflicts with another feature until we remove a feature flag or make it the default in the configuration.
I would think anything that is off is the default, because the way the config file works is that the zero value of a configuration field is what gets set if you don't provide anything in the TOML file. So if something is false, that's always the zero value; for example, an empty string is a zero value. So I'm over here at automatic failover and leader election. So we want to enable the Postgres one, because I don't think this is in the above instructions, right?
Ah, because the confusing bit is we're seeing instance... just the localhost one. We would be expecting to see the other three Gitalys, which are not localhost, showing up in this dashboard, right? Yeah. So this makes sense: this chart, as it is, is just showing the local Gitaly.
That second graph is showing request rate per Gitaly node, so instance one is still receiving all the read operations. I've got a loop running in my bash shell, just making a blob request over and over and over again, and that's showing that the request is always going to node one. So something's not quite working, eh?
We should think about what the right chart is for observing this, because I'm not sure that... well, I guess, yeah. You probably want to see that Praefect is diverting traffic at all, but more useful is: are the request rates roughly balanced across all the Gitaly nodes? So really you just want this kind of chart in the Praefect dashboard.
So here's an interesting idea: let's see what happens if I start pushing writes. I've got my one-liner and I'll just start pushing new refs. We should see, because of the replication job, everything should start pointing back to the primary, because the replica will always be behind. So reads shouldn't go to them if they're behind, right?
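The expectation above can be sketched as a toy routing function; this is a simplified model under the assumption that Praefect tracks a replication generation per node and only considers up-to-date replicas for reads (the names and generation scheme are illustrative, not the real implementation):

```python
import random

def route_read(primary, replica_generations, primary_generation):
    """Pick a node to serve a read.

    Replicas whose generation has caught up to the primary's are
    eligible; if none are up to date, the read falls back to the
    primary.
    """
    up_to_date = [node for node, gen in replica_generations.items()
                  if gen >= primary_generation]
    if not up_to_date:
        return primary
    return random.choice(up_to_date + [primary])

# While writes are flowing, replicas lag the primary's generation,
# so every read routes back to the primary.
lagging = {"gitaly-2": 4, "gitaly-3": 3}
assert route_read("gitaly-1", lagging, 5) == "gitaly-1"
```

This is why a steady write load should pull the read-distribution chart back toward the primary node, as the demo goes on to check.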
Yeah, ten percent. Well, I don't know what to expect, but that's not a ton of load we're putting on it. Yeah.
Postgres levels, I guess. I'm more interested in, like, the RPC rate per second, because each one ought to correspond to one Praefect request, say one Gitaly request, one Gitaly making one read, I think.
Looks like four per second: yeah, uh-huh. That's not just reads; that's aggregate.
I think that's pretty useful. We didn't look at read-only mode, but I think that's alright; I think this was pretty useful. So just to wrap up the action items: we need to check if there are issues for all these config options so we can start removing them, so I'll post a link to all the ones that I can find, and then maybe we can create others for the ones that we can also remove.
I think there's a bunch of fixes that need to be done to this GitLab Omnibus Gitaly dashboard, either to the dashboard itself or to the configuration, the scraping config in the cluster setup, so that this dashboard works out of the box. And it looks like we're already adding read distribution charts to the Praefect one, so that's all right; we just need to consider this one.
All right, well, it looks like we should try and get read distribution up and running; it looks pretty good. Let's try and get it on by default and get some charts in there. So I think, again, the key action item is: we've got a lot of the building blocks, but we just don't have observability out of the box. So there's just a lot of work with getting dashboards working and making sure we can see what we need to see, to be confident things are happening.