From YouTube: How Gitaly fits into GitLab: Episode 1 – Gitaly client
Description
A 1-hour training video for contributors new to GitLab and Gitaly.
Overview of GitLab backend processes, gitlab-rails deep dive: Gitaly config in gitlab-rails, SQL data model, overview of how Gitaly calls get made via GitalyClient.call.
Recorded 2019-02-21
Note that this process is unique to development: in development we have webpack doing live recompilation of the JavaScript assets. You don't see this in production, and I won't be talking about it at all, because the GitLab JavaScript frontend doesn't talk to Gitaly directly, so I think we can ignore it in this discussion. Then we have Unicorn, and Unicorn sort of belongs with Sidekiq, so I'm going to move it up here. And then down there we have GitLab Pages; ignore what that is.
We could be using something other than Unicorn, but Unicorn happens to be what we use in GitLab, and we've been using it for so long that we tend to confuse the Rails web server processes with Unicorn, because Unicorn happens to be the name of the specific, the actual, process that hosts the Rails application. In some ways the architecture of Unicorn is relevant to what we're doing, but most of the time it's something you can ignore. We're doing experiments to replace Unicorn with Puma, which is another Ruby web application
server. I don't know what the status of those experiments is; that's a separate topic. But I wanted to point this out, because really you should think of this as "web", and this as "background jobs", but the actual programs we use to do that are called Unicorn and Sidekiq, so people tend to talk about Unicorn and Sidekiq around here all the time. Those are, I guess, their brand names, and they don't describe the actual generic purpose.
It would also be possible for us to replace Sidekiq, but there are no plans to do that that I'm aware of.
So, one more word about Workhorse. Workhorse is a weird concept that's a bit unique to GitLab. It is a reverse proxy that sits in front of Unicorn, so it's actually tightly coupled to Unicorn, and every request that reaches Unicorn, almost every request, goes through Workhorse first, and Workhorse modifies it, or does its own little bit of work with the request,
if it feels like it, and otherwise it just passes it through as a reverse proxy to the Rails backend, and then the response also passes back through Workhorse. So there are complex interactions going on between these two that are a later topic, because Gitaly actually has a role in this, of course.
Exactly, but Workhorse doesn't know anything about where Gitaly is, so every time it talks to Gitaly, it first has to talk to Unicorn to hear which Gitaly server to talk to and what to do. These interaction flows: if you're a continued contributor, at some point you're going to run into this, and it's our job to know how this stuff works, but we're going to ignore that for now and go back and focus on the Rails app.
For whatever reason, and I don't really know how to explain this if you're new to Rails, these things live in the same code base. So the GitLab CE repository, gitlab-org/gitlab-ce, is the Rails code base plus the frontend code base. In there is a big Ruby application with lots of files, and both of these things are processes that load that entire application into memory, so they share a lot of code. Both of them connect directly to Redis and to Postgres.
Cool. So the main configuration, almost all configuration, of this Rails application is in a single YAML file in the config subdirectory, and that is this gitlab.yml file. It's big, and actually a lot of configuration these days is in the database, because this YAML file is stored on disk on your application server and the Rails process is not allowed to edit this file. So I guess nowadays this is more sort of bootstrap configuration: stuff that the application needs to know before it can even boot up.
In the admin area we have General, and we have Preferences. Oh, this is general repository settings; I don't understand why these buttons are different. Oh, General is broken down into these areas. So here is stuff about repository checks and housekeeping, which is actually relevant to Gitaly. And somewhere, maybe it's in General, are the Gitaly timeouts: the default timeouts that apply to Gitaly requests made from within a web context.
Oh, Preferences, Gitaly: here we go. Here we have a little bit of config, and this is stored in the database. I think the way to look at this is that if we can get away with storing it in the database, we do that nowadays, just because it makes it so much easier to apply config changes on GitLab.com. Otherwise you need to edit the config file on disk on every application server and restart the application, which is a major hassle in the case of GitLab.com.
Actually, gRPC allows you to set deadlines on calls, and, I've never checked this, but the way I understand it, those deadlines can then be enforced by either the client or the server. Gitaly does not set its own deadlines; by default there are no deadlines in Gitaly, and if there is a deadline, it's only because the client asked for it. So we have a fairly trusting, permissive thing going on there. Okay, to go back to your question, John: I think we reload config from the database on each request.
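The deadline behavior described here can be sketched in plain Ruby. This is a simulation of the idea, not the grpc gem's API; in real gRPC the client attaches an absolute deadline to each call, and either side may enforce it, while a call with no deadline can run indefinitely.

```ruby
# Simulated client-enforced deadline: the client computes an absolute
# deadline up front and gives up once it has passed. If the caller sets
# no timeout, nothing is enforced, matching the "no deadlines unless
# the client asks" behavior described above.
def call_with_deadline(timeout_seconds = nil)
  deadline = timeout_seconds && (Time.now + timeout_seconds)
  result = yield
  raise "DeadlineExceeded" if deadline && Time.now > deadline
  result
end

call_with_deadline(5) { :commit_found } # => :commit_found
```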
So if you make a change, it should probably start having an impact on all requests after the change is committed to SQL. Okay, I'm not super sure about this, because I'm not active in this part of the app. So this is one part of config, but as far as Gitaly is concerned, this is kind of small.
No, it's called repository_storage, and that has a string in there, in this case called "default", and this somehow gets mapped to a Gitaly server, which I think is one of the things you wanted to know about, Paul. Yeah, so we'll get to that, or I want to get to that, while we're talking about this.
So this is an important part of config as far as Gitaly is concerned. Every repository belongs to a project. This actually goes as far as that repositories used to, or still, don't really exist in the SQL database anywhere. We just have records of projects, and implicitly each project has a repository attached to it.
So looking up a repository starts by looking up a project and saying project.repository, and then you get your repository. And every project actually has two repositories, because there is a normal repository and a wiki repository, and in the data model these things just hang off projects. I guess that's what happens here in the data model, and we can look back at it.
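That implicit model can be sketched in plain Ruby: repositories have no table of their own, and each project derives its repository and wiki repository from its own path. Class and method names here only loosely mirror the real app, and the disk-path convention shown is illustrative.

```ruby
# Repository is not backed by SQL; it is derived from the project.
class Repository
  attr_reader :disk_path
  def initialize(disk_path)
    @disk_path = disk_path
  end
end

# Project stands in for the Active Record class with a SQL table.
class Project
  attr_reader :full_path
  def initialize(full_path)
    @full_path = full_path
  end

  # Every project implicitly has two repositories:
  def repository
    Repository.new("#{full_path}.git")
  end

  def wiki_repository
    Repository.new("#{full_path}.wiki.git")
  end
end

project = Project.new("gitlab-org/gitaly")
project.repository.disk_path      # => "gitlab-org/gitaly.git"
project.wiki_repository.disk_path # => "gitlab-org/gitaly.wiki.git"
```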
So this is actually the mapping we were talking about, Paul, where you wanted to know how the storage name relates to a network address. This page is generated from data in the config file, so this is actually data that is written on disk on each GitLab application server: there's a config file that needs to have this same mapping, even if we had more than one Gitaly server.
Another thing worth noting is that this thing is on a Unix socket. That is because Unix sockets have some nice properties for this stuff: you don't have to worry about port clashes, and it's easy to restrict access using directory permissions. I think this one is not very restrictive, because I suspect this directory is open, but that wouldn't be the case in a standard, single-server production installation.
Good question. I think "default" is what we get in Community Edition; the Enterprise Edition has more here, and in Community Edition nothing happens there. So maybe I should have booted Enterprise Edition. I don't want to do that now, because then we'd be sitting here waiting five minutes for the whole thing to boot. But in Enterprise Edition you can designate which repository servers, which storages, get used to create new repos, and this is a very, very bare-bones mechanism that does just enough for what we need on GitLab.com and nothing more.
This is what happens on GitLab.com: we notice that we are running out of disk on our existing Gitaly servers, we instantiate a couple of new ones, and then we configure the admin panel to say all new repositories will be round-robined onto one of these three new ones. Then at some point the dashboard says: oh, our disk is full on these three new ones, and then we start over again. I think this is in the docs.
So, historically, a storage was just a mount point, a subdirectory somewhere on the GitLab server. You can see it in exactly this picture. It's outdated, but ignore what git_data_dirs is, that's from Omnibus; things would have looked like this, perhaps, in the old situation. And when we introduced Gitaly, we wanted to have an option where we say: we run Gitaly close to the app, on the same server as the Rails application, and we don't worry about these mount points.
So then one Gitaly process would have to serve all these three things at the same time. So in the Gitaly config file you can define multiple storages that it serves. I don't know how to explain this clearly; it's too much flexibility for our own good, to be honest, but we sort of ended up here because of the way we had to transition from the existing situation.
Yes, here we go. I don't know why this is called "advanced", but this is where it's defined. So there's a subsection repositories: storages, and this is an array of hashes. No, it isn't, it's a hash. This is YAML; I have to think for a moment about what it means. This is a hash called storages, and it has entries. In this one there's only one; if there were another one, it would look like this.
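The storages hash being described can be sketched as a YAML fragment like the following. This is a trimmed, illustrative example rather than the exact schema of any particular GitLab version; the paths, the second storage entry, and the addresses are made up.

```ruby
require "yaml"

# Illustrative gitlab.yml fragment with two storage entries; the
# second entry shows what "another one" would look like.
GITLAB_YML = <<~YAML
  production:
    repositories:
      storages:
        default:
          path: /var/opt/gitlab/git-data/repositories
          gitaly_address: unix:/var/opt/gitlab/gitaly/gitaly.socket
        storage2:
          path: /mnt/storage2/repositories
          gitaly_address: tcp://gitaly-2.internal:8075
YAML

storages = YAML.safe_load(GITLAB_YML).dig("production", "repositories", "storages")
storages.keys                          # => ["default", "storage2"]
storages["storage2"]["gitaly_address"] # => "tcp://gitaly-2.internal:8075"
```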
I think maybe it's a log; I don't remember exactly. And if you look at the picture again, you can sort of see what happened here, right? Because the old situation was name plus path in GitLab's config, and now in GitLab's config it is name plus Gitaly address, and then in the Gitaly config we say name plus path again, right.
This is actually how it works, yeah? So I was going to ask: in the case where one Gitaly server has to do something with a repository on a different storage, one that doesn't live locally, then it has to make a network call. How does it figure that out?
But what I mean is: what if, in this situation, okay, so with this storage, the Gitaly address was something else?
It's supposed to be like this, right: this implies that there is a second Gitaly process running with its own unique config file. So the first Gitaly process would have this config file, and then the second one would have its own config file; it would have this, no, come on, this, with the name "storage3", and this can be anything, right.
Otherwise nothing works. For instance, when you make a fork of a repository right now, that is a naive full clone, and because we don't know where the other repository is, we just default to making a network call to whatever Gitaly server hosts the original, and we do a clone like that. Exactly how this network call works is very interesting, but we should probably not go into it now, unless you both want to, but I think...
Okay, this is not very clear, but we allow addresses that look like unix: followed by a path, and we have defined this custom, made-up scheme where you can say tcp://, and then you get an unencrypted... well, it really is a gRPC connection, so it's HTTP/2, but it's an unencrypted network connection. And since 11.8 you can also use tls://, and then it will use encryption. I think the way things are currently deployed on GitLab.com is that the machines are all firewalled and they use the unencrypted variant.
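The scheme convention just described can be sketched with a small classifier. The helper name is made up; the real client goes on to build a Unix-socket or TCP gRPC channel, with or without TLS credentials, from the same prefixes.

```ruby
# Classify a Gitaly address by its scheme, as described above:
# unix: for sockets, tcp:// for plaintext gRPC, and tls://
# (since GitLab 11.8) for encrypted connections.
def gitaly_transport(address)
  case address
  when %r{\Aunix:}  then :unix_socket
  when %r{\Atcp://} then :plaintext
  when %r{\Atls://} then :encrypted
  else raise ArgumentError, "unsupported Gitaly address: #{address}"
  end
end

gitaly_transport("unix:/var/opt/gitlab/gitaly/gitaly.socket") # => :unix_socket
gitaly_transport("tls://gitaly.example.com:9999")             # => :encrypted
```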
There is also a token mechanism, where each server may have a token. You can see that here in the example config file: you can set a token per Gitaly server, and at the beginning of each RPC call the client transmits the token to prove that it has access. It's a very basic authentication mechanism.
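The shared-token idea can be sketched like this. The real wire format differs (the token travels as gRPC request metadata with its own header name and encoding); the method and header names here are illustrative only.

```ruby
# Sketch of the shared-secret check: the client attaches the
# configured token to every RPC, and the server compares it
# against its own configured token.
def request_metadata(token)
  { "gitaly-token" => token }
end

def authorized?(metadata, server_token)
  metadata["gitaly-token"] == server_token
end

authorized?(request_metadata("abc123"), "abc123") # => true
authorized?(request_metadata("wrong"),  "abc123") # => false
```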
So that was sort of the route of looking at how things work, coming in from the config. We've now touched on the SQL model, how repositories are stored in the database (well, not really, because they hang off of the projects), and the mapping between storages and Gitaly addresses, and the confusing ways they can overlap or not overlap.
Oh, close this, close this; it's complaining that I'm not saving, and I don't care. Okay, so some basics of the layout of the Rails application. Most Ruby code is in either app or lib. I don't know exactly where the line is; understanding what goes in app and what goes in lib is part of figuring out why the hell Rails is organized the way it is. Within app there is the models directory, and these things map to SQL tables, for the most part. Not necessarily, but almost all of these correspond to SQL tables. So there they are.
I guess Active Record is the actual name of this pattern. So in here you have the Project class, and the project can have an associated repository. This is actually a good example, because Project here is an Active Record class, meaning that it has a SQL counterpart, and Repository is not, but it is still tightly coupled to the rest of the Rails application code. So this is the first layer you often hit when dealing with repositories in the GitLab code base.
Now, this thing started out as a sort of adapter between GitLab the application and however we were accessing git repositories before we had Gitaly. We used Rugged a lot, or shelled out, spawning git processes directly; before we used Rugged, we mainly used a gem called Grit. And Gitlab::Git, throughout the history of GitLab, has been a sort of adapter between the Rails application and however we really interact with git repositories.
A
So
when
we
did
the
grisly
migration,
we
kept
this
adapter,
but
the
this
thing
is
not
actually
making
it
nickels
because
it
works
easier
for
us
to
not
write
all
that
code
in
here,
but
to
write
it
next
to
this.
So
next
to
this
we
have
lip
kit
lab
literally
clients,
and
this
is
the
actual
G
RPC
client
code.
So
when
Caleb
interacts
with
the
repository
first,
it
has
to
fetch
project
which
is
AB
models
project.
Then
through
project
access,
is
a
repository
which
is
AB
models.
Repository
then
from
there.
it needs to access Gitlab::Git::Repository, which sometimes happens transparently, which is very confusing, because in Ruby you can have transparent delegation and there's no strict typing, so app/models/repository delegates to the Gitlab::Git repository. And then the actual gRPC calls get made here, in this lib/gitlab/gitaly_client thing. Are you still with me?
This directory is where we used to interact with Rugged. Rugged calls used to live here, and there were some rogue bits of code elsewhere; other places in the application would also call Rugged, but officially Rugged calls happened here. So these patches that are now being discussed, to bring back Rugged calls, are modifying this code so that, instead of calling out to the Gitaly client, it will call out to Rugged. Got it? Okay.
So, cool, good question. Okay, I'm taking the long way around before I show you the actual gRPC stuff; forgive me for that. All the main stuff happens in this file, gitaly_client.rb. This is a generic gRPC client. That is, well, it's generic with respect to all the RPC calls that exist within the Gitaly protocol, but it's specific to Gitaly.
This will have a find_commit somewhere, or find_commits. So this is an example of glue code, where we build... this is us directly using gRPC and protobuf code. This creates a protobuf object in Ruby with some fields set, and then it does GitalyClient.call, which is the magic that we have yet to come to, and gets a response object, and then it parses the response, and then these things get parsed again in the Gitlab::Git layer.
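The shape of that glue code can be sketched like this, with Struct standing in for the generated protobuf classes. In the real code the request class is Gitaly::FindCommitRequest and the middle step is GitalyClient.call; here a lambda takes that place, and the parsing step is reduced to a single field read.

```ruby
# Stand-ins for the generated protobuf request/response classes.
FindCommitRequest  = Struct.new(:repository, :revision, keyword_init: true)
FindCommitResponse = Struct.new(:commit, keyword_init: true)

# Glue code: build the request object, invoke the RPC, parse the
# response. fake_rpc takes the place of GitalyClient.call.
def find_commit(repository, revision, fake_rpc)
  request  = FindCommitRequest.new(repository: repository, revision: revision)
  response = fake_rpc.call(request)
  response.commit # "parse" the response object
end

echo_rpc = ->(req) { FindCommitResponse.new(commit: "commit-at-#{req.revision}") }
find_commit("gitlab-org/gitaly.git", "master", echo_rpc)
# => "commit-at-master"
```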
It's a bit much; that's the natural reaction. Cool, so this thing, let's see what's going on here. It takes a storage, I think that's the storage name, the service, and the RPC. So that's abstractly saying: we want to use the commit service and we want to call the FindCommit RPC, and it takes the request object, and...
So what this thing does is: we look up the stub for this service and storage, and then we do a dynamic method call in Ruby with the RPC, the request, and the keyword arguments. So in this case it might be, let's say, service equals :commit_service, storage equals "default", and RPC equals :find_commits. These are Ruby symbols I'm typing, which are the usual way to refer to methods.
So, okay, and a block; that's what the keyword arguments will be. And this thing is taking care of the reuse of stub objects. Stub objects are how gRPC wants you to make gRPC calls. So if we made a gRPC call by the book, it would look something like this: we would say stub = Gitaly::CommitService::Stub.new, and then an address.
This is what things would look like naively. They are more complex, because these stub objects establish a network connection to the Gitaly server, and we don't want to create a new one all the time, so we cache them globally across the Rails process we're in. That's what this method is doing; this method is going somewhere up here.
Okay: using a mutex, it goes into a hash. It looks up, by storage and by service, a stub object, and then returns that thing. There's really nothing to it, except that we don't want to throw away that object all the time. And then there's stuff in there like: what are the credentials, what are the headers. That all has to be looked up, which, if you look at it in isolation, is not very complex.
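That mutex-guarded cache can be sketched in a few lines. Object.new stands in for building a real gRPC stub; the real code also looks up credentials and headers when building one.

```ruby
# Global stub cache, keyed by (storage, service) and guarded by a
# mutex, so concurrent threads in the same process reuse one stub
# (and its underlying connection) instead of creating new ones.
STUB_MUTEX = Mutex.new
STUB_CACHE = {}

def stub_for(storage, service)
  STUB_MUTEX.synchronize do
    # Object.new stands in for constructing a real gRPC stub here.
    STUB_CACHE[[storage, service]] ||= Object.new
  end
end

first  = stub_for("default", :commit_service)
second = stub_for("default", :commit_service)
first.equal?(second) # => true, the same object is reused
```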
So this will be the stub, and instead of... so this is a method call; that's not something we can do dynamically. Oh, and we need to pass the keyword arguments, which is these headers. So this is what it would look like naively, and because we want to do this dynamically, for whatever RPC we're dealing with, this becomes an __send__, like this.
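Put together, the dynamic dispatch looks roughly like this. The stub class is a stand-in for the generated gRPC stubs, and the real client also threads through deadlines, credentials, and metadata headers.

```ruby
# Stand-in for a generated gRPC service stub.
class FakeCommitServiceStub
  def find_commits(request, metadata)
    { revision: request[:revision], metadata: metadata }
  end
end

STUBS = { ["default", :commit_service] => FakeCommitServiceStub.new }

# The essence of GitalyClient.call: look up the cached stub for
# (storage, service), then invoke the RPC by name with __send__.
def call(storage, service, rpc, request, metadata = {})
  stub = STUBS.fetch([storage, service])
  stub.__send__(rpc, request, metadata)
end

call("default", :commit_service, :find_commits, { revision: "master" })
# returns the stub's result for the named RPC
```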
It does some magic on the input. So actually, I got the input wrong: I wrote that it was this, I wrote commit_service, but actually that's wrong. The input is this, and then that becomes a string, and then that gets camel-cased, meaning that this becomes uppercase, this underscore goes away, and this becomes uppercase, and it looks like this; that's camel case. And then it goes back to a symbol and it becomes this.
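The name mangling being described can be written as a one-method sketch: a snake_case service symbol is turned into the CamelCase constant name of the generated gRPC service module.

```ruby
# :commit_service -> "commit_service" -> "CommitService" -> :CommitService
def camel_case(sym)
  sym.to_s.split("_").map(&:capitalize).join.to_sym
end

camel_case(:commit_service) # => :CommitService
camel_case(:ref_service)    # => :RefService
```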
Okay, this is the real find_commits, so this is the .call, and this would be a string like "default", and it can just abstractly name the service and the RPC and the request. So all this metaprogramming stuff is to make these lines simple, so we can have just one .call function that we can make all the calls with.
Yeah, if we wanted to do something like that, we could. We only use this once, so it is not completely mature. One thing that is notably missing is that this is not the only Gitaly client. This is a very big and important Gitaly client, but it's not the only one: Workhorse, gitlab-shell,
and the Elasticsearch indexer are also Gitaly clients, and if we want to have these server-side feature flags, they would have to be propagated from here down to Workhorse and then back up into Gitaly, and that propagation doesn't happen yet. So you cannot control the metadata that Workhorse is setting on its Gitaly calls to get feature-flag behavior on the Gitaly server. If we want to, we may have to build that out at some point; it's just not there yet. That was very complex, so...
I'm not sure the way the requests flow gets things propagated in the right direction, because Unicorn does not make RPC calls to Workhorse; Workhorse is the initial... well, it depends. You know, this is a big topic and I don't think we should get into it, because we set aside an hour for this and our hour is almost up, so I think it would be good to try and wrap this up and see if there's something else.
There isn't much else going on here. I mean, a lot of this has to do with metrics and observability, putting tags or counters on things and just keeping track of what's happening with Gitaly calls, and we can do that because all Gitaly calls that Rails initiates go through this class, which is a nice strength to have. But the downside is that this class keeps getting more complex as we add more metrics and things, which is why it looks the way it does today. So here's the server feature flag thing.
Oh, and the other thing we do with the GitalyClient class is that it has this feature_enabled method, which is just a really thin wrapper around the generic feature flag thing inside GitLab. So if we make a client-side change, like the commit finder we were talking about, we would have a feature flag.
So you can set the percentage of traffic: it gets randomly chosen whether the feature is enabled, so you can ramp up the use of a new feature. But all that logic is not in this file, fortunately, because this file is complex enough already. What it closes out with here is all about counting calls.
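The percentage-of-traffic rollout described here can be sketched as a single check. This is the idea only; the real logic lives in GitLab's generic feature-flag code, not in GitalyClient.

```ruby
# Each check draws a random number in [0, 100), and the feature
# counts as enabled when it falls below the configured percentage.
def feature_enabled?(percentage, rng: Random.new)
  rng.rand(100) < percentage
end

feature_enabled?(100) # => true,  fully ramped up
feature_enabled?(0)   # => false, switched off
```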