Description
Git Cache Maintenance Project Idea
Brainstorming Together About Ideas and Alternatives

Objective
Meet for 60 minutes with those interested in the Git caching project idea to discuss ideas and alternatives, and to identify areas where there may be questions. Encourage discussion of different alternatives and ideas that might lead us to a better implementation.
A
Welcome. It's the 30th; this is April 1st, 2022, India Standard Time, and this is the Git cache maintenance project ideas brainstorming session. We're recording. Thanks, everybody, for joining. So the idea was: in our last session we started a bunch of discussions about various topics around how the user interface should interact, and what that would mean.
B
Yeah, so basically on my proposal there was a comment by Kalle, and I might be saying his name wrong, but I just wanted to link it; I'll link it in the chat here. Okay, and he was talking about, like, in the previous session we had talked about the prefetch feature, and he also had some insight on how we could use it, in which cases it would be useful, and where it might not be.
A
Okay, so this is where he and I are going to disagree, and I'm going to try to justify why I disagree. That's a very good question. Okay, so Kalle Olavi Niemitalo is a well-known and very skilled user of Jenkins. However, I think he's not understanding the context.
A
So they are not reference repositories, but they are bare, and as bare repositories they can then be used for other purposes. And now let's talk through this a little bit. A cache on the controller is being used to answer questions about the content of that cached repository, and if we prefetch content into that, then that lets us not have to require that everything must use webhooks. First point: because not everyone can use webhooks.
A
So now, to his second point here: yes, it could save network bandwidth during... actually, it doesn't alter network bandwidth usage, because the prefetch is fetching objects that will be fetched eventually. However, what the prefetch does is save time during the fetch that will act on those changes, because the objects are already there. Now, his point here that it's not safe to remove objects from a reference repository is a valid and interesting case.
B
So would I have to, like, implement some safeguard for that scenario? Will I have to disable the garbage collection over there?
A
A reference repository has to be on the same file system, or on the same computer, as the repositories that are referencing it. Now, there are things, though, that we have to do in the prefetch to safeguard it. But as for disabling garbage collection: we don't have to disable garbage collection on a repository that's being prefetched.
A
Prefetch itself does things to avoid updating the refs without the user having requested that they actually be updated, but I think you may be aware of that from our discussion last time. Prefetch has a very specific description in the git maintenance page; let's find that.
B
Yeah, I think it would still be helpful for others, but I think it only shows the delta of the newly collected information.
A
Well, the crucial thing here, for me at least, is what happens when it does a prefetch: it modifies the refspec to bring everything that's being pulled into a location where other commands won't find it. And that's why they say here that this is done to avoid disrupting remote-tracking branches. So when a prefetch runs, to the user it's as though nothing changed in that repository.
A
So if the master branch, for instance, on the remote received new objects, when I do a prefetch my local copy can't see those objects in the usual way. If I look at a git log of the master branch, prefetch effectively hides them, but pulls them anyway. It hides them under this refs/prefetch prefix, and that way it's not bringing in an update to the master branch that I did not explicitly request.
A
Okay, great. So the idea is: objects that would have been pulled by the user asking "give me the latest changes on the remote-tracking branch" are already there locally, but they're just sitting in this refs/prefetch location, and when I ask for them, they also get recorded under refs/remotes/origin.
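The refs/prefetch behavior described above can be sketched concretely. This is a minimal example, not part of the discussion itself: it assumes a reasonably recent Git (2.30 or newer, where `git maintenance` exists), and the repository names (`upstream.git`, `work`, `cache`) are made up for illustration.

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"

# A bare "upstream" plus a working clone that pushes commits to it.
git init -q --bare -b master upstream.git
git clone -q upstream.git work
git -C work -c user.email=a@b.c -c user.name=a commit -q --allow-empty -m init
git -C work push -q origin HEAD:master

# A second clone plays the role of the controller-side cache.
git clone -q upstream.git cache

# Upstream gains a new commit after the cache was cloned.
git -C work -c user.email=a@b.c -c user.name=a commit -q --allow-empty -m update
git -C work push -q origin HEAD:master

# Prefetch: the new objects arrive under refs/prefetch/, while
# refs/remotes/origin/master is left exactly where it was.
git -C cache maintenance run --quiet --task=prefetch
git -C cache for-each-ref refs/prefetch        # the hidden prefetch refs
git -C cache rev-list --count origin/master    # still 1: nothing visibly changed
```

A later `git fetch` in the cache then mostly has to update refs; the objects are already on disk, which is the time saving described above.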
B
Okay, and the prefetch would also be making the fetch command faster, right?
B
I was thinking that we can have the options that you have, and I was thinking of adding options such as comparing it with previous runs, or saving a particular run so that it can be compared with later on.
B
Maybe the user wants to manually edit some of the tasks and how they're run, and they might want to change some of the config values. So what this would allow them to do is have a specialized way, for the specific repository they are using, of seeing how git gc and the maintenance commands can help them optimize their time.
A
Okay, now I think I'm seeing. So, for example, a very large Git repository may need different options for git gc. Yes.
D
So, Mark, as we were discussing earlier, what I had in my mind in terms of UI was that the administrator in an organization where people are using Jenkins, not everyone, would have the option to...
D
...change the strategy for these tasks. So as a developer, if I'm going to run my build, I don't know, or I don't care, how the optimizations are working within the Jenkins system, right? It could be possible. From an administrator's perspective, I understand that we're providing a page where maybe we can give some heuristics, on the basis of which they could create a strategy and run these individual maintenance tasks.
D
So if that is the case, if we're providing this page for the administrator, then, as far as I'm aware (and please correct me if I'm wrong), how would we show the run history there? I mean, if you're thinking that, based on our previous performances, we want the user to choose how they want to update their strategy of running these tasks, how would we do that? As far as I understand, those are two separate areas of the Jenkins UI; like, in my global settings page...
D
...I can't see my previous builds right there in the configuration. That is a history that is available to me in that other context.
A
For me, at least, I don't think of them as jobs in the sense that they're Jenkins jobs. I think of them more as: the results of the maintenance job may come out in some sort of a log, like this Git polling log, because I was assuming the maintenance tasks aren't the same as a Jenkins job. Is that consistent with what you were thinking as well?
A
Yeah, so my assumption there had been that somehow we would need, accessible from this maintenance UI, a history of these logs. Today the Git polling log, as an example, only shows one; it shows the most recent. If I click Poll Now and then show the Git polling log, we'll see it updates the polling log, but I don't have any history of it. So the Git polling log is actually not quite the concept I had for tasks.
D
I mean, yeah, I just wanted to understand that, when we were talking about how we want the user to use the job history to determine the strategy, to update or modify the strategy that they have for these maintenance tasks, and whether the schedule and the strategy for those tasks are configurable at the administrative-page level.
A
Would we have a selection on one of the columns on the right that says something like "show me the history of this task", and then a page would appear that is the history of the task? And that page may look something like this build history page. Let's see, can I see the... where's the history? Well, it may be conceptual. Back here we go.
A
I was assuming we wanted to hide it, that this is purely for administrators. If you don't have administrator permission, you can't do anything with these; this page would just not be available. Now, I'm open to being wrong about that, but my thought was that this is purely an administrator function, because one of the actions might conceptually be "delete the schedule", right? Just stop the thing from doing any maintenance. And your users might suffer if some malicious user said, "I'm going to stop the maintenance of all of our Git caches."
E
I wanted to discuss the order of execution of the git maintenance tasks. We stopped right there, you know, after prefetch and incremental repack last time, when we were trying loose objects. I don't know if you remember, but the loose objects didn't get deleted. Okay, I've gone through it. And, Mark, can you open a big repository, like...
E
Can you have a look at the, you know, loose objects present in the directory? No, no worries; can you run the git maintenance, you know, or run the loose-objects task?
A
Okay, so let's do it with one that's not quite as big as this one. If you're okay with it, let me go to a slightly different repository.
C
Loose-objects maintenance command; okay, so, git maintenance...
A
Look at the loose objects; I don't think you'll find any. Okay, so, right, okay! So that's how it was last time when we were trying it, you know, it wasn't displaying. So basically what I was thinking: the order of the maintenance commands should be prefetch, loose-objects, and incremental-repack, because I'm not sure whether incremental repack considers the loose objects, you know, packs them into the pack file. So it's better if we go in this way, is what I was feeling.
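The loose-objects behavior being discussed can be reproduced on a throwaway repository. One detail worth knowing, and a possible explanation for the loose objects that "didn't get deleted" earlier: as I understand the git-maintenance documentation, the task works in two steps per run, so loose objects are packed on one run and the now-redundant loose copies are deleted on a later run. A sketch, assuming Git 2.30 or newer; the repository here is made up:

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q -b master repo && cd repo
git -c user.email=a@b.c -c user.name=a commit -q --allow-empty -m one
echo data > f && git add f
git -c user.email=a@b.c -c user.name=a commit -q -m two

git count-objects -v | grep '^count:'   # several loose objects

# One run packs the loose objects; the next run deletes the already-packed
# originals, which is why a single run can appear to leave them in place.
git maintenance run --quiet --task=loose-objects
git maintenance run --quiet --task=loose-objects

git count-objects -v | grep '^count:'   # count: 0
```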
E
And then, the thing about incremental repack: I haven't found anywhere on the internet whether, you know, it considers loose objects or not behind the scenes. Okay, so it may not matter if incremental repack is placed in front of loose-objects, but it's safer if you put loose-objects in front of incremental repack, so that we don't miss out on it.
A
Right, okay. So this was looking at combining multiple smaller packs into a single, larger pack for efficiency. Got it, okay.
E
It would also be easier to search for any Git objects after this incremental repack, because all the objects are sorted, so it would be easier as we would be using binary search; that would reduce the time complexity as well.
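The "sorted, binary-searchable" point corresponds to Git's pack indexes; what the incremental-repack task maintains on top of them is the multi-pack-index, a single sorted index across all packs, so an object lookup stays one binary search even when many packs exist. A sketch under the same assumptions as before (Git 2.30 or newer, made-up repository name):

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q -b master repo && cd repo
git -c user.email=a@b.c -c user.name=a commit -q --allow-empty -m one
git repack -q -a -d              # ensure at least one pack-file exists
git config core.multiPackIndex true

# The task refreshes the multi-pack-index and repacks small packs in batches.
git maintenance run --quiet --task=incremental-repack
ls .git/objects/pack/            # pack files plus a multi-pack-index
```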
E
Then I was thinking about commit-graph, and then, you know, gc, garbage collection.
E
And last time we had a discussion about, you know, the commit graph, where on every fetch it keeps updating the commit graph, which is a very huge task, right?
C
Sure, you mean like a tree? Yeah, yeah, a tree where you get everything that's new in a tree, yeah.
E
Yes, yes. Okay, can you see that there's a commit-graph chain, which links both the commit graphs together? So one commit graph keeps building when you keep doing the fetch command, which doesn't disturb the other commit graphs. So basically you won't be modifying or updating all the commits, which means it won't be a huge task.
D
So what Rishikesh is talking about (sorry if I pronounced it wrong) is to write the commit graph incrementally after every fetch. That is a recent change, as far as I read, when I pointed out that fact last time: if we don't have a property enabled, then in the previous versions it won't incrementally write the commit graph every time we perform a fetch.

As I read it, the earlier behavior was that gc was tasked to update it, and if the time period between the gc and the fetch is long, then when you're performing the git fetch it's going to take more time; it's going to be an intensive operation, because gc was supposed to do it. Recently, the author who actually developed the commit graph, from what I read on his blog, said that with the recent versions they included this configuration by default, which is called fetch.writeCommitGraph, and if it's set to true, it's going to incrementally update the commit graph on each fetch, so that you don't have to bear the cost of doing it once in a while.
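For reference, the option being described is spelled `fetch.writeCommitGraph`; my understanding is that it is off unless enabled, either directly or via `feature.experimental`, rather than on everywhere by default. The `commit-graph` maintenance task writes the same split, incremental commit-graph chain on demand. A sketch, Git 2.30 or newer assumed, repository name made up:

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q -b master repo && cd repo
git -c user.email=a@b.c -c user.name=a commit -q --allow-empty -m one

# Opt in to appending an incremental commit-graph layer after each fetch:
git config fetch.writeCommitGraph true

# The maintenance task writes the same split commit-graph chain on demand:
git maintenance run --quiet --task=commit-graph
ls .git/objects/info/commit-graphs/   # graph layers plus commit-graph-chain
```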
E
Yeah, version 2.24: greater than version 2.24, right, the commit graph is enabled by default. So we have to consider cases where, you know, the Git version is less than 2.24; we have to enable it, and the Jenkins Git plugin, correct?
D
I mean, it would depend on a lot of factors whether it's sub-optimal. It could be, since the gc starts to update it; we don't know what kind of activity that repository is going through, whether, you know, the fetch is actually going to cost a lot in terms of the commit graph you're talking about. But yeah, it potentially could be; that is what the author says.
A
The blog and the feature notes say that it may make that fetch expensive. Is it safe to say that the fetch may be more expensive?
A
No, no, because it'll be handled, right, when the fetch happens. So the sequence we had described was: prefetch to retrieve new objects, so that did the fetch. Then we could conceptually say, okay, fine: we're going to run loose-objects to form them into a pack, then incremental-repack and commit-graph to update them, and, less frequently, a gc. And that's the one where we run gc less frequently, because it's so expensive.
D
So I just had a suggestion, looking at these experiments that we've been doing, for the proposal.
D
Would it be beneficial for the students who are writing the proposals to start with a repository of their liking and define the parameters of the repository which are going to be affected by the maintenance tasks, like the size, maybe the number of objects, the number of loose objects and pack files, and the number of references that the repository has, and then define their strategy?
D
While we were trying to run this experiment... and then describe it, instead of, you know, just describing how these commands are going to work, because that is something that anyone can google and find out, right? But to choose your own type of repository, and then to run it and show how it would actually work, would be a good experiment. It's something I just wanted to ask the other mentors as well: is that something we should expect in a proposal?
A
...of the operations and their impact on the repository, yes. So the idea being: okay, if a bunch of new commits arrive, and in this particular repository example I gave there are a bunch of new commits that seem to arrive pretty regularly, then I do the prefetch, and what's the impact of the prefetch? What happens from the prefetch? We got this many new loose objects, we got this many of these. And then, all right, now we're going to do loose-objects, and what happens then?
D
So this would be something that, for me, I would say may be beneficial for the people who are writing it: to, you know, come up with the right strategy that they think would work. Because I think, even if we have to talk about the user interface, before that we need to know what kind of a strategy we're going to choose for the maintenance tasks, right?
A
Well, in terms of sample repositories, if I recall correctly, we've got one or two samples actually coded into the source code of the git client plugin: a large, a medium, and a small. So see the git plugin and git client plugin source code for the URLs of some example large repositories. Now, I don't know that they were chosen as highly active repositories; they're large, but relatively quiet.
D
So this is, yeah, this is where we would leave that to the student, right? How they're choosing the repository, based on what parameters, is something that would be really interesting.
D
The activity would determine the commit size and how they're being updated, and then, yeah, size would be more related to the objects and the management of objects. That is what I was thinking.
E
I had another doubt regarding the execution of the git maintenance: like, we are running it globally, right, so on many repositories. So are we going to execute it serially or in parallel, you know, creating multiple threads and then running it over the various repositories, or sequentially, one after the other?
A
What if that overloads the controller? Because git gc of the Linux kernel will, by default, take every thread, every core on that processor, to perform its job. If I remember correctly, git gc is designed to be massively parallel, and so, if we schedule too frequently, we may consume that controller's processor, and it may not be able to do its real work because we're so busy doing garbage collection.
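One knob I'm aware of for the "gc takes every core" concern is `pack.threads` (0, the default, means one thread per core for delta compression); capping it, and walking the caches one at a time, is the serial, low-risk shape of the schedule discussed below. A sketch, with made-up repository names:

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"
for r in repo1 repo2; do
  git init -q -b master "$r"
  git -C "$r" -c user.email=a@b.c -c user.name=a commit -q --allow-empty -m init
  # Cap delta compression to one thread so gc cannot claim every core:
  git -C "$r" config pack.threads 1
done

# Serial, lowest-risk scheduling: maintain one repository at a time.
for r in repo1 repo2; do
  git -C "$r" gc --quiet
done
```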
D
Yes, so what I wanted to say was that it depends on what our priority is. I mean, with parallelization, what are we going to achieve? Let's say the tasks, if they were run for all of the repositories in a serialized fashion, would take x amount of time; if we're doing it in parallel, it would be less than x.
A
Good point, and I think you're right: the first priority is to do no harm, right, to use the medical phrasing. Don't harm the controller with these tasks. Now, there's going to be some cost; it's not free to run a garbage collection. But I think, then, that would argue for preferring serial tasks until proven otherwise, right, because that's the lowest risk.
D
Yes, and with serialization, I believe, if the administrator is looking at how the maintenance schedule is working on a repository, let's say I feel like, "oh, you know, this is not working as I wanted it to," I can stop it. The cost of stopping it at that point would not be as huge as when, let's say, ten repositories have already, you know, used that schedule, whatever I chose that I thought would be perfect, but in practice it's not.
D
Yes, what I meant was: let's say I, as an administrator, had in my mind a frequency, an interval, of how the tasks would work, but in practice I never tried those tasks, how they would, you know, perform the maintenance. With serialization I would get to see that on the very first repository they're going to run on; with parallelization I won't have that control, right, they're all going to run.
A
Good question. So, preferred development technique: Mark has a strong bias there, yeah.
A
So, Rishikesh, did that answer your question? Yes? Yes. So now, this question also highlights a point of hypocrisy, if you will: the git plugin and the git client plugin are difficult to test. And why are they difficult to test? Because they were initially created without tests. Okay, and I wish I could say we're different than that. I've done a 30-minute talk, actually, on the history of the git plugin and the git client plugin for the first 18 months or two years of the life of the plugin.
A
And so that's me acknowledging my hypocrisy, right? No, I was not the person who wrote the plugin in those first two years. If you look at the history, the blame, of files in the git plugin, most of the blame on tests is my name. So I wrote a bunch of tests, but initially there were no tests for at least the first 18 months of the life of the plugin.
D
Yes, yeah, I definitely think that, and I agree with it. One of the first things that I learned was test-driven development. It was something that I didn't consider in my mind when I used to give estimates: it was all about writing the feature and how much time that would take. I never considered how much time it would take me to test, and it takes considerable time to think about the state space of your feature, whatever it is.
D
Even if it's a single line of code, how is it going to affect the users? And especially for a plugin which is distributed to such a wide audience, it's necessary to have that sort of principle there for development.
D
So I would say, sorry, I just wanted to say that, while writing the proposal, this should be considered when you give your estimations. When I was, you know, an amateur developer, I never thought about it; I didn't consider the amount of time it would take to test the features that I thought I was going to deliver.
A
Good point. And just so everyone's clear why I think that's so important: within the first 30 days of the release of the git plugin, there are 90,000 installations using it around the world, and those 90,000 installations probably have 10x that many users. So we could harm close to a million people if we do something badly wrong.
B
I had found a Jira issue where this project idea was probably initiated for the very first time, so I just wanted to share that link, and see whether there was some potential discussion from the users, from the people that were having this issue.
B
I noticed that there were some issues involving the data not being cleared for, like, the master and the slave repositories, for both of them. So is that something that we can ensure we do in our implementation?
A
So this project is actually not giving real consideration to the agents, what is called a "slave" here, because the cache management is only happening on the controller, the thing that's called the "master" here. This won't help agent-based environments; it will only address things on the controller. Was that your question, Aryan?
A
Yeah, good pointer to this one. Okay.
A
All right, thank you. We've hit our time; thanks very, very much for your patience. If you are interested, we could meet again in two weeks. That will be after we've entered the period for, let's see, let's double-check... I think that's after we've entered the time when applications would be accepted, yeah, but it's not after the time when the applications are closed. So, if you're interested, we could plan to meet again in two weeks, on the 15th. Are you interested?