From YouTube: Implement a namespace storage limit discussion
Description
Alessio and Nicolas talking about https://gitlab.com/gitlab-org/gitlab/-/issues/209119
Alessio: Implementing a namespace storage limit that no namespace is capable of hitting is okay as an MVC. This is issue 209119. I'm here mostly for answering questions; Nicolas is taking on the effort of doing these issues, but as part of the delivery team I worked on some of this stuff almost one year ago, so I may have some knowledge about it.
Nicolas: Thank you. Okay, so just an overview of the issue. This is an MVC: we don't limit anything yet, we just want to implement the limitation itself, and we thought that as a starting point we would set the limit on GitLab.com to one petabyte, so it will not affect any namespace. That's what we want for the beginning, for the MVC, but afterwards we want to have real limits, so users who go beyond the limit need to pay for the storage.
Nicolas: That's the whole idea, and we want to implement it so that you can set a limit for all namespaces. Right now we have a limit per repository or project, like a ten-gigabyte limit, but you can of course have multiple projects inside a namespace, so each one would be limited to ten gigabytes and you could just horizontally add more gigabytes. So we want to limit this on the top-level namespace.
Nicolas: Therefore the limit goes on this root namespace, as we call it right now, and you would then see a quota, like you see it for CI minutes today. That's the whole idea. Now, your team has already done a lot of work on this, and I want to talk about it. There's the database work on the namespace storage statistics, and you made some proposals for how to calculate the project sizes.
Nicolas: So the solution that you decided on was: every time a project updates, you schedule something like a job, and that job gets executed about one and a half hours later, I think, and then you update the root namespace statistics based on the new sizes. Is that correct? And if something changes again within that one and a half hours, you don't create a new job, you just update the current job. That's what I got so far.
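The debounced scheduling Nicolas summarizes could be sketched roughly as follows. This is a minimal illustration, not GitLab's implementation: the class and method names are made up, and the 1.5-hour delay is taken loosely from the discussion.

```python
# Sketch of debounced scheduling: at most one pending aggregation job per
# root namespace; repeat changes reuse the pending job instead of adding one.
DELAY = 90 * 60  # seconds (~1.5 hours, as mentioned in the discussion)

class Scheduler:
    def __init__(self):
        self.pending = {}  # root namespace id -> scheduled execution time

    def on_project_update(self, root_id, now):
        if root_id in self.pending:
            return "already scheduled"  # debounce: keep the existing job
        self.pending[root_id] = now + DELAY
        return "scheduled"

    def run_due(self, now):
        due = [r for r, t in self.pending.items() if t <= now]
        for r in due:
            del self.pending[r]
        return due  # these roots get their statistics re-aggregated

s = Scheduler()
print(s.on_project_update(1, now=0))    # "scheduled"
print(s.on_project_update(1, now=100))  # "already scheduled"
print(s.run_due(now=DELAY))             # [1]
```

The key property is that many project updates inside the window collapse into a single aggregation run.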
Nicolas: The aggregation schedule, okay. So my question: adding this limit is relatively easy, and checking the limit on a page is also relatively easy. The thing is, at some point, like with CI minutes, we may want to send out an email that you have already used, say, 80% of your storage, and the question is: how can we do that? I would like to hear some thoughts from you on that.
Alessio: If you can't find something in the write-up, so in the blog post, the original issue has all the discussion about it, because me, Mayra, and Yannick were involved in this and we were constantly discussing ways to do it. So if it's not in the blog post, it's definitely in the merge request conversation. Thanks.
Alessio: One problem is that the query itself is slow, because you act at the root namespace level, since you want to aggregate on the paying entity, but statistics are stored by project and direct namespace, not at the root level. So we had to rebuild the tree in order to accumulate the values from every child item, and this takes time as well. The table is big, and we did some rough calculations and said: okay.
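The "rebuild the tree" aggregation Alessio describes can be sketched like this. The data layout and names are illustrative, not GitLab's actual schema: statistics live per project under its direct namespace, so totals must be rolled up to the root.

```python
# Sketch: accumulate per-project storage up to the root namespace.
# namespaces maps each namespace id to its parent (None for top-level).

def aggregate_root_storage(namespaces, projects):
    """Sum project storage per root namespace.

    namespaces: {id: parent_id or None}
    projects:   list of (namespace_id, storage_bytes)
    """
    def root_of(ns_id):
        # Walk parent pointers until we reach a top-level namespace.
        while namespaces[ns_id] is not None:
            ns_id = namespaces[ns_id]
        return ns_id

    totals = {}
    for ns_id, size in projects:
        root = root_of(ns_id)
        totals[root] = totals.get(root, 0) + size
    return totals

# Root namespace 1 has a subgroup 2; namespace 3 is a separate root.
namespaces = {1: None, 2: 1, 3: None}
projects = [(1, 100), (2, 250), (3, 40)]
print(aggregate_root_storage(namespaces, projects))  # {1: 350, 3: 40}
```

In SQL this walk becomes a recursive query over the namespace hierarchy, which is the expensive part being discussed.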
Alessio: We must save the information somewhere, and this is why we decided that the job gets scheduled, but outside of a transaction. We were slightly concerned that we might end up writing the schedule to the database but losing the Redis side for some reason, maybe because you restart something or lose information. So we wanted to make sure that once a day we have a checkpoint, because the information in the database is supposed to reflect something that is scheduled on Redis.
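The daily checkpoint Alessio mentions could look like the sketch below: compare what the database says is pending against what Redis actually has scheduled, and re-enqueue anything that was lost. The flag and function names are assumptions for illustration.

```python
# Sketch of the daily checkpoint: roots marked pending in the database but
# missing from the Redis schedule (e.g. lost to a restart) are re-enqueued.

def reconcile(db_pending_roots, redis_scheduled_roots, enqueue):
    lost = set(db_pending_roots) - set(redis_scheduled_roots)
    for root_id in sorted(lost):
        enqueue(root_id)  # re-schedule the aggregation job
    return sorted(lost)

requeued = []
lost = reconcile({1, 2, 3}, {2}, requeued.append)
print(lost)      # [1, 3]
print(requeued)  # [1, 3]
```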
Nicolas: Yeah, I mean, someone from the scalability team may also have ideas at some point, but I would like to pick your brain on this as well. The initial proposal had something like a daily job that we run across all namespaces, and it is basically the same issue you had before: we would need to calculate everything, which is a lot of work, costs a lot, and it's not viable on GitLab.com.
Alessio: The idea that we had about this was that we should introduce the concept of a soft limit. One of the reasons is that there was a focus shift in our team: we completed the calculation part, but then we dropped the next part of this epic. There was a lot of UX involved in it, and there is no UX designer on our team, so we were kind of struggling with that part.
Alessio: What you do know is that when you change something that affects storage, within, I think, up to one and a half or two hours, I don't remember, I think one and a half, the final data is updated. So the biggest delay you can have is one and a half hours, which means you can do something like this: if you want to enforce a limit, you basically check the size of what is coming in. And this really depends on the upload you're doing, right, because for Git you know the size upfront.
Alessio: For some artifact uploads you don't know the size up front, so you cannot tell whether you will be above the limit at the end of the upload. It really depends on what you're doing. The point is that when something comes in, you do your best to understand whether it will bring you over the limit, and you have to be flexible here, because maybe this upload will go a bit over and the next one will not get in.
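The best-effort enforcement Alessio describes, strict when the incoming size is known, lenient when it isn't, could be sketched like this. The function and parameter names are hypothetical.

```python
# Sketch of a best-effort admission check: reject uploads of known size that
# would exceed the limit; for streamed uploads of unknown size, only reject
# when the namespace is already over.

def allow_upload(used, limit, incoming_size=None):
    if used >= limit:
        return False  # hard rule: already over the limit, nothing goes in
    if incoming_size is None:
        return True   # size unknown up front (e.g. streamed artifact)
    return used + incoming_size <= limit  # may still drift slightly over

print(allow_upload(900, 1000, incoming_size=50))   # True
print(allow_upload(900, 1000, incoming_size=200))  # False
print(allow_upload(900, 1000))                     # True (size unknown)
print(allow_upload(1000, 1000))                    # False (already over)
```

Note that the unknown-size branch is exactly why the limit cannot be perfectly strict: a single large streamed upload can overshoot before the next check catches it.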
Alessio: So it cannot be really hard and strict; it's more like: once you are over, you cannot upload anything. If you have this in mind, you can build all the UX and the user communication on soft limit thresholds, which means that the same way you update the value daily or hourly or whatever, you can also check whether that number is below the soft limit threshold.
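That threshold check, run whenever the aggregated number is refreshed, could be sketched as below. The 80% warning level and the dedup-by-set approach are assumptions, not the actual design.

```python
# Sketch: after each aggregation run, compare usage against soft-limit
# thresholds and report only thresholds newly crossed, so each notification
# (e.g. the 80% email) fires once.
THRESHOLDS = (0.8, 1.0)  # warn at 80%, enforce at 100% (illustrative)

def check_thresholds(used, limit, already_notified):
    """Return thresholds crossed for the first time since the last check."""
    crossed = [t for t in THRESHOLDS
               if used >= t * limit and t not in already_notified]
    already_notified.update(crossed)
    return crossed  # one email per newly crossed threshold

notified = set()
print(check_thresholds(850, 1000, notified))   # [0.8]
print(check_thresholds(900, 1000, notified))   # []  (no duplicate email)
print(check_thresholds(1000, 1000, notified))  # [1.0]
```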
Alessio: This is something that you can even do when you render the page, because at that point the value is already in the database, and if you have the limit on the root namespace, which we don't have at the moment... so if you have that limit and the percentage, you can just check it and show a big banner on every page of the repo saying, you know, you are above the limit. Or the first time you go over, you send an email, things like that.
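Since the aggregated value and the limit would both live on the root namespace row, the render-time check reduces to a single comparison. A minimal sketch, with made-up field names and an assumed 80% warning level:

```python
# Sketch of a render-time banner check: one percentage comparison against
# values already loaded with the root namespace. Field names are illustrative.

def storage_banner(root):
    if root["limit"] is None:
        return None  # no limit configured for this root namespace
    pct = 100 * root["used"] / root["limit"]
    if pct >= 100:
        return "You are above your storage limit."
    if pct >= 80:
        return f"You have used {pct:.0f}% of your storage."
    return None

print(storage_banner({"used": 930, "limit": 1000}))  # 93% warning banner
print(storage_banner({"used": 10, "limit": 1000}))   # None
```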
Alessio: Exactly, but it's hard to do, and that's why we started with all that groundwork, because we needed it in place first. So this is not a problem. I was more thinking about fetching the root namespace, but I don't expect that to be an expensive extra query, because if you think about every page we show, you still have the breadcrumb and things like that, so the root namespace, I guess, is already loaded somewhere, because we display it in a lot of places. Unless that's wrong!
Alessio: Another thing that we thought about was that when you do the accumulation, afterwards you kick off another asynchronous job that updates the status in terms of thresholds. You don't want to do the calculation every time, I mean, the calculation is just checking a percentage against a value, but only when something changes. So if something changes, you update the accumulated value and then you also update the root namespace, which may be easier.
Alessio: Yeah, so basically we show the breakdown. You know, per root namespace, how much storage you're using, and then you go down to each project and you can see the breakdown of LFS, artifacts, and so on. That's something that we have. In the discussion, though I don't know if it made it into the final version, we said we should write down that this overall information is updated up to a certain date, so it may not be entirely accurate.
Nicolas: Takeaways for me are that it's relatively cheap to look up the root namespace storage and check it on every page request, if you really want to, and that we could use the current job to trigger something like an email saying that you are above a threshold. So yeah, I would say that's what I needed to know.
Alessio: Build artifact sizes can be negative, because there's a bug somewhere, I don't remember the name of the issue, but it's well known. Sometimes the query that refreshes the per-project information, the base value that recalculates everything rather than the delta updates, times out, and because it times out it may not write the statistics for that project. We ended up with situations where you had negative values, for instance. So this breaks the overall effort of checking the values.
Alessio: No, this is just for the artifacts upload. What happens is something like this: maybe you have, let's say, one artifact of 10 gigabytes, okay, just made-up numbers, and if you delete it, you are removing 10 gigabytes from the value. Now there could be a situation in which, when you add a new artifact, the storage-updating query times out, so the project statistics don't get the plus value.
Alessio: Say you get another artifact of one gigabyte, so plus 1 gigabyte, and this query times out. Your storage is still 0, because you had plus 10, minus 10, and the plus 1 was lost, so you still have 0. Then you delete the artifact because it expires or whatever, and you get a minus 1. Since the plus 1 timed out, your final result is that you have minus 1 gigabyte of storage in use, which is not true.
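The lost-update sequence Alessio walks through can be reproduced with simple signed-delta arithmetic (a sketch; the real updates are SQL increments on the statistics row):

```python
# Sketch of how a lost delta update drives the stored total negative.
# Each event applies a signed delta unless its update query "times out".

def apply_events(events):
    total = 0
    for delta, timed_out in events:
        if not timed_out:
            total += delta
    return total

GB = 1
events = [
    (+10 * GB, False),  # upload a 10 GB artifact
    (-10 * GB, False),  # delete it again: total back to 0
    (+1 * GB, True),    # upload 1 GB, but the update query times out
    (-1 * GB, False),   # the artifact expires and is deleted
]
print(apply_events(events))  # -1: negative storage, which is impossible
```

This is why a periodic full recalculation (or fixing the timeout) matters: delta-only bookkeeping cannot self-heal after a lost write.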
Alessio: I mean, it's the Package group, but I don't know, because we started developing our own fork of the registry. So I think there is an effort to have this information, but that's another story. And then we have the uploads, the user uploads to discussions and issues; that one is completely untracked, if I remember correctly. But we should start tracking it, because at least when we save it, we know the size.
Alessio: The problem we have with uploads is that there's no way to delete one: once you upload something, it is there. And the other problem is that when you drag something into the editor, right, it will start the upload immediately. So you drop something, it gets uploaded, and the Markdown reference will be there for you to link it. But if you decide not to use it and you remove the Markdown reference, the upload stays.
Alessio: One of the issues that Mayra linked there, it's an epic or an issue, I remember, is where we have the breakdown per kind of storage, and it's all about allowing users to free some space. Each line tracks whether we have an issue discussing how we can allow a user to delete something, and there is one also for the uploads.