From YouTube: Think BIG discussion
Description
Our inaugural Think BIG discussion, in which we talk about why we are having this meeting and review our first epic.
A
There are ideas that we should talk about, other plans that we should consider, and as a team we could move from there. Then, once we have that, these meetings move to more of a once-a-month thing, as a check-in where we ask: are we doing the right things? If we zoom out from everything that we're doing now, does it seem like we're working on the right things for our team long term? Does that make sense?
A
Okay, cool. Since this is our first meeting, aside from that intro, I thought it would be a good idea for today to give a high-level summary of some of the epics that we've been laying out and defining, and how they could impact us. Then, ideally, we can start by going through the first one here, which is to lower the cost of the container registry by introducing storage management features, and we can go through each of the issues there and discuss them. Did I miss anything?
B
I locked my computer; whenever I try to unmute myself, it's very cumbersome. No, I think it's really great to make sure that we all have that chance to align on the bigger vision, especially as we can get into the minutiae of the individual issues. Just like you were saying, I think it's really great to have everyone be part of that conversation, as opposed to just a select few.
C
Sorry to interject, Tim. I would say that if people have feedback on epics, we should prioritize putting that feedback in the epic itself, or in the issue, depending on what it is; and if people have particular questions about the process, maybe put those in the document. I don't know if that works for everyone, but I just wanted to raise it on the front of contributing and keeping things async.
A
Yeah, that sounds good. This way we have a single source of truth for everything that we're talking about, which would be great. Okay. So these are the four epics that Dan and I have been working on, and they roughly go in priority order, at least in my mind. The first one here is lowering the costs; we mentioned that one. The motivation for prioritizing this epic is both our customers and our internal use.
A
Currently, the container registry is costing us tens of thousands of dollars a month in storage costs. It's not costing our customers that much, but we have certain customers with tens of terabytes of data that they're not using; they don't want that cost, and they don't have a way of managing the storage. Really, for us to scale the container registry to the next level, we need to provide these, I would call them table-stakes, features.
A
We need to provide features to allow people to clean up their registries. If you click on the epic, I walk through the problem that we're solving and the goal, so I'll just read through the top six goals that I've identified here. First, an improved user experience for removing tags from the UI; we've been doing some of that already, actually, which Nick has been working on.
A
Currently, the bulk removal process only works at the project level, so if you have many groups it's very inconvenient to run the process for each project; that's something we're working on now. Next, an optimized garbage collection algorithm that can handle terabytes of data without long windows of downtime; in the issue I get into that specifically.
A
Customers can't accept that much downtime. Then, UI-driven retention and expiration policies for images or tags: the same thing, but from the UI, being able to set a policy that will expire images based on what you decide. Next, the ability to expire images from CI/CD, similar to the way that we handle artifacts, where you could say "expire in seven days" or "expire on branch deletion", or something like that. And then automated, scheduled garbage collection, and maybe, to start, that's only for our self-managed customers and not for GitLab.com.
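For context, the artifacts mechanism referenced here looks like this in a `.gitlab-ci.yml`. The `artifacts:expire_in` keyword is real GitLab CI syntax; the commented-out image-expiry keyword is purely hypothetical, sketching the shape such a feature might take:

```yaml
build:
  script: make build
  artifacts:
    paths:
      - dist/
    expire_in: 7 days   # real keyword: artifacts are cleaned up automatically

publish_image:
  script: docker push $CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG
  # hypothetical, no such keyword exists yet; this only sketches the idea:
  # image_expire_in: 7 days
```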
A
So having policies for them will be really helpful. I made some predictions about the outcome: to reduce the storage cost for the GitLab.com container registry by 20%. I think we could do a lot better than that, but I don't really know what's in there or what we're going to find until we dig into it, and then I think we could do better on behalf of our customers and lower their costs as well.
A
And some important decisions that we made: our current path is to fork the Docker registry, as opposed to building our own or implementing something like Docker Trusted Registry or Harbor, and we did that for a variety of reasons. We didn't integrate with Harbor or Docker Trusted Registry because of concerns about integrating other products into GitLab, and the duplication and removal of features: does it make sense to have two sets of role-based access control, or two different databases, and things like that?
A
We felt that building our own registry from scratch was risky because of unknown unknowns; we didn't really know what we were going to find. Forking seemed like the most iterative approach that we could take, and it also maintains some possibility of contributing back to the open source project, which would be in line with GitLab's values as well, although we're not sure if we will absolutely be able to do that.
C
Sorry team, just to jump in there. On the integration front, there's sort of an agreed approach that we ought to be building our own in some capacity, to be able to have the features available reflect what our customers need and also what we need, obviously, because of dogfooding; we use this a lot. So there had been some questions about Harbor and the other solutions; DTR, sorry, Docker Trusted Registry, is a little bit more extensive of an investment to try and figure out.
C
It's a "we need this specific thing, how do we get it, we have to ask for it" kind of thing. Harbor itself implements the Docker distribution registry, which is exactly what we have, in exactly the same way. They have made it a point to continue to use exactly what is deployed from the open source project, and they have managed to get some contributions back into it, but they indicated that it was somewhat difficult.
A
This is where we talk through the problem to solve, the audience, the outcomes, and other information. We line up the issues, and we can use this as sort of a piece of the roadmap, because if we can see these lined up in specific milestones, we can move this over.
A
So I mentioned some of the work that's been happening towards this epic already. Everything that Dan was talking about, making those decisions (how do we prioritize that? how did we decide to fork?) was all in the "determining the path towards Complete for the container registry" issue. For context, Niko: at GitLab all of our categories talk about maturity, being minimal, viable, complete and lovable, and currently the container registry is in the viable stage, meaning that it can be used and it's adopted, but it's missing key features.
A
We talked about the multi-select delete; we're working on deleting only the selected tags from the container registry now. We recently launched a permissions change that allows the predefined variable CI_REGISTRY_USER to untag images from CI/CD. Previously it could only build and push images, publish images, and now it can actually untag them as well. So this is a move in the right direction of being able to automatically untag and remove images from CI/CD.
C
Yeah, just to jump in really quickly here as well, for context, Niko. The way it works is that the garbage collection that's built into the container registry only runs when it's in offline or read-only mode, so you have to sort of shut down; this is why the amount of downtime is a concern for some of our customers. Effectively, you can still pull images, but you can't push, depending on how you do it. And the garbage collection process only removes untagged blobs.
C
So the actual binary data is only removed if it's untagged anyway. If there's no process to untag, then garbage collection won't necessarily even help clean up, because it's only untagged items that get removed, and this is why a lot of this stuff fits together. It's a file-system-based manifest: they have a manifest that relates to the blobs, and it's basically a file structure. That's why you can't do it online right now; you'd get broken files and everything wouldn't work properly.
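A rough sketch of that mark-and-sweep behavior, as a toy Python model rather than the registry's actual Go implementation; the manifest and blob structures here are simplified assumptions:

```python
# Toy mark-and-sweep model of the registry's offline GC: only blobs that no
# (tagged) manifest references -- i.e. untagged data -- ever get removed.

def garbage_collect(manifests, blobs):
    """manifests: dict of tag -> set of blob digests; blobs: set of digests.
    Returns (kept, deleted) blob sets."""
    reachable = set()
    for digests in manifests.values():   # mark: blobs referenced by any tag
        reachable |= digests
    deleted = blobs - reachable          # sweep: only unreferenced blobs go
    return blobs & reachable, deleted

manifests = {"app:latest": {"b1", "b2"}, "app:v1": {"b2", "b3"}}
blobs = {"b1", "b2", "b3", "b4"}         # b4 is already untagged/orphaned
kept, deleted = garbage_collect(manifests, blobs)   # only b4 is collected

manifests.pop("app:v1")                  # untagging frees b3 (b2 is shared)
kept2, deleted2 = garbage_collect(manifests, kept)
```

If nothing is ever untagged, the sweep deletes nothing, which is why the untag tooling discussed above has to come before GC can reduce storage.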
A
That's an important note, thank you. So, as part of this, we added the multi-select delete, and we're updating the delete logic, which really is remove/untag logic. We need to update the pop-up message; it's a small change, but it's an important one, since we changed the overall functionality. And now we start to get into a little bit more of the meat of this issue.
A
So, the first one. This is generally how I'm trying to approach things in terms of prioritization, and I'm open to feedback on this: I'm generally trying to prioritize an API for a piece of functionality, something that we could launch first for our customers, prior to doing anything at the user interface level. Some of that is my background of building APIs and then having the front end consume them, and part of it is that it's another way of introducing functionality without having to worry about how it fits into the application.
A
We have self-managed customers using the container registry with, let's say, less than 500 gigabytes of storage. Those users can run bulk removal of tags, and they can run garbage collection with a couple of hours of downtime and clean up their registry. Yet I get probably five questions a week from people asking, "How do I get rid of some of this storage in my container registry?" So one feature here, which Jason actually recommended, is giving users some help: letting them know that these features exist. Maybe some sort of contextual tooltip or documentation in the application that would prompt system administrators and alert them that these features do exist and that they can use them. So that would be for self-managed instances.
C
So the issue there was that the next step from optimizing garbage collection would be to implement online garbage collection. I believe the issue title, "implement inline garbage collection", is mistyped, because online is what we're talking about generally, and I'm sure Camille intended the same. So I'm calling it online garbage collection, meaning that the registry is not in read-only or offline mode. That would be the next step, because the idea we have is: let's optimize what we have, and we can release that to our customers.
C
They get improved performance, they can clean up more quickly, and then let's start looking at doing online garbage collection. That'll give us a bit of a stepping stone into working with the DTR, excuse me, the Docker distribution registry, the container registry back end. Does that answer the question? Yeah.
C
No problem, cool. Sorry, that issue that we were talking about: for some reason I wasn't able to add the epic to it, I'm not sure exactly why, so I linked it in the chat here. Okay.
A
Okay, so where we're up to: we've now alerted users that these features exist, and we are optimizing garbage collection. The next pieces are all about retention and expiration policies: being able to expire images and tags from the container registry using the .gitlab-ci.yml file. We already do this for artifacts, which are handled in a similar way to packages or images, and so I'm not sure what's involved in actually making this happen.
A
Then, and this one's a little bit trickier: allow users to run and schedule garbage collection from GitLab. It's not that the user interface or the functionality is that hard, but we probably need some warnings in there, like "be careful if you have more than this amount of storage", or, I don't know, maybe this only goes to self-managed users.
A
So there are some considerations for how we handle it, and it probably depends on how effectively we've optimized that garbage collection, but this is definitely something that all of the other container registries have as part of their platform: the ability to just set a schedule, where you could say "run automatically every n days" or "every n minutes" and have it run. And then the next piece is another user interface piece.
A
So, the ability to set retention and expiration policies for container images. I was thinking that we could leverage the existing bulk removal API, either at the project or the group level, although now that I understand some of the performance characteristics of the bulk removal API, it sounds like that optimization will definitely need to happen before we consider doing this at a broader level. Right.
A
And then another one is being able to delete images or packages after merge. We do this now for branches: when you submit a merge request, you have the option to delete the branch upon merge and squash your commits. Ian had the idea, based on user research, to include an option to remove images or packages on merge as well, and that's definitely a use case that we would take advantage of, because in GitLab an image gets built every time any branch gets built.
A
That's this epic. What do you all think? I'm open to feedback. Are we missing anything? Maybe we can go through the notes now and think about whether there's any background information that's missing. Does anyone have any questions about why we chose this epic, or about any of those particular issues?
C
So I think there are a couple of blocking items here that we could move through at some level, because we need to start moving on this work, and it's obviously extremely important, not only cost-wise for GitLab but also for our customers; some of our larger customers, as Tim mentioned, are dealing with fairly large container registries. So this is important for everybody.
C
I think the slight blocker we have here is a couple of question marks around how we actually proceed. So, we're optimizing the garbage collection algorithm; we have changes in code, and we're submitting that back to the trunk, I guess is the word that keeps coming to my head because I can't come up with a better one, the main project, and that may not even go anywhere. So then, how do we take our version of the container registry, our version of the distribution registry, and deploy it internally?
C
There have been a couple of conversations around some of these issues, so that's going to be something we have to figure out. If we do end up on our own fork, we're then going to have to start updating it with any security updates that come from the main repo, the main project, so there's other work that goes along with this if we end up going that way. My preference would be to contribute back to the main project, as was mentioned earlier.
C
There's a risk in taking on this work right up front when we don't have anyone on the team who has a lot of experience running Go, because it puts them in the position of being responsible for something they don't really know that well. I have every faith in our team to figure it out, but it's still not ideal. That's why we're also going through the process of hiring a Go person.
C
I think there's going to be a little bit of shuffling around with this work that will slow it down a bit. I would also say that some of the issues, as you outlined them, Tim, have dependencies in there. For example, bulk delete: one of the ideas for bulk delete is to run each delete of each tag as an offline process, rather than a synchronous process, to help speed the whole thing up.
C
If we did that, then we're sending in a task to delete a tag, and maybe even the image or the blob it backs, and that same mechanism could then be used to do the work of scheduling deletes: you just schedule that job and it goes and runs exactly the same command. So there's some overlap here in some of this work that could be quite useful, and we're not really going to get a good handle on that.
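That per-tag background-delete idea could be sketched roughly as follows; `JobQueue` and `delete_tag` are illustrative stand-ins, not GitLab's actual job classes or queueing system:

```python
from collections import deque

# Sketch: run each tag deletion as a queued background job instead of one
# slow synchronous bulk call. Enqueueing is cheap; a worker drains later.

class JobQueue:
    def __init__(self):
        self.jobs = deque()
        self.deleted = []

    def enqueue(self, tag):
        # Returns immediately; nothing waits on the registry here.
        self.jobs.append(tag)

    def delete_tag(self, tag):
        # Stand-in for the slow registry API call that untags/deletes.
        self.deleted.append(tag)

    def work(self):
        # A background worker processes one cheap job per tag.
        while self.jobs:
            self.delete_tag(self.jobs.popleft())

queue = JobQueue()
for tag in ["app:v1", "app:v2", "app:old"]:   # "bulk delete" = N enqueues
    queue.enqueue(tag)
queue.work()
```

The same enqueue path could then serve both the bulk delete UI and a scheduled expiration policy, which is the overlap being described.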
C
That will only come as we start working through these issues to begin with. So I think that's kind of my main concern around the distribution registry, the container registry, and working through this. "Implement inline garbage collection", as the issue is titled, or online garbage collection as I'm calling it, is going to be more involved, and I think that's the big question mark for me on that particular effort, which I think is really important.
C
We need that, because even if we can help customers who have large container registries, we still have two petabytes. It doesn't matter how much the offline process is optimized; it's not going to work at that scale, so we need it online. And then we need to start thinking about how we do that: whether there's a migration involved to a new version of the distribution registry, which would become the container registry at that point, and how we'd migrate.
C
What that looks like, whether we can upgrade, what that upgrade path looks like, and then the migration process, which is something Camille and I chatted a bit about a long time ago: do we just write a script that goes and pulls and pushes everything? That would be very slow and time-consuming, but you would at least get everything that was tagged, and then, once that happened, you could drop the old container registry.
C
So there's some migration effort that needs to go in there as well, and considerations around how that happens. The other thing to consider is that as we're migrating, we're likely going to have more data usage, not less; that's something to keep in mind. I think Meyer brought that up a while ago and we chatted about it.
C
With that in mind, those are most of my thoughts. I think we are also waiting for a couple of team members, engineers, to join, so we'll have onboarding during the time that we're working through this. Then we'll have people contributing and will hopefully be able to move a little bit quicker, but it does mean that we don't have the full team yet, so some of these things might take a bit longer than is ideal until we get people on board. So that's my brain dump.
A
In terms of the garbage collection process: I've been having this idea to just optimize it, and I based that on some of the public contributions I've seen from people trying to push changes back to Docker, claiming orders-of-magnitude speedups. One: can we take the code that that person contributed, which already made it through Docker's checks but just was never merged?
A
Is
there
any
way
that
we
could
just
like
pull
that
into
our
project
and
then
is
there
anything
that
we
could
do
logistically
and
how
we
actually
break
up
garbage
collection?
Because,
right
now
it's
looking
at
its
like
sweeping
everything
across
if
we
use
git
babcom,
it's
sweeping
across
all
gitlab
comm
is
there
are
ways
that
we
could
limit
it
like.
Don't
look
for
any
tags
that
are
created
in
the
past
30
days
or
only
look
at
at
certain
projects
or
certain
groups
or
certain
instances
is
there
any
way
that
we
could
is.
C
I definitely think it's worthwhile to help customers, and I can totally be convinced; if we looked at an issue like that and the team said "yeah, this is really easy to implement", then sure, totally. But I think I would prefer to do something that gets the low-hanging fruit sorted out first, then iterate a bit more on that if we need to, which will be fine, and then start looking at online garbage collection.
C
I think that will be fairly involved either way. I don't know the actual history of it, but the comparison between what DTR, Docker Trusted Registry, does and what the Docker distribution registry does is that Docker Trusted Registry actually puts the manifests in a database, so that they can be deleted and cleaned up in real time. There are locks, obviously, but it's not the same as a file system.
C
Obviously, Camille, I don't know how that would work, so I would want to start chipping away at that work, because we know we have to go there, and then just evaluate as we're going along whether our customers need something else to be able to clean up, because none of those smaller efforts are going to help GitLab.com. So to me it's: okay, let's make sure our customers can do what they need to do, and then let's start working on something that will help us, maybe in time. Okay.
A
The other concern I have now: just this week I learned about the bulk removal process and how its performance works. A lot of the ideas in this epic are built off the idea that we could leverage the existing bulk removal API to do things like set retention and expiration policies.
C
That's fine; I think that could possibly form the basis. I don't see any reason why we couldn't have scheduled tasks created in Sidekiq, or whatever process we're using, to do that: say I run a pipeline and I've got a retention time of a week for something created from that pipeline.
C
We should run the bulk delete as a background process, because synchronously it just takes too long; the API responses are too slow from the Docker distribution registry. So I could see all that fitting together. Steve, maybe you've had a bit more experience with the way Sidekiq works; or not, feel free to just say no if you haven't.
D
I mean, yes and no. I have experience, but not necessarily a deep understanding of whether or not that's how things would go.
A
What do you think, Niko, coming in with a new perspective and maybe thinking about other systems that have these problems, where you have to manage storage and do things like garbage collection and expiration? Does it seem like anything's missing, or that we should be talking about anything differently?
E
The thing that comes to my mind is not exactly from garbage collection or maintaining any big registry, but it's a common problem: sometimes we do a lot of heavy lifting on the backend or algorithmic side, and my question is, did we maybe try to think of any piece of UI that might be simple to build but can add value here?
E
For example, having the table be searchable, highlighting whatever is very old, or showing which images are not used, so that the user can simply go in, easily detect visually what is stale or unused, and remove it manually. I know that's not the final solution, but sometimes it's much easier to build, and it relies on the human on the other side, just empowering them to actually detect this kind of stuff. I don't know if this is viable, yes or no.
A
What's interesting, and what that made me think of, is that when we spoke to Harbor, they do not support online garbage collection either, but they haven't had the same problem for their customers, because they've always had retention and expiration policies and the ability to run and schedule garbage collection from the beginning. So they didn't see the same ballooning storage cost. That's definitely one side of it. The other side, what you're mentioning, is a little bit more complicated in this case because of the way that we handle the UI.
C
Yeah, that might actually be worth adding to the epic as issues, Tim, if you felt it was worthwhile, because if we do take over and start using our own fork, then we would want to add those sorts of things: paging through API calls, sorting the API responses, and then ultimately building a schema that supports all of that. Does that make sense?
C
How do we do ordering in a way that's helpful for our customers? The issue is that when we get to, say, fifteen items, it won't work, because then you go to another page and the ordering goes away. So the ordering is never going to work purely client-side: you pull the response back, you've got the objects in JSON or a dictionary or whatever, and you can order them however you want at the front end, but your next call won't reflect that order.
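A small sketch of that pagination problem: sorting each page client-side gives a different result from sorting server-side across the whole result set, which is why ordering has to live behind the API.

```python
# Tiny illustration of why client-side ordering breaks with pagination.

tags = ["v3", "v1", "v4", "v2"]   # order the registry API happens to return
page_size = 2

pages = [tags[i:i + page_size] for i in range(0, len(tags), page_size)]
client_sorted = [sorted(page) for page in pages]   # front end sorts one page at a time
server_sorted = [sorted(tags)[i:i + page_size]     # what users actually expect
                 for i in range(0, len(tags), page_size)]
```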
C
So it's viable, but what that means is: a few years ago, when we implemented the Docker distribution registry, we built a shim in the API in the monorepo to actually interact with those APIs, and not a whole lot else has been done since. And just to call out what Tim said around Harbor and the way they implemented it.
C
The little gap we have there is that even if we implemented that tomorrow, we still have a whole bunch of stuff being stored in the container registry that we don't really know the state of, which is why it's kind of like, okay, how do we clean this up in a way that makes sense? I think your call-out about looking at what we can empower our users to do is a great one, so I don't mean any of this to shut that conversation down; it's a great conversation to have, and I really appreciate any thoughts you have around that as a team. Finding solutions for our customers is obviously our priority here, so thank you for bringing that up.
A
It does lead me to the next epic, which I think we'll get to next week: creating visibility and transparency for our stage, for the package and container registries. I will go over it next week, but I'll just touch on it quickly. Right now Ian is out front doing user research; we have a survey out asking: why do people come to the GitLab package registry UI or the container registry UI? What are they looking for? What data are they expecting?
A
What functionality are they expecting there? We're working on some new designs for that, which hopefully will help as much as possible with those things, and we'll review more of it next week. For the issues that we have planned there, I actually took sorting out of that list, because I thought that one was a little too risky; I'm focusing more on the metadata that we include, and on what to build, like Dan was saying.
A
We basically built our container registry using the, what's the second D for, distribution, I haven't had enough coffee yet: the Docker distribution registry. And it looks very much like Docker Hub; if you go to our UI, it looks very similar, displaying the same information. But we're not Docker Hub; we're GitLab, and as part of that we need to build something that makes sense for us.
A
I think a big part of that is making sure that we're giving contextual help: showing how an image was built and by whom; was it built with CI, and if so, which pipeline or which job; if it comes from a Dockerfile, what commit, and where does that Dockerfile live? I think that's really going to make our product much more useful for users.
A
I'm really excited about that next epic as well. I always think about things in terms of value steps: the first thing that we could do is give people a way to build and publish images, and then the next thing we have to do is give them a way to delete them.
C
I think this is something we talked about a little while ago, Tim, a couple of times, and it would be good to have you and Niko in a conversation about what the GitLab way to do this is. Do we care about these artifacts? Sorry, "artifacts" is an overloaded word, I don't want to use it, but do we care about these images? Where do they go? How long do they live for? Do we need our users to be involved in that?
C
Do our users have an understanding of what that means? Like, if I run a pipeline and it produces an image or a tag or whatever, and the pipeline finishes successfully and they merge it all, do they need it anymore? Should we even have a user be involved? What would be the point, and is that something we need to actually look at and care about? And yeah, we give options, but, you know, MVC, minimal viable change.
C
Maybe it makes more sense to just automatically delete those things when the branch goes away, as Tim sort of mentioned earlier, than to leave them hanging around for X days because we have an expiration policy. But that depends on the user research that Tim is doing, which is awesome. When we find out what our priorities are, maybe it turns out everyone uses CI the most and that's where most of it comes from.
C
Maybe it just makes sense that once they merge that pipeline, the whole thing goes away. But there are lots of cool problems to solve here; there's a big pile of them, so it can be kind of stressful, but it's awesome that we get to solve a bunch of these problems, because it's almost like we're doing something from scratch in a company that's totally not from scratch, right?
C
And Steven, that's an awesome call-out, thank you. What I was getting at before is the work that Gigi was doing on cleaning up by replacing a blob with a dummy blob; that actually goes toward solving the problem of reducing actual costs, and that was my understanding of the work he was doing. I think he said in one of his latest updates that he is kind of going back to that.
A
And we are at time. I wanted to say one more thing, which is that I really want these meetings to be useful. I feel like I talked a lot today, which is maybe okay for now, but I want everyone to feel like we're contributing to this vision, and to get to the point where it becomes much more of a discussion. So I'm very interested in making sure these meetings feel useful and that we're respecting people's time, so I'm going to send out a short survey.
A
It just asks a few questions, like: was this effective? Did the agenda work? Just because I want to continue to improve, to make sure that we can keep these discussions going and useful. I'd appreciate it if you could fill it out when you have time; that would be great. I'll end on time, and as Dan mentioned, we might stick around and have a brief follow-up discussion. So I'm going to stop the recording. Thanks.