A
The flip side of that is, we got some feedback right away from the community that they want support for this for all projects. I was talking to those users in the community and saying, well, we have these concerns. So this conversation is a follow-up to that, to say: what has to be true for us to roll this feature out to all projects, what are some of the concerns that we have, and how can we help address those concerns?
A
So I had some items added to the agenda, and I think, Steve, you had the first one. Okay, so the thing that I brought up was: is it possible to turn this on for all existing projects, and can we think about self-managed instances versus GitLab.com separately? And I think, Steve, you had the first kind of comments in the doc, yeah.
B
And so I was kind of just starting a comment from, like, you know, what kind of things can we possibly do here in terms of turning it on, from my perspective, which is just from the "how can we turn this on for some people" angle, but it doesn't necessarily solve the problem of overloading the system. So from the not-overloading-the-system standpoint, for self-managed instances we could add a configuration option, something in the admin options, maybe, for just being able to turn it on and off at the admin instance level.
B
However, you know, that's probably something that we don't necessarily want to do until we know what happens if someone does turn it on and they do have a lot of tags that are going to be cleaned up right away. And then for GitLab.com, from the standpoint of turning it on, really there's just some validations in place that we need to remove in order to allow these projects to run it. But we do need to first address the concern about performance.
C
Is it possible to... so it's only for new projects? I'm not, I'm not sure of the mechanism by which we determine that. But is it possible to, you know, ease that back and say, like, projects that are a year old or newer, so that we gradually do it and don't overwhelm the system?
B
We could conceivably do something based on time. Right now, the way it's set up is that for all new projects, when they're created, a database record for a container expiration policy is automatically created and associated to that project. So pretty much we're saying: if a project doesn't have a container expiration policy, then we don't allow them to create one. It's only if it's created automatically when the project is created that we allow it to happen.
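As a rough illustration of the mechanism described above, here is a minimal sketch, assuming hypothetical names (`Project`, `ContainerExpirationPolicy`, `create_policy_for`) rather than GitLab's actual models:

```python
# Hedged sketch of the behavior described above: a policy record is created
# automatically with each new project, and standalone creation for
# pre-existing projects is rejected. All names are illustrative.
from dataclasses import dataclass
from typing import Optional


@dataclass
class ContainerExpirationPolicy:
    project_id: int
    enabled: bool = False
    keep_n: int = 100  # e.g. "keep the 100 most recent tags"


@dataclass
class Project:
    id: int
    expiration_policy: Optional[ContainerExpirationPolicy] = None


def create_project(project_id: int) -> Project:
    """New projects automatically get an associated policy record."""
    project = Project(id=project_id)
    project.expiration_policy = ContainerExpirationPolicy(project_id=project_id)
    return project


def create_policy_for(project: Project) -> ContainerExpirationPolicy:
    """The validation mentioned above: only the auto-created record is
    allowed, so older projects without one are refused."""
    if project.expiration_policy is None:
        raise ValueError("expiration policies are only created with new projects")
    return project.expiration_policy
```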
B
I think one of the concerns is just if someone turns it on on a project that has a large amount of tags, regardless of age. So, you know, for example, if we turned it on on GitLab, there's a very large number of repositories with a very large number of tags, and if someone turned it on and said "keep 100", then that would probably break things.
B
There's two sides of it. One, we're calling that endpoint, which is maybe known to be slow. And two, when we call it, we're finding all of the tags we want to delete and then calling it for each tag, one at a time, rather than making a single request for all tags. So I don't know if there are different approaches we can look at to help with that, but that's something I was kind of curious about. Yeah.
D
There's a difference between deleting a tag from the API or the front end and deleting a tag asynchronously, which is the one I believe we're talking about here. In 12.9 we are introducing a performance improvement for tag deletes, both for the API and the UI, and that has moved from four or five requests to delete one tag to just a single request to delete a tag. So that's much less effort on the network and on the systems to delete a tag. But the improvement was not applied to the asynchronous deletion.
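A rough sketch of the request-count difference described here; the endpoint paths are assumptions for illustration, not the registry's confirmed API (as noted later in the discussion, the single-request path only works with the GitLab container registry):

```python
# Hedged sketch: old multi-request tag deletion versus single-request
# deletion. Endpoint paths are assumptions for illustration only.
import requests

REGISTRY = "https://registry.example.com"  # hypothetical registry host


def delete_tag_old(repo: str, tag: str) -> None:
    """Old behavior: several round-trips to remove one tag."""
    # 1. resolve the tag to a manifest digest
    resp = requests.head(f"{REGISTRY}/v2/{repo}/manifests/{tag}")
    digest = resp.headers["Docker-Content-Digest"]
    # 2..n-1: further round-trips (fetching/re-pushing manifest data) elided
    # n: delete by digest
    requests.delete(f"{REGISTRY}/v2/{repo}/manifests/{digest}")


def delete_tag_new(repo: str, tag: str) -> None:
    """New behavior: one request, against a registry that can delete a tag
    reference directly (a third-party registry may not support this)."""
    requests.delete(f"{REGISTRY}/v2/{repo}/tags/reference/{tag}")
```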
D
We have to change a class, but it's definitely possible. We only changed the API and the front end, because that was what was being asked on the issue, and we also wanted to do a controlled rollout of the new feature: in case there was something going wrong with the new delete, at least the asynchronous part would still work as expected. But now that the new feature has been live for more than a week and everything is OK, we can probably roll that out to the asynchronous deletes as well.
D
So that will only work if the container registry being used is the GitLab container registry, so that will only work for sure for GitLab.com, because in the other places we are not sure what registry people are using, whether it's our registry or a third-party one. So even with the performance improvements, if people are using a third-party registry, it will default to the old behavior, which is the slow one. Okay.
D
And I think the key here is probably to figure out an algorithm to let us roll this out progressively, to make it available to more and more projects, like every day or at least every week. Every week we could unlock the feature for projects created up to a week before the release happened, and we can continue to do that for several months, for example. Because, yeah, if we turn this on for a lot of projects, we may have problems, but...
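A minimal sketch of that progressive unlock, assuming a weekly schedule and hypothetical names (the dates and growth rate are placeholders, not the team's decided values):

```python
# Hedged sketch: widen the eligibility window by one week, every week after
# the release, so older and older projects gradually gain the feature.
from datetime import date, timedelta

RELEASE_DATE = date(2020, 3, 22)    # placeholder release day
WINDOW_GROWTH = timedelta(weeks=1)  # widen by one week, every week


def feature_unlocked(project_created_on: date, today: date) -> bool:
    weeks_since_release = max((today - RELEASE_DATE).days // 7, 0)
    cutoff = RELEASE_DATE - WINDOW_GROWTH * weeks_since_release
    return project_created_on >= cutoff
```

At release only brand-new projects qualify; a month in, projects created up to four weeks before the release do too, and so on for several months.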
D
Depends on the load. I'm not sure on the limitations of the async queues in Sidekiq that we are using for the deletes, and of course it also depends on the number of instances of the container registry that will be under load under those circumstances. So, to be honest, I don't think we will be able to answer that with a high degree of certainty before we actually try and expose that feature to more and more projects, because...
C
This is an async job anyway. Is it possible to put these tags into a common queue, and then, you know, whenever the job is run, we have a maximum limit that we know is acceptable and that will produce an acceptable amount of load, and we just chip away at the queue? Because I think this addresses the problem of, well, what if there's one repository that's going to delete 10,000 tags at once. This might resolve that. I think an issue with this is that, you know, GitLab.com may...
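A minimal sketch of that common-queue idea, assuming an in-memory deque and a hypothetical cap (in GitLab this would be a scheduled Sidekiq job, not a Python loop):

```python
# Hedged sketch: pool all pending tag deletions in one shared queue and let
# each scheduled run delete at most a known-acceptable number.
from collections import deque
from typing import Callable, Deque, List, Tuple

MAX_DELETES_PER_RUN = 100  # assumed "acceptable load" figure

pending: Deque[Tuple[str, str]] = deque()  # (repository, tag) pairs


def enqueue_cleanup(repository: str, tags: List[str]) -> None:
    pending.extend((repository, tag) for tag in tags)


def run_cleanup_job(delete_tag: Callable[[str, str], None]) -> None:
    """One scheduled run: chip away at the queue, never exceeding the cap.
    Note a single FIFO can starve a small request queued behind a huge one,
    which is exactly the concern raised next."""
    for _ in range(min(MAX_DELETES_PER_RUN, len(pending))):
        repository, tag = pending.popleft()
        delete_tag(repository, tag)
```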
B
I think I missed what you're saying about using a queue system. So you mean a queue for each individual run? Because, like, I'm thinking, if you have someone that runs a 10,000-tag cleanup and then someone that runs a 5-tag one, but it gets queued after the 10,000, then, you know, for the person that's just trying to delete the 5 tags, it never seems to happen. Yeah.
D
And I think another way would be to apply throttling on the queue. So even if someone requests the deletion of 1,000 tags, we could, for example, define that the queue will not dispatch more than 100 deletes per minute, or per hour, let's say. So requests will just queue up, and they will be served at a constant pace rather than in bursts. At least that would be a controlled approach; we would always have an expected behavior, because the load will always be the same. I'm looking at...
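A minimal sketch of that dispatch throttle, with an assumed rate of 100 deletes per minute; a real worker would reschedule itself rather than sleep:

```python
# Hedged sketch: no matter how many deletions were requested, dispatch at
# most a fixed number per minute.
import time
from typing import Callable, List

DELETES_PER_MINUTE = 100  # assumed rate from the discussion


def throttled_dispatch(tags: List[str], delete_tag: Callable[[str], None]) -> None:
    for i, tag in enumerate(tags):
        delete_tag(tag)
        if (i + 1) % DELETES_PER_MINUTE == 0:
            time.sleep(60)  # a real system would re-enqueue, not sleep
```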
F
Maybe we can directly limit the algorithm output, right, and not throttle the queue. We can say, if the algorithm works out that the policy means deleting 10,000 tags, it is going to be capped at 100; even 101 will go back to 100. And we can document this for the user directly in the UI and say: we don't delete more than 100 at once, and then it will take a while before their retention policy does its job.
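A sketch of that alternative: cap what the policy algorithm emits instead of throttling the queue (the cap of 100 comes from the discussion; the function name is hypothetical):

```python
# Hedged sketch: truncate the policy's output, deferring the rest to later
# runs, as would be documented to the user in the UI.
from typing import List

MAX_TAGS_PER_POLICY_RUN = 100


def tags_to_delete(candidates: List[str]) -> List[str]:
    """If the policy selects 10,000 tags, only the first 100 are deleted this
    run; the remainder waits for subsequent runs."""
    return candidates[:MAX_TAGS_PER_POLICY_RUN]
```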
D
So if the registry is under really heavy load, it may stop processing those requests or take too much time, and eventually a timeout will trigger and we will start impacting other functionalities beyond the deletion of tags under the expiration policy. So I think it is worth finding a limit to make sure that everything else is not affected.
D
I think that may be excessive, because the required functionality is there. The only difference is that if they are using the GitLab container registry, the actual deletion will be fast, but people that don't use the GitLab container registry should also be able to use it, because the underlying functionality exists; they are still able to delete tags using the known workaround, which is slower. Okay, we can possibly just add a note saying: if you're not using the GitLab container registry, your deletion process will be around 80 percent slower.
D
So maybe that will also convince people to move to our registry at some point, but I think just blocking it and not making it available to self-managed instances, or those not using our container registry, is maybe a bit excessive. I would prefer limiting it with throttling rather than actually limiting the functionality.
A
Okay, so we could enable that. Well, the other change we were talking about is the asynchronous updates, and that requires just changing something we've already done and applying those changes. Once we do that, can we then focus on enabling this for self-managed instances? Does anything else have to happen on our end to make that work, or are there any other concerns preventing us from doing that?
D
We can apply the first technique to the asynchronous jobs whenever we want; it will only work if the project, for instance, is using the GitLab registry, and that's the only limitation. I've refrained from doing that because the problem was on the UI: it's not an issue if a delete takes half a second more when it is an asynchronous job. But now we are talking about a lot of load and a lot of deletes happening asynchronously, so yeah, performance will matter as well, so yeah, we should.
D
We should apply that as well. I would prefer to do that once we have a decision around the long-term compatibility plan with third-party registries, so that we don't end up having more and more workarounds on top of workarounds to maintain compatibility with other registries. But if this is a concern now, we have to do it.
B
It sounds to me like there are sort of two aspects we kind of hit on. One is turning on the change that's already made for the async version, which solves GitLab.com and anyone using our container registry, and then implementing some sort of throttling to handle all other instances of this. But it sounds like, for safety, we need the throttling before we turn on the other version, because if we turn it on, then anyone that's not using our container registry might have problems.
B
So it sounds like the two steps are: implement some sort of throttling, and then update the async version to use the fix that's already in place. So the next step would be, like, you know, figuring out how the throttling would work. We can probably continue that discussion in an issue on a more technical level, but I'm guessing that sounds like the path, unless anyone has any other thoughts. Yeah.
D
Throttling is mandatory; the performance improvement is nice to have, basically. I think it's worth looking at the existing throttling mechanisms, the documentation about throttling for Sidekiq jobs. Maybe we can leverage that to throttle without changing the code, which would be nice, but...
F
There is something that we need to do short term, and we were discussing this with Steve today. So the expiration policy is available for new projects, right, but it is not turned on by default. So if a new project is born in 12.8 and they go on for three months, they pile up one million tags; it becomes the same problem as an old project which may have a million tags accumulated over three years.
D
I think it would be good to give a heads-up to the infrastructure team, saying: we have a feature here, we are going to turn it on, can everyone please keep an eye on statistics and metrics to make sure that we are not overloading the system, and in case something goes wrong, we can act and either disable the feature or spin up more resources to handle it. Yeah.
B
I'm wondering, if we do turn it on, I feel like we'd want to let users know. Because, you know, suddenly any project that has a container registry or container expiration policy is going to start having tags deleted. And I suppose anything that up to this point is currently turned off, we shouldn't turn on; we shouldn't just turn it on on users that are unsuspecting. But if a user creates a new project and... and you...
D
It is already, yep. Yeah, there is already a warning for Auto DevOps folks: whenever you create a project, you see a warning saying Auto DevOps has been enabled, and if you don't want it, just configure it yourself by creating a GitLab CI file. Maybe we can have something like that, saying an automatic expiration policy has been set up; if you don't want it, go ahead and turn it off in the settings.
F
One thing that we could do immediately is to display on the container registry page a message that says "the next policy is going to run at this time." At the moment we already have that ready in the API, so we can just put it on the front end. And then we could have some nice-to-haves, like saying "X amount of tags are going to be expired," or "this tag is going to be expired."
E
It'll be good, as long as we get started where we kind of make it obvious that the feature is now enabled. And then, as we train our users that this is the default of what we expect, we can start winding that down afterwards, and we can do that during the rollout of that expiration policy as well. There are quite a few areas in GitLab where we kind of display, like, "hey, this feature is on," just so...
A
It sounds like there are four issues to open. There are two front-end issues: one is to display when the next policy is going to run, and another is about enabling it by default and saying that, you know, this is enabled by default. Then there's the throttling issue, where maybe we'll luck out and we can already control that with the existing mechanism. And then there is the issue of turning it on for all existing projects, first for self-managed and then for GitLab.com.
D
Okay, and the performance improvements for this, okay. So yeah, it looks like we can work on some of these in parallel. The first ones should definitely be the warning on the container registry page showing when the next one is going to run, and also the warning for new projects that enable it. But before that we need to work on the throttling, that's mandatory; then we can enable it and optionally improve the performance of the async deletes.
A
So, [unclear] and Haley, can you open the throttling issue and the change for rolling out the asynchronous deletes? And then Nico and Ian, can you open the issues for the front end? And then, Steve, do you have any issues to... you could use the existing issue that we have, that's the rollout support for all projects. Yeah.
A
Cool, thanks everyone. It's hard to see all that feedback in the issue of people saying that they were mad that this didn't cover them, but at the same time, it's nice to see that people really want this feature and that, you know, they're anxious to have it in their hands. So it's cool to see, and hopefully we can get this out and in their hands soon. And I'm gonna stop recording.