From YouTube: 2021-11-10 GitLab.com k8s migration EMEA
A
I personally do not have anything to demo; I'm still just chugging along with GitLab Pages, which I will talk about a little bit later. Henry, I see that you would like to showcase something related to, unfortunately, rolling back nginx.
B
Yeah, let me talk about this so I feel better, because it really makes me sad. I spent a lot of time trying to digest what is going on there, but I couldn't really find a root cause until now. Even so, I have some suspects. I also don't have a solution for now, and that's why I needed to dial back and bring nginx in between HAProxy and our API fleet again.
B
So let's maybe start with what happened. We removed nginx from API. We had also done this for web before and for other endpoints, because we saw that nginx isn't really needed for anything.
B
What we used nginx for was upload request buffering for certain endpoints. I think it was actually enabled generally and only disabled for one endpoint. Either way, we used it for upload buffering, and we concluded that Cloudflare in front is doing this for us anyway.
B
For us, we don't need it anymore. Upload request buffering protects us from things like slowloris attacks, where a client sends data very, very slowly, so Workhorse or Puma would need to keep connections open for a very long time, and a lot of those connections; nginx was protecting us from that.
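For illustration, here is a minimal Go sketch of what upload request buffering means in practice. The listen address and backend URL are made up, and this shows only the general pattern, not the actual nginx or Cloudflare implementation:

```go
// Sketch of an upload-buffering reverse proxy: absorb the whole body
// from the (possibly slowloris-slow) client first, then replay it to
// the backend in one burst, so Workhorse/Puma connections stay short.
package main

import (
	"bytes"
	"io"
	"net/http"
	"net/http/httputil"
	"net/url"
)

func main() {
	upstream, _ := url.Parse("http://localhost:8181") // hypothetical backend
	proxy := httputil.NewSingleHostReverseProxy(upstream)

	http.ListenAndServe(":8080", http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		body, err := io.ReadAll(r.Body) // a slow client only ties up this edge layer
		if err != nil {
			http.Error(w, "bad request body", http.StatusBadRequest)
			return
		}
		r.Body = io.NopCloser(bytes.NewReader(body))
		r.ContentLength = int64(len(body))
		proxy.ServeHTTP(w, r) // forward the fully buffered request upstream
	}))
}
```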
B
But Cloudflare is doing the same for us, so what we did is remove nginx and direct traffic from HAProxy directly to a newly introduced internal TCP load balancer, which is the service endpoint for our Kubernetes deployments. And from then on, we got alerted with frontend alerts.
B
So we got alerts for the frontend service error ratio. Let me see if I can go back in time here to see the effect, say for two days. Today, around this time, I changed it back and brought nginx back in front of API. Before that, you can see we had a very elevated error rate, especially those spikes, which mostly happened during deployments. This was triggering alerts, which isn't nice, and it was even much more visible in Cloudflare.
B
And look at this by path. Over time you can see: before, we had nothing, and after I turned nginx back on, we now also have nothing, or nearly nothing, just not visible because of the scale now. And in between, with nginx removed, you can see we have up to, I don't know, 13,000 or something. I guess this is per minute.
B
I'm not sure if this is per second, I'm not sure of the unit here, but at least it was a big difference, especially for this endpoint, the jobs request endpoint. As you probably know by now, that is the long-polling request that runners are doing to Workhorse, which Workhorse delays by 50 seconds before answering, because we want to throttle runners down from polling all the time and killing API with that.
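As a rough illustration of that long-polling behavior, here is a minimal Go sketch; the endpoint path and the 50-second hold match what's described above, but the handler itself is hypothetical, not Workhorse's actual code:

```go
// Sketch of the long-poll pattern: park the runner's job request for
// up to 50 seconds, then answer 204 No Content if no job turned up.
package main

import (
	"net/http"
	"time"
)

func main() {
	http.HandleFunc("/api/v4/jobs/request", func(w http.ResponseWriter, r *http.Request) {
		select {
		case <-time.After(50 * time.Second):
			w.WriteHeader(http.StatusNoContent) // nothing to hand out; runner retries later
		case <-r.Context().Done():
			// The client, or something in between, dropped the connection.
		}
	})
	http.ListenAndServe(":8080", nil)
}
```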
B
So we have long-running requests showing a lot of errors, and I was trying to dig into that, and along the way I found several problems. For instance, apparently the logs in BigQuery are not really complete, I think, because I didn't find all the errors I saw on the HAProxy frontend side when looking into our log files.
B
So we need to look into that. If I look directly at the HAProxy logs, I see that we get 502s quite often, and they are always because this special endpoint here is terminated from the server side. The TCP connection is terminated; it's not that HTTP finishes nicely or anything, it's truly cut off.
B
The
only
thing
I
really
found
was
that
workhorse
is
a
go
application
and
go
introduced.
A
default
tcp
keeper
live
time
setting
of
15
seconds
a
while
ago,
which
means
that
after
a
connection
was
opened
after
15
seconds,
a
tcp
keeper
live
package,
which
is
just
an
tcp
package
will
be
sent,
and
but
after
that,
normally
the
tcp
keeper
live
interval
setting
should
come
into
place
when
the
next
package
is
sent,
which
should
be
something
like
several
minutes
on
linux.
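In Go's standard library, a KeepAlive of zero on net.ListenConfig means the default probe period, currently 15 seconds, and a negative value disables probes. A minimal sketch of how a Go server could deviate from that default follows; the address and the 60-second value are arbitrary:

```go
// Sketch: accept connections with a non-default TCP keepalive period
// instead of Go's 15-second default.
package main

import (
	"context"
	"net"
	"net/http"
	"time"
)

func main() {
	lc := net.ListenConfig{
		// 0 means Go's default (currently 15s); a negative value
		// disables keepalive probes entirely.
		KeepAlive: 60 * time.Second,
	}
	ln, err := lc.Listen(context.Background(), "tcp", ":8080")
	if err != nil {
		panic(err)
	}
	http.Serve(ln, http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("ok\n"))
	}))
}
```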
B
What I did is try to tcpdump connections on HAProxy and also on one of the Kubernetes nodes to see what's going on there network-wise, and I could confirm it: for single connections that I picked out for this endpoint, I can see that really every 15 seconds a TCP keepalive packet is sent, and then soon packets return from HAProxy. So this seems to be working.
B
I wasn't able to find a connection which was just breaking off, like after 30 seconds no further TCP keepalive was returned or something, but maybe that's just because I missed it. With this amount of traffic it's super hard to find a broken connection with exactly that pattern. Maybe I just wasn't able to pick it out, because this is super hard to search for, and that's where we are.
B
I saw the same from the HAProxy side with tcpdump. One suspicion I have now is that we have the internal TCP load balancer in between HAProxy and Kubernetes, and the TCP load balancer might be a network component which is dropping TCP connections.
B
The load balancer has to track that a packet belongs to a connection it already knows. But if this table is overflowing because it gets too full, or maybe if there's a timeout for how long connections are kept in this table, then after a certain amount of time a new packet from that connection will come in, the load balancer will look up which connection it belongs to, it will not find it in the table anymore, and it will maybe say: oh, I don't know this connection, so I'm just terminating the connection here, or something like that.
B
I'm not sure if this is the right explanation, but it's something that comes to mind, and if you think about it, the connection is always terminated after exactly 15, 30 or 45 seconds. That means if a TCP ACK packet comes in after nothing else happened on the connection, because Workhorse doesn't do anything for 50 seconds, then this one ACK packet maybe is just not recognized anymore as belonging to a connection, in some cases.
B
I don't know, that could be one thing. Another thing could be a strange interaction with pod cycling or something like that, but I'm not sure what the mechanics could be there. Where we go from here: for now I enabled nginx again, and everything is looking as before.
B
The bad thing about this is that then it's hard to do anything about it. If it's the TCP load balancer, we need to see if Google can do something about that. One solution could be to upgrade to a newer HAProxy version which supports retries, which is the same thing nginx does for us in case of TCP connections being broken.
B
I think all requests which are taking a long time are affected by this, so requests which take a long time to get an answer back. This is mostly the long-polling requests, but we also have a few other API calls which take a lot of time; compared to the runners' requests, though, they are fairly minimal.
C
Now, yeah, I'm trying to figure out if this is something where the client is just sitting idle waiting for an answer, because the runner ones are the type of requests where the clients just ask for some workload and wait, so there's nothing coming back in the meantime. This one, I'm not sure.
B
A lot of stuff, and then after a while an answer. But if you look here, after I stopped, after I brought back nginx, you don't see those yellow spikes anymore, right? Yeah.
A
So I've got a few questions and comments. Question is: we still have nginx disabled inside of our staging environment, correct? Yes? Okay, so we should theoretically have the ability to reproduce this error inside of our staging environment.
A
Okay, with nginx back in place in production, have we removed any silenced alerts at this point, just to make sure that we're back in an okay state with that? Yeah?
B
They expire today. That's why I turned it back on in production again, because I didn't want to, you know, extend the silences again. I extended those silences day over day for a while now, but I decided no, that's enough. I did not find a solution, and I think looking into what's happening behind nginx is the next best thing to do, to prove whether it's maybe happening on the load balancer or whether it's also visible behind nginx.
A
We should get someone on it pretty soon. I don't think it should impact what you're working on; if anything, it should make it easier if it's fixed, but it should be addressed regardless. Yeah, okay. So just to highlight the next steps: we've got this enabled again, unfortunately, but what do we want to do next at this point?
B
I think on my list I have two things. The one thing is looking into the nginx logs now, to see if I can prove this is still happening, or whether it vanished because we no longer have the TCP load balancer in the path. After getting an answer from that, I think it would be a good idea to escalate this to Google support, so they can give us some answers from experts on how the internal networking might interfere with TCP keepalive packets. And an interesting thing I would like to try is to change Workhorse to not use the default keepalive.
A
I'd be curious if we should potentially toy with changing the API long polling to see if that might have any sort of impact. Like, we set it to 50 seconds, but what if we set it to something below 15? Maybe we'll get a response within those 15 seconds, and thereby that removes this HTTP 502.
A
That would mean nearly three times more requests coming to us, which would induce a little bit more load, but it would be proof that, you know, there's still something we need to look into in some way, shape or form. But that might be a mitigating factor we could consider, if that's something we want to think about.
B
No, it wouldn't. It would just, you know, make the problem go away for those connections, but we still wouldn't know exactly where it's coming from. One interesting fact to know is that apparently the Google internal load balancers, for deciding if a connection is alive, look at whether there is data sent on the TCP connection, but, if I read some comments from Google engineers correctly, they don't treat TCP keepalive packets, as simple packets, as data sent over the connection.
B
Yeah, I think it's not nginx; it's in between nginx and Workhorse, right, between HAProxy and Workhorse, where things are happening. Workhorse is doing the right thing, HAProxy is doing the right thing, both are sending and returning keepalive packets, but in between something is happening that closes the connection.
C
If this is true, that's why in that case nothing happens, right, because they exhibit the expected behavior, compared to the internal load balancer. I was trying to find out if there are some kind of configuration options for the internal load balancer, but no.
B
I wasn't able to find anything either. Yeah, I just saw you can set this one timeout setting, I think, which by default already is, I don't know, five minutes or something, so that shouldn't affect those connections that are taking 50 seconds.
B
Then we have the first HTTP packet with a POST request sent, and then, if you look at the timing, after 15 seconds here you see the first TCP keepalive ACK packet sent and answered, then 15 seconds later again a TCP keepalive, answered, then 15 seconds later again, and then after 50 seconds Workhorse decides, okay, now it's time to answer the request, and it all worked fine. So I managed to find a lot of those working connections.
B
I couldn't find any example where it broke in the tcpdump, maybe because Wireshark is not able to track lost ACK packets back to a connection and things like that. I get a lot of, you know, lost ACK packets which can't be attributed to any connection it knows about, so you just have a lot of ACK packets which Wireshark doesn't know where they belong, and I can't figure out if they are coming from one of those connections that I see here.
A
Oh man. So I guess at this point there's a few things that I would encourage us to look at in the near future. One, maybe talk with the runner team to see if there's a way we could mimic a runner request that hits this endpoint appropriately and forces it to wait our configured 50 seconds, after which you should get a 204.
A
That way you could run this yourself, and you'd have a way to capture this and hopefully recreate the issue, against staging probably, yeah.
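A minimal Go sketch of such a reproduction probe, assuming a hypothetical staging URL and a placeholder payload; the real request format would have to come from the runner team:

```go
// Sketch: mimic a runner's long-poll and log how each attempt ends.
// A clean run should log "HTTP 204" after roughly 50 seconds; a
// TCP-level termination surfaces as a transport error instead.
package main

import (
	"bytes"
	"log"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 90 * time.Second}
	for {
		body := bytes.NewBufferString(`{"token":"REDACTED"}`) // placeholder payload
		start := time.Now()
		resp, err := client.Post(
			"https://staging.example.com/api/v4/jobs/request", // placeholder URL
			"application/json", body)
		if err != nil {
			log.Printf("after %v: %v", time.Since(start).Round(time.Second), err)
			continue
		}
		log.Printf("after %v: HTTP %d", time.Since(start).Round(time.Second), resp.StatusCode)
		resp.Body.Close()
	}
}
```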
B
And they also happen in clusters, right? You get several of them. If you look at one of the load balancers, you see them pop up, like three or four or 20 of them within, I don't know, 10 or 15 seconds, and then nothing happens for minutes, and then it happens again. I think this correlates to scaling events somewhere.
A
Yeah, let's say, Henry, why don't you work on that outside this meeting? That way we can stay within our time bounds for this meeting.
B
The funny thing is that no SLO was affected at all by this. Just the frontend error rate was going higher, but nothing that was really visible in our SLOs or really affecting customers, besides just one single special endpoint that was taking longer. I think that's because runners don't care about this; they just retry, and that's it.
A
Well, I'm sure we do, yeah. Something else: you mentioned potentially reaching out to Google. I would advise you to try to do that as soon as possible. You know it takes a long time to interact with Google support, especially getting them on the same page as you are in terms of what you want them to help you solve. Yeah.
A
Keep that in mind. Cool, anything else? No? All right. The only thing I wanted to discuss was a little bit about the current state of GitLab Pages. I've got the readiness reviews out the door, so we now have those being reviewed by infrastructure, security and engineering.
A
I did come across a bug in our Helm chart yesterday at the very end of my day. I got a patch for that in already, and the distribution team merged it, so prior to us pushing any traffic into Pages in production, I need to upgrade our Helm chart across the board, the chart that we consume. Other than that, the configuration audit is pretty much complete. This bug was just fixing environment variables that were not being populated to the GitLab Pages service.
A
So I plan on starting to build the procedure, the same exact procedure we did for staging; I plan on just copying and pasting it and getting it ready for production. I'm hoping maybe sometime next week we could get this started, assuming all is going well, so that'll be exciting.
A
I have the due date for the readiness reviews to be completed on Tuesday. So as long as all conversations are closed, maybe by Wednesday we start the process, for the first time, of shifting traffic into Pages.
C
So when Henry was trying to debug this issue, I came up with this from remembering things that happened in the past: I was quite sure that at least inside of Omnibus there was a special section inside nginx that was kind of tightly coupled with the product, so special routes had special configuration. Then we removed nginx from the mix, so maybe this is the reason why it broke, because there are special rules. Now, this link here points to the same rule in the chart.
C
So Graeme was so kind, he was investigating my claims that these things were there and found the relevant configuration part. My point is, well, it's not a real point, it's more that at the distribution level we think of nginx as part of the product, up to the point that special configuration can happen at that level.
C
It's tricky for us to remove it, right, because some part of the application may live in that nginx configuration. I do understand what you're saying about having Cloudflare up front that basically undoes all the configuration happening there, because there is a known buffering exception for artifact uploads, and having Cloudflare in front buffers everything. But the point is that, for instance, the runner was designed to stream content, and this content is no longer actually streamed, because it is buffered up front, right?
B
Yeah, I think we did a lot of investigation into what nginx is doing for us when we started to think about removing it for web, and we investigated the meaning of the current settings we have and what they are doing for us. Really, after looking at this for a long time, we discovered: okay, it's not really doing anything for us right now. Cloudflare taking over upload request buffering for us is basically taking over everything that we need from nginx.
B
Okay, we miss the retry functionality of nginx, but I'm not sure how often that actually does something for us, and it's also kind of dangerous, because if you retry endpoints which are not idempotent, it could also cause harm, right? But we've been living with that for a while now, and it seems to just work. And I think the discussion about nginx being part of the product, and whether we should use it because of that, we've had in other places too, also with the distribution team.
B
I think what we concluded is that, for us, it's not bringing enough value to use it, and we are special in many ways anyway. Also, wasn't there some thought about using something totally different than nginx? So, I don't know, maybe that points to the future more than trying to stay with nginx because sometime in the future we might need it for some feature or something.
C
Yeah, my point is that if we have nginx because we need a proxy, and it has, I would say, a default configuration, or the same configuration everywhere, then I would consider it an interchangeable piece of software that's just acting as a proxy, so I can remove it and put another proxy in place.
A
Henry, maybe consider talking to the distribution team. I know a few members there have some good insight into nginx, and maybe they've got some ideas that you might be able to bounce off them.
C
Nginx buffers everything except archives, because those get streamed with chunked content encoding. But if we have Cloudflare in front that is buffering, what happens at Cloudflare? If Cloudflare receives an incoming connection which is streaming, because it has a special encoding, what is it doing? Is it just sending it on as one big request? Is it streaming again? Is it passing it through? I don't know what happens at the Cloudflare level.
B
There are no known issues with streaming there, so I guess there's some mechanism in Cloudflare which is able to detect whether this is a stream or something which can be buffered. I'm not sure how this works, but I don't see an issue happening right now; I just see it working, thankfully. Maybe that would be a question for the Cloudflare experts among us.
A
I guess the last comment I have is: Henry, maybe consider a cutoff date as to when we stop going down this route. You know, we don't want this to go on forever and ever. If we can't find a solution by, say, I don't know, end of November, maybe we just leave nginx in place, and at that point we just need to make sure we understand that nginx is in place because of this, and we need to figure out how to document that.
B
Yeah, absolutely. I mean, we are not in a bad state with nginx in place. We know some issues that we have because of it, with uneven distribution of traffic because of scheduling by node instead of by pod, but that's it, right.
B
Which could help us even to achieve this with nginx, I think, but yeah. So I will try to check now what's happening behind nginx, and after that I will just have issues created for the things we want to look into later. But I think then we should stop for now.
B
Maybe one update from me on the registry, by the way: it's fully in production, we started to migrate some customers already, and it's working fine so far, so we're not seeing problems.
B
Cool. So this is in place and working, just without a lot of traffic on the database, because we selected only a very small set of customers to migrate right now. So it's still early.
C
Are we still in that phase where new projects get directed to the new features? Because I was testing something the other day, yesterday I think, and I was kind of surprised by the feature set around deleting stuff and automation policies, how we can delete tags and things like that, which I had never seen before. So I was thinking maybe that's the new thing.
B
I mean, what we can do now is clean up, yeah.
C
And clean up stuff, yeah. There's the UI section where you can declare which type of cleanup you want. So by time, you can say run every day; and you can say, for every tag, I want to have no more than X amount of images, only the last X images for a given tag; and then you can say, for this specific regex of tags, never delete, and for this one, just apply the cleanup. Which, I mean, last time I checked wasn't there. So I was kind of wondering if this is the new feature set.
A
Alrighty, well, thank you all for joining today. Enjoy the rest of your day. I'll see you next Wednesday. Bye.