From YouTube: 2020-03-20 Background jobs improvements demo
Description
Part of https://gitlab.com/groups/gitlab-com/gl-infra/-/epics/96
C: So, if we enable Sidekiq cluster like this, it allows us to use some of the configuration options that sidekiq-cluster provides, for example the queue groups configuration. We are using the queue_groups option in this setup. This should start two Sidekiq processes running all the queues. So let's try that.
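For readers following along, here is a minimal sketch of the kind of Omnibus configuration being demoed, with option names as recalled from the GitLab Omnibus docs of this period; the values are illustrative, not the exact ones from the demo:

```ruby
# /etc/gitlab/gitlab.rb -- illustrative sketch only
# Opt in to running Sidekiq through sidekiq-cluster (experimental at this point).
sidekiq['cluster'] = true

# One entry per Sidekiq process; '*' means "listen to all queues".
# Two entries should start two Sidekiq processes, each watching every queue.
sidekiq['queue_groups'] = ['*', '*']
```

Applying it would be the usual `gitlab-ctl reconfigure` step mentioned below.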
D: One thing, Bob: imagine that someone has Sidekiq running on the server, exactly like this, and then we change the config and the person runs the reconfigure step. Would those Sidekiq processes get killed, and then a new sidekiq-cluster process be started with the child processes? Yeah, nice, that's what I was thinking.
C: That's what I just did now: Sidekiq was running — that's the single process that I showed, just the default thing that runs all queues — and now, as you can see here, we've got Sidekiq running with a concurrency of 50 and all of the queues. This is the process generated by sidekiq-cluster, and here we've got sidekiq-cluster itself running.
C: This is just part of the reconfigure, since the one thing just overrides the other and disables itself. That means the Sidekiq service itself is disabled, which means that, yeah, that's just part of Omnibus already. There's one thing there that I don't know — I haven't actually tried this before.
B: You have to opt in to Sidekiq cluster, as in sidekiq['cluster'], and if you do that and also configure sidekiq_cluster, then you'll get this — then you get the error, yeah. If you don't opt in, you won't get that error. So if you just have regular sidekiq and regular sidekiq_cluster, for now we're supporting that, even though it's a weird configuration, right, yeah.
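A sketch of the combinations being described, with illustrative values; the option names are assumptions based on the Omnibus settings mentioned in this conversation:

```ruby
# /etc/gitlab/gitlab.rb -- illustrative sketch only

# Opting in: Sidekiq is run through sidekiq-cluster.
sidekiq['cluster'] = true

# Configuring the legacy sidekiq_cluster service at the same time is the
# combination that raises an error during reconfigure once you have opted in:
# sidekiq_cluster['enable'] = true
# sidekiq_cluster['queue_groups'] = ['import', 'mailers']

# Without the opt-in, the plain `sidekiq` service plus a separately configured
# `sidekiq_cluster` service is still accepted, even though it is a weird setup.
```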
D: I was thinking about how long we should be running this as opt-in for customers. Let's say we're shipping 12.9 today, or it's about to go out, and I'm wondering if it would be interesting to leave this as opt-in for customers for a month, to make sure that people can use it and it's not raising errors, and then after that we make it the default, like in 13.0. Maybe, yes? Or would that be too much? Yeah.
A
The
important
part
is,
let's
just
make
sure
that
if
all
of
this
works,
we
can
actually
make
the
call
to
make
this
a
default,
but
then
also
have
an
option
in
13
that
if
something
does
go
wrong,
we
can
instruct
people
hey.
This
is
the
fallback.
This
is.
The
old
behavior
will
fix
the
problem
right,
but
this
is
what
you
can
do
in
the
meantime.
So.
C: That would mean, just like right now, this sidekiq['cluster'] variable is set to false by default, but in 13.0 we would flip that to true. Yep. But then we would still have the old sidekiq_cluster and sidekiq options kind of starting everything, and then, in a later release, we get rid of the old sidekiq_cluster configuration and the sidekiq configuration is the only way to start sidekiq-cluster, which would be... yeah.
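A sketch of the rollout being discussed, assuming the sidekiq['cluster'] option keeps its current name; the timeline is the one proposed in this call, not a commitment:

```ruby
# /etc/gitlab/gitlab.rb -- illustrative sketch of the proposed rollout

# 12.9 / 12.10: opt-in only, the default stays false.
# sidekiq['cluster'] = true

# 13.0: the default flips to true; the old behavior remains as a fallback:
sidekiq['cluster'] = false   # explicit opt-out if something goes wrong
```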
A
Making
sure
that
we
have
a
step
back
in
case.
Something
does
happen
right,
like
you,
don't
know
what
kind
of
configurations
people
use,
especially
there
are
like
a
lot
of
them,
so
we
just
want
to
have
that.
But
one
thing
that
that
you
showed
there
Bob
is:
there
is
psychic,
cues
and
then
star
star.
Whatever
is
that
necessary,
or
you
just
wanted
to
use
this
to
show
no.
C: Running both next to each other — so yeah, both services still exist. With the first setup I had, with cluster enabled, it would still be the sidekiq-cluster service that is running. But I discussed it with them: when we remove the option to start Sidekiq without sidekiq-cluster, then it makes sense that we should name that service sidekiq instead of sidekiq-cluster, yeah.
C: There's a discussion open on the Omnibus MR about that, just so the reviewer sees it, and based on the feedback there I'm going to create an issue in our Scalability issue tracker. That's then going to be targeted at 13.1 or something like that, yeah, to remove it. As long as we allow running both next to each other, we will need the two services, I'm afraid. Thanks.
D: Let me share my screen quickly. This is follow-up work from Bridget this week, so just a boring one compared to Bob's demo. I'm running GDK here and, as we can see, rails-background-jobs is running and there's the PID here. If I do a ps, we can see sidekiq-cluster running all the queues here with all its workers, before GDK restarts everything. So basically, under the hood, we are now using sidekiq-cluster, and we pass the sidekiq workers setting to rails-background-jobs.
D: Feedback on this would be interesting. I started the discussion about bumping this to two processes by default, because I think that would probably surface a few errors — like concurrency errors — sometimes, and that would be interesting to see in development. So maybe we can start with one, running just a single process, then we bump it to two, maybe next week, making sure that everything works well. So.
A: We're doing these things in parallel in Omnibus, but we can do the same thing in GDK and say, when we start switching to the default in 13.0, we're just going to bump it in GDK as well, so that you can see the impact better. Because I'm afraid that if we take on too many things at once, we're going to be spread too thin. Imagine dealing with GDK, and Omnibus, and the charts at the same time — it would be a bit too much, yeah.
D: This is just a small one. The problem we are solving here is that if we go to the profiler page with the data we have for the Go services, we don't show the version of the deployments — that is, the version of the actual service. Take Workhorse: we don't show the version of Workhorse. If we bump the version of Workhorse and make a new deploy, we won't see that version, so we can't compare profiles. So here we are making a very simple change.
D: Since we already pass the version through LabKit — we pass the version of the actual build to Prometheus — we can reuse that to pass it over to the profiler, to the Go profiler, so the change is very simple here. If you're interested, take a look at the merge request, feel free to take a look. Boring demo, sorry.
D: We started running this in Canary, and we started seeing a few bumps on — let's see here — on web CPU usage; we saw a bump here and before that as well. So we decided that we should keep running the profiler for at least two more days on Canary, to make sure that it's not really related, and to make sure that everything is alright before moving to production, but...
E: Yeah, I mean, it might be worth just putting a process monitor onto the Workhorse processes. I don't know if Go has anything in its Prometheus client that tells you how much of that is Workhorse. At the moment it could be anything on that machine, so knowing that it's Workhorse would definitely help. It might be worth just wiring a process monitor in, yeah.
C: Yeah, there was just a lot of back-and-forth yesterday after rolling out. So the problem is the Redis CPU you get — yeah, Redis uses way too much CPU if we watch too many queues at the same time, like having one Sidekiq process watch too many queues. I know you can probably explain this better than I can — could you take this from me? Okay.
E: So even though they say it's an O(1) operation, it's very clearly not, and it seems to go up the longer the list of queues and the more clients that are contending for those queues. And so this kind of leads us to a point where we have two hundred and something queues at the moment, and we have a strategy that very clearly goes against the Sidekiq documentation — we have a queue per worker class, and the Sidekiq documentation says:
E: "Don't do that." And we've kind of gotten by all right so far, but I don't know how much longevity there is in that strategy. In a year's time we might have four hundred queues, and we might have even more clients listening to those queues, and then, in that case, is this going to be more of a problem?
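A minimal Ruby sketch (using the redis gem) of what such a listener effectively does — block on every queue at once with BRPOP. The queue names and counts are made up for illustration:

```ruby
require 'redis'   # redis-rb gem

redis = Redis.new

# Imagine ~250 worker classes, each with its own queue, as described above.
queues = (1..250).map { |i| "queue:worker_#{i}" }

# A catch-all Sidekiq process effectively blocks on every one of these at once.
# The cost described in this discussion grows with the number of queues a client
# watches and the number of clients watching them, which is why this hurts on Redis.
queue_name, job = redis.brpop(*queues, timeout: 2)
puts "picked #{job.inspect} from #{queue_name}" if job
```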
E: It's basically the catch-all node. So the simplest solution — I think this is where you're leading — is to split the catch-all up into multiple ones, I don't know, maybe split by feature category or something like that. We can do something like that, but I think it's all just kind of beating around it, you know, not fixing the problem here.
B
There's
a
couple
of
other
things
here
so,
first
of
all,
the
main
reason
that
sidekick
recommends
against
that
as
far
as
I
can
tell
is
because
of
the
psychic
Pro
feature
for
our
level
fetching.
We
can't
use
psyche
Pro,
but
we
do
have
our
own
gem
that
Valerie
wrote
to
emulate
it
like,
because
you
know
we
can't.
We
can't
distribute
psychic
for
it's
not
open
source
and
that
actually
has
two
options
and
we
don't
use
the
reliable,
reliable
fetch.
We
use
the
semi
reliable
fetch.
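For context, a sketch of how that gem is wired in, assuming the option and method names from the gitlab-sidekiq-fetcher README as recalled; treat the exact names as unverified here:

```ruby
# Gemfile: gem 'gitlab-sidekiq-fetcher', require: 'sidekiq-reliable-fetch'
require 'sidekiq-reliable-fetch'

Sidekiq.configure_server do |config|
  # Assumption: this flag selects the semi-reliable strategy discussed above,
  # which still relies on BRPOP across the watched queues.
  config.options[:semi_reliable_fetch] = true
  Sidekiq::ReliableFetch.setup_reliable_fetch!(config)
end
```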
B: So, on the queues thing, I think I said this in the issue, but my feeling is that we really don't want to migrate queue names if possible, because it's a pain and because there are so many things that can trip you up — I can go into those later when we start discussing solutions. But if we are going to do it, we should probably do it now. The way I put it is: the best time to do it is "never", and the second-best time to do it is "now", because we're only going to add more queues. So I think what would be interesting to know is this: the main issue here was that we added the catch-all nodes, which basically do the same as the best-effort ones, which is listen to a bunch of queues, most of which have very little activity — well, I say most of which; some of which have zero activity, including all the Geo ones, because we don't have Geo on GitLab.com.
B: Splitting that up would have the same problem. I tested that out because, as Igor said, it's roughly the number of listening clients times the number of queues each of them is listening to. So if we split the catch-all up, we would halve the number of queues per process, say, but we'd double the number of client processes, and the number of clients listening to each queue would stay constant — so you've got the same load, because it's just the product of those; you've got the same answer.
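A tiny worked example of that product argument, with illustrative numbers:

```ruby
# Illustrative numbers only: splitting the catch-all keeps the product
# (queues watched per client) x (number of clients) constant.
queues   = 200
clients  = 2                                 # two catch-all processes, all 200 queues
load_now = queues * clients                  # => 400 queue-watches

# Split the catch-all into two halves, but run two processes for each half:
split_load = (queues / 2) * (clients * 2)    # => 400 again

puts load_now == split_load                  # => true
```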
E: In the sharding working group, Eric had this really good point where he said: look, we shouldn't be optimizing for, you know, ten percent more; we should be optimizing for three hundred percent more, right — like ten times more traffic — because that's when GitLab.com is going to become profitable. And, you know: firstly, the number of queues is going to go up — that's not going to change unless we address this at some stage — and then the number of listeners is just going to go up like that as well.
E
As
that
goes
up,
and
obviously
there's
other
things
we
can
do
like
shard,
the
registers
and
stuff,
but
it
feels
like
we've
kind
of
like
what
I
don't
want
to
do
is
double
down
on
a
bad
decision,
and
you
know
it
feels
to
me
like
this.
In
hindsight,
I'm
not
saying
at
the
time
it
was
a
bit,
but
now
it
feels
like
this
is
not
the
way
to
go.
E
You
know,
Redis
is
not
designed
for
its
sidekicks,
not
designed
for
it.
You
know
if
we
want
to
use
the
reliable
fetch,
things
become
more
difficult
with
the
with
the
swapping
queue
thing,
whatever
its
name
is
and
but
but
there
was
one
other
point.
I
think
like
this.
You
know
when
you
said
the
renaming
of
queues
is
difficult.
B: That's... yeah. So the part about renaming Sidekiq queues is that we have to do it for everybody, and yeah.
B: ...which means, you know, making it work everywhere. That's my issue. So let's talk it through — I don't know — I feel like this discussion could go on for a while and it might not be concluded in this call, but we tried to do this async yesterday and I think having a call is useful as well, yeah. Do you want to try talking through the option, and then I'll talk through my concerns, and then we'll see where we get with that one first? So.
E
So
so
very
quickly
like
what
it
is
at
the
moment
is
that
we've
got
like
these
250
workers
in
the
application,
and
each
of
them
has
got
a
queue
so,
like
imagine
like
from
left
to
right
elections.
Okay,
that
way
we
kind
of
like
each
one
of
them
has
got
like
a
pipeline,
that's
going
to
sidekick
and
then
psychic
cluster.
Does
this
very
clever
thing
where
it
takes
a
bunch
of
them
according
to
the
queue
selectors
and
it
groups
them
up?
E
And
then
it
starts
a
bunch
of
psychic
listeners
that
are
listening
to
to
those
and
effectively
on
those
queues.
It
will
pick
them
all
off.
You
know
from
from
the
whole
bunch
and
the
problem
that
we're
having
at
the
moment
is
that
some
of
those
lists
are
very
long
like
a
hundred
turn
in
queues
and
so
like
that's
where
the
BI
pop
come
problems
coming
from.
So
what
this
solution
is.
E
So
so
you
know
in
the
simplest
case,
you
know
in
the
GDK
case,
there
could
be
one
or
two
and
in
the
case
of
gitlab
comm,
this
10,
for
example,
you
know
one
for
each
of
the
not
ten
but
one
for
each
of
the
priorities
that
we
got
at
the
moment
and
so
we're
just
taking
the
the
maxing
capacity
and
we're
moving
it
from
sidekick
into
the
application.
And
then
we
sending
to
a
few
a
number
of
Q's
and
then,
when
we
pick
those
up,
we
don't
have
to
pick
them
up
from
200
different
queues.
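A sketch of the selector-based grouping described above, as it might look in gitlab.rb; the experimental option name and the selector expressions are assumptions for this period, and the groups are purely illustrative:

```ruby
# /etc/gitlab/gitlab.rb -- illustrative sketch; names are assumptions
sidekiq['cluster'] = true
sidekiq['experimental_queue_selector'] = true   # assumption: opt-in flag for selector syntax
sidekiq['queue_groups'] = [
  'resource_boundary=cpu',                      # CPU-bound work on its own processes
  'feature_category=continuous_integration',    # pipeline-heavy queues
  '*'                                           # catch-all for everything else
]
```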
B: So the main issue I have with this is that, because it's determined by user configuration, in order to migrate the queues — like, if you change your configuration — we either need to basically rely on people reading the documentation, or we need to know what the old configuration was as well as the new configuration, because we need to know what something used to map to.
B: I don't think this can be GitLab.com only, plus everything else. I think the other problem I have there is that everything that sends a job to Sidekiq needs to know the entire sidekiq-cluster configuration. So you might have a Sidekiq node that says: okay, I'm processing high-urgency CPU-bound jobs — but it also needs to know that low-urgency CPU-bound jobs go to this other queue, and...
C: That goes in gitlab.rb, and therefore also in gitlab.yml. That means that people can change it, which means that the whole renaming thing — even though we only plan to do it once more and then we'll never ever do it again, pinkie swear — everybody else that uses that can change it however much they want, and if they have queues... That means that the whole renaming thing that Sean is worried about, we need to be able to do reliably, all the time. No?
A: I'll just say that we were recommending this different configuration to some customers. Whether they use it or not is a different thing, but when they ask "what do you do on GitLab.com to process this many jobs?", we point them to this. So there are chances that somewhere in the wild someone is using it.
B: ...doing that. Like, if we're saying we're renaming queues, we could say, for instance, that we are going to rename the queues to what we think the current GitLab.com configuration could be, and then we will provide a couple of queue selectors that do that sensibly for other environments. So, you know, they might not want it quite as granular; they might just want — you know, you said two nodes — and we can just provide queue selectors that do that.
B
So
that's
fine,
but
if
we
want
to
change
that,
I
think
we're
actually
better
off
taking
the
the
hits
and
migrating
that's
in
the
application,
even
though
it's
painful
because
I
think
that's
more
robust
in
the
face
of
like
what
our
customers
are
actually
going
to
do
on
their
self
managed
instances
than
having
to
rely
on
them.
Reading
the
documentation
following
it
exactly
and
understanding
how
exactly
this
one
fits
together.
So.
E: It's not losing it as such, but we can never, in an emergency, spin something up. We don't have the flexibility that we've got at the moment. So, like, "oh, this new job is misbehaving, we need to isolate it" — we have no control over that without changing the application, whereas this is, you know, this is something...
B: Another option would be... because the problem here isn't really with the queues that are problem queues. The problem here is with all the boring queues that we just have. Like, if you look at how many queues we have that are low urgency, have an unknown resource boundary, and have no external dependencies...
B: If we've misattributed a worker, then we've definitely got an issue, but that seems to me more of a boring solution, because I think there are definitely headaches with having to have every node always know exactly what queues are available, and also having to do this dance every time you want to change that configuration.
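A hypothetical worker illustrating the attributes being referred to (urgency, resource boundary, external dependencies, feature category); the class name is made up and the DSL method names are as recalled from the GitLab codebase of this period:

```ruby
# Hypothetical example of a "boring" worker: low urgency, no declared resource
# boundary, no external dependencies -- the kind that could share a queue.
class ExampleCleanupWorker # hypothetical name
  include ApplicationWorker

  feature_category :source_code_management   # illustrative category
  urgency :low
  # worker_resource_boundary :cpu            # not declared -> "unknown" boundary
  # worker_has_external_dependencies!        # not declared -> no external deps

  def perform
    # boring, low-traffic work
  end
end
```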
E
You
know
I
disagree
on
the
on
the
on
all
call
nodes
having
to
know
because,
like
you
know,
that's
a
shift
configuration
change
that
we
roll
out
everywhere
and
there's
you
know
the
same
as
all
nodes
have
to
have
the
same.
Ridaz
configuration
the
same
Postgres
configuration
if
it's
not
the
same
everywhere
the
application
breaks-
and
this
would
be
exactly
the
same
as
that
so
I.
E
There
are
issues
with
that,
and
but
you
know,
if
we're
going
to
go
to
that
route,
I
mean
that
the
other
thing
to
consider
is
like
do
we
need
to
like,
if
we're
just
gonna
kind
of
take
all
those
kids
and
put
them
in
there,
then
do
we
actually
even
need
the
cue
selector
syntax?
Can
we
not
just
so
well,
you
know
channel
everything
to
default.
The
problem
that
I
have
is
that
we
lose
control
over
being
able
to
like
pick
out
certain
jobs,
run
them
somewhere
else
and
not
even
run
them
at
all.
E
Yeah,
it's
kind
of
it's,
it's
a
combination
right,
so
it's
not
jobs
that
are
only
just
running
know
where
it's
being
able
to
like
reroute
jobs
and
saying,
like
you
know,
hey,
there's
a
lot
of
cool
Murray
jobs
like
what
we're
going
to
do
is
without
making
application
changes.
We're
gonna
just
send
pull
mirror
jobs
to
its
own.
Its
own,
you
know,
say,
say
we
decided
it
was
really
bad
idea
or
you
know,
taking
the
pipeline
jobs
and
running
them
on
there
on
the
same
infrastructures.
Everything
else
was
a
really
bad
idea.
E
Obviously,
if
we
have
a
set
number
of
queues-
and
now
we
want
to
add
another
queue,
then
that
coordination
becomes
something
that
we've
got
to
do
with
self
self
managing
stalls
and
all
those
people
where
this
is
now
like
a
gitlab
comm
only
configuration
we
don't
have
to
kind
of
coordinate
with
with
self
managed,
etc.
I
think.
B: You can already disable a cron job in other ways, so they could all just go into a cron jobs queue — certainly all the ones that have the same attributes — to cap the number of new queues. So, for a bunch of queues — we have namespaces, and then we have individual queues — we could just bundle those together a lot of the time. And then, you know, we can also increase the CPU on our Redis Sidekiq instance.
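A sketch of the namespace idea: GitLab's cron workers share a queue namespace via a concern, so they could be addressed, or collapsed, as a group; the worker name here is hypothetical:

```ruby
# Hypothetical cron worker: including the CronjobQueue concern places its queue
# under the `cronjob:` namespace, so all such workers can be treated as one group
# (or, as suggested above, collapsed into a single queue).
class NightlyPruneWorker # hypothetical name
  include ApplicationWorker
  include CronjobQueue

  def perform
    # periodic maintenance
  end
end
```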
B: ...no, I think the risk of that would be quite low, just by grouping jobs together into the same queue. We could do that now, and that migration is also easier if those queues are not heavily populated, because we don't have to spend as much time in Redis moving and renaming everything. So.
E
So
if
we
do
that,
we
lose
the
ability
to
do
this
on
an
admin
side
right
and
like
every
change
becomes
a
merge
request
into
the
application
and
like
if
we
want
to
get
something
out
of
there,
or
we
discover
that
you
know
like
if
we
want
to
change
something
too,
to
reach
it
somewhere
else.
It
just
kind
of
feels,
like
we've,
built
all
this
stuff
already
right
with
the
you
selector
and
it
seems
to
work
pretty
well
like
I've,
been
pretty
happy
with
it
so
far
like.
Why
not
stick
with
that.
I
think.
E: I mean, you could have a catch-all for anything that falls through. But yeah, there are solutions to that; I just feel like we have this lever at the moment, and I really like the fact that I can take a runaway queue and divert it, and I'm losing that lever. And, you know, it feels like it's been working really well for us on GitLab.com, and we can reconfigure things without lots of changes in the application.
E
You
know
we're
doing
all
the
stuff
at
the
moment
and
it's
you
know
worked
really
well
today.
We
just
reconfigured
things
so
that
we
don't
have
a
bunch
of
keys
and
it
just
you
know,
I
want
to
keep
that
level
of
configurability.
Forget
love,
calm
and
it
feels
like
we're
going
to
lose
that
and
that's
feel
sad
to
me.
I
think
my.
F
Phone
is
my
vote
is
for
that
you
know
I.
Think
like
we.
Definitely
you
know
where
the
quarantine
cues
that
like
in
we
could
have
external
dependency
problems,
that
we
don't
anticipate,
and
this
could
cause
problems
for
specific
use
and
we
don't
have
those
believers
to
move
things
around.
I
think
it
could
be
problematic.
I
mean.
B: That is because migrating Sidekiq queues is a headache. But also, if we do it more, we will probably get better at it, and if we do it more, we might get good enough at it that the other option becomes more feasible. So I think, even if we were doing the other option, we could start with the easy bit, which is renaming queues that don't really have anything in them, or have only a couple of things in them, and go from there. So.
E: Sean, if that proposal had a thing that basically looked for, say, orphaned queues and just polled them, or something along those lines — so, if there's a bunch of jobs that have been sent to queues that are no longer actively configured, it'll just make sure that those run on the best-effort nodes and be done with it — would that fix your concerns?
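A hypothetical sketch of that proposal using Sidekiq's public API — nothing here is implemented; it only illustrates the idea of finding queues that still receive jobs but are no longer configured anywhere:

```ruby
require 'sidekiq/api'

# Hypothetical: compare the queues that exist in Redis against the queues covered
# by the configured queue groups, and hand the leftovers to a catch-all process.
configured = %w[high_urgency cpu_bound default mailers]   # illustrative list

orphaned = Sidekiq::Queue.all.map(&:name) - configured

puts "queues to hand to the catch-all process: #{orphaned.join(', ')}"
```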
B: We'd need to talk through that a bit more. So, if I have a worker that's processing orphaned queues, then how do I do the switch-over? To do the switch-over I have to add another step, because I have to add all my current queue configurations plus all my previous queue configurations, and then remove the previous ones after a while, because otherwise I will send, say, export jobs to that best-effort catch-all node. I...
E
Mean
I
I,
don't
think
like
I
would
imagine
it
would
be
something
in
Redis
rather
than
like
a
configuration.
You
know
the
the
old
queues
I
would
imagine
that
that
could
just
be
something
in
Redis
like
this
is
a
list
of
queues
that
have
been
sent
to
in
the
last
24
hours,
rather
than
a
configuration
change,
which
is.
These
are
all
the
cues
that
we
configured
in
the
last
24,
because
that's
not
going
to
work
right.
No
one's
gonna
do
that.
E: If a worker was doing something bad, we could just take it out of Sidekiq, or we could move it somewhere else; the problem was just that we ended up with too many queues. So, as an infrastructure team we had the ability to control that, and if we start putting everything in the same queue without configuration for that, it becomes much more difficult. It makes a whole bunch of stuff for infrastructure people more difficult, I...
A: What I'll ask everyone to do is, first of all, consider Sean's suggestion of starting to reduce the number of queues we have to deal with — so putting all cron jobs into a single cron queue, right — and then we are not talking about 270, we are talking about 220. And maybe there are a couple of groupings like that which will get us below a certain line and give us a different perspective. Maybe.