From YouTube: Scalability - Sidekiq Catchall Project
A
Great, so thanks everyone for dialing in today. The point of this call is that we're at the start of the work to deal with Sidekiq Redis, and I just wanted to get everyone on the call together so that we can talk about what we're going to do and how we're going to arrange things, because this is one of the few projects in recent times where there are going to be this…
A
So I'm still catching up on a lot of what happened over the last two weeks, and I see that the decision now is for the first iteration of the work to make the catch-all shard use two queues, default and mailers. Sean, can you just give a quick summary of where we're at with this work, without preparation? Why are we doing it?
B
So, why we're doing it: the Sidekiq Redis CPU saturation is approaching the limit. We gained a bit by tweaking the BRPOP timeout we use, but that's not going to last for long. Craig set up an environment with the ability to sort of mimic our production workload on a Redis server, obviously without doing the actual application work, but issuing the same sort of commands to the Redis server, so we could try out various options. Reducing the number of queues for the catch-all shard had a big impact because, basically, the CPU usage is caused by a combination of the large number of clients that we have, the length of the argument lists in the BRPOP commands, and the distribution of the arguments in those BRPOP commands: the arguments all represent queues that may have work in them, plus there's the distribution of work within those queues. The majority of the queues that we use are on the catch-all shard; there's like 300, I think we counted. We're listening to about 400 queues in total in production, and if we can get the catch-all shard down to two queues, then we'll be listening to about 50-odd or 60 queues in the app in production, which is, you know, an order of magnitude difference, and that had a pretty significant CPU impact in Craig's tests. So that's why we're doing it. Oh, sorry, also because, compared to the alternative options, this is quicker to get something out there. The eventual goal would be to have a single queue per shard, but if we focus on the catch-all shard first, we sort of front-load the benefit: we wouldn't see as much benefit from making the other shards a single queue per shard.
B
Making the other shards single-queue would be more for completeness and avoiding technical debt. What we're going to do is this: at the moment, the way we manage our shards is with queue selectors, which can select queues. Each worker in our application has a queue, and we can select the queues that we listen to based on their name, which is what we were doing previously, or based on attributes. So we can say: give me all workers that belong to the search feature category, or give me all workers that are memory-bound and high urgency, or whatever, and put those on a different shard. The idea is to reuse that same mechanism but push it a level up. Instead of Sidekiq listening to queues that match these attributes on the application side, we will say: if a job would match these attributes, put it into a queue with this name. Then on the infrastructure side we just listen to default and mailers, and everything that should be in catch-all goes into default or mailers. The reason for the mailers one is just that it's a sort of built-in queue name that we don't directly control, and while we could manage it, it's not really important here; two queues are basically as good as one in this case.
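To make the queue-selector idea above concrete, here is a minimal Python sketch of the current mechanism, where selectors pick which worker queues a shard listens to based on worker attributes. The worker names, attributes, and selector lambdas are invented for illustration and are not the real configuration.

```python
# Illustrative sketch only: every worker currently gets its own queue, named
# after the worker, and each shard's queue selector picks a subset by attributes.
WORKERS = {
    "search_index":   {"feature_category": "search",       "resource_boundary": "cpu",    "urgency": "low"},
    "emails_on_push": {"feature_category": "source_code",  "resource_boundary": "unknown", "urgency": "low"},
    "web_hook":       {"feature_category": "integrations", "resource_boundary": "memory", "urgency": "high"},
}

def select_queues(predicate):
    """Queue selector: return the queue names whose worker attributes match."""
    return [name for name, attrs in WORKERS.items() if predicate(attrs)]

# e.g. a memory-bound shard listens to memory-bound, high-urgency workers...
memory_bound_queues = select_queues(
    lambda a: a["resource_boundary"] == "memory" and a["urgency"] == "high"
)
# ...and the catch-all shard listens to everything the other selectors did not take.
catch_all_queues = select_queues(
    lambda a: not (a["resource_boundary"] == "memory" and a["urgency"] == "high")
)
```

The proposal in the call flips this around: instead of the listening side expanding a selector into hundreds of queue names, the enqueuing side maps a matching job onto one of a small, fixed set of queue names.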
D
I have a comment on that, but sorry, sorry for butting in. I didn't really know where I was going to go next, so, yeah. It's something that's related, and it is that we saw an improvement from increasing the BRPOP timeout, right? That's the tweak we recently made, and one of the reasons we can't increase it even further is reliability, the risk of losing jobs. If we can get Sidekiq processes to listen on exactly one queue, then we can use a different Redis command.
D
Yeah, well, it uses the non-blocking one. It should use that and not sleep five seconds in between every queue, which is horrendous.
D
So if it uses the blocking command that we want for this, then we could raise the timeout to one minute or something like that. The timeout is part of the workload, because every time the timeout expires, every client needs to reconnect, and the Redis server needs to say: oh, this client didn't get any work, give them new work.
D
A potential benefit, on the other hand; we don't know, or I don't know enough about this problem, whether our clients starve for work a lot, because if they're not starved for work, then this tearing down shouldn't be a problem in the first place. But so there are some possible benefits to really going down to one queue. That's, I guess, all I wanted to say.
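A small sketch of the trade-off being discussed, using redis-py. The queue names, counts, and timeouts are made up, and this is not Sidekiq's actual fetch code; the "different Redis command" mentioned above is presumably a single-source blocking command, which only becomes possible once a process listens to exactly one queue.

```python
# Minimal sketch with redis-py to illustrate why long argument lists and short
# timeouts on BRPOP load the Redis server: every expiry makes every client
# re-issue the command and re-register interest in all of its queues.
import redis

r = redis.Redis()

# Today: one process blocks on hundreds of queues with a short timeout.
queue_names = [f"queue:{i}" for i in range(300)]
job = r.brpop(queue_names, timeout=2)   # returns (queue, payload) or None on timeout

# With exactly one queue per process, keeping the blocking call open for much
# longer is cheap, so timeouts (and the resulting reconnect churn) become rare.
job = r.brpop("queue:default", timeout=60)
```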
B
Yeah, I think this is something we knew we would need to do, like, this time last year; we knew we would need to do this at some point, it just wasn't a priority, and now it is: reduce this number of queues. It's one of those situations where, unfortunately, the engineering work to add queues is much less than the engineering work to reduce queues, because, you know, an expanding migration is much easier than a contracting migration.
B
Essentially. So, in terms of the work itself, I haven't, sorry, my cat is headbutting my headphone cord. I haven't fully gotten around to putting that in any kind of order.
B
I know Bob, especially last week, did some work creating issues and marking those relationships.
E
Okay, so I think the most important one is to add the ability to route a Sidekiq job to a different queue. There is still ongoing discussion inside that issue, and we haven't come to a conclusion yet. But in theory, I propose to just replicate the current routing mapping from the existing setup into our configuration for the future.
E
It means that right now we are splitting the jobs into different shards, and each one will have some selector, like the resource boundary or the feature category. During the migration we just copy one of them and put a name, exactly that one, into the routing mapping, and then we can move to rolling out the routing mapping one by one. After that we can just cut everything over and default everything to the default queue.
D
I think there's one other problem that is sort of blocking the project, and that is jobs submitted by new code being picked up by a server running old code.
B
Well, so, I'll just share, I think, the issue that was just mentioned, because I think it depends. What is that? Oh right, yeah, that's the issue. I think it depends on how we do that. So, for instance…
B
But one option I was thinking of was this: in our config, we have a mapping of selectors to queue names. These are two very simple ones, just to get the idea across.
B
We then need two special cases, which is not ideal. One special case is, like, actually this selector. The idea is that the first selector that matches wins: the application takes that and says, pass it to this queue. So for catch-all to work, it needs to be the last one, and it needs to use the star selector, which we already support, which says, like, any queue.
B
Obviously. So if the first one that matches wins, then we just put all the other queues, all the other shards, in here, and then we put star at the end and map that to default. But then we have the problem that we don't want to do all the other shards first, which we would have to do in order for this to work. So we were talking about maybe making this go to null, and null means: consider this matched, but just send it to whatever queue it would have gone to anyway.
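A small Python sketch of the rule semantics just described: an ordered list of rules, first match wins, "*" matches everything, and a null destination means "keep the queue the worker would have used anyway". The selector syntax and rule contents are invented for illustration, not the real configuration format.

```python
# Ordered routing rules: (selector over worker attributes, destination queue).
# None ("null" in the config) keeps the worker's own queue, so adding the rule
# set with null destinations is effectively a no-op.
ROUTING_RULES = [
    ("resource_boundary=memory&urgency=high", "memory_bound"),
    ("feature_category=search",               "search"),
    ("*",                                      None),  # fall-through: no-op
]

def matches(selector: str, attributes: dict) -> bool:
    """Tiny selector matcher, purely for illustration: '*' matches anything,
    otherwise every 'key=value' clause joined by '&' must hold."""
    if selector == "*":
        return True
    return all(
        attributes.get(key) == value
        for key, value in (clause.split("=") for clause in selector.split("&"))
    )

def route(worker_queue: str, attributes: dict) -> str:
    """First matching rule wins; fall back to the worker's own queue name."""
    for selector, destination in ROUTING_RULES:
        if matches(selector, attributes):
            return destination if destination is not None else worker_queue
    return worker_queue
```

Under this sketch, flipping the last rule from `("*", None)` to `("*", "default")` is the step that moves everything not captured by an earlier rule into the shared default queue.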
B
So I think this example from Wang Min is a good example. You define your memory-bound shard, but you say it goes to null, so it still goes to the same queue name. This should effectively be a no-op, configuration-wise. If we started adding this with null on the right-hand side, then, like I said, we can do star-to-default to make everything move, but we could start with just, say, moving queues of a specific name to default, which already exists.
B
So then we don't actually need to solve the, we don't technically need to solve the new-jobs, new-workers problem first. We do need to solve it before we finish.
B
In the same way, because we can still move some things to default, which is probably a useful thing to be able to do anyway. We don't want to go straight from everything on catch-all goes to its own queue name to everything on catch-all goes to default. Ideally, we would be able to, you know…
D
Pick and choose individual jobs and say: this one goes to default, that one goes to default. And then you pick jobs that you know are already deployed everywhere, or recognized everywhere. Yes.
B
Exactly, so we'd pick that for known jobs first, because they're the easy case, and then we'd tackle the harder case. Why? Why null instead of default? Well, because default is a queue name.
D
That's the point, Bob: if they go into default, and it's FancyNewJobClass and it's on default, and somebody's listening on default with the old code, then they don't know what to do with FancyNewJobClass. But if it says null, then it goes onto the named queue for fancy-new-job-class. That handles new code being deployed.
B
Yes. So if it was a new worker and it got a null on the right-hand side, it would go to its own queue. Nothing's listening to that queue until we deploy on the Sidekiq side, and once we deploy on the Sidekiq side, we start listening to that queue, and we also have the code in place to process that worker. Sorry, I did miss explaining that.
D
If I may make one pedantic suggestion: can we make the rules an array and not an object? Because they're order-dependent, and I…
B
Yes. So the other reason I was thinking about doing it this way is because we have this other issue about whether we want to build something specific to allow queues, sorry, workers, to not go to their queue.
B
So, say we have a massive problem with a bunch of, I don't know, some new worker's jobs. In the current situation they would be going to that worker's own queue, and we would be able to, you know, stop listening to that queue, or start listening to it on a different node, or whatever. In practice, it doesn't really work very well because of our configuration.
B
No. But with this option, we could just add a selector at the top that matches on name.
B
And I think the only wrinkle with that is, so, what I've just mentioned sounds great, but it does mean we would potentially need to do the processing twice. So far I've been assuming that the processing happens on the client side: Sidekiq client is what schedules the job, and Sidekiq server is what processes the jobs. So if we do the routing processing on the client side, we say: we're going to schedule this job.
B
What queue does it go into? Then we look at this map, or array of arrays, and we say: which rule does it match first? Okay, it goes to whatever's on the right-hand side of that rule. But we could also do the same logic on the Sidekiq server side, so when we pick up a job, we say: is this in the right queue, or should it be moved to a different queue?
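A hedged sketch of where that routing decision could live. The shapes below are loosely modelled on the client/server split described in the call, but none of this is the real Sidekiq middleware API; the routes, helpers, and in-memory queues are invented for illustration.

```python
# Stand-in for Redis lists keyed by queue name, just to make the sketch runnable.
from collections import defaultdict

ROUTES = {"PostReceive": "default", "EmailsOnPushWorker": "mailers"}  # invented
QUEUES = defaultdict(list)

def resolve_queue(worker_class: str, declared_queue: str) -> str:
    """Shared routing logic: fall back to the worker's own queue."""
    return ROUTES.get(worker_class, declared_queue)

def client_push(job: dict) -> None:
    """Client side: decide the destination queue at scheduling time, then push."""
    job["queue"] = resolve_queue(job["class"], job["queue"])
    QUEUES[job["queue"]].append(job)

def server_fetch(queue: str):
    """Server side: optionally re-check routing for jobs that were already in
    Redis before the rules changed, and move them instead of running them."""
    job = QUEUES[queue].pop(0)
    target = resolve_queue(job["class"], job["queue"])
    if target != queue:
        QUEUES[target].append(job)
        return None  # don't process here; it now lives on the right queue
    return job
```

The client-side half is enough for newly scheduled jobs; the server-side half is only needed if re-routing must also apply to jobs already sitting in the old queues, which is exactly the caveat raised next.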
B
So I'm saying we do both. The only reason I'm saying this is because, if we wanted that rerouting to take effect for existing jobs, then we would need it to happen on the server side as well. But if we don't, then we can just put it in the client. It's not a huge wrinkle; I'm just explaining why my solution there doesn't necessarily fully solve the problem, because it doesn't handle the existing jobs that are already in the default queue.
B
So I'm saying that if the goal from SREs is to say that all of these jobs for this problem worker don't get processed on the default queue, then we would also need a server middleware to move the jobs to the right place. But, like you said, if we don't want to process those jobs, we would probably just delete them, in which case we don't need the server middleware and we're already there.
B
Right, that's the other thing: how long would it take us to roll out the configuration change? What would the size of the queue be by the time we do it? That's already a sort of blocker to us doing this today, right? To stop listening to a queue, how long does it take us to spin up the instances or pods that we need that don't listen to this queue, versus just letting that backlog of jobs drain? And I think, yeah, I've already made this point in that issue: what do we actually do in practice today? I think it's mostly…
B
Yeah, definitely my fault, this ended up quite unstructured. So one of the goals is that we can route jobs to a different queue, and I don't think dealing with the new-worker case necessarily blocks that, but obviously it blocks the completion of the entire thing, because, you know, we need to solve that. So, yeah.
B
Yeah, another problem is the observability. Mainly, our dashboards and alerting at the moment are queue-based; we need to add a parallel set that are worker-based, because, well, we're migrating. So while we're doing this migration we'll have both going on, but even once we're fully migrated…
B
We'll still want to know what's happening with the default queue and what's happening with this specific worker, because those will be two separate concepts, whereas right now, what's happening with the queue and what's happening with the worker mean the same thing, because every worker has its own queue. So we need to do that.
A
It's probably good to create an issue now that starts talking about how the migration is going to happen, or how the rollout is going to happen, and what we will need to look out for there. I'm worried that if we don't raise that soon enough, we'll get to…
B
So, yeah, I have one way of interpreting that, which is this: currently we have two to three catch-all shards, because we have a catch-all on VMs, we have a catch-all on Kubernetes, and we have a catch-all NFS one on the VMs. Ideally, at some point, all three of those would just be the catch-all shard, and right now they're not. But was that what you meant, Jacob, or was there something else that you meant?
C
I understand the end goal is that those three things you just mentioned, Kubernetes and the two VM groups, are listening to two queues, mailers and default.
B
Yeah. So are you saying we could end up with, let's forget we're using the name default for now, let's say we end up with one queue that's called catch-all-vm, one queue that's called catch-all-kubernetes, or just catch-all because that's going to be the future, and one queue that's called catch-all-nfs. That's also an acceptable outcome, because all three of the catch-all shards are listening to those queues individually, even though we've got three queues instead of one.
D
I know, but we're talking about making application changes to make this possible. Yes.
B
So, as far as I'm concerned, but you know, this is just me, I think if we make the application changes to do this but don't make sure that the infrastructure changes also happen, we may as well have not done this, because, like I mentioned before, I think…
D
That's not what I mean. We should also be rolling this out; it's just that you can design the system in a way where it makes very specific assumptions about how the infrastructure looks, or you can make it a little bit more general and say we can route, like…
C
Wait, I think I understand the question. We need to have this thing that has a star, because we need to have a place to send new stuff; the new stuff is a different problem. But right now we have the queue selectors that magically add queues to the catch-all shard, and when we say catch-all shard in this case, that's the Kubernetes one, and any new worker gets picked up by those selectors, but…
B
Yes. So from the application perspective, the goal is that you can meet all of the routing requirements we have on gitlab.com by pushing to an individual queue per shard. Now, what those queues are called is sort of deliberately an exercise for the reader at the moment. But, you know, fundamentally, I think in the previous epic…
B
We talked about this a bit, because if we let you name them anything, it's very easy for self-managed to get into a case where they're sending to a queue that nothing's listening to. If we provide the queue names upfront, like we already do, then we can limit the risk of that. But then we also have to come up with names. So…
B
Well, implicitly, I guess, at the bottom of this, even if you don't add it, there would be a star rule that points to nil, right? Because that just means: anything that's not matched by any other rule above just does what it was doing before. And you don't need to write that in the configuration, because that way it's just implicit.
D
Well, okay, that touches on a different part of the project, which is that our initial goal is to come up with something better for gitlab.com and use it on gitlab.com.
D
Eventually, that should be the only way to do things. Yes, we should not carry forward queue-per-worker alongside this system.
B
No. And we are going to release GitLab 14.0 in June, I think, and 15.0 in May next year, I think, is the plan. So…
B
So, I guess, the idea is that by the time we get to the end of this project, we have something that we can then use to do the rest of gitlab.com. Once we've done the rest of gitlab.com, we can start migrating self-managed towards this.
B
Yeah, basically, unless you do anything else, even if you're on your GDK, your Sidekiq process will listen to, like, 400 named queues.
B
Yeah, the only migration case we have for self-managed is for people who have actually done this; then, you know, we need to add deprecation warnings and say, like, this isn't going to work, do this instead, or whatever. Because at the moment the knowledge of shards is local: the catch-all shard knows what queues it's listening to and the memory-bound shard knows what queues it's listening to, but there's no application instance that knows all the combinations of queues that are being listened to.
A
But so we are at the half hour, oh sorry, having only planned for the half an hour. I still think this has been a good conversation, and we've probably gotten quite far versus having all of these conversations async. I just wanted to add one final thing, which is that you'll see I created a lounge, not a lounge channel, I've created another channel yesterday and invited you all into it. As I said on the channel, it's just…
A
This is an experiment to see how this goes with trying to hand over information about this project between the time zones, to try and save people from feeling like they've got to parse through seven hours' worth of updates or changes to then see what needs immediate attention. So the experiment is to try and put information about this project into this channel and see how that works. But please let me know if there are any concerns or problems that we run into.
B
Cool. Can I just, if I've got time, can I just quickly sort of go through? The way I see it, in terms of what we can start with: we can start with rerouting Sidekiq jobs to a different queue, which, likewise, is the main thing we need to do; that's completely non-negotiable, and if we can't do that, then we can't do any of the rest. We can start on the dashboards and alerting stuff.
B
It's not urgent that we start that now, but equally, in terms of recording rules and stuff, probably the sooner the better, and labels.
D
To some extent it may block us: the moment we build something, we want to try it.
B
Similarly, the unknown-worker case: that doesn't necessarily block routing jobs to a different queue, but it does block the completion of the epic; we can't finish without it. So we could start that now, and we could also start with the Sidekiq catch-all configuration, and actually we could start…
B
That line which lists a bunch of queue names is exactly not the point of what we were doing with the queue selectors, and I think, if we fix that, even though it's not directly related to this incident, so we don't have to carry over that incredibly long line with all the names to this new configuration, that would probably benefit us in the long run, you know.
B
Well, we don't have to, that's what I'm saying. I'm going to try and flesh that out a bit more today, to either make a case that this should or shouldn't be part of this epic. But it's mostly just, like, this is infrastructure technical debt that, you know, maybe it's worth us fixing while we're here, maybe it's not worth us fixing while we're here, maybe…
B
Yeah, or maybe we can just copy and paste that and it'll be fine, who knows. I don't think we're going to rewrite it from scratch, because it's impossible to reason about at the moment. So I think that last part is actually the most important thing, which is…
B
It's not really very easy to reason about our shards at the moment, because we have this 9,000-character selector in there. So if we can get all the selectors down so they can fit on a page, which they should be able to, then that would help us. It's the classic "obviously no problems" versus "no obvious problems" dichotomy: at the moment I can say there are no obvious problems in this line, because it's 9,000 characters long and how am I going to spot them?
B
So, yeah, and then there are some additional nuances, which I've created some issues for and some not, about scheduled jobs, because those live in their own set. We need to do something with those, because otherwise, you know, the job payload is already in a set and will say: I'm going to this queue. But what…
B
What if this queue isn't listened to anymore by the time we pull that job out of the set? So we probably need to figure out a way to migrate those. Similarly, cron jobs have their own set and hash; it's a weird setup in Redis for those, but I don't know if that gets rewritten on every application start, so maybe it's fine. So, yeah.
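A rough sketch of what migrating already-scheduled jobs could look like, assuming Sidekiq's scheduled jobs sit in a Redis sorted set of JSON payloads that carry a "queue" field; the key name, layout, and helper below are assumptions for illustration, not a worked-out migration.

```python
# Hedged sketch: rewrite the queue field of scheduled job payloads that still
# point at a queue nothing will be listening to by the time they become due.
import json
import redis

r = redis.Redis()

def remap_scheduled_queue(old_queue: str, new_queue: str, set_key: str = "schedule") -> int:
    moved = 0
    for payload, score in r.zscan_iter(set_key, score_cast_func=float):
        job = json.loads(payload)
        if job.get("queue") == old_queue:
            job["queue"] = new_queue
            # Replace the member while keeping its scheduled-at score.
            pipe = r.pipeline()
            pipe.zrem(set_key, payload)
            pipe.zadd(set_key, {json.dumps(job): score})
            pipe.execute()
            moved += 1
    return moved
```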
B
Those are sort of minor things, but I think the big ones are the routing to a different queue, the unknown workers, and the observability. Beyond that, I'll start looking at the rest, but I'll try and get those three ready to work on today, because I think they can all be done independently, and then I can start focusing on the rest of the project.
A
Cool, anything else? No? Enjoy the rest of your days. Thanks so much for this.