From YouTube: Scalability - Single-Queue Project Prep
A
Great, so we're having a conversation today about the single queue per shard project. We're going to start with a bit of an update from Sean on the catch-all project, and then we'll see how we're going to move that into single queue per shard.
B
Yeah, so the catch-all project, Craig is going to pick up on his Tuesday — so overnight in Europe — because today was a holiday for him. So that's for the blank stuff. Then there's a question about the observability and operational-related stuff he mentioned: we don't want to go to production without that. There are two items there that are both sort of in progress. I've suggested that we could go to staging without those and just not do it in production, but I'll see what he says when he's back.
B
No, wait a second — sure, the observability, the operational content that I'm working on instead of queues. It's just the...
B
Yeah, the other one is the one to add a method to allow SREs to intervene, etc., but that's actually documentation rather than code, so the documentation MRs are just in review. That's actually in review with Matt, but the other person who would review that, I guess, is Craig.
B
Either way, there's kind of a dependency on what we're already doing, so either way it's somebody who's already working on this in some other capacity who needs to review that. Okay. And obviously there's not many issues left, but the last couple could take a while depending on whether we find any issues with moving particular workloads to the default shard. So if everything goes smoothly, we could basically be done this week — and when I say basically done, I mean we could be pretty much done in the next week or so — but it's also possible that it could drag on for a bit longer as we discover issues with particular workloads.
A
If
it's
well,
if
it's
a
case
that
I
mean
at
least
the
I
mean
it
sounds
like,
we've
got
to
do
rollout
monitoring
check
that
it's
running
okay,
and
only
at
that
point
we
just
say
we
can
say
that
we're
finished
with
it,
but
if
the,
if
we
at
least
get
through
putting
it
onto
production
this
week,
we
can
then
get
started
with
the
rest
of
them
with
the
rest
of
the
single
cube
per
shard
work
early
next
week,.
B
Yeah, so — yeah, that's fine. The only issue with overlapping the rollout is that we wouldn't be able to measure the impacts as clearly from one to the other. But I'm not sure you're saying we should overlap the rollout; I think you're just saying we should go and get the issues created.
A
I think that, at least if the issues are created — and especially the change issues, if they're all prepped to go — then when it comes to doing the rollout it's all prepared and it's just a case of getting through it. Whereas at the moment, what is actually required for single queue per shard isn't documented anywhere. It's knowledge in people's heads right now that I'd like to get out, so that we can understand how big it is.
B
No, that's fair. So, from my perspective, I think we could potentially do this with no additional back-end development, but there is a weird bit in the way selecting queues works, which is why we picked default, because that works both ways. If you use the queue selector — which we are doing now, and will continue to do even after this epic is done, because it works both ways — you have to select a queue that the application knows the name of. You can't just say... well, you can with the worker routing; that restriction doesn't apply there. So with worker routing you can say "I want workers matching this feature category to go to the memory-bound queue", but if you're using the queue selector, you have no way to listen to the memory-bound queue, because the application doesn't know that there is a memory-bound queue. It's just something you made up in your configuration.
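For context, here is a minimal sketch of the two mechanisms being contrasted — routing jobs into a queue by worker attributes versus selecting which queues a process listens to. It is written in the style of an Omnibus gitlab.rb file, but the option names and selector syntax are approximations and should be checked against the GitLab Sidekiq configuration docs; the values are purely illustrative.

    # Sketch only: option names approximate, values illustrative.
    #
    # Worker routing: jobs from any worker whose resource_boundary attribute
    # is "memory" are written to a queue named "memory_bound"; everything
    # else falls through to "default". The "memory_bound" name is made up in
    # configuration -- no worker in the application declares it.
    gitlab_rails['sidekiq_routing_rules'] = [
      ['resource_boundary=memory', 'memory_bound'],
      ['*', 'default']
    ]

    # Queue selector: chooses which queues a Sidekiq process listens to,
    # again by worker attributes. Because it resolves attributes to queue
    # names the application already knows about, it cannot discover the
    # "memory_bound" queue invented by the routing rule above.
    sidekiq['queue_selector'] = true
    sidekiq['queue_groups'] = [
      'feature_category=issue_tracking'
    ]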
B
Without the queue selector you can listen to a memory-bound queue, and the point of this is to get away from using the queue selector. So that might work okay, but I think it is confusing, and I think we might want to consider... I don't know — because I think I mentioned this to you, Wang Min.
B
Okay, that's fine! Then we could also decide to do that, so yeah. I think one reason the queue selector works this way at the moment is that, if you're selecting by name, you could say: well, if I'm selecting a queue by name and that name doesn't exist, just listen to it anyway. But if you're selecting by attribute — say you're selecting queues whose feature category is issue tracking — it won't know that there's a queue you think has issue tracking that it doesn't know about, right? Like, the only way...
B
I think it's in the optional follow-up items, under workers that depend on checking their own queue size. We've got a list of some workers that we know we need to not put in a queue with other workers, because these workers expect to be able to say how many of this job are scheduled by looking at their own queue. If their own queue has other stuff in it, they'll potentially conclude there's loads scheduled when actually there isn't, because they're looking at... that's not how you measure that. Is this an exhaustive list — is this every one? These are all the ones we know of.
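To illustrate the pattern being described, here is a hypothetical sketch of a worker that throttles itself by checking its own queue size. The class, queue name and threshold are invented for illustration — only Sidekiq::Queue#size is the real API — but it shows why co-locating unrelated jobs in the same queue breaks this kind of worker.

    # Hypothetical example of the "checks its own queue size" pattern;
    # the class, queue name and limit are made up for illustration.
    require 'sidekiq'
    require 'sidekiq/api'

    class ExampleScheduleWorker
      include Sidekiq::Worker
      sidekiq_options queue: :example_schedule

      MAX_PENDING = 100

      def perform
        # Sidekiq::Queue#size counts every job sitting in the named queue.
        # If other workers' jobs are routed into the same queue, this count
        # is inflated and the early return fires even when no
        # ExampleScheduleWorker jobs are actually pending.
        return if Sidekiq::Queue.new('example_schedule').size >= MAX_PENDING

        # ...enqueue or perform the real scheduling work here...
      end
    end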
B
Sorry, I put it in the doc — it doesn't...
B
Yes, the two hashed-storage migration ones actually don't run on production. Okay, helpful! It's convenient, because production is already fully on hashed storage — although, as it turns out, staging isn't, so we had an incident about that the other day and we disabled support for non-hashed storage. Oh right, the GitLab Sidekiq queue one is about the API that we actually provide for admins to be able to delete jobs from a queue, and that should actually work basically fine — we don't really need to do anything about it. It just needs to accept queues that don't... sorry, that might not exist in the configuration, but I think it already does.
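As a reference point, here is a rough sketch of how that admin queue-deletion API is typically called. The endpoint path and metadata filter are from memory and should be verified against the GitLab admin Sidekiq queues API documentation; the host, token and filter value are placeholders.

    # Rough sketch; verify the endpoint and parameters against the GitLab
    # admin Sidekiq queues API docs. Host, token and filter are placeholders.
    require 'net/http'
    require 'uri'

    # Delete jobs from the "default" queue whose metadata matches a user.
    uri = URI('https://gitlab.example.com/api/v4/admin/sidekiq/queues/default')
    uri.query = URI.encode_www_form(user: 'some_username')

    request = Net::HTTP::Delete.new(uri)
    request['PRIVATE-TOKEN'] = ENV.fetch('GITLAB_ADMIN_TOKEN')

    response = Net::HTTP.start(uri.host, uri.port, use_ssl: true) do |http|
      http.request(request)
    end

    # The response reports how many matching jobs were deleted and whether
    # the scan completed.
    puts response.body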
The Elasticsearch stuff, I think, doesn't apply, because it's checking the queue size of a queue it maintains itself — it's not a Sidekiq queue, so that's fine.
B
It's got its own queueing system. Then there's the update-all-mirrors worker and the project import schedule worker, so that needs to stay in its own queue; but then there are a couple of things already on catch-all that need to stay in their own queue. I think — let me find what they were.
B
That
so,
basically,
what
I'm
trying
to
say
is
like
at
this
point
we
can
make
a
decision,
whether
we,
whether
we
try
and
fix
those
workers
to
allow
this
or
whether
we
don't
so
like
at
the
moment,
project
import
schedule
we're
just
excluding
from
this
project
with
the
the
reference
to
that
issue.
108.7.
B
It will listen to mailers, default and project import schedule? Okay — because project import schedule is a queue we can't just throw in the bin right now. Yeah, yeah. So if we come across any of those on other shards, I think we'd do the same: if we find a worker on memory-bound that we can't migrate — which we don't know about yet, but we might — then we would keep that, so the members of memory-bound would listen to the memory-bound queue plus this other queue.
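To make that fallback concrete, here is a minimal sketch, again in Omnibus style with approximate option names: the Sidekiq processes for the memory-bound shard listen to their single consolidated queue plus the named queue of the one worker that cannot be migrated. The queue names are illustrative placeholders, not taken from the actual fleet configuration.

    # Sketch only: option names approximate, queue names illustrative.
    # Each entry in queue_groups defines one Sidekiq process; a
    # comma-separated value makes that process listen to several queues.
    sidekiq['queue_selector'] = false
    sidekiq['queue_groups'] = [
      # The shard's single consolidated queue, plus the hypothetical
      # un-migratable worker's own named queue (placeholder name).
      'memory_bound,some_unmigratable_worker_queue'
    ]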
B
So I think, in terms of orders of magnitude, that would be fine. It's just — I guess it's a potential trap. And if we get to the point where we recommend this to self-managed — because at the moment the docs basically say "don't do this" — we need to make it very clear what you do about these queues. So if we end up with one or two, it might be worth us trying to work with the relevant teams to fix them.
A
At the end of the project, I don't want there to be ambiguity as to how you use Sidekiq, and I don't want there to be a situation where people can look at the existing configuration and go: "oh well, they have their own one, so we can have our own one" — and then we get this explosion of named queues again.
B
Open, basically, and say: this is the list — and then we can add related issues to that for the ones we know about that need to be fixed. Actually, on those hashed-storage ones: (a) they don't run on gitlab.com, which obviously means we're fine with them, but they can run on self-managed and they could run on staging. But I think the stuff that the Geo team were doing in 14.0 might be...
B
So
I
guess
the
the
tldr
of
all.
That
is
that
I
think
that
this
project
is
potentially
all
rollout.
Okay,
because
we've
basically
done
all
of
the
other
work
in
this
project
because
we
had
to
because
we
can't
do
any
roll
out
until
we've
done
all
the
other
work.
I
don't
know
what
you
think
man
is
there
something
I've
missed.
C
Yes, I just want to say that there are a lot of uncertainties in this single queue per shard project, because previously we did a lot of exploration and tried out each worker to see whether it can work with a single queue, but in the project there are still bunches of workers that we are not quite sure about until now. So I don't think it will be that smooth.
B
That was based on knowing certain patterns that we use, but there might be some patterns that are specific to other shards. One thing I would say there, Wang Min, is: if you look at the other shards, mostly they don't run anywhere near as many queues. So we could just go one by one on the workers.
B
For each one we could say: before we do urgent-other, we look at all of the 20-odd workers that run on urgent-other and go through those — because with between 20 and 30, that's tractable; it's not tractable for the 240-odd on the catch-all. So I think that's...
B
Well, I think it makes sense to do it by shard anyway, right? So we would say: if we want to do another shard after catch-all, these are the steps we're going to follow. The first step would be to look at all the workers that run on that shard — are there any that obviously check their own queue size? That should be a fairly quick check, because most don't, but obviously some will be more complicated.
B
Then the second step is: okay, that looks fine, let's go create the change issues and start the rollout, basically. So I just consider that a pre-check step before creating the change issue; I don't think that's necessarily a huge task on its own. I don't know what you think, Wang Min.
C
Yeah, I think it is just a pre-check before we do the rolling out, and maybe we have an after-check to verify it as well.
C
Check on the metrics — and if anything is missing, close that as well. And we are not quite the ones who are really confident about a worker's budget, so we may need the owner of the worker as well. Like, if we continue with the Elasticsearch shard, we have to inform the Global Search team — yes, indeed — when rolling out as well.
B
Hopefully this is fairly mechanical, although we do need to write up the steps. I suppose the first step is to write up the steps we should do before and after: how to check that we think this shard is safe to migrate, then how to migrate it, and then what to look for after it's migrated. And then we would go and do those for each shard.
A
Yeah, another question just came to mind... no idea, it's gone — it's popped out of my head. I've put the notes here. So we're going to keep the task of fixing the remaining named queues out of this project. Then, for each of the shards that exist, we're going to raise an issue per shard to look at the workers and create a change issue. Now I remember the question: can we copy the structure of the catch-all migration change issue for these ones?
B
I think that would be part of that first step. So, in the first step I was describing, we define: these are the pre-checks, this is the change issue, this is what we look for afterwards — because the post-checks, what we look for after rolling out the change, are part of the change issue anyway. Yeah — it says what you look for. So I think...
A
Yeah, because also, if we're going to be bringing in any stage groups because something has gone wrong on the shard, we would then raise a specific issue — this worker didn't work, or this worker is a problem — but we only have to create those if something is not working properly, rather than creating a whole bunch of issues and a whole bunch of change issues up front. I agree — let's just use change issues.
A
I'll go ahead and raise the first one, and I'll ask — who should I ask to build up that issue with me, so that I can then go and create all the others? Is that something that you and I can do, Wang Min, or is it something I should ask Matt or Craig for?
B
I
think
it
would
be
good
to
get
well.
You
need
to
get
a
sre
review
anyway,
so
it'd
be
good
to
get
massacre
to
review
it,
but
I
think
I
think
yeah,
if,
if
you
two
do
it
together
initially
and
then
go
from
there,
they
can
because
there's
also
this
clone
issue
quick
action
as
well
now,
so
you
can.
B
Yeah, I think it would be good to say that the first step — checking the queues to be migrated — is done by a backend engineer on the team, and then they pass it to an SRE to actually execute the change, rather than making Matt and Craig do all of this.
A
No
for
sure,
I
also
think
that
some
of
the
post
checks
should
be
done
by
that
engineers
as
well,
because
yeah.
What
I'm
really
hoping
is
that
we
can
do
this
like
craig
rolls
it
out
back
in
engineers
check
it.
Matt
gets.
It
gets
like
some
kind
of
preparation,
like
I'm
trying
to
use
the
whole
the
the
the
time.
A
Okay,
all
right,
so
we
have
a
plan
on
what
we're
gonna
do.
Next.
Is
there
anything
else
we
want
to
cover
on
this,
because
if
we
could
keep
this
to
half
an
hour,
then
that's
amazing.
I
don't
think
we
need
a
whole
hour.
A
Cool,
I
hear
silence,
that's
great,
so
this
is
done
I'll
write
this
up,
we'll
get
we'll
get
started
with
this,
and
hopefully
we
get
through
the
we
get
the
catch-all
shot
progressing
along
nicely
so
that
we
could
get
started
with
this
part
next
week.
But
let's
see
how
we
get
on.
C
Actually,
I
have
one
question:
so
what
are
the
ac
materials
for
this
epic?
So
basically
we
might
write
it
on
a
day
chart.
We
have
to
light
every
variegate
the
qc
later,
but
we
want
to
push
that
to
later.
B
I
will
push
that
to
later,
because
we
can
only
realistically
do
that
and
get
15.0
which
will
be
next
may
so.
B
Yeah
exactly
we
just
have
to
support
both
both
for
a
year
basically,
but
that
gives
us
time
to,
like
you
know,
shake
out
any
issues
like
improve
the
documentation,
etc,
etc.
So
it's
not
all
bad.
A
So
I
think
that
the
biggest
success
criteria
for
this
was
related
to
the
okr
that
we
set,
which
was
to
reduce
the
the
cpu
set,
the
peak
cpu
saturation
from
75
to
25
percent.
A
Okay, so if we put the two together — if we say that, between catch-all and single queue per shard, the goals are to reduce the CPU saturation, and then the second goal is to not leave behind so much technical debt that we fall over it later — how do we frame that so it sounds a bit more official?
B
Yes, yeah — and when you say "at the end of this project", that's usefully ambiguous, because it could mean before we close this project or just after we close it, but basically that should be trivial once we've done this project.
A
Okay. Then I think what I'm also going to do is raise one epic to be the parent of the catch-all and the single queue per shard ones, so that it's all wrapped up into one, because these are both related.
A
Okay, cool — anything else?
C
There's one other issue: we should prepare the communication plan for this process as well, because it could be a really long-running migration, and the stage groups should be aware of that. And maybe we want to announce it again later.
A
I agree that that's the goal — no one actually notices that we did anything — but I think we still need to tell people that it's there. Because also, if this has the impact we think it's going to have, it's going to be pretty awesome to have this much more headroom available, and it's just nice to advertise to people that we've done a cool thing and no one noticed, which is great. If that makes sense — we did a cool thing without breaking a bunch of stuff.
A
Yeah, if there's nothing else, this has been a very productive call. I will go and put all of this into issues and get the ball rolling here, and let's see if we can get the catch-all stuff done this week. Awesome — hope you both have a great rest of your day. Thanks.