From YouTube: 2020-03-06 Background jobs improvements demo
Description
Part of https://gitlab.com/groups/gitlab-com/gl-infra/-/epics/96
A: I'll start with something real quick and then we'll go from there. So let me just share my terminal. Can you see my terminal? OK, cool. So: we want to move sidekiq-cluster to be the default in Core for everything. But one issue we have right now is that if you run sidekiq-cluster, it requires two queue groups. That's kind of annoying because the whole point of it is that you want to select specific queues, and we have a couple of different ways of doing that now. But for the default case, which I would say is almost every GitLab instance out there except GitLab.com, you just want to process all queues. So now, if you pass an asterisk, it will run all queues. This isn't actually merged yet.
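The asterisk behaviour described above can be sketched like this. This is a hypothetical Ruby sketch of the queue-group parsing, not the actual sidekiq-cluster code, and the queue names are made up:

```ruby
# Hypothetical sketch: a literal "*" is not a wildcard pattern, it simply
# expands to every known queue; anything else is a comma-separated group.
ALL_QUEUES = %w[default mailers pipeline_processing].freeze # example names

def expand_queue_group(group)
  return ALL_QUEUES if group == '*'

  group.split(',')
end

expand_queue_group('*')               # every queue
expand_queue_group('default,mailers') # just the named queues
```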
A: Yes, that's why I'm on this branch. That works whether the fancy queue selector syntax is enabled or not, because it's not a wildcard: the asterisk just means everything. Also as part of this I added a warning, so if you do something like negate everything, you'll see it. That's also helpful for the queue selector syntax, because you might select something like has_external_dependencies.
A: On my branch, it stops the sidekiq-cluster process from starting. I haven't tried this in Omnibus, but I'm pretty sure that's an error, because runit tries to manage the processes, right? So there's no sidekiq-cluster parent process started in this case, because it's failed, so I think you get a pretty early warning in Omnibus for that case. Yeah, cool. And then another quick thing I'm just going to mention while I'm here: I was working on dashboards.
A: So we already have a histogram of CPU time for Sidekiq jobs, but we just never added it to any dashboard after we added it, like, months ago. I'm also adding Gitaly time and database time. So I was working on this, and the work Andrew's done on making this all available in Grafonnet makes it quite nice. So, a queue detail...
A
Can't
I
can't
really
tell
what
he
back
is
because,
like
I
can't
see
the
the
subtle
grid,
gradients
subtle
differences
in
color,
so
this
is
just
a
line
chart
with
a
line
for
the
50th,
90th,
95th
and
99th
percentile,
and
because
it's
in
graph
on
it
using
JSON
it,
it's
quite
easy
to
you
know,
define
that
I
want
multiple
time
series
like
you
know
these
are
the
quantiles
I
want.
I
can
just
write
a
function
that
that
generates
those
those
queries,
all
those
targets,
I
guess
chronicles,
which
have
query
and
a
legend
so
yeah.
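The "function that generates the targets" idea can be sketched as below. The actual code is Grafonnet/jsonnet; Ruby stands in here, and the metric name and label are hypothetical:

```ruby
# Sketch of generating one Grafana target (query + legend) per quantile,
# the way the dashboards-as-code approach described above allows.
# The metric name and "queue" label are assumptions for illustration.
QUANTILES = [0.5, 0.9, 0.95, 0.99].freeze

def quantile_targets(metric, queue)
  QUANTILES.map do |q|
    {
      query: "histogram_quantile(#{q}, rate(#{metric}_bucket{queue=\"#{queue}\"}[5m]))",
      legend: "p#{(q * 100).round}"
    }
  end
end

quantile_targets('sidekiq_jobs_cpu_seconds', 'post_receive')
```

The same loop in jsonnet is what makes adding a fifth quantile a one-line change.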
A: It was a nice experience working on that, and I also added a couple of things, which are still in this merge request, to make it a little bit easier to test this out. Before, you could test it out, but you had to get an API token from somewhere, and I'm really lazy about stuff like that. So if you run the test-dashboard script in dry-run mode with a specific dashboard path, it will just print out the dashboard JSON.
A: Obviously the API key thing is probably easier once you've set it up; I'm just super lazy about stuff like that and won't do the work unless I have to. So yeah, that was already nice. You know, if anybody was a bit worried about making dashboard changes (there's a couple of issues on the board, and you might see one that's related to that): it's not that scary. It's actually quite fun! So yeah, that's me! Oh, I'll quickly show you what the chart looks like.
A: Exactly. And jsonnet I found a bit weird, because it doesn't have great documentation yet. Like, the standard library says... say I needed foldl, so a reduce from the left side, and it was like "this takes a function", and I was like, okay, well, which argument is the thing I'm accumulating and which argument is the individual item of the array that's being folded? It didn't say, so I just had to trial-and-error it, basically. Obviously there's only two cases, so that wasn't too bad.
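For the record, the convention in question: jsonnet's `std.foldl` calls the folding function with the accumulator first and the array element second, the same order as Ruby's `inject`, shown here:

```ruby
# A left fold takes a two-argument function; the ambiguity described above
# is which argument is which. In jsonnet's std.foldl (and Ruby's inject)
# the accumulator comes first, the current array element second.
result = [1, 2, 3].inject([]) { |acc, item| acc + [item * 10] }
# acc starts as [] and grows; item takes each array element in turn
```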
A: So it's this thingy here. Instead of the heatmap, it's just, you know: this is what each quantile of CPU time is for this queue, and then we'll have something similar for Gitaly and database once I've actually done that work. But yeah, that's all my demo. Bob? Oh, first of all, any questions, and then Bob, if no questions.
B: Okay, if there's more questions for Shawn then just interrupt me. Let me start by sharing my screen; that's going to show the entirety, because I don't know which window it is yet. So I've been working on automatically not running Sidekiq jobs if they are already enqueued. As an example, here I marked the ProjectImportScheduleWorker as idempotent, which I think it is; like, you could even merge this, but we'll need to get to that later. I'm going to stop Sidekiq, so I don't need to hurry when scheduling jobs.
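The "marked as idempotent" part looks roughly like this in a worker class. The `ApplicationWorker` stand-in below is a minimal sketch for illustration; the real mixin in the GitLab codebase does much more:

```ruby
# Minimal stand-in for the real ApplicationWorker mixin (assumption: the
# real one provides idempotent! among many other things).
module ApplicationWorker
  def self.included(base)
    base.extend(ClassMethods)
  end

  module ClassMethods
    def idempotent!
      @idempotent = true
    end

    def idempotent?
      !!@idempotent
    end
  end
end

class ProjectImportScheduleWorker
  include ApplicationWorker

  idempotent! # opt this worker in to deduplication of queued duplicates

  def perform(project_id)
    # Re-running this job for the same project produces the same result,
    # so a second copy waiting in the queue can safely be dropped.
  end
end
```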
A: I love this, like, coding well, or, like, preparing well while still standing out. Yeah.
B
Like
that,
because
what
to
see
Caleb
only
enter
is,
let's
say
so:
the
first
one
will
get
scheduled
and
the
second
one
didn't
get
scheduled
and
it
prints
out
is
nice.
Well,
nice
error
message
telling
it
that
it's
deduplicated
and
potatoes,
and
it
says
it's
a
duplicate
of
the
job
that
was
previously
scheduled,
interesting
things
about
this
is
like
we've
all
covered
a
little
bit
of
a
bee's
nest
of
sidekick
logging,
as
John
has
seen
me
complain
about
so
right
now.
This
got
printed
here
to
the
standard
out
of
my
current
process.
B: Maybe I'm going to show that quickly too. So, during the discussion we also looked at the sidekiq-unique-jobs gem, which does a lot of kinds of things, and we were kind of reluctant to just include it, because we can't easily play with it and gradually roll out stuff. So we have the idea of implementing strategies, and right now we have one strategy, and it's called "until executing", which means that we've got two methods to call, one on schedule, and for this strategy most of the logic lives there.
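The "until executing" idea can be sketched as follows. This is a hypothetical simplification, not GitLab's actual strategy class: a real implementation would keep the key in Redis with a TTL, while a plain Hash stands in here:

```ruby
# Minimal sketch of an "until executing" deduplication strategy: a duplicate
# is anything scheduled while an identical job is still waiting; once a
# worker starts executing, identical jobs may be scheduled again.
class UntilExecutingStrategy
  def initialize(store = {})
    @store = store # stands in for Redis
  end

  # Returns true if the job should be scheduled, false if it's a duplicate.
  def schedule(job_key, jid)
    return false if @store.key?(job_key)

    @store[job_key] = jid
    true
  end

  # Called when a worker picks the job up.
  def perform(job_key)
    @store.delete(job_key)
  end
end

strategy = UntilExecutingStrategy.new
strategy.schedule('ProjectImportScheduleWorker:42', 'jid-1') # scheduled
strategy.schedule('ProjectImportScheduleWorker:42', 'jid-2') # duplicate
strategy.perform('ProjectImportScheduleWorker:42')
strategy.schedule('ProjectImportScheduleWorker:42', 'jid-3') # scheduled again
```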
B: If it's idempotent, it's a duplicate, and the feature flag is turned on, then we're going to do what Sidekiq wants us to do to not schedule the job. And in the future we could implement different strategies for this. For example, in a lot of jobs we have a pattern that is "only execute this if you can get a lease"; we could implement a strategy that does this for us, which could allow us to just not schedule the job in the first place if something has already taken the lease, if that makes sense.
B: Regarding strategies, I've created an issue. Like, I think this one, the "until executing" that we've discussed before, is going to be the most important one for us, but I created an issue for the others that we could look at. It could be from a performance perspective, or the amount of jobs run, but also just from, like, a developer-happiness kind of thing: it's easier to do it like that than to have to do this thing with the lease in all of the jobs, otherwise.
B: Yeah, it is exactly like the idea of... yeah. For now, I want to do this instead: like, first I want this first merge request, which has been merged already, but that's not the point. I want to see how many duplicates we have, which will be visible in the first step, and then based on that we can see which workers are often duplicated and then try to make those idempotent. So we can deduplicate them and have the most effect; that ties into the issue.
C: Most of my work was breaking down everything we had in the first version of the iteration part. So I broke it down, like, releasing in stages, so yeah, we can refine it a little bit more today. But the idea is: first we make sidekiq-cluster the default, or make it happen behind a feature flag, and then you start using the script on GDK and make the background jobs use the sidekiq-cluster processes.
C: It's a tricky, tricky one; there's some investigation there. There are two ways of configuring Sidekiq: Sidekiq itself and sidekiq-cluster, and if we run it as it is today, out of the box you get all those processes, where one would listen to all the queues, which is sucky, and the other ones just listen to a subset of queues. Basically, what we want to do is make the sidekiq-cluster configuration start each process by default.
A: There was a small mention the other day that someone accidentally had Unicorn and Puma both enabled on the same host before, and that's how we found out that it's possible to enable the regular Sidekiq process and sidekiq-cluster at the same time, which is also a surprising thing to be able to do, because I was expecting that it would work out of the box, because we...
A: I see what you mean. Yeah, I guess that's probably where it is, but yeah, that was surprising to me, because I figure... like, yeah, I guess I was just in a GitLab.com-centric mode. But we have had a couple of, or at least one customer, reach out and say, like, hey, would you make sidekiq-cluster able to set different concurrency settings for each process? Because I want to run multiple processes handling different queues on the same host. Which is not something we do on GitLab.com, but that doesn't mean it's a totally invalid idea.
C: ...because GitLab.com is running slightly separate setups, where each node will have one Sidekiq process profile. So if you think about it, inside each node, one sidekiq-cluster process manages the Sidekiq processes based on each one, and that can make much sense because we're basically going for the same thing. But it still wasn't a sane default outside of that, so we'd just end up with that configuration of constraints, which is, like, impossible.
D: Somehow... okay, sorry about that. Yeah, so if you saw my comment there, it seems like these environment variables are only available on App Engine, so I think we'll have to set them explicitly. We can set environment variables, you know, in the config, so that's not a problem; you just need to know what to stick them to... the cloud project, obviously, I'm not sure. Like, GAE_SERVICE and GAE_VERSION: these are Google App Engine-specific variables, right? And we're not running on App Engine, right? So...
B: Cool. Then I just wanted to bring up one last thing: what I plan on doing with the Sidekiq log files, and maybe explain the situation the way it is now. So right now we have the Sidekiq logs, which we read from, like, the sidekiq-cluster standard out; the process monitor provides that for us, and those go into the index. We also still have sidekiq.log inside the Rails root.
B: So I created an issue to get rid of that, and for now, to get myself unblocked with the logs that I showed you before for the deduplication, I suggested adding a new logger, called sidekiq_client, that will emit structured logs if you've configured the JSON flag in gitlab.yml, or gitlab.rb in Omnibus. And then we'll need to update fluentd on all of the nodes to look at that file and add it to the Sidekiq index. Is that unreasonable for anybody, or...?
A: I think removing it from the admin UI made sense; I think John is the person. But also there's a couple: I know Craig and Igor and Michael have been working on logging stuff as well lately, because they've been sort of fighting issues about some cases where our log schema isn't consistent between entries in the same log. So yeah, Jeff, are you okay with that plan?
B: No, that's, like, part of the plan, but I moved that out into a separate issue. Because, like, here, if I start writing to that log, we still need to add it to the fluentd and so on, and it's still not going to be... actually, no, it's still not going to be logging there anyway, because Sidekiq by default logs to standard out, which is what we've been reading. But if you do that on the client side, like on the Sidekiq client side, the thing that schedules jobs could be Puma or Unicorn.
D: Yeah, so I would say... this is actually less of an issue when Sidekiq moves to Kubernetes, because we're going to be directing logs... all the logs coming out of the Sidekiq pod are going to go to a single index. And right now that is a mixture of the Sidekiq standard out and production.log, because, you know, we have those two, and actually anything in /var/log/gitlab, like, that comes from a Sidekiq pod goes to the Sidekiq index.
D: This is actually... yeah, this is sort of a problem, of course. Like, with fluentd you can do anything; we could, if we wanted, alter the log. So the way that fluentd works in Kubernetes, right, is that we're looking at the logs from the pod, and it's a mixture of everything that pod is logging, right? You know, so we could always have fluentd kind of inspect what kind of log it looks like and then redirect it to a different index.
D: Yeah, so we have a fluentd daemonset, so we have, like, one pod per node that's running fluentd, and it's looking at all of the logs that are being outputted by all of the containers that are running on the node. And what we have is the container name, which we do, like, a glob for: everything that's a Sidekiq container, we put into the Sidekiq index; anything that's a registry container, we put into the registry index. So yeah, I mean, and all of the logs...
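The container-name routing described above looks roughly like this in a fluentd configuration. This is a hypothetical sketch; the actual tag patterns and output plugin settings in the real daemonset will differ:

```
# Hypothetical fluentd sketch of the routing described above: match events
# whose Kubernetes container name matches a glob and send them to a
# per-service Elasticsearch index.
<match kubernetes.**_sidekiq-**>
  @type elasticsearch
  index_name sidekiq
</match>

<match kubernetes.**_registry-**>
  @type elasticsearch
  index_name registry
</match>
```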
D: ...it wants, in fluentd. So I might have to unwrap it, but you can unwrap it as many times as you want. That's a possibility, yeah. It would be better if we could just add it to the log message itself, with some annotation for structured logs. Maybe we introduce, like, a type or a service annotation or something, so that, you know, every log message will have this. Yeah.
A: I think so, because I was talking about this with Craig a bit the other day. Like, one of the issues we also have is this: we write different log files from the application, and, like, one of those log files writes a response that's always an object, and in one of them the response is a string, and that doesn't work because we're putting them in the same Elasticsearch index. But from an application perspective...
A: You don't really know that they're going to go into the same index, and the reason you're putting them in different log files is because they have different shapes. So if we, on the application side, were responsible for tagging them and saying, like, "this is a log of this type", then maybe that solves both of those problems, because then we can say, like, you know, if it's this type, then it has this schema, or this... you know, we're probably not going to define a formal schema.
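The tagging idea can be sketched as below. This is a hypothetical illustration, not GitLab's actual logging code; the field names are assumptions:

```ruby
require 'json'

# Sketch of the tagging idea above: the application stamps every structured
# log line with a "type" field, so downstream routing and schema checks
# don't have to guess from the file name.
def emit_log(type, payload)
  JSON.generate(payload.merge(type: type))
end

emit_log('api', response: { status: 200 }) # "response" is an object here
emit_log('workhorse', response: '200 OK')  # and a string here; the "type"
                                           # field keeps the schemas apart
```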
A: Yeah, so Craig created an issue where he was like, we should, because we can, like, verify this, right? Like, I think part of the problem is we're still in this weird sort of in-between state where we support structured logging but it's not required, and so you have to sort of think of everything twice, like when we supported two databases. So maybe we just need to finish that up first, but yeah, I think we should be able to verify, like, on the application side, that there is a particular schema that we expect this log to match.