From YouTube: 2020-01-07 Background processing kick-off demo
Description
Part of https://gitlab.com/groups/gitlab-com/gl-infra/-/epics/96
A
All right, well, hi everyone, and welcome to the first inaugural demo for this team. Let's see how it goes, but I am going to give you a short intro on what is expected here. So basically, the expectation here is low-level technical discussion that develops around the work that was accomplished throughout the week. Whoever has something to show can put themselves on there, and we can go and discuss what value it has and, if it has any value, how to increase it, or, if it doesn't.
A
Why was it done, then. More importantly, with whatever demo we do here, it is also important to just go through the motions. If the thing you're showing is hard, you should do it regardless of how hard it is. So if something is not working properly, that is the point to highlight, and we want to see whether there are better ideas or whether we can do something a bit better as a whole team.
A
This has been successful in the two other teams I was running. At the beginning it is a bit awkward until you get into it, and then, when you get into it, there is a lot of value you extract from just understanding where others are in their work and what kind of work can be improved. I might be jumping in to ask questions. Some of them are going to be, well, actually, most of them are going to be very dumb, but I'm hoping I'm going to squeeze in a nice question as well.
A
Everyone is encouraged to stop folks, ask questions, and do the same thing over and over again until we get to the bottom of it. For now we'll do this once a week, most likely on Fridays from next week; let's see how this goes first. So with that, Andrew has the first demo he wants to share. Andrew, share your screen and take it away.
B
Instead of saying that this team is responsible for this Sidekiq worker, we can say this feature is associated with the Sidekiq worker, and then, as the teams change around, we can go look at the stages file, the stages document in the handbook, and that helps us map it to a team. So, in here.
C
B
Say, you know, if I say: oh, we've got a problem with an incident management feature. Actually, I'll explain it better in a second. Let me say: the Pages feature is giving us errors 70% of the time. We can go back in here, and we will see that the person to speak to about the Pages feature is probably Davi Frame, so he's the person that we need to go and speak to about that. But so, giving a quick...
B
Yes. So, basically, we added the attributes, and then we added some functions to the classes, like the feature category getter and latency_sensitive_worker?, question mark. How do we say that, specifically, the question mark at the end? When you're talking about a method that's got a question mark on it, would you just say it in a questioning kind of way?
B
So: worker class has external dependencies, worker_has_external_dependencies?. But yeah, so, basically, the classes have these methods on them. So we know that for the classes, you know, whatever worker or mailer, we can pull these off. Obviously it's only if the class includes the worker attributes module, which we test. And, funny story, the first version went all the way into staging without that and didn't work very well, but we fixed it. But anyway, so we take those labels and we apply them onto the metrics.
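The worker attributes being described can be sketched roughly like this. It is an illustrative simplification, not GitLab's actual implementation; the DSL and predicate names (feature_category, latency_sensitive_worker?, worker_has_external_dependencies?) follow what is said in the demo, and the NewEpicWorker example matches the worker shown later on.

```ruby
# Illustrative sketch only: a concern that lets a Sidekiq worker class
# declare a feature category and latency sensitivity, exposing them as
# class methods so labels can be pulled off later.
module WorkerAttributes
  def self.included(base)
    base.extend(ClassMethods)
  end

  module ClassMethods
    # Acts as both DSL setter (inside the class body) and getter.
    def feature_category(category = nil)
      @feature_category = category if category
      @feature_category
    end

    def latency_sensitive_worker!
      @latency_sensitive = true
    end

    # Predicate methods conventionally end in "?" in Ruby, which is
    # the question-mark naming discussed here.
    def latency_sensitive_worker?
      !!@latency_sensitive
    end

    def worker_has_external_dependencies?
      !!@external_dependencies
    end
  end
end

# Hypothetical worker matching the example later in the demo.
class NewEpicWorker
  include WorkerAttributes

  feature_category :agile_portfolio_management
end
```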
B
The Sidekiq jobs completion seconds metric. So if you go take a look at this: these have now got a lot of labels on them. It doesn't look great, but it doesn't add to the cardinality, because each worker only has one specific value per label, so we're not getting this nasty cardinality explosion. But it's kind of...
B
Yeah, and that is pretty much why I spent quite a lot of time just going through the individual ones, sadly. And then, so you can see here, this particular one is the new epic worker that we're talking about. So I guess that's what, when you create a new epic in GitLab, it's performing something on the new epic. And you can see here we've got all these attributes: no, it doesn't have external dependencies.
B
No. What's the feature category? Agile portfolio management. And then, obviously, we've got things like whether the job succeeded or failed, whether it's latency sensitive (it's not), and then the priority and all the other attributes that we used before. But we get these new attributes, and so we can aggregate everything up according to those attributes, and that gives us this kind of first version of this dashboard. Technically, what we can... I haven't been able to yet, but I'd like to sort this every time with the worst at the top.
B
For some reason I can't get tables in Grafana to do that. So, obviously, there are two sortable columns in this dashboard. The first one is Apdex, which is effectively: what percentage of your requests complete within an acceptable amount of time? We want that to be a hundred percent, and the lower it is, the worse it is. And then this one is what percentage of your requests (workers, in this case) end in an error, and obviously for that, lower is better. So the two are slightly different.
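The two columns can be sketched as simple ratios. This is an assumed simplification of what the dashboard computes, not the actual recording rules:

```ruby
# Apdex: fraction of jobs finishing within an acceptable threshold
# (higher is better). Error ratio: fraction of jobs ending in an
# error (lower is better).
def apdex(durations, threshold:)
  return nil if durations.empty?

  satisfied = durations.count { |d| d <= threshold }
  satisfied.to_f / durations.size
end

def error_ratio(total:, errors:)
  return nil if total.zero?

  errors.to_f / total
end

runtimes = [0.4, 1.2, 9.0, 14.0]   # job runtimes in seconds
apdex(runtimes, threshold: 10)     # => 0.75 (3 of 4 within 10s)
error_ratio(total: 200, errors: 5) # => 0.025
```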
B
But if we roll it up by feature category, we can see here that five percent of the things that are not owned don't meet their latency requirements, and that's kind of not surprising: obviously, the stuff that nobody owns is always the stuff that's going to perform the worst. And when we did all that worker attribution, one of the things we said was that we should really aim to reduce that down to zero, or close to zero, over time.
B
The other thing to point out is that the values I've used are kind of a thumb-suck, and we should probably discuss them with the wider audience. What I'm saying is that if we declare that something is latency-sensitive, we say that it needs to finish within ten seconds. This was a discussion I was having with Sean on Friday. I think we're saying the infrastructure team will make sure that your job starts on time.
B
If it's a latency-sensitive job, we'll make sure it starts on time, as long as the engineering teams that develop that job make sure that it finishes quickly. Like, we can't have things that take ten minutes to run; if we have something that takes ten minutes to run, we can't guarantee that we can start it in, you know, five milliseconds. So what we're saying here is that things that are latency sensitive need to finish within 10 seconds, and things that are not latency sensitive.
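The SLO split being described can be written down as a tiny sketch. The 10-second and 5-minute values come from the talk; the method name is an assumption:

```ruby
# Latency-sensitive jobs must finish within 10 seconds; everything
# else must finish within 5 minutes on average.
LATENCY_SENSITIVE_THRESHOLD = 10 # seconds
DEFAULT_THRESHOLD = 5 * 60       # seconds

def execution_threshold(latency_sensitive)
  latency_sensitive ? LATENCY_SENSITIVE_THRESHOLD : DEFAULT_THRESHOLD
end

execution_threshold(true)  # => 10
execution_threshold(false) # => 300
```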
B
They need to finish within five minutes on average. And if we aggregate that up, you can see here that, other than unowned work, the next slowest thing is container registry, and one of the cool things you can do in Grafana is you can actually click on that and it will filter, and you can see here that container registry has these two jobs, and this is the worst one performance-wise: the delete container repository worker, or whatever it's called. So we go take a look at that; we're going to see here, well...
B
The next step with this is obviously to do the same thing with routes, or controllers and actions perhaps, and then, once we've got that, we can have a much more overall view, because obviously Sidekiq is only part of the system. Once we have everything, then we can start prioritizing, and maybe using this as an error budget, and so saying, you know, the container...
B
Obviously, the one thing that this doesn't take into account at the moment is that we're not weighting this by the number of requests. So it might be that, you know (I had that issue with the units), if we go back to this delete container repository: we're saying that in three percent of cases it took more than five minutes, but maybe there were only like seven requests in the last 24 hours, and it doesn't make sense for that team to spend the next months focusing on this, because, you know...
C
B
It happens every now and again, but it's not that bad. And so, before this becomes like a proper error-budgeting tool for us, to help drive those discussions, we need to include some sort of weighting, so that we don't say: oh, there's this thing that happens once a week, and every time it runs it's slow, a hundred percent of the time it doesn't achieve its Apdex goal; you know, it probably doesn't matter. There's another case of that, where there's a feature category called license management, and like 50% of the time that it runs...
B
It ends in error, but it only runs like twice a day. So is that a problem or not? I don't know, but it's not a high priority, right? Like, what's more interesting (it's interesting because obviously it's run since then), but, you know, Pages: look at this, there's a thing called, yeah, this pages domain SSL renewal. And also the other thing you can do is you can actually click through. It's quite nice: you can click through from that table and it'll take you here.
B
I've made that mistake; it's like, use the staging server until you're absolutely sure, you know? It's like, I know what I'm doing, I'm bad now, but, you know. So this is kind of maybe going to help drive that conversation with the teams, and sort of help us define error budgets. It's very, very early at the moment, but it kind of feels like a first step, and I guess that's the end of my demo. Does anyone have any questions? Other questions?
B
There isn't anything. I mean, if you were to look over, depending on the time frame that you used... What would probably happen, the way Prometheus would handle it, is that during the time that it belonged to one feature category it would be attributed there, and then we'd switch over. So, you know, 50 percent...
B
At the time, for this 24 hours, it was owned by, you know, continuous integration, and then another 50% of the time it was owned by Kubernetes configuration, so the attribution would sort of be 50/50, but there's nothing really... Oh, and the other thing shown is that you'd get it double: you'd see it twice in this list, you'd see a dip, because... sorry, it would be, yeah. Sorry.
B
I think I'm answering the wrong question again, but in the GitLab repository there's a script which will take the stages YAML file and generate a kind of intermediate form of it, basically, which is effectively a list of the feature categories, and then the CI job will validate that the feature categories match up with that. And kind of the idea is that every now and again you go and run the script, get a new list of feature categories, and then you check that in manually.
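The generator being described might look roughly like this. The YAML shape, file contents, and function name are assumptions for illustration, not the actual script:

```ruby
# Hypothetical sketch: parse the handbook's stages YAML and emit the
# flat, sorted list of feature categories that a CI job can validate
# a checked-in copy against.
require 'yaml'

def feature_categories_from(stages_yaml)
  data = YAML.safe_load(stages_yaml)

  data.fetch('stages').values.flat_map do |stage|
    stage.fetch('groups').values.flat_map { |group| group.fetch('categories', []) }
  end.uniq.sort
end

example = <<~YAML
  stages:
    plan:
      groups:
        portfolio_management:
          categories: [agile_portfolio_management]
    release:
      groups:
        pages:
          categories: [pages]
YAML

feature_categories_from(example)
# => ["agile_portfolio_management", "pages"]
```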
B
It says it is generated from the stages file at this location; if you would like to update it, please run... and so on. The alternative is that we could just generate this on every CI build. I just kind of felt at the time that it made sense to not break the GitLab build if something was wrong with the handbook, and it kind of gives you a bit of a buffer, because, obviously, the whole world and his wife are working on the feature categories.
D
Okay, so the thing that I've been working on, which Sean and Andrew have probably seen, is adding kind of metadata tags to our Sidekiq logs. We did that because, right now, the only way an SRE could know which project's job was running, or which user triggered a job, or anything like that, was to know the arguments: they need to know the code and the order of the arguments to know which IDs were being used, and so on. So, to get around that, we added metadata tags on top of the jobs.
C
So, I don't know if we actually told anybody about this, but Stan already used this in something that he shared, so I think, Chris, having it there, people see it and they're like, oh, that's handy. I mean, it's always great to tell people about it, but even just having it there means that, you know, you can just...
D
For example, you know, the bug we had yesterday, which we worked around quite quickly, was that we would attribute mirror updates to a project that had triggered an update. A project that received a webhook would have its mirror update prioritized, and then the updates that got scheduled, because they were even more urgent than others, would be attributed to that project, which is obviously incorrect. So we fixed that yesterday.
D
Right now this is more in line with what we expect: gitlab-org is, as I expected, the biggest one, the one with the most workers, and mailers are always present, I've noticed. And then this one is the one that Andrew was talking about before, which is the statistics one, which now ran a thousand-something times in the past 15 minutes.
D
One namespace, yes, a pretty active queue. This also shows that all of this really works for all the namespaces, the top five namespaces we run. So I think that's pretty cool. Yeah, should I show... maybe that's mostly everything for us, but I could show how we add this metadata to the jobs, which would also explain why the bug was there before.
A
Yeah, go ahead and do that, which I think we should. I would like you to take an action item to record one five-minute video just focusing on this, something that we could use to present to the SREs, like in the infra call; I could link that and then relay this information further, so it makes it around to folks also.
D
Inside the application context, for example, if we don't have a group variable, then we are going to fall back to, like, a root namespace: it's then going to be the root namespace of the project. I actually fixed it like that, so we don't always need those variables to be set for the plan. I think you can do something like that.
D
There's a client middleware that will add the current context to the job. This job object here is a hash that gets written out to Redis for the Sidekiq server to pick up, and then on the server we reinstate it, so everything that happens within this job has the same context. Yep.
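The client/server middleware pair being described can be sketched like this. The class names, the 'meta' key, and the thread-local storage are assumptions for illustration, not GitLab's actual code:

```ruby
# A client middleware copies the current application context into the
# job hash before it is written to Redis; a server middleware
# reinstates it around job execution, so log lines emitted inside the
# job carry the caller's metadata.
module ApplicationContext
  CONTEXT_KEY = 'meta'

  def self.current
    Thread.current[:app_context] ||= {}
  end

  # Temporarily replace the context for the duration of the block.
  def self.with(context)
    previous = Thread.current[:app_context]
    Thread.current[:app_context] = context
    yield
  ensure
    Thread.current[:app_context] = previous
  end
end

class ContextClientMiddleware
  # Runs where the job is enqueued: stash the caller's context
  # (project, user, root namespace, ...) into the job hash.
  def call(_worker_class, job, _queue, _redis_pool)
    job[ApplicationContext::CONTEXT_KEY] = ApplicationContext.current.dup
    yield
  end
end

class ContextServerMiddleware
  # Runs on the Sidekiq server: everything inside the job sees the
  # same context the caller had.
  def call(_worker, job, _queue, &block)
    ApplicationContext.with(job[ApplicationContext::CONTEXT_KEY] || {}, &block)
  end
end
```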
D
Do we want to document how people can change these values, like if they know they're going to be working in a different context within a request, for whatever reason? We can document how they should be using what we built to articulate the context, or should we be documenting what the context looks like, which is what is always going to change? I think the latter is more important to us, and I think that's only going to be changed by us, I think.
C
So that's as important for developers to know, like how this all... they don't need to know exactly how it works, but they need to know, you know, this is how fields get added here, this is how you could add a field if you wanted to add one, stuff like that. So I think... and then.
D
Anything more you wanted to know around this? The next thing that I'm going to do, where we're still lacking a bit, is the thing that I mentioned, the bug with the mirrors. That happened because I assumed that updating all mirrors would only happen from, like, a cron job and would therefore not have metadata, but it also got triggered from requests within a project, and that's why it does have metadata. And so one of the next steps that I'm going to do is add metadata for cron jobs based on the arguments.