From YouTube: CI Backend Architectural Walkthrough - 2020-05-07
Description
Watch Fabio Pitino (Senior Backend Engineer, Verify:CI) walk through the backend architecture of CI for our team, followed by a Q&A with the team.
Diagram: https://drive.google.com/open?id=1RsDOOhVu7-ZSLLMD_mclIaX2e_EQ-8V3
A
Okay, so today I'm going to show you what the CI software architecture looks like today. When I say software architecture, I don't mean the infrastructure side: we will look at the software components. I'm trying to simplify a lot of the concepts without going into too much detail, since telling you everything is not the purpose of this talk. But if you feel like you want a little bit more detail, we can dive into the details.
A
Okay, I prepared this diagram here. Can everybody see the diagram? Okay, so after this I'll be sharing it. What I'm thinking is that if there is any feedback about it, we can definitely improve it and put it on a documentation page. I have some ideas where this could go in our developer documentation, I guess.
A
It's definitely open to improvement. So we can start from the left-hand side. On the left-hand side we have the user, which can be a human being, and that user can trigger pipelines in various ways. This column here is the type of triggers, listed from the most important one to the least famous.
A
So the git push is definitely one of the most popular events that trigger pipelines. Pipelines are also created when we use the API, when you click a button to run a pipeline, or when a merge request is created or updated. Merge trains are another feature that can trigger pipelines. You can also have a pipeline set up with a cron-like schedule that is triggered at a specific time, and you can also subscribe to another project.
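For reference, here is a minimal sketch of the trigger sources just listed, written as a Python enum purely for illustration; the names are mine, not identifiers from the codebase.

```python
from enum import Enum

class PipelineTrigger(Enum):
    """Illustrative list of the pipeline trigger sources mentioned above."""
    GIT_PUSH = "git push to a branch or tag"
    API = "call to the pipelines API"
    WEB_UI = "clicking the 'Run pipeline' button"
    MERGE_REQUEST = "merge request created or updated"
    MERGE_TRAIN = "merge train"
    SCHEDULE = "cron-like pipeline schedule"
    SUBSCRIPTION = "subscription to another project"

# Example: iterate over the known triggers
for trigger in PipelineTrigger:
    print(f"{trigger.name}: {trigger.value}")
```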
A
We have a feature similar to what we currently have with merge requests that also works on pull requests, for projects connected to GitHub for CI/CD. So any time you change the pull request by updating the code, that also triggers a pipeline. And the last one I'll actually talk about later. So these are some of the triggers that we have as of today for creating pipelines, unless I'm forgetting some.
A
And basically all of these triggers rely on the CreatePipeline service, and the outcome is a pipeline being persisted into our database. Here I'm abstracting a little bit with this "pipeline data": it can include the pipeline itself, the builds, the stages, artifacts, basically anything related to the pipeline. I'm just simplifying the concept. And you can see that the dotted lines are basically data flow, data that we write.
A
The straight lines, on the other hand, are the process flow. So you can see the CreatePipeline service persists the pipeline data and, as soon as it finishes, immediately calls the ProcessPipeline service. The ProcessPipeline service is actually our core domain service, and it is invoked continuously throughout the pipeline lifecycle. What this service does is look into the pipeline.
A
Its responsibility is to move each of the jobs to the next state. It takes all of the jobs in the pipeline, which, as soon as they are created, are in the created state, and the purpose of the ProcessPipeline service is to move them towards completion. So for anything in the created state, the ProcessPipeline service figures out which jobs can run first. For example, in a staged approach, all the jobs in the first stage are transitioned into the pending state, and so on.
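To make the staged processing idea concrete, here is a minimal Python sketch. It is not the actual ProcessPipeline service (which lives in the Rails codebase); it only assumes that jobs carry a status and that stages are ordered.

```python
# Hypothetical sketch: jobs start in "created"; the first stage that is not
# yet complete has its created jobs moved to "pending", later stages wait.
def process_pipeline(stages):
    """stages: ordered list of lists of job dicts with a 'status' key."""
    for stage in stages:
        statuses = {job["status"] for job in stage}
        if statuses <= {"success", "failed", "skipped"}:
            continue  # this stage is already complete, look at the next one
        for job in stage:
            if job["status"] == "created":
                job["status"] = "pending"  # now visible to runners
        break  # later stages wait until this one completes
    return stages

pipeline = [
    [{"name": "build", "status": "success"}],
    [{"name": "test1", "status": "created"}, {"name": "test2", "status": "created"}],
    [{"name": "deploy", "status": "created"}],
]
process_pipeline(pipeline)
print(pipeline[1])  # test jobs are now "pending"; deploy stays "created"
```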
A
So it keeps moving jobs until the pipeline is completed. When jobs transition into the pending state, that is a particular state because it's what the runner can actually see: it means the job is being queued. If we jump to the other side of the diagram, we have our pool of runners. These can be shared runners, group runners, or project-specific runners.
A
There is a little bit of simplification at this point, because in reality we have Workhorse intercepting these connections, and if there are no available jobs for the specific runner contacting the server, Workhorse intercepts the connection and immediately returns a response, so the call doesn't even reach the Rails side. But this is a kind of simplification. So whenever a runner connects to the Rails side, we invoke the RegisterJob service.
A
So, for example, if it's a project runner requesting a job, it looks at all the jobs from that specific project, then looks at whether there are any tags and which tags can be matched, and then picks up the first job. In this specific case we also take into consideration the CI minutes that the specific namespace has been using, so there is a certain quota, and if there are no more minutes available, the RegisterJob service simply discards that job.
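Here is a rough Python sketch of that matching logic, under the assumption that a runner carries a type and a tag list and that a quota lookup exists; it is illustrative only, not the real RegisterJob service.

```python
# Hypothetical sketch: scope jobs to the runner, match tags, check the
# namespace's CI minutes quota, and hand back the first match.
def register_job(runner, pending_jobs, minutes_left_for):
    for job in pending_jobs:
        if runner["type"] == "project" and job["project_id"] not in runner["project_ids"]:
            continue
        if not set(job["tags"]) <= set(runner["tags"]):
            continue  # runner cannot satisfy the job's tags
        if minutes_left_for(job["namespace_id"]) <= 0:
            continue  # quota exhausted: skip this job
        job["status"] = "running"
        return job  # the response carries everything the runner needs
    return None  # nothing to do; the runner will poll again later

job = register_job(
    {"type": "project", "project_ids": [42], "tags": ["docker"]},
    [{"project_id": 42, "tags": ["docker"], "namespace_id": 7, "status": "pending"}],
    minutes_left_for=lambda namespace_id: 400,
)
print(job["status"])  # "running"
```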
A
The job is assigned to the runner and returned, basically, as the response. The runner at this point has all the information to run the job and does the execution. But as soon as the job is assigned, we update the job from pending to running, and again any job status update triggers the ProcessPipeline service, because something has changed in the pipeline.
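Conceptually, every status change re-enqueues pipeline processing. A tiny hypothetical sketch of that idea (the worker name is made up):

```python
# Hypothetical sketch: any job status change means "something happened",
# so pipeline processing is scheduled again.
def update_job_status(job, new_status, enqueue):
    job["status"] = new_status
    enqueue("process_pipeline", job["pipeline_id"])

events = []
update_job_status({"id": 1, "pipeline_id": 99, "status": "pending"}, "running",
                  enqueue=lambda worker, arg: events.append((worker, arg)))
print(events)  # [('process_pipeline', 99)]
```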
A
Now the ProcessPipeline service is triggered again, and its job is to take all the remaining jobs and figure out which ones can now transition into the pending state. Or, let's say all of the jobs in a specific stage have completed: the stage now turns into passed or failed, and eventually the pipeline is completed as well. So you can see that any status update triggers the ProcessPipeline service, as do other external interactions.
A
Another thing is that throughout the lifecycle of a job being executed by a runner, there could be some artifacts being uploaded or downloaded. If the job generates artifacts for which the user specified the conditions to upload, those get uploaded and saved to object storage, and some metadata information is also saved in the pipeline database. These could also be test reports and other types of reports generated as artifacts, and artifacts are downloaded by the runner as well.
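A simplified sketch of that artifact flow, assuming an object-storage-like key/value store and a metadata table; the paths and field names are invented for illustration.

```python
# Hypothetical sketch: the file body goes to object storage, while a small
# metadata record is kept in the CI database for querying.
def store_artifact(job_id, name, data, object_storage, metadata_rows):
    key = f"jobs/{job_id}/artifacts/{name}"
    object_storage[key] = data                      # blob lives in object storage
    metadata_rows.append({                          # queryable metadata in the DB
        "job_id": job_id,
        "name": name,
        "size": len(data),
        "storage_key": key,
    })
    return key

storage, rows = {}, []
store_artifact(123, "junit.xml", b"<testsuite/>", storage, rows)
print(rows[0]["storage_key"])  # jobs/123/artifacts/junit.xml
```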
A
So, as I said earlier, when a job transitions into the pending state, it's basically queued for a runner to pick it up. There's another special type of job that we have, which is the bridge job. These are the jobs we use with the trigger syntax to generate a downstream pipeline. Those types of jobs still run when they transition into the pending state, but they rely on an internal service; they are not actually sent to a runner to be executed there.
A
It's something we do internally, and that's why it's a sort of special job. When this bridge job transitions into the pending state, it becomes a pipeline trigger, which is used by the CreateCrossProjectPipeline service. That name is arguably outdated today; it probably needs to be renamed into something more like "create downstream pipeline", since it is now also being used for child pipeline creation. This basically creates the downstream pipeline, which in turn triggers the whole process again: the pipeline is created using the processor and goes through the same cycle.
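A hedged sketch of the bridge-job behavior described above: instead of being queued for a runner, the pending bridge triggers creation of a downstream pipeline. The function and field names here are hypothetical.

```python
# Hypothetical sketch: a bridge job is handled internally, regular jobs are
# queued for a runner to pick up.
def handle_pending_job(job, create_downstream_pipeline, queue_for_runner):
    if job["kind"] == "bridge":
        # not handed to a runner: creating the downstream pipeline is the "work"
        downstream = create_downstream_pipeline(job["trigger"])
        job["downstream_pipeline_id"] = downstream["id"]
        job["status"] = "success"
    else:
        queue_for_runner(job)   # regular job: wait for a runner

bridge = {"kind": "bridge", "status": "pending",
          "trigger": {"project": "group/app", "ref": "main"}}
handle_pending_job(bridge,
                   create_downstream_pipeline=lambda trigger: {"id": 456, **trigger},
                   queue_for_runner=lambda job: None)
print(bridge["downstream_pipeline_id"], bridge["status"])  # 456 success
```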
A
And that covers the lifecycle of the pipeline. There's a lot of information left out here, especially what we do with artifacts and how we manage artifacts. We have, for example, a lot of types of artifacts, and some of them feed into test reports. Some of them are used for a few different purposes, like job logs. These artifacts can also expire: we have a mechanism to expire artifacts and delete them after a while. So there are some details left out, but the purpose here is just to keep it extremely simple.
C
Question one: great job on the graph. It's a lot easier to visualize the architecture versus fumbling around through the code base, so it's an awesome job. I was wondering: after a pipeline trigger happens and we get to the CreatePipeline service, does the YAML processor just pretty much look for a set of defined keywords to feed to the pipeline service, which in turn writes to the pipeline database?
C
Are you asking what sort of configuration is being used? Yeah, correct. So if I have a certain keyword like trigger or stage, does the processor feed that in a certain data structure to the pipeline service, which does various things and then writes that configuration to the database, or creates a data structure for a pipeline?
A
Yes. So the CreatePipeline service basically lays on top of this processor and provides some information about where the configuration for the pipeline to be created is. This information is provided by the CreatePipeline service, which then extracts the YAML.
A
The YAML content, let's call it content, that needs to be used to create a pipeline is fed into the processor. So the CreatePipeline service, depending on how you are creating a pipeline, knows a set of rules for where to find the YAML. The normal scenario is that it grabs the .gitlab-ci.yml file that is in the repository. This is the default scenario, but you might change the path of that file to be a custom path.
A
So in that case, if the YAML file is not at the default location, it looks for the custom path. Or there could be other situations: when we create a downstream pipeline, for example, the YAML file is not necessarily the default .gitlab-ci.yml file. Well, for a child pipeline we might use that one, but it is dictated by what is in the bridge job: when you use trigger you can include a set of files or a snippet of YAML. That is the actual content that is then passed into the YAML processor.
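A small sketch of those lookup rules, assuming the precedence described above (content supplied by a trigger wins, then a custom config path, then the default .gitlab-ci.yml); this is an illustration, not the real resolution code.

```python
# Hypothetical sketch of "where does the YAML come from".
def resolve_ci_config(project, inline_content=None):
    if inline_content is not None:
        return inline_content                      # e.g. content from trigger:include
    path = project.get("ci_config_path") or ".gitlab-ci.yml"
    return project["repository_files"][path]       # read from the repository

project = {
    "ci_config_path": None,
    "repository_files": {".gitlab-ci.yml": "test:\n  script: echo ok\n"},
}
print(resolve_ci_config(project))
```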
A
The YAML processor basically transforms the rules and keywords you specify in the YAML format into a data structure that is then persisted, and there are a bunch of factories that actually make everything consistent in this data structure. Once it's persisted, the ProcessPipeline service takes over. But yes, the responsibility of the processor is to transform a YAML file into a data structure.
A
As I mentioned, what the YAML processor does is generate a big hash of configuration that is normalized. So, you know, if you want to include these files and this other file, or you want to extend this job with this snippet here, it will normalize the whole data structure and validate it as well.
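As an illustration of that normalization, here is a toy Python version that merges an extends-style snippet into a job to produce one flat hash; the real processor handles many more keywords and validations.

```python
# Hypothetical sketch: merge an `extends` snippet into a job so the
# processor ends up with one flat, consistent hash.
def normalize(config):
    jobs = {k: dict(v) for k, v in config.items() if k != ".snippets"}
    snippets = config.get(".snippets", {})
    for name, job in jobs.items():
        parent = job.pop("extends", None)
        if parent:
            merged = dict(snippets[parent])   # start from the shared snippet
            merged.update(job)                # job-level keys win
            jobs[name] = merged
    return jobs

config = {
    ".snippets": {"default-test": {"image": "ruby:2.7", "stage": "test"}},
    "rspec": {"extends": "default-test", "script": ["bundle exec rspec"]},
}
print(normalize(config)["rspec"])
# {'image': 'ruby:2.7', 'stage': 'test', 'script': ['bundle exec rspec']}
```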
A
Assuming everything is fine, you get back a big hash of the data structure. It's not exactly the data structure that gets persisted, but it is returned back to the CreatePipeline service, and the CreatePipeline service knows how to break it down and actually persist it into different tables, so different models. We'll extract all the jobs, which will go into the builds table, all the stages will go into the stages table, and then we'll create a pipeline that also has all the jobs, linked with foreign keys.
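A toy sketch of that breakdown step, assuming an in-memory "database" with pipelines, stages, and builds tables linked by foreign keys; the table and column names are illustrative.

```python
# Hypothetical sketch: split the processed configuration hash into rows for
# the pipeline, its stages, and its builds, linked by foreign keys.
def persist_pipeline(config, db):
    pipeline_id = len(db["pipelines"]) + 1
    db["pipelines"].append({"id": pipeline_id})
    for position, (stage_name, jobs) in enumerate(config.items()):
        stage_id = len(db["stages"]) + 1
        db["stages"].append({"id": stage_id, "pipeline_id": pipeline_id,
                             "name": stage_name, "position": position})
        for job in jobs:
            db["builds"].append({"id": len(db["builds"]) + 1,
                                 "pipeline_id": pipeline_id,
                                 "stage_id": stage_id, **job})
    return pipeline_id

db = {"pipelines": [], "stages": [], "builds": []}
persist_pipeline({"build": [{"name": "compile"}], "test": [{"name": "rspec"}]}, db)
print(len(db["stages"]), len(db["builds"]))  # 2 2
```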
A
So there's a lot of information there, a lot of data, and this is something we definitely have to keep in consideration any time we want to make changes to the schema, because it can require a migration that takes a very long time, and I've actually seen some scenarios where we're starting to hit some sort of roadblock because of the size of the database, so we can't run certain migrations, for example.
B
You asked if we have a single database. We use a thing called PgBouncer, which is basically a connection pooler in front of the backend. It appears to be a single database, so we only query a single database, but deeper down PgBouncer actually has multiple replicas of the database. One of them is the master and the other ones are replicas; all of the replicas are read-only, and PgBouncer actually pools connections across them.
A
Similar to the discussion we were having earlier, it's a different approach where we want to introduce more of a natural queuing mechanism, or a behavior similar to a queuing mechanism. Right now it's more of a polling approach: the runner keeps asking for jobs until there are some jobs that can actually be assigned to that runner. It's basically an eager implementation, with all these runners always connected and always asking for jobs.
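A minimal sketch of that eager polling behavior, just to contrast it with a push-based queue; the loop and its parameters are hypothetical.

```python
import time

# Hypothetical sketch: the runner keeps asking the server for work and
# sleeps briefly when there is nothing to do.
def runner_loop(request_job, run_job, attempts=3, interval=0.0):
    for _ in range(attempts):
        job = request_job()
        if job is None:
            time.sleep(interval)   # nothing assigned yet, ask again shortly
            continue
        run_job(job)

jobs = iter([None, {"id": 1, "script": "echo hi"}, None])
runner_loop(lambda: next(jobs), run_job=lambda job: print("running job", job["id"]))
```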
E
I have a question more about user interaction in the middle of pipelines. Let's say there are a couple of pipelines running, or there's one pipeline running, and a user cancels, let's say, that pipeline. How do we make sure that we are still generating, for example, the test report? Do we still generate a report for what has been run, or do we kind of cancel everything that's the result of the build and just say, well, that's been cancelled?
A
When you cancel a pipeline, the transition also cascades to all the jobs that can be cancelled. In that case, whatever has already been uploaded, so all the artifacts that have already been uploaded, will complete, and those artifacts will be available. So if you want to go and download them and have a look at them, you will be able to see them, although they might not always be available, depending on whether the job completed and the artifacts were uploaded.
A
So basically, to what you were saying: when you cancel a pipeline, the ProcessPipeline service, well, it's actually a different service, will transition everything into the canceled state, but the ProcessPipeline service will again pick up the status change and see, basically, what else can be done in that case.
C
You know, things tied with the web UI and things like that: the CI/CD settings could be tied with the web UI too as a pipeline trigger, but I think it makes more sense to have them broken apart, to say, hey, all of these could be grouped together under web UI. So maybe we should think about how to reorganize the pipeline triggers, I don't know, maybe have a parent and then go down and show the different web UI triggers.
A
Yeah, yeah. I mean, for example, another thing that could be grouped is the git push with the merge request being created or updated, because a merge request being updated is a consequence of the git push. I just wanted to mention it as a different trigger, to distinguish a git push on a branch without a merge request from a push in the context of a merge request, but those could be grouped.
E
I mean, I do have another question, but I don't know if it fits into the scope of this. It's more about my poor understanding of runners. One of the things that's always kind of tricking me is how the communication is handled. How can I explain it... let's say I've got something locally, I want to register my runner, and then I do something.
E
I will sometimes see a job being created on the CI side, but it's not picked up by the runner even though it's registered. And I know it's not a bug, because I did talk it through with the Runner group and I kind of got an explanation, but I'm wondering: how do we determine that the job has been queued, and is that the responsibility of the runner or of the CI side to know?
A
Okay, so between the runner and the API gateway here there is actually another layer, which is GitLab Workhorse, and you can basically consider it a load balancer. Workhorse will intercept any connection coming from the runner to the Rails side, and we use a caching mechanism to basically return immediately to a request from the runner if there are no jobs, so this is more like a cached state. The runner contacts the server and the server returns back immediately saying so.
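A rough sketch of that interception idea, assuming some cached "last update" token per runner: if nothing has changed, the layer answers immediately and the request never reaches Rails. This illustrates the behavior, not Workhorse's actual implementation.

```python
# Hypothetical sketch: short-circuit "any jobs for me?" requests when the
# cached state says nothing has changed since the runner last asked.
def handle_job_request(runner_token, last_update, cache, forward_to_rails):
    current = cache.get(runner_token)
    if current is not None and current == last_update:
        return {"status": 204, "reason": "no new jobs"}   # answered at the edge
    return forward_to_rails(runner_token)                 # let Rails pick a job

cache = {"runner-abc": "v1"}
print(handle_job_request("runner-abc", "v1", cache, lambda t: {"status": 201}))
print(handle_job_request("runner-abc", "v0", cache, lambda t: {"status": 201}))
```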
B
I mean, like you said, because it's two different systems, the runner system and the web app, the code execution is not linear. When you create a job, it's not going to linearly transition into being executed, and there are a lot of factors, like the runner system itself. We don't actually contact the runners themselves directly; there's another thing, we use a virtual machine or Docker Machine.
B
Basically a runner orchestrator, I think it's called a runner manager, that creates new runners dynamically if the load requires it. So the lag you're perceiving might be the runner manager creating a new runner, and then the new runner needs to pick up the job. It might also be something else, something subtle, I don't know; it might just be something we don't control, like load on a specific machine.
B
The Rails web app might be responding to certain requests slower, but also, as Fabio said, there may be something in Workhorse. There are just so many different parts, and stability as we perceive it in that section means that within a relatively small time frame things are going to sync up. We're not trying to make it less than 200 or 300 milliseconds; as long as it syncs up eventually, we're fine. I don't have an exact explanation, but yeah.
F
So we actually do have a bunch of metrics in Grafana that we use to measure how much time it takes for runners to pick up their builds eventually, and of course on GitLab.com it varies because of the load. But locally this should be instant, and if you have a local GDK or GCK with a runner installed and the runner is not going to pick up a build within a second or two, that probably means some part of your GDK is not working properly.
D
I had a question, actually, about whether, given this architecture, there's a pattern of where we've maybe seen bottlenecks in the past when we, let's say, have thousands of concurrent pipelines being fired. And actually a further question I wanted to ask was whether there are any specific SLAs we have on GitLab.com for any particular pieces of this.
F
I don't have exact numbers, but we do have a ton of alerts around the entire system, and most of the alerts are based on the metrics we have in Grafana, in the dashboard called "CI Runners", I think. Our infrastructure team is the team that is actually working on the alerting, and I think they introduced most of the metrics we have right now. But of course we probably don't have enough, and we also do not have enough alerting. What I hope we are going to change is, basically, what happens when some kind of alert fires, like, for example, when we have gone above the critical threshold of runners waiting for builds, and there are numbers defined somewhere for that.
A
Aside from the RegisterJob service, the other processes that we see here, the other services, all run in background jobs. There are always some exceptions where the CreatePipeline service is actually run synchronously with a user request; that would be, for example, when you create a pipeline from the web UI. But most other cases, like a git push or pipelines generated by the system when you merge, actually trigger background workers, so the Sidekiq infrastructure is really what we rely on.
E
So, to follow up on that: we have an issue currently open about pipeline schedules, because in the UI we predefine them at the same time, so everything runs at 4:00 in the morning, for example, every day, and there's a spike at that moment and we've seen things slow down. Is it the RegisterJob service that's slowing everything down, since you said everything else is in background jobs, or is it the CreatePipeline service that's struggling?
A
They can all have different sorts of problems they might introduce. For example, when we have a lot of concurrent pipelines being created, some pipeline creations might be easier than others; especially if you are including external files, they might be heavier if you are including a lot of them. But the main problem we are seeing in that specific 4 a.m. scenario is, I think, on the runner side. So the pipeline is created and is processed.
A
All the jobs transition into the pending state, but because there are a lot of jobs in the pending state across the entire instance at 4 a.m., the runners that we have don't keep up with the number of jobs. That's why we see queuing time increasing for jobs, and these are queue times for the runners. There was some discussion about whether we could have some sort of autoscaler prior to 4:00 a.m., where we can, say, double the number of runners for half an hour, and that would solve the problem. But once that problem is solved, there might be other sorts of performance issues that become more relevant, for example on the CreatePipeline service side. Right now the big hitter is the fact that we don't have enough runners, especially when that 4 a.m. spike matches with the first day of the week or the first of the month, because then it basically merges all the possible schedules it can find.