From YouTube: 2020 11 17 Jenkins Infra Meeting
B
Hello everybody, so welcome to this new Jenkins infrastructure meeting. This one is a little bit special because of the major outage we had last week, so I'm going to give you a quick overview of the different components that were involved in the outage, what they are doing, and we'll also discuss what we can do to prevent such a thing from happening again. So the first thing to understand is the way the Jenkins project distributes packages: it involves four different services, starting with what we call the update center.
B
So updates.jenkins.io distributes the plugin versions. When, from your Jenkins instance, you want to update the plugins or update Jenkins core, this is the service that you're reaching. Basically, this service is regenerated every three minutes, and it contains the list of every plugin that you can install for your Jenkins version. So every three minutes we generate the list, and then, when you want to install a specific plugin, you are redirected to the mirroring service.
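To make the update center concrete, here is a minimal sketch of what a client sees when it talks to updates.jenkins.io: the service publishes update-center.json, a JSONP-style document wrapped in an `updateCenter.post(...)` envelope that lists installable plugins. The parsing below is a simplification for illustration, not the Jenkins core implementation.

```python
# Minimal sketch: fetch and parse the update center metadata that
# updates.jenkins.io regenerates every few minutes. The document is
# wrapped in a JSONP-style `updateCenter.post(...)` envelope, so we
# strip everything outside the outermost JSON object before parsing.
import json
from urllib.request import urlopen

UC_URL = "https://updates.jenkins.io/update-center.json"

def parse_update_center(raw: str) -> dict:
    """Strip the JSONP wrapper and return the metadata as a dict."""
    start = raw.find("{")
    end = raw.rfind("}")
    return json.loads(raw[start : end + 1])

# Usage (requires network access):
# data = parse_update_center(urlopen(UC_URL).read().decode("utf-8"))
# print(len(data["plugins"]))  # number of installable plugins
```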
B
This is something that I will explain in a little bit. The next major service is pkg.jenkins.io; that service contains the distribution packages. So when you want, for instance, to install the Debian package or the Red Hat package, you just run apt-get install jenkins, and the instructions are located on pkg.jenkins.io, which contains the metadata information that you need to install Jenkins on your server. It works the same way as the update center.
B
We had issues scaling it, and so we decided to replace that service with the one we had issues with last week, which is get.jenkins.io. So get.jenkins.io relies on two things: the Redis database, which stores the file indexes and the file hashes, a list of hashes for every file available for download; and obviously we need the files that we are distributing, and those are stored on Azure File Storage.
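The split just described (index and hashes in Redis, file bytes on the storage backend) can be illustrated with a toy indexer. This only shows the idea and does not reproduce the actual mirrorbits schema:

```python
# Toy illustration of the index mirrorbits keeps: a mapping from each
# file path to its hash, stored separately (in Redis, in the real
# setup) from the file bytes themselves (on Azure File Storage here).
import hashlib
from pathlib import Path

def build_index(root: str) -> dict:
    """Map each relative file path under `root` to its SHA-256 hash."""
    index = {}
    base = Path(root)
    for path in sorted(base.rglob("*")):
        if path.is_file():
            index[str(path.relative_to(base))] = hashlib.sha256(
                path.read_bytes()
            ).hexdigest()
    return index
```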
B
So basically, what happened last week is that the volume of the Redis database crashed. We had issues mounting it in the Redis server, and because the Redis database was not available, it forced the get.jenkins.io pod to restart.
B
We fixed the Redis database by redeploying it as a managed service, because we were not able to access the Azure disk used by the database for some reason, so we deployed a managed Redis service. After that, we were not able to use the Azure File Storage used inside get.jenkins.io, so mirrorbits had no way to know which package could be distributed to the user, and obviously, because the mirroring was broken...
B
...the update center and pkg.jenkins.io could not work correctly. So that's basically what happened to us last week. We opened an Azure support ticket to ask why it stopped working; the get.jenkins.io service had been running since March, and until last week we never experienced such issues. So it's really new to us and we still don't understand why the volume is behaving like that.
B
So it took us a while to understand that the volume was broken, because some files could be retrieved and others not. Typically, distribution package files like the Debian and Red Hat ones are available, but for some reason, when we try to read the plugin information, it sends us a timeout, so we are not able to read the data.
B
We received many different errors, and so what we decided to do on Saturday morning, when we realized that we wouldn't find a way to correctly access that Azure File Storage, was to reuse the mirrors.jenkins.io machine, the same machine that already has every file that we distribute in our infrastructure. We just decided to redeploy mirrorbits on that machine. So that's what we did; until now, the service is back and working again. The problem is...
B
So the good thing is that the machine running mirrorbits at the moment is big enough to handle the load, but we are still wondering, if and when we are able to use Azure File Storage again, whether it's the right service that we should rely on. We still have an open Azure ticket at the moment and we are still discussing with Azure support to understand what's happening. Any questions so far, before I continue?
B
Yes, exactly, that's the point. We had a call with Azure today; we spent the afternoon with them trying to identify and replicate the issue. As I said, it's really weird, because Azure File Storage is a CIFS volume, and I can mount it on my machine.
B
I can list some files, I can modify and create files, but the /plugins directory does not work. If I try to access it, I receive a disk error. But if I go to the Azure portal web interface, there I can create files in the /plugins directory. So it's a really weird issue, and that's something we're still trying to understand: why it's affecting us.
B
So right now the service is still working; that's a good thing. The question is: could we have prevented this from happening? I don't think so, because we had issues with network storage both for the Redis volume and for the get.jenkins.io volume. We had network issues, and I mean, we don't run those servers ourselves.
B
I mean, we don't manage those services; they are provided by our Azure account. But what I think we could have done better is, first, to communicate about the outage. I was in communication with a lot of people over the last week, and over the weekend as well, about what the current state was, why we were having those issues, and so on, and apparently it ended up that Twitter was the right support channel for that. Maybe not the right one, but yeah.
B
I think we would really like a central way for people to understand the current situation. So one of the things I've been wondering about is having a status page for the Jenkins project. I already shared this previously, but there is a project that could allow us to provide information on a static page by providing Markdown. We could say: okay, we have a planned maintenance coming, these are our current issues, and if you have any questions, feel free to look at those locations.
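For context, Markdown-driven status-page generators of this kind (cState, which comes up in the demo later, is one example) are typically fed issue files with front matter like the following; the exact field names here are illustrative, not a confirmed schema:

```markdown
---
title: get.jenkins.io outage
date: 2020-11-12
resolved: true
severity: down
affected:
  - get.jenkins.io
  - updates.jenkins.io
---
The Redis database backing the mirror index crashed, so plugin and
package downloads were degraded. Follow the Twitter thread and the
mailing-list discussion for updates.
```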
B
So, if you want, I can do a quick demo of what I was able to provision this morning. Any suggestions on this topic?
C
For example, during the EU night you're offline, Tim is offline. As long as we have one of the social media folks around, we can say: hey, we have infra problems, could you post a tweet? Nobody needs to know how to use it. And a self-hosted status page sounds a lot like: if we have infra trouble, the status page will be offline as well, and we'll just post a tweet anyway.
C
So yes, it's not as fancy as, you know, githubstatus.com, but I think, as long as we make sure that as soon as we confirm there's a problem we post a tweet, that's fine, and that's probably where people look anyway. The problem with a status page is that we need to advertise it and make people aware of it, and if nobody knows it exists, nobody gets any benefit from it.
B
I totally agree with you about the fact that people first look at Twitter, and it's easier to just post a Twitter message, and I also agree that if there is an infra outage, we'll probably be focusing on the infra outage instead of creating a Markdown file or notification and working on that.
B
This is something that takes time. I just have the feeling, this time, that I received notifications everywhere. I'm using Twitter, I'm using LinkedIn, I'm using Reddit, and I received notifications by email, by IRC, by Twitter. I had the feeling that I was receiving way too much information and I could just not answer all of it, and so I was wondering if, with a status page, we could just say: you know what, I provide one piece of information, just look at that.
A
Just saying: the status page, the one that Olivier posted earlier, you don't run it on your own infrastructure. It's got a one-click deploy to Netlify for, like, a free-deployment sort of setup, so it doesn't touch any...
B
That's a good point. We obviously need to make sure that the status page is not running on our infrastructure, because, obviously, if we are having issues with our infrastructure, that would be problematic for it.
C
Okay, the way you had phrased that sounded a lot like it would be part of our infra, which sounds kind of silly, okay, yeah. But if that's not a problem... Still, I mean, I saw some of the tweets, and there was an announcement posted, and 20 minutes later someone posted "hey, any updates?", and I'm like: dude.
C
What do you expect? And I really doubt that this would just magically go away by us having a status page.
B
Yeah, I mean, we don't have to work on the status page; I think it may be nice to have, and if we put one up, it definitely doesn't need to be on our infrastructure. I just wanted to give you a quick overview of what I did this morning. This is mainly an example from the cState project: you just write a Markdown file, like we would do for the jenkins.io website...
B
...and we can specify a few pieces of information in the file, like the tags. So we could see, in this case: okay, we had get.jenkins.io issues, these affect those websites, and you can provide the description of the outage with links to Twitter, the Google Groups discussions, and so on. We could also list all the issues that affected that specific service in the past. So, to be clear, it's not a monitoring tool.
B
It doesn't detect whether your service is down. We could, for example, say: okay, we have a maintenance coming in the coming weeks. Let's say we know that at the beginning of December we want to do a maintenance; we could plan this in advance and announce it. I think we can see it more as a way to communicate about the major things, but obviously the idea is not to slow down our processes. That's the suggestion I was making.
C
I like the idea of us being able to use it for things that would not make the Twitter account; that would be a real benefit. Like if we're doing, perhaps, a migration of sorts, or the work that Tim and I were doing on update center two: that would not qualify, I think, for the Twitter account, because we have no idea whether anyone cares or whether anyone even notices. But ultimately some people noticed, and it would have helped if we had just had the status page there saying: we're tweaking our infra.
B
Perfect. So basically, let me first show you what it would look like and the way we configure it, so: Jenkins infra on cState. It's one main configuration file where you can define the different services.
B
So
you
can
you?
Can
you
define
the
different
tags?
So,
for
example,
I
say:
okay,
I
have
the
tags
archive
the
jenkins
layout,
which
is
running
in
rackspace.
I
have
adapted
the
layout,
so
you
can
specify
a
bunch
of
tags,
and
so
you
can
filter
for
specific
event
like
this,
but
you
can
also
provide
links.
So
if
you
go
back
here,
we
could,
for
example,
say
here:
there
is
a
link
to
our
monitoring
solution
and
so
you're
automatically
redirected
to
datadog.
B
Obviously, in this case, it's not useful because it's not publicly available, but we could have, for each service, a description of what the service is doing. So ldap.jenkins.io is our LDAP service, but we could also provide links to dashboards that could tell people: is it working? Do we have, like...
B
...a way of saying: okay, right, I have a network issue. And the thing is, the service was back, but because I knew that I had an issue with get.jenkins.io, I spent quite a lot of time, each time, investigating whether the problem was on our side or whether it was just a random network misconfiguration, and so I think we could use this kind of service to provide information to the end users.
D
B
No, so Netlify has an open source plan that we could leverage, and we can already use the free tier. Again, as Tim mentioned, it takes just a minute: we just define the configuration that we need and we push it to Netlify. I've been using it for my own projects and it's working great, and if we want to...
B
So if you don't have any questions, I'll create a Jira ticket to keep track of this work, but again, this won't be a priority, just a nice-to-have. If someone is interested in helping with this one, I will just create a Git repository and we will iterate on it.
B
The next thing I've been wondering is how we could have detected this outage, and this is something I've been wondering about for a while: how we can monitor that the latest packages are available. I already started working on this some time ago and I just finished it. The idea is to have a Datadog custom check that determines the latest stable and weekly releases and then pings the different endpoints on get.jenkins.io.
B
So if we have just published a weekly release, we assume that get.jenkins.io should be able to return the appropriate packages. The custom check is done; I just finished it today, I think, so I'm going to enable it. The idea is to detect an issue like the one we had with get.jenkins.io sooner, because, typically, with what happened here, I think it took me two hours before being notified about the outage, and I think that's quite a lot.
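The logic of such a probe can be sketched as below. This is not the actual Datadog check; the URL layout follows the public get.jenkins.io mirror but should be treated as an assumption, and the HTTP call is injected so the sketch stays self-contained.

```python
# Sketch of a release-availability probe: given the latest weekly or
# stable core version, build its download URL on the mirror and check
# that the mirror answers with HTTP 200.
from urllib.parse import urljoin

MIRROR = "https://get.jenkins.io/"

def war_url(version: str, weekly: bool = True) -> str:
    """Build the expected download URL for a core release."""
    path = "war/" if weekly else "war-stable/"
    return urljoin(MIRROR, f"{path}{version}/jenkins.war")

def release_available(version: str, head, weekly: bool = True) -> bool:
    """Return True when the mirror serves the release.

    `head` is an injected callable (e.g. an HTTP HEAD returning the
    status code), so the check logic can be exercised without network.
    """
    return head(war_url(version, weekly)) == 200
```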
B
We rely on user monitoring: someone complains, then we investigate, and that's it, and that's not acceptable for such a service. But basically, what was weird in this case is that we had a lot of side effects from the outage. One of them was: if get.jenkins.io could not use any mirror, then it falls back to itself, and the thing is, mirrorbits does not serve the files itself, it always redirects you to a mirror. So we had deployed an Apache next to mirrorbits, a service that just allows you to browse the different files and serves them to you, but because all the mirrors were down...
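The fallback behaviour just described can be sketched like this; it is a simplification of what mirrorbits does, not its actual code, and the names are made up:

```python
# Simplified redirect logic: send the client to the first healthy
# mirror, and when no mirror is available, fall back to the local
# file server (the Apache deployed next to mirrorbits).
def pick_download_url(path, mirrors, fallback):
    """mirrors: list of (base_url, healthy) pairs, in priority order."""
    for base_url, healthy in mirrors:
        if healthy:
            return base_url + path
    return fallback + path
```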
B
So that's a good point. I think it has every plugin; it doesn't have the old Jenkins versions, but it should have every plugin. I increased the limits that archives.jenkins.io can accept, so we could use it instead of the current mirror.
B
To be honest, mirrorbits is running, and I just don't want to stop it right now. So as long as it's running, I would first like to understand, and this brings me to the next point, whether we stick with Azure File Storage or not, whether we consider that service reliable enough, because, yeah, it's been a few days and I still don't understand why it's broken. But the problem is...
B
And the thing is, right now we are pushing a lot of things to one machine, which is pkg.jenkins.io, and ideally I would like to split the responsibilities of that machine, because right now it does way too many things, and each time we modify one component on that machine, it affects other services like the update center and some other specific scripts. So right now we are using that machine temporarily, but ideally I would like to use a different solution.
B
So if it's not Azure File Storage, we have to think about a different way to do this. Any questions on this topic? While we are discussing the outage, maybe you have other ideas; I'm really open to suggestions about the different ways we could have managed this outage.
C
If they basically shrug and say "we don't know what's going on", then obviously that would not give me the confidence I would need to continue using it. Whereas if they say "we've identified the problem, it's extremely unlikely to happen again"...
B
For some reason, I was able to open a support ticket last Thursday regarding the Azure disk volume issue that happened for Redis, and then, when I wanted to open a ticket for the Azure File Storage issue, it said that we didn't have support, basically. So I paid for the support plan: it's 100 dollars per month, and I paid it last week. Anyway, I think we will not be able to move away from Azure anytime soon, because if we decide to move away from Azure, I mean, we still have...
B
We are still using specific services, and we would have to update our scripts. Just as an example, for the release environments we are using Azure Key Vault. It's not a big deal if we have to switch to something else, but it means we would have to work on that instead of working on something else.
E
It seems like that says we ought to accept the increased cost and include that hundred dollars a month in our budget. I think you mentioned a hundred dollars versus a thousand dollars a month. Is there a significant difference? Given the pain this caused the community, etc., is it worth the thousand dollars a month?
B
No, no, because, also, what I think is: right now we are spending around 8,000 dollars per month on that account. 100 dollars is not a big amount, but I would not pay one thousand on top when we already pay eight thousand per month. So I think for now it's fine to pay the 100 dollars per month.
B
We were also not paying for our Redis database before; the managed one should be around 300 dollars per month. I will scale down the instance, because I was in a rush and put up a big instance just to be sure, but we don't use its full capacity. So I have to scale down the Redis database that we are using right now. So yeah, I think, per month, we should increase by 200 or 300 dollars, so it's not a big deal.
E
Okay, so it seems like one action out of this is: we accept the increased cost and willingly accept the ongoing support payment to Microsoft, yeah.
B
But I think we are now spending more money again on the Azure account, so we would have to spend some time reevaluating how we can save money on that account, because we still have a hard limit of ten thousand per...
E
...month. So we have to revisit expenses, for sure, to make sure we stay in...
B
...budget. Well, yeah, that's it. Any other questions regarding this outage? So again, the service is now back, and I'll send a communication once we know a little bit more about what happened with the Azure File Storage, whether we could have prevented it, and what the downsides and decisions about that architecture would be.
B
Yeah, something really important in this case is that, because the error was not obvious, I had to deal with a lot of weird side errors, like timeout issues that were not supposed to be there, containers that would not start because of mounting issues, and stuff like that. So yeah, it was a tough one to diagnose.
B
So then, I propose to stop the meeting here and we'll cover the other topics. Just one thing before we leave: obviously, the fact that the storage was down also affected the weekly release happening today, because the release process directly pushes components to the Azure File Storage...
B
...so that it's available for get.jenkins.io, and we faced a really weird issue again with the same Azure File Storage, which reported a permission issue on the file, which should not be possible because it's a CIFS volume, but yeah. So basically, I had to modify the pod used by the release environment this afternoon so that it does not mount the Azure File Storage.
B
So if we decide to stick with Azure File Storage, that's fine, we'll have to revert my change; and if not, we'll have to slightly modify the release environments. The really good thing right now is that we still have the process to push things the way we were doing previously.
B
So we still have the process to automatically push new components to pkg.jenkins.io, which means that we have a fallback situation: as long as get.jenkins.io running on Kubernetes is not back, as long as we don't have access to the Azure File Storage, we are still able to rely on the machine running on Amazon.
B
So, sorry, can you repeat the question?
C
We're currently in a fallback situation: is that something we can live with, if need be, for several weeks, or do we need to get to a really better situation and restore the Azure stuff as soon as...
B
...possible? No, the current situation can work for weeks, even, because the machine is the same machine that was used for mirrors.jenkins.io. So we know that the machine can handle the load; that's a good thing. The service is running, it's easy to configure; that's a really good thing. The problem is that the machine is out of sync with our Puppet master, and so, I mean...
B
If we decide to keep the current situation, then we would have to work on the Puppet code to automatically configure that machine, because it's a virtual machine, not a Kubernetes environment, so the way we configure the service is slightly different. So basically, we need to have access to the file storage, because mirrorbits needs all of the files locally to know what the hashes for those files are, and based on those hashes it decides that you can be redirected to a specific URL.
B
So that's why we need a location containing every component. So, to answer the question: we could stay like this for weeks. What really scares me with manual procedures on a server is that, if the person who did the procedure leaves or is not available, then you end up trying to figure out what needs to be done, basically.