From YouTube: New User Training: 05 Running Jobs
My name is Helen. I am going to talk about running jobs on Cori and Edison. I'm going to start with a system introduction and some running-jobs basics, then I'll show you basic batch script examples, using Cori Haswell as the template. Then I'll touch more on the process, thread, and memory affinity stuff with Cori KNL, mention advanced workflow options, and then cover how to monitor your jobs, how jobs are charged, etc.
So, Edison has more than 5,000 nodes. Cori is a Cray XC40, and it has two types of nodes: Haswell nodes and KNL nodes. A Haswell node is very similar to an Edison node, except it has 16 cores per socket, or per NUMA domain, and a KNL node has 68 Intel KNL (Knights Landing) cores. Both of them have hyperthreads available for you to use.
Actually, I'm going to show you that, but before I touch on that I also want to mention the memory on Cori. The per-core memory is smaller, but overall the total amount of memory per node is larger. So if you use fewer MPI tasks per node, you can get more memory per task; on top of that, you are encouraged to use OpenMP, so that you not only get better scaling but are also managing your memory usage.
Yeah, so I'm going to touch upon the actual core layout, what the hyperthreads are, etc., in a couple of slides later. Still in the introduction part: what kind of jobs do we have at NERSC?
Mostly we have parallel jobs, ranging from tens of cores to very large numbers of cores, and we also have a large number of serial jobs, especially since Cori Haswell was introduced. It was procured as a data-intensive machine, so besides the many large parallel jobs, serial jobs are now also a big part of the data-intensive workload we're managing.
So we run our production runs in batch mode. We use the Slurm batch scheduler. It's called "native" because, compared to other setups, Slurm acts as the job launcher and the resource manager in one, without invoking the Cray scheduler. Besides the batch queue, we also support interactive runs.
We also have interactive jobs that you can run; it's called interactive batch. There are two ways of doing that, which I'm going to introduce: one is debug, one is interactive. Typical jobs run for many hours, so your jobs wait in the queue, and we have limits set up. It's not like your typical workstation, where you submit a job and get results immediately; here you have to wait in the queue while the scheduler works out which job to schedule.
There are two types of nodes: login nodes and compute nodes. Login nodes are mostly for editing, compiling, and submitting your batch jobs. The compute nodes are where you're supposed to run your big jobs. Do not run your applications on the login nodes: they are shared, so don't use up the shared resources, or you're going to cause slow response or even crash the nodes.
For Cori, there are both Haswell and KNL nodes. The binaries are somewhat compatible, in the sense that a binary you built for Haswell can just run on KNL. But as was pointed out earlier, you'd better compile separately for KNL, so that you are able to utilize the node features for vectorization and get better performance, and so that the libraries also target KNL. So you want to compile for KNL specifically.
When you submit a job to the compute nodes, you have to write a batch script with directives that tell the scheduler what kind of resources you want, and then in that batch script you would have an srun command for parallel executables. You do not need srun for a sequential executable. The srun command will launch your executable on the compute nodes your job is assigned. We also recommend that you run from scratch or project; do not run from your global homes, especially for large applications.
Global homes are meant for editing your files; you keep permanent files there, many, many small files, so the I/O is tuned for that purpose. It's usually not optimal for large application runs, so run from scratch. Also, on scratch and project you have a much larger quota, so you have space to store your output.
So here's a little illustration: on your login node you use sbatch or salloc, and then you will land on a head compute node. Inside that batch script, whatever commands come before your srun are run on the head compute node, and then your srun launch distributes your parallel workload onto the number of compute nodes you are allocated.
Here I want to mention what each type of node looks like, and how the scheduler sees your CPUs and NUMA domains, etc. This is a Haswell compute node. You can see that the top bar is the first socket, which is also NUMA domain 0, and the bottom row is NUMA domain 1. Memory access from the CPUs to the local NUMA domain is faster than access to the farther NUMA domain. There are a few commands you can use to see this.
You can check with numactl, which shows the hardware information, with hwloc for the hardware locality, and with lscpu. All these commands will tell you more details of those compute nodes. The numbers here from 0 to 15 are the physical cores, and each core also has a hyperthread: each core has another logical CPU, numbered so that 32 corresponds to physical core 0, 33 corresponds to physical core 1, and so on. This matters later on, when you do numactl or when you do affinity checking, as I'll show you.
So a KNL node has a total of 272 CPUs. In the quad,cache mode, which is the default setting and what most of our users use now, it has only one NUMA domain, which means every single CPU in that NUMA domain has the same memory access. And on KNL we have the high-bandwidth memory, which is MCDRAM.
I will touch upon that here and in the later slides. So now I'm going to show you a few simple example scripts, using Haswell as the template; Edison is similar. The first line you want is the bash shebang: you want to give the batch script a shell to use, so that everything in it has a way to actually execute.
This one uses debug, but most users would use regular if you want to run a little bit longer or a little bit larger. You would ask for how many nodes you want, how long you want to run, and what type of node you want: on Cori you would say -C haswell or -C knl,quad,cache. On Edison it's optional, but you can say -C ivybridge. And then there are some more optional flags you can use: you can say which file system my job is dependent on.
This is for when we know there's a file system issue: your job would be held instead of launched and failed. Then you can give your job a name; you can say which account I want to use, if you have multiple accounts; you can say whether I get an email whenever my job starts, finishes, etc. There are lots of other options you can put in.
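Putting those directives together, a minimal Cori Haswell batch script might look like the following sketch; the specific values, job name, and executable name are illustrative, not from the slides:

```shell
#!/bin/bash
#SBATCH -q debug              # QOS: debug (or regular for longer/larger runs)
#SBATCH -N 2                  # number of nodes
#SBATCH -t 00:30:00           # wall time limit
#SBATCH -C haswell            # node type: haswell, or knl,quad,cache
#SBATCH -L SCRATCH            # job depends on the scratch file system
#SBATCH -J myjob              # job name
#SBATCH --mail-type=END,FAIL  # email when the job finishes or fails

srun -n 64 ./a.out            # launch the parallel executable
```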
Here, even though this is a pure MPI code example, I'm putting OMP_NUM_THREADS=1. This matters especially if your application is actually a hybrid MPI/OpenMP code but you want to run it in pure MPI mode: if you don't do it, some of the compilers, such as Intel's, will use all the available slots, so you may accidentally launch multiple threads. To prevent that from happening, for pure MPI runs you want to set that to one.
I also want to mention that the srun command flags override the sbatch keywords, so if you already have something in the sbatch keywords you don't have to repeat it, but you can also repeat it to override it with a new value if you want to. Also note that a Haswell node has a total of 32 physical cores, but Slurm sees a total of 64 CPUs (counting hyperthreads). What the -c flag here says is how many of those CPUs to allocate per MPI task.
So in this example it's a pure MPI code using every single physical core, so every MPI task will get two CPUs — each physical CPU plus its logical CPU — so I'm giving -c 2 here, and also using --cpu-bind=cores. This is important especially if you're not using every single core for MPI, which means the node is not fully occupied: without it, you might get very strange affinity behavior.
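As a sketch, the pure-MPI launch just described — one task per physical core on a 32-core Haswell node — would look something like this (executable name illustrative):

```shell
# pure MPI: 32 tasks per node, one per physical core; each task
# owns both hyperthreads (2 CPUs) of its core
export OMP_NUM_THREADS=1
srun -n 32 -c 2 --cpu-bind=cores ./a.out
```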
The last line here is an example where you would use all 64 CPUs per node for MPI tasks, with -c 1, because now each task is using one CPU, and then --cpu-bind=threads in that example. For the hybrid MPI/OpenMP case, we recommend you set OMP_NUM_THREADS, and we want to use the OpenMP standard settings for the thread and processor affinity bindings. Again, the -c flag defines how many CPUs per MPI task: in this case you would have 4 MPI tasks per node, and with a total of 64 CPUs,
your -c here is 16. Just so it doesn't confuse you: -c is not equivalent to the number of OpenMP threads; it has to be bigger than or equal to it. Here it's twice the number of OpenMP threads, which means that if I bind my OpenMP threads to cores, I'm using only the physical cores, not using the hyperthreads.
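The hybrid Haswell case described above can be sketched like this (executable name illustrative):

```shell
# hybrid MPI/OpenMP on one Haswell node: 4 MPI tasks, 8 threads each.
# -c 16 gives each task 16 CPUs (8 physical cores); binding 8 threads
# to cores uses only physical cores, not hyperthreads.
export OMP_NUM_THREADS=8
export OMP_PROC_BIND=true
export OMP_PLACES=threads
srun -n 4 -c 16 --cpu-bind=cores ./hybrid.exe
```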
Okay, now I'm going to talk about a few other types of jobs. Say you want to run serial jobs. A serial job is like: I want to use one core only, or, if my job needs a little bit more memory, I can use a few cores' worth of memory. There's a QOS we designed for that, called shared. By default the nodes are exclusive, meaning only one application can run on a node, but in this QOS, -q shared, we allow jobs from different users to share nodes.
That is the way to do that: several applications can run on the node, and for each application sharing the node, you have to specify the memory amount. The overall sum of the memory should not be over the total memory available on the node. We have a link on the website where you can check that, and I'll show you how at the break.
Basically, for serial jobs we recommend you not use srun, because there's overhead; you just run ./a.out directly. So that's serial jobs and the shared partition.
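A serial job in the shared QOS might look like this minimal sketch (the time and memory values are illustrative):

```shell
#!/bin/bash
#SBATCH -q shared        # share the node with other users' jobs
#SBATCH -t 01:00:00
#SBATCH -n 1             # one task, one core
#SBATCH --mem=10GB       # memory this job needs on the shared node

./a.out                  # no srun needed for a serial executable
```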
Now, how to run interactive batch jobs. With debug you can run pretty big debug jobs, up to 512 nodes, but there's a limit of 30 minutes, and there are run limits and queue limits per user, because debug is not for production usage.
For interactive we give you much longer hours, and also, as was mentioned earlier, when you ask for nodes you either get them quickly or it tells you there are no nodes available for you. You can ask for up to four hours, and up to 64 nodes in total, added together across everybody in your same repo. So sometimes you see you can't launch a job because somebody else is using it.
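An interactive allocation as described here can be sketched like this (node count, type, and time are illustrative):

```shell
# request an interactive allocation: 2 Haswell nodes for 2 hours;
# you land on a head compute node once the allocation is granted
salloc -N 2 -q interactive -C haswell -t 02:00:00

# then, inside the allocation:
srun -n 64 ./a.out
```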
Okay, so the interactive wait time is pretty low in general. We definitely recommend you try that if you're trying to debug something small enough to fit in it. Okay, I want to mention some of the advanced workflow options. You can bundle jobs, so you can run multiple runs in one script, either sequentially or simultaneously. You can use job arrays for managing many similar jobs.
You can use the job dependency features to chain jobs — you know, my second job has to run after the first job, etc. You can use the burst buffer to get faster I/O. You can use Shifter for your jobs with your custom user environment. You can use the xfer queue to transfer to and from HPSS, which is basically our archive system. And if you have some big-memory jobs, you can use the bigmem QOS. So here's a little bit of detail on running bundled jobs sequentially.
Basically, you run multiple sruns in one script. For sequential runs you would ask for the number of nodes that is the maximum needed by any single job, because they run one after another. For simultaneous runs you ask for the total number: these sruns are supposed to run simultaneously, so you would put an ampersand after each, so that the next srun gets launched using the rest of the nodes.
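The two bundling styles can be sketched as follows (node counts and executable names are illustrative):

```shell
# sequential: ask for the max nodes any single srun needs; they run in order
srun -N 2 -n 64 ./job1
srun -N 2 -n 64 ./job2

# simultaneous: ask for the total node count, background each srun with &,
# and wait for all of them before the batch script exits
srun -N 2 -n 64 ./job1 &
srun -N 2 -n 64 ./job2 &
wait
```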
That way they run at the same time within the same allocation, especially in the regular QOS. For job arrays, there's basically a parameter you can take advantage of, SLURM_ARRAY_JOB_ID, and you would say my array is, for example, 1 to 10: one run launches ten such jobs, and within each of those jobs I do something. So now, with this one batch script, you have launched ten individual jobs. If you do monitoring to see what my job ID is, you would see jobid_1, jobid_2, and so on — they're actually individual jobs.
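A job array script might look like this sketch; the per-element index variable SLURM_ARRAY_TASK_ID and the input file naming are illustrative of the pattern:

```shell
#!/bin/bash
#SBATCH -q regular
#SBATCH -N 1
#SBATCH -t 00:30:00
#SBATCH --array=1-10          # launches 10 independently scheduled jobs

# each array element runs its own input, selected by its index
srun -n 32 ./a.out input.$SLURM_ARRAY_TASK_ID
```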
They will actually be scheduled individually and independently, whenever the resources are available. And there are queue limits and other limits; you will be subject to those limits as well, because each one of them is considered a single job. Okay, dependency jobs. As I mentioned, you submit the first job, you get a job ID, and then for my second job we use sbatch --dependency with afterok (or afterany) and that first job ID. You can also put it in an sbatch directive, and then you submit the second batch job at the end.
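The dependency chain can be sketched with two submissions (script names illustrative):

```shell
# submit the first job and capture its job ID
jobid=$(sbatch --parsable first_job.sh)

# the second job starts only after the first completes successfully
sbatch --dependency=afterok:$jobid second_job.sh
```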
For the burst buffer I'm going to skip the details; there's an afternoon session with examples and more details; I just have a slide in here. Shifter as well: it gives users a custom environment using the Docker container format, and there's also an afternoon talk about it. For xfer, notice that the sbatch QOS is called xfer, on Edison and, oh yes, on Cori now too. It actually runs on a subset of the external login nodes; it's not on either the Haswell or KNL compute nodes anymore. The purpose for that is that some users have the need,
before or after their large application runs, to grab data from HPSS or store it back to HPSS. If you do it within your batch script, it's going to waste lots of your NERSC hours, so you want to do it separately, and you can do it in an xfer job: in your batch script, say, once you are done with the application, sbatch the archive script with the QOS xfer. For the xfer QOS there's no charge, because no compute nodes are involved.
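An xfer archiving job might be sketched like this; htar is the HPSS archiving client, and the file and directory names are illustrative:

```shell
#!/bin/bash
#SBATCH -q xfer            # runs on login-node resources; not charged
#SBATCH -t 06:00:00
#SBATCH -J archive

# bundle the run output and store it into the HPSS archive
htar -cvf myrun.tar ./output_dir
```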
All right, any questions so far? So the next part is going to touch upon process and thread affinity, related especially to KNL: it's just more irregular, with 68 cores per node and one NUMA domain. CPU and processor affinity is the basis for getting your optimal performance. If you have multiple MPI tasks on the same core, or multiple threads on the same core, it's going to hurt your performance badly. And also, once you have the baseline right, then you know, if I do some optimization,
how much better can I get? And our goal is to always use the OpenMP standard settings, instead of, say, Intel-specific settings, etc. This is the numactl -H output on a quad,cache node. It tells you there is only one NUMA node available: NUMA node 0 has all the CPUs, and NUMA node 0 has this much memory. Because it's in cache mode, you don't see the high-bandwidth memory. And these are the CPU numbers, as I mentioned, that you can see.
So what if I just say: okay, I have a quad,cache node, I know I'm going to run 16 MPI tasks and 8 OpenMP threads, can I just run "srun -n 16 ./my_executable"? Even though I have nicely set my number of threads, my OMP_PROC_BIND and OMP_PLACES, etc., with this naive srun you get very bad affinity: you can have different MPI tasks lined up on the same tile or same core, or tasks spread across tile boundaries.
Basically the reason is that with 16 MPI tasks, the 68 cores are not divisible by 16. So what it does is: okay, I have 272 CPUs divided by 16, and it tries to give you an odd number of CPUs per task, etc. That's how it gets messed up. So the way to do it is to tell the scheduler explicitly: I'm giving you 16 CPUs per task, which is 4 physical cores per MPI task.
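The corrected KNL launch described here can be sketched as (executable name illustrative):

```shell
# 16 MPI tasks on a 68-core, 272-CPU KNL node: 272/16 is not an even
# split of cores, so give each task 16 CPUs (4 physical cores) explicitly
export OMP_NUM_THREADS=8
srun -n 16 -c 16 --cpu-bind=cores ./a.out
```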
So of course I'm actually wasting extra cores on purpose, but now the tasks are evenly distributed; and also, because the node is not fully occupied, we are adding --cpu-bind=cores. As a check — this is based on the xthi code, which I'm going to mention; we have a binary that you can plug in in place of your application to check affinity — with this setting I'm getting MPI ranks 0 all the way to 15, and the CPUs allocated to each MPI task.
Each color means the CPUs for one rank: CPUs 0, 68, 136, and 204 are physical core 0, and the next set is physical core 1. So you see that rank 0 in fact gets four full physical cores, with the hyperthreads of each core, and it's very similar for all the other MPI ranks — nicely distributed.
So basically what we mean is that these are the essential settings: -c, --cpu-bind, and the OpenMP settings. We recommend OMP_PROC_BIND be set to true instead of spread: even though for Intel and Cray they behave the same, for GNU there's an issue with spread where it would use only half of the cores. So we set these two, OMP_PROC_BIND=true and OMP_PLACES=threads. That's our recommendation.
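The recommended settings, collected in one place:

```shell
# OpenMP-standard affinity settings (portable across compilers)
export OMP_PROC_BIND=true    # "spread" uses only half the cores with GNU
export OMP_PLACES=threads    # pin each thread to a single hardware thread
```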
So here's the sample script of hybrid MPI/OpenMP on two KNL nodes: I'm running 64 MPI tasks per node, and I'm using every single CPU — that's why I'm giving 4 CPUs per MPI task and invoking OMP_NUM_THREADS=4. In this case each task is pinned to a physical core, and its OpenMP threads run on that core's hyperthreads, though each thread is not necessarily pinned to one specific CPU.
So this is the recommended usage. Then, when we get into memory affinity: with quad,cache there's nothing you need to do. But if you run quad,flat, it means you treat the high-bandwidth memory as a separate NUMA domain, and the memory access distance is different. If your memory fits in the 16 gigabytes, you can use the force mode, "numactl -m 1"; if it doesn't fit, your job will fail. There's also a --mem-bind option in srun that works in a similar way.
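In flat mode the two approaches look roughly like this; the task counts are illustrative, and the exact --mem-bind syntax should be checked against the srun man page:

```shell
# quad,flat: MCDRAM appears as NUMA node 1; force allocations into it
srun -n 64 -c 4 --cpu-bind=cores numactl -m 1 ./a.out

# or use srun's own memory binding in a similar way
srun -n 64 -c 4 --cpu-bind=cores --mem-bind=map_mem:1 ./a.out
```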
Okay, so I mentioned we have the affinity-check binaries already compiled for you, whether you have a pure MPI code or a hybrid code, and whether you compile with the Intel compiler, the Cray compiler, or the GNU compiler: these are the corresponding binary names. You just put this binary in your srun, with your settings of -n, -c, and the OMP settings, to check your affinity; and you have to understand what the reported numbers mean to know
whether they are correct or not. There are also Intel and Cray compiler settings for displaying affinity, but these are just for the OpenMP threads only — you don't get any MPI info with that. There are also srun command flags, for both CPU and memory binding, to check affinity verbosely. We're going to skip this; it's too much detail here.
Also, I want to mention that we have a new script generator. It is on the MyNERSC dashboard, and it's called the job script generator: you choose a machine, you choose your application type, how many nodes, how many threads, etc.
And how to monitor your jobs: there are a few commands — sqs, squeue, sinfo. squeue is the scheduler's native queue monitor; sqs is a NERSC custom wrapper for squeue, combining some squeue and sinfo info. You can also see the queue look and the completed jobs, etc. on the web: there are links where you can find the jobs, and you can choose my jobs only or everybody's jobs. There are lots of other related NERSC custom user commands as well, besides the srun and sbatch I mentioned.
To cancel a job there's scancel, and for accounting there's sacct; I'm going to show you a little bit more detail here. So sqs — sqs is a NERSC custom wrapper, and it provides formatted output with many, many options, such as: I want to see my jobs only, I want to see all the jobs, only running jobs, all jobs except shared jobs or only shared jobs, a wider format, more fields, etc.
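A few representative monitoring commands; the exact sqs option letters vary, so check its man page or help output:

```shell
sqs                  # NERSC wrapper: formatted queue view with start estimates
sqs -w               # wider format with more fields (option name illustrative)
squeue -u $USER      # native Slurm queue listing for one user
scancel 1234567      # cancel a job by its job ID
```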
You can see the man page or the help to find those. One thing I want to mention is the scheduled start time, which is an addition that the squeue command doesn't provide at all. What it gives you is: if your job is already being scheduled by the scheduler, you see the exact scheduled start time. Of course, depending on jobs finishing early or new higher-priority jobs coming in, it's dynamic, but it's a very good estimate,
because the scheduler has already set aside resources for your job. Then you see some of them marked as not available — because, say, the job is held by the user, so there's nothing for the scheduler to consider at all. You may also see something like "available in N days": that means your job has not aged to a priority high enough to be considered for scheduling; in that many days or hours your job's priority will be high enough, and then your job will be considered for scheduling.
Your job is also considered eligible for backfill in the meantime, so you could get it to run sooner than that. There's also a second note here, about upcoming maintenance: you would actually see it yesterday and today, because we have maintenance today on Edison and tomorrow on Cori. That scheduled time is sometimes way off — it could say available in one year — because on a scheduled maintenance reservation we say it finishes in one year, and then, once the maintenance finishes, we just release that reservation.
sacct queries the batch scheduler database. It has lots of format options and info; you can find out about jobs you ran at any time. You can put in a start and end time; if you don't, it gives you the last 24 hours. Right now we set the maximum query window to one month, so as not to overload the scheduler, but there are all these different fields you can put in to find out the details of your job.
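A typical sacct query might look like this; the dates and field list are illustrative:

```shell
# jobs in a given window, with selected fields (window capped at one month)
sacct --starttime=2018-03-01 --endtime=2018-03-07 \
      --format=JobID,JobName,NNodes,Elapsed,State

# -X reports one line per job rather than one line per step
sacct -X
```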
The -X flag combines the multiple steps per job; without -X you can see the different sruns in your job, with details as well. Okay, how your jobs are charged: it's now by NERSC hours. There's a base charge factor per machine — Edison is 48, Haswell is 80, Cori KNL is 96 — and then on top of that there's a modification by QOS: normal is 1, premium is 2, and scavenger is for when your repos run out of time.
So, just an example here: 4 Haswell nodes run for one hour in premium — it's 4 nodes times 1 hour times 80 times 2. That's how you get charged. Basically, you're charged for however many nodes you requested: even if your srun commands are not using all of those nodes, you're charged for all the nodes you asked for. And it's not charged by the wall time limit you requested, but by the actual wall time you used.
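The charging arithmetic from that example (nodes × hours × machine charge factor × QOS factor) can be checked directly:

```shell
# 4 Haswell nodes x 1 hour x charge factor 80 x premium factor 2
nodes=4; hours=1; factor=80; qos=2
echo $((nodes * hours * factor * qos))   # prints 640 NERSC hours
```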
So those are two distinctions to make. Which system to run my job on — things to consider: of course the queue wait time, and you would see how long my job would wait on the different systems; the throughput; how much my job will be charged; and whether your code is ready for KNL, so I can move on to KNL. Edison is a large, stable machine with a low charge factor, and Cori Haswell was procured for the data-intensive applications.
Originally only on Cori did we have the shared and realtime QOSes; now shared is also exported to Edison. Shared basically allows you to run many, many smaller jobs, serial jobs, so that's also for the data-intensive applications. Cori Haswell has large capacity — so many nodes that the wait time is relatively low. Then interactive is available on Cori only, and bigmem is also only available on Cori. So those are the things you want to consider for which system to use for your job.
So this is the queue policy; I think I'll just put it here, and you can refer to the queue policy webpage. You ask for a QOS and a number of nodes, and then there are limits — you know, the max wall time you can ask for, how many jobs you can have — and for Cori it's more convoluted, because Haswell and KNL are both in the same table.