From YouTube: 5. Running Jobs
Description
Learn how to run different types of jobs on Cori.
Slides for all sessions can be downloaded from here: https://www.nersc.gov/users/training/events/new-user-training-june-21-2019/
My name is Helen. I also work in the NERSC User Engagement Group, and I am going to talk about running jobs on Cori. Here's a brief outline: basic introductions, then I'll give you some basic batch script examples, then introduce some more advanced workflow options, then we'll talk about process and thread affinity, which is complicated especially on KNL, and then how to monitor your jobs.
Jobs at NERSC are mostly parallel jobs, using from one node to thousands of nodes, and there are also lots of serial jobs, especially from the data-intensive applications. Some of them can be run as embarrassingly parallel jobs, or just run in serial; we have mechanisms to support these serial jobs too. We have to run jobs in batch mode, so it's not like on your laptop, where you submit a job and get the result immediately.
Jobs have to sit in the queue, and we have to be fair: if one user submits thousands of jobs and you submit just one job right after that, we should not make your job wait until all those thousands of jobs finish before it can start. So we use a batch scheduler called Slurm, and we also support some types of interactive jobs that do not sit in the batch queue.
You can also submit jobs for which we have reserved nodes, so that you can watch and debug, with some kind of interaction with your job in real time. Debug jobs are supported up to 30 minutes; there's also an interactive QOS that supports up to four hours. Typical jobs run from a few hours up to the maximum wall time; some of the QOSes allow 48 to 72 hours.
There are two types of nodes on Cori: login nodes and compute nodes. The login nodes are Haswell, and one type of compute node is also Haswell, but the login and compute nodes have slightly different specifications. The login nodes are just for editing, compiling, and submitting batch jobs. Please do not run production jobs on the login nodes, because they are shared and you can impact other users.
Other users' responsiveness on the login nodes can be affected. The compute nodes are where you execute your application. With most of the QOSes in the scheduler you get exclusive node allocation, so you are the only user on those nodes; there is one other type of QOS that allows shared nodes. Keep in mind that there are two types of compute nodes that you can optimize for; the previous speaker mentioned how to compile specifically so that your application runs optimally on each type of node.
So here is how this works. From your laptop you log in to Cori, and on the login node you run sbatch or salloc. After that you actually land on a compute node, one of the compute nodes allocated to your job, and every single command in your batch script not starting with srun runs on this head compute node. Then at some point you issue an srun command, asking for some number of MPI tasks.
I want to show you an illustration of a Haswell compute node here. Keep some of these numbers in mind; they are very useful later on, when you try to verify whether you're running optimally. One node has two sockets, and each socket has 16 physical cores, numbered 0 to 15 on the top in one socket, and 16 to 31 on the bottom. You will also notice some green numbers here: for example, core 0 also has logical CPU 32 tied to it, its second hyperthread.
This is where we have to use the srun options, which we will keep in mind, so as not to have memory access on the far NUMA domain. There are ways to find out more details about the compute node, and, like I said, to examine a compute node you first have to get onto one, with the salloc command. I'm not giving the details right now, but I will show later what an salloc looks like.
Then, when you get onto the head compute node, there are a few commands that you can run: `numactl --hardware`, `cat /proc/cpuinfo`, and the hwloc (hardware locality) tools. They give you a detailed list: how many processors, how many per socket, what the CPU speed is, the NUMA distances, all this kind of information.
KNL is a little bit more complicated. Here at NERSC we basically set the default KNL mode as quad,cache. In quad mode, basically, one single compute node is one single NUMA domain. It has 68 physical cores, and each core has four hyperthreads; so again, look at these numbers and keep some of them in your mind.
For example, physical core 0 has logical CPU numbers 0, 68, 136, and 204; physical core 1 has logical CPUs 1, 69, 137, and 205. Later on I want you to remember that 0 and 136 are on the same core, and 1 and 137 are on the same core; we'll show those numbers later. Cache mode means that on the KNL node there is a fast memory, MCDRAM, in addition to the DDR; the node can be set in quad mode with the MCDRAM in cache mode or in flat mode.
When it's set to quad,cache mode, basically we have a huge cache, so that when your application accesses memory it is super fast, compared to getting the data from the main memory. So that's the basic introduction, and now I'll show you the batch script examples and what the key components in a batch script are.
`#!/bin/bash` says I want Slurm to interpret this as a bash shell script; the `-l`, for whether it is a login shell or not, is optional. The environment you have before you submit the batch script will be imported into your batch job. The second part of the script is keywords, such as which QOS you want to submit to: NERSC has regular, premium, low, and a lot of those.
There are other keywords too. The `-C` (capital C, constraint) is required; capital `-L` and `-J` are not required. `-L` means which file systems your job requires. This helps in circumstances when, say, the scratch file system has issues: if you submitted with `-L SCRATCH` specified, your job will be held without starting, which prevents it from failing.
That is, if a file system has an issue that we know of, this protects your job. `-J` gives your job a name, and there are also `-o` and `-e`, to give a custom file name for your job output or job error, etc. There are a lot more, and there is `--mail-type` if you want to receive an email from your job when it starts, finishes, or fails. And here OMP_NUM_THREADS is an environment variable you want to set; I want to emphasize that this example job is pure MPI.
If you never compiled with OpenMP and you don't use any threaded libraries, that's fine; but otherwise we recommend setting OMP_NUM_THREADS=1, just to prevent cases where some compilers would use a huge default number of threads, which is not your intention.
Here `-n 1280` means we're running with 32 MPI tasks per node on 40 nodes, and `-c` is how many logical CPUs you want to give each MPI task. This is a Haswell example: we know that there are a total of 32 physical cores times 2 hyperthreads per core, so a total of 64 logical CPUs on a Haswell node. If you want to use 32 MPI tasks per node, you want to give two logical CPUs to each MPI task; that is where `-c 2` comes from, along with `--cpu-bind=cores`.
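Putting those pieces together, a minimal pure-MPI Haswell batch script along these lines could look like the sketch below; `./myapp.x`, the job name, and the time limit are placeholders, not a production script:

```shell
#!/bin/bash -l
#SBATCH -q regular
#SBATCH -N 40                 # 40 Haswell nodes
#SBATCH -C haswell
#SBATCH -t 01:00:00
#SBATCH -L SCRATCH            # hold the job if scratch has known issues
#SBATCH -J my_pure_mpi_job

export OMP_NUM_THREADS=1      # pure MPI: prevent surprise threading

# 1280 total tasks = 32 tasks/node x 40 nodes;
# -c 2 gives each task 2 of the 64 logical CPUs on a node
srun -n 1280 -c 2 --cpu-bind=cores ./myapp.x
```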
This basically follows the previous slide but adds more discussion. In this case, if you say you want to run 64 MPI tasks per node, that is one MPI task per logical CPU; at this point, you are using hyperthreading for the MPI tasks.
Oh, I think this one comes from the Zoom thing; it suddenly shows up; it's not on my slide, but yeah. So we talked about how you could do hyperthreading with MPI as well, and now we are talking about a hybrid MPI/OpenMP batch script. Here you want to say how many OpenMP threads you want to set, and we also want to promote the OpenMP standard settings of OMP_PROC_BIND and OMP_PLACES.
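A hybrid MPI/OpenMP script on one Haswell node might be sketched like this; the task and thread counts are one illustrative choice (8 tasks of 8 threads on the 64 logical CPUs), and `./myapp.x` is a placeholder:

```shell
#!/bin/bash -l
#SBATCH -q regular
#SBATCH -N 1
#SBATCH -C haswell
#SBATCH -t 00:30:00

export OMP_NUM_THREADS=8
export OMP_PROC_BIND=spread   # OpenMP standard affinity settings
export OMP_PLACES=threads

# 64 logical CPUs / 8 tasks per node = 8 logical CPUs per task
srun -n 8 -c 8 --cpu-bind=cores ./myapp.x
```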
I also want to talk about the serial jobs I mentioned. The mechanism to support them is that we actually have a shared QOS, in addition to regular, low, etc. The shared QOS is especially for multiple applications to share a single node. So for a serial job you do not ask for capital `-N 1` (one node) anymore; instead you would use lowercase `-n 1` for it, or you could give `-n` some bigger number.
If you want more memory for your serial job, you can ask for the equivalent with multiple CPUs, which get you a proportional share of the memory on that shared node, or you could ask for memory with the `--mem` option; the default is a little less than 2 GB per `-n` slot. For a serial job we suggest you not use srun, since it adds extra overhead. The shared QOS is only available on the Haswell nodes, not on KNL.
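A serial job in the shared QOS might look like the following sketch; `./my_serial_app.x` and the memory request are placeholders:

```shell
#!/bin/bash -l
#SBATCH -q shared
#SBATCH -C haswell
#SBATCH -t 02:00:00
#SBATCH -n 1                  # one slot on a shared node, not -N 1
##SBATCH --mem=4GB            # optional: ask for more memory instead

# no srun for a serial job in the shared QOS; just run the binary
./my_serial_app.x
```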
Then I want to promote debug and interactive jobs. For debug, you could submit a debug job with a batch job script, or you could do it interactively with salloc. The keywords are almost the same as those you use in the `#SBATCH` lines of your batch script. And because there are reserved nodes, the turnaround time is very, very quick.
Then, when you get a session, you're already on your head compute node. The interactive QOS is especially highly recommended, because either you get nodes immediately, within five minutes, or your request is rejected, saying there are not enough nodes for you. There are 192 nodes each on Haswell and on KNL that you can ask for with `-q interactive`, and you get up to four hours and up to 64 nodes. There is also a limit of 64 total nodes, Haswell and KNL combined, for each repo.
Job arrays I'm going to touch on as the next topic, workflow options. This is just a summary slide; I have one slide each for these topics. You can bundle jobs; there are job arrays, dependencies, variable-time jobs, burst buffer, Shifter, transfer jobs, and bigmem. I'm not going to go through the details of each here, but let's talk about each on its own slide. Bundling jobs means you want to run multiple runs in one batch script.
There are two ways to do that. You can run multiple jobs sequentially, on the left side: sequentially means you run one srun after another. Here, what you want to ask for as the number of nodes is just the biggest number of nodes among your sruns, but for `-t`, the time, you have to ask for the sum of the times of your sruns for this big job. On the right side, you instead want multiple srun jobs to run simultaneously.
Now you have to note the things marked in blue here. First, the capital `-N` number of nodes is now the summation of the nodes of each srun; it's not the summation of the little `-n` values added together and divided by 32 or something, because each srun has to run exclusively on its own set of nodes. Then you also have to put each srun into the background and add a `wait` at the end.
Without doing that, your batch job would exit prematurely. The advantages of bundling jobs are that you get your runs as one single job: first, that is easier to manage than lots and lots of jobs; second, because there are some queue limits, this is now just one job, so your runs are not subject to those limits; and also, if the bundle is big enough, you can get into the large-job discount category.
You set a dependency with `afterok`, meaning after the first job completes successfully, or `afterany`, meaning after the first job finishes in any state. You could also do it in the batch script with `#SBATCH -d afterok:<jobid>`; there are two ways to do it. So you can chain your jobs. One thing to remember is that while a job is dependent on another job, as it waits in the queue it is in a held status, and it does not accumulate priority.
With variable-time jobs, you may get a wall time anywhere between `--time-min` and `--time`, which is, like, 2 hours to 48 hours in this example, and anywhere in between. You also tell it, since you know how much time your job needs, to have a little bit of buffer: if my job is not finished, because I'm only getting this amount of time, I tell the scheduler that my job needs this much time to do my checkpointing.
Okay, so when it runs into that checkpointing-time threshold, it will do a checkpoint and exit, but then it's requeued automatically, and it remembers how much time your job has run already, until your accumulated run time reaches, say, 96 hours; then your whole job is done. This is useful because your job can take advantage of flexible scheduling opportunities, you can get more run time, and you may get better throughput.
We have a requirement here: when you use flex, for example, there's a flex QOS, and the requirement is that you have to have `--time-min` less than two hours, while your max time can be 48 hours. Again, this is to help you improve throughput. It also helps overall system utilization, because if such jobs exist in the queue, jobs that without the small time-min would not start can now actually run, especially during a large maintenance or some large reservation.
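The directives for such a flex job might be sketched as follows; the node count and the checkpointing application are placeholders, and the pattern only pays off if the application can checkpoint and restart:

```shell
#!/bin/bash -l
#SBATCH -q flex
#SBATCH -C knl
#SBATCH -N 2
#SBATCH --time=48:00:00       # the maximum wall time you would like
#SBATCH --time-min=01:30:00   # flex requires time-min under 2 hours

# the job may be checkpointed, requeued, and resumed by the scheduler
srun -n 128 -c 4 --cpu-bind=cores ./checkpointing_app.x
```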
The burst buffer: on Cori we have a high-bandwidth I/O capability by using the burst buffer. You can request some kind of file system on the burst buffer, and then before your batch job starts and after it ends you can stage in and stage out input and output directories and files, so that during the runtime you have much, much better I/O performance. There's a detailed talk on the burst buffer this afternoon on how to use this. And Shifter: some applications have their own custom environment.
They still want to run on Cori; they want to bring their own Docker image, and we can't run Docker directly. So Shifter is sort of a modified Docker environment that you can run at NERSC: you can build your image on your laptop, upload it to the NERSC Shifter registry, and at runtime you can pull your images onto the compute nodes and use your own custom environment. There's also a Shifter talk this afternoon for details.
Transfer jobs: we have the long-term tape archive storage system called HPSS. It's not a file system directly mounted on the Cori login nodes or compute nodes, but at times you want to get data in and out of HPSS for use by your batch job, and the xfer QOS is exactly for this purpose. You can stage data from HPSS before and after the main computation. If you do it inside your batch script, it can cost you a lot, because the transfer actually runs on only one node.
If your batch job asks for a thousand nodes and inside it you get data from HPSS with hsi, it costs you lots of allocation, so you want to do the transfer separately. Xfer jobs actually run on special login nodes, so notice the `-M escori` flag: escori is another Slurm batch server, not the regular Cori one. For the regular one you don't have to specify it, but for xfer you have to say you want to run on the escori server; bigmem works the same way.
What process affinity does is basically bind MPI tasks to CPUs, and thread affinity binds threads to the CPUs already allocated to the MPI process. When you bind those, you want to be mindful especially about NUMA domains: I do not want my OpenMP threads on a remote NUMA domain, and I want to watch how many MPI tasks I put on a socket on Haswell.
Well, if I run one MPI task on this Haswell node and then I run 32 OpenMP threads, then there are some OpenMP threads that have to access memory on the other socket. So on the Haswell nodes, for example, we would recommend at least two MPI tasks per node, so that each set of sixteen OpenMP threads belonging to one MPI task would be on one single NUMA domain, something like that.
The goal is to use the OpenMP standard, so there are the OMP_PROC_BIND and OMP_PLACES settings, and this is more portable, because these are standard and available for multiple compilers. There's a detailed page about affinity, but basically what I'm trying to tell you here is how important the `-c` and `--cpu-bind` options in your srun command are, and how they help you make sure your affinity settings are correct. And, as I mentioned earlier, there's the `numactl -H` command to find out compute node information.
The host was nid-something, and then you run this command, and it tells you what is available on this nid, which is a KNL node. "Available: 1 node" means I have one NUMA domain, and it gives you all my logical CPU numbers, 0 to 271, which is the total of 68 physical cores times 4 hardware threads, and then it tells you my node size.
Node 0's memory size is actually 96 gigabytes per node, because it's in cache mode. If it's flat, quad,flat, it actually shows the MCDRAM on the KNL node as another NUMA domain.
It would show two NUMA domains if I'm on a quad,flat node, and it will tell you the distances. Here, because there's only one NUMA node, basically the distance from node 0 to node 0 is all the same; if I had two NUMA nodes, it would also show the distance from node 0 to node 1.
So here is an example. I'm just saying, without `-c` and `--cpu-bind`, even though I have the OMP_NUM_THREADS, OMP_PROC_BIND, and all the other settings, when I run my application and just say I want 16 MPI tasks on this node, what I get is: my rank 0 is at CPU 0, and my rank 1 is also at 0; they actually landed on the same physical core, which is totally bad. So what we do is add these two options for 16 MPI tasks on the KNL node.
A
So
we
because
it's
the
reason
is
because
68
is
not
divisible
by
number
of
MPI
tax
16,
so
we
usually
aren't
purposely
just
waste
for
extra
nodes,
our
course
on
the
scanner
nodes,
just
treat
it
as
64,
no
course
on
the
KL
nodes,
then
there
are
256
total
logical
cores
here,
divided
by
number
of
MPI
tasks
per
node
in
this
example
is
16,
so
we
get.
C
is
also
16
with
that.
You
now
see
rank
zero,
so
at
zero
and
rank
zero
smart.
One
remember
I,
asked
you
to
remember
zero.
This is the final layout with the `-c` and `--cpu-bind` options and with 16 MPI tasks; I used a colored diagram. For MPI rank 0, there are 8 OpenMP threads; they landed on 4 physical cores, with two of the threads on the same core, and then rank 1 will be on the next four cores, and so on. So the layout now is pretty and neat.
So here's an illustration on two KNL compute nodes with 64 MPI tasks per node, setting OMP_NUM_THREADS=4, with the correct `-c` and `--cpu-bind` settings. Here, without OMP_PROC_BIND and OMP_PLACES, the four threads are free to migrate within the core: with 64 MPI tasks, `-c 4` is giving four logical CPUs per rank, so each rank will be on its own physical core, and then the four threads migrate freely within that physical core.
Basically, what you do is this: if your application has an srun command with all these settings, you just replace the application with check-mpi to figure out whether your binding is correct before you actually run your application. Now you have to actually understand what these numbers are: these are the logical numberings, as I showed in some of the earlier slides, so you will see, hey, where my ranks went.
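That substitution might look like the following; `check-mpi.intel.cori` is the helper binary name as provided at NERSC (an assumption here, since the talk only says "check-mpi"):

```shell
# Same srun options as the real job, but run the affinity checker instead:
srun -n 16 -c 16 --cpu-bind=cores check-mpi.intel.cori

# each rank reports which node and which logical CPUs it is bound to,
# so you can verify the binding before launching ./myapp.x for real
```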
You check whether these logical cores make sense, whether the ranks are across tiles or across multiple cores, and whether they're stepping on each other; these are the numbers to check if your affinity is correct. And in OpenMP 5.0 we have introduced something called the OMP_DISPLAY_AFFINITY feature: you can set it to true and give it an affinity format, a custom format. The whole of OpenMP 5.0 is not available yet in most compilers, but this feature already exists in some of the compiler versions.
So you set it to true, and this is the format: you say I want to have the host information, the process ID information, what my thread number is, and what my thread affinity is. These "host=" and so on are custom strings that you can put in, and just by setting these two, when you run your application you get those reports, which can also help you check. Before that OpenMP feature, different compilers had their own custom settings, but they were not standardized, so this is more portable.
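The two settings described could be sketched as below; the format specifiers follow the OpenMP 5.0 specification (%H host, %P process ID, %n thread number, %A affinity), and the sample output line is only illustrative:

```shell
export OMP_DISPLAY_AFFINITY=true
export OMP_AFFINITY_FORMAT="host=%H pid=%P thread=%n affinity=%A"

srun -n 2 -c 8 --cpu-bind=cores ./myapp.x
# each thread then prints a report line along the lines of:
#   host=nid02305 pid=12345 thread=0 affinity=0,68
```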
Finally, I want to introduce to you the jobscript generator feature; it's at my.nersc.gov. You go down to "Jobs" and then "Jobscript Generator"; you can choose which machine, what your application is, how many nodes, etc., and it will generate a template for you that you can modify.
The last section is about monitoring your jobs. Basically your job is in the queue, and then, as we talked about, when the job is going to run depends on the combination of which QOS you submit to and whether you ask for a bigger job with a small wall time or a small job with a long wall time; the wait time is different. `sqs` and `squeue` help you check where your job is. I talked about sbatch; there are also salloc and srun, and scancel can delete your job.
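The day-to-day monitoring commands mentioned here might look like this; the job ID is a placeholder:

```shell
sqs                 # NERSC-formatted view of your queued jobs
squeue -u "$USER"   # standard Slurm view of your own jobs
scancel 1234567     # delete job 1234567 from the queue
```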
`sqs` is a NERSC custom batch queue script, so its output is formatted. With `-u username`, or without that, since the user name defaults to your own, it shows your jobs; with `-a` it shows everybody's. There are some other options to show or not show certain QOSes, like do not show running jobs or show only running jobs; there's a `-w` wide option and many other options. But the standard brief output, without any flags, basically gives you one column that's not available in squeue.
Basically, it calculates your job priority and compares it against the threshold priority that our scheduler is configured with, for whether to start scheduling your job or not, just by pure priority scheduling. It will tell you your job's scheduled start, you know, within 1.2 days, something like that. That means in 1.2 days your job will have accumulated enough priority for the scheduler to consider your job.
To consider it to start, that is; it doesn't mean your job runs at that time, only that your job will be able to be scheduled, so your job may start a few days later than that. We have a note that actually tries to clarify this: people would think, oh, that means my job is going to start really soon; it's not. But again, your job actually can start really soon if it fits in a hole where it can be backfilled; there are backfill opportunities.
That is, your job can start if it won't affect the next highest-priority job in the queue. Say that job is a huge job, and the scheduler has to accumulate lots of nodes for it; it has already got some nodes but can't start that job yet. If your job can use those nodes and finish before the next job can actually start, your job is backfilled. The smaller and shorter your job is, the higher the chance to backfill. Okay, so that's sqs, and there's sinfo.
A
It
was
some
kind
of
format.
It
tells
you
how
many
available
notes
right
now
on
the
system.
So
we
just
say
this
is
a
very
simple
command.
It
tells
you
how
many
allocated
I,
don't
and
other
usually
means
down
and
total
you
can
see.
Has
worn
can
L
s,
control
show
job
a
job.
Id
is
very
useful,
for
the
jobs
are
currently
still
in
the
queue.
So
if
you
forgot
what
my
job
idea,
what
my
I'm
doing
I
you
think,
let's
control
show
job
my
job
ID.
`sacct` queries the Slurm database, especially for after your job has completed. Say I want to query my jobs from January: it gives you, for each job ID, how many nodes it ran on, what my job's state was, and its exit code. With `-X` it is more abbreviated: without `-X` it gives you the details of each srun command as well, while with `-X` it gives you the batch job as a whole.
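Such a query could be sketched as follows; the date range and field list are illustrative choices:

```shell
# Jobs run in January, one line per batch job (-X), with chosen fields:
sacct -X -S 2019-01-01 -E 2019-01-31 \
      -o JobID,JobName,NNodes,State,ExitCode
```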
And let me talk about how your jobs are charged. The unit is NERSC hours. Haswell and KNL each have a charge factor per node-hour used; for Haswell it is 90. Then, based on your QOS: regular is 1; premium charges twice, which gets you to the top of the queue quickly; low gets you a 25% discount; and flex gets you a 75% discount (low and flex are only available on KNL). Scavenger is zero, but you can't submit directly to scavenger.
So here's an example. If you ran on four Haswell nodes for one hour, that one hour is not your wall-clock request; it is the actual wall time your job used from start to finish. You may use only five minutes, and then it will be charged only for five minutes, times the QOS factor. So here the example for Haswell is: four nodes times one hour times the charge factor of 90 times the premium QOS factor of two, so this job is going to be charged 720 NERSC hours.
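The charge arithmetic from that example can be written out in shell; the factors are the ones quoted on the slide (Haswell 90 per node-hour, premium times 2):

```shell
#!/bin/bash
nodes=4
hours=1
charge_factor=90   # Haswell, per node-hour used
qos_factor=2       # premium QOS
charge=$(( nodes * hours * charge_factor * qos_factor ))
echo "${charge} NERSC hours"   # 4 x 1 x 90 x 2 = 720
```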
The queue policy: like I said, each QOS has a maximum node count, a max time limit, a charge factor, and a priority. You can find this on the NERSC web page with the queue policy; this is just a screenshot of that as of today, and it is subject to change. This is the Haswell policy, and this is the KNL policy.
The KNL queue wait time is much, much shorter than Haswell's. Also, Edison just retired, and for most Edison users converting over the first choice would be Haswell, but we have had so many KNL training sessions; we actually want to bring them all over and encourage them to run more on KNL, and to say that your application needs to be optimized: you know, to consider thread affinity, vectorization, and memory optimization. Once your job is more optimized for KNL, it will be even more worthwhile to run on KNL, faster and with a short queue wait time.
Other considerations are which QOSes are available where, the discounts, etc., in charging. And remember to compile separately for each architecture: just putting in the flag for KNL gets you some speedup compared to not doing that. Again, try to test with debug jobs if you can, and shorter jobs are easier to schedule than long ones.
Shorter jobs are easier to get backfilled. There are some queue wait time statistics you can look at, but those are just the statistics of the past. There is lots of documentation at docs.nersc.gov/jobs, and if these things are not clear to you, send in a ticket or call us, and we can help you further.