Description
NERSC Data Seminars Series: https://github.com/NERSC/data-seminars
Title:
Demo and hands-on session on ReFrame
Speaker:
Lisa Gerhardt, Alberto Chiusole - NERSC, Berkeley Lab
Abstract:
Overview and brief demo of the capabilities of ReFrame, and how we use it at NERSC to run pipelines on different systems and continuously test the user-facing requirements.
A: Okay, well, thanks everybody for joining. I'm Lisa Gerhardt, and Alberto and I are going to walk you through ReFrame at NERSC today. First I'm going to talk a little bit about what ReFrame is and how it's configured, then I'll walk through the anatomy of a basic test, and then Alberto is going to show some of the more complicated tests and do a demo — and maybe a walkthrough of some of the benchmarking tests that we have connected up.
A: So ReFrame is an automated testing suite that was originally developed by CSCS. It's the suite that we use for all verification testing at NERSC. It's Python-based, and it has a set of classes that let you specify the basic variables and parameters of tests.
A: It's sort of a unified framework that you can use to create your tests and then have them compiled and run, either in the batch system or on the login nodes. It's repeatable, and you can share this information with anyone through our GitHub repo and be able to run a consistent set of tests.
A: CSCS is an HPC center, so ReFrame supports most basic HPC actions. You can do things like native programming-environment builds — it understands PrgEnv-gnu and the other PrgEnv compilers — and it can understand things like CMake and do automated builds that way, just by telling it the build type is CMake, which is really nice. For each test it prepares a batch file, submits the batch file, waits and retrieves the output, then checks it against a sanity function that you give it, and based on that it reports the result.
A: You can also specify what flags you want to use for each system, which really comes in handy here at NERSC, where we have multiple different systems to test. As I mentioned before, it can automatically handle most build systems: it understands Make and CMake, and I think they just added Spack in there too.
A: This is how ReFrame is set up at NERSC. It's available in two repos, plus a GitLab instance. We keep a fork of the CSCS repo because we make some local modifications; the actual framework that runs the tests is maintained by Brian in the CSG group. But the repo of tests, which is called the ReFrame NERSC tests repo, is a group effort across all of NERSC, with contributions mainly from Consulting, DAS, and DSEG, and a few others.
A: The ReFrame NERSC tests repo is basically a collection of all the tests that run on the ReFrame scaffold, and each test is intended to be an atomic check of a single piece of functionality. You know: does this particular flag in Fortran work? Can we compile with the FFTW libraries? Can we reach HPSS and connect? Those sorts of things. There are two main modes that we use at NERSC.
A: We have the checkout mode, which is what's run after a system maintenance, or when there's some kind of system issue, to see if everything's working. Because a system maintenance usually runs 12 to 16 hours, you're often running checkout at 3 a.m., and it's the last thing that happens before the system goes back to the users.
A: So you really want this to be quick. We design these tests with the goal that each one should take less than five minutes while still being able to check the functionality. It gets run after every maintenance, and CSG also uses it when they're doing development work on some of the development systems: they'll push out a change and then they can just run ReFrame and see if it changed anything.
A: So it's helpful for those kinds of iterations. The second mode is benchmark: these are tests used to record long-term performance on the system. These are things like IOR against scratch and CFS, the OSU network tests, and a test for astropy that measures how long it takes to load —
A: — a huge mass of shared libraries. A whole suite of the N9 benchmarks is included in ReFrame as well. These are run either daily or, for the really long tests, around weekly, because they just take too long to get through the queue. The output from those is collected and published in the OMNI Elasticsearch stack, and I think Alberto is going to be talking more about that later. Next slide.
A: Right now we have five systems defined for ReFrame: Cori, Gerty, Perlmutter, Alvarez, and Muller. We have two branches — and I apologize, everyone, for the mismatch in the naming here, it just sort of happened — but we have main, which is the stable branch, and that's what you would run for checkout on Cori and Perlmutter.
A: For instance, we have a number of tests for Slingshot 11 that are only reasonable to run on the Slingshot 11 GPU nodes. There are also some module tests: every time Cori changes modules, there's a test that makes sure no unexpected modules have changed, and it'll raise an alarm if that happens — those sorts of things, where you roll changes out.
A: Generally, when you're writing and testing code, it's easier to use the development systems like Muller or Gerty, because you don't have to wait in line, especially if you're using the batch system — but there's no requirement to use them unless you need something at scale. We have continuous integration that will check all the new tests whenever a merge request is opened: it runs the whole chain on four of the five systems. Alvarez is too unstable to give us reasonable continuous-integration feedback.
A: Right now, for tests that have the checkout label, we have 371 tests written for Cori and 330 for Perlmutter, so we have a pretty broad suite there. For benchmark tests we have 143 on Cori and 22 on Perlmutter, so we definitely need to add some more Perlmutter benchmarks. If you have ideas there, or ideas for tests that should be run in checkout but aren't covered, I certainly encourage you to either write your own test or at least open an issue.
A: Contributions are always welcome, and we have a ReFrame Slack channel for questions, where folks are pretty proactive about answering. This is intended to be a group effort, so please feel free to contribute.
A: This is an example test: a SAXPY GPU test. SAXPY does a little exercising of the GPUs and some calculation — matrix multiplication or something; I don't even remember anymore, sorry, it's been a while since I read this. You can grab the SAXPY code off the web and do a calculation, and this basically tests that the GPUs work and can be accessed — which sometimes they can't be —
A: — if you don't have the right settings set up. This is what the test looks like. I copied and pasted it straight out of the ReFrame repo. The first three lines at the top are sort of the standard — this is in Python —
A: — the standard imports, the libraries that you need. ReFrame has different kinds of tests, and this is what they call the simple test; it's their basic unit of testing. What it does is build the code on the login node and then run it via the batch system — and what kind of batch system you have is set in a separate configuration file.
A: Then you create the test, and in the next few lines you have a description and a maintainer. Basically, those tell you what this test is and who is maintaining it — whose door you knock on if it fails or if you want to change it. This description, this "CUDA SAXPY example", is printed out when the test is run, so it's usually intended to be semi-informative, so that folks know what it is testing.
A: We limit it to the N9 systems and to a QOS that has access to the GPUs, and it works in all three of the programming environments that are available. Some of the tests are only for specific programming environments, so if you wanted to limit this a little further, you could trim this list down. Then you can see down here there's a source path that tells you this code is checked into the same directory, and that it's going to be compiled with the Make build system.
A: You just say build system Make, and it'll do make, make install, and all that stuff behind the scenes, and it's going to make an executable called main. Then we have tags; that's how we keep track of what each test is for. The tags are mostly unconstrained — you can use whatever tags you want. The only two that are controlled are checkout and benchmark, because those have specific requirements. And then you can also load modules during the test.
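Putting the pieces just described together — the simple-test decorator, description and maintainers, valid systems and environments, the Make build system, tags, and modules — a test skeleton looks roughly like the sketch below. This is modeled on the ReFrame tutorial style, not the actual NERSC test; the system, environment, and maintainer names are illustrative, and it is not runnable here without ReFrame installed.

```python
import reframe as rfm
import reframe.utility.sanity as sn


@rfm.simple_test
class SaxpyGpuTest(rfm.RegressionTest):
    descr = 'CUDA SAXPY example'            # printed when the test runs
    maintainers = ['someone']               # hypothetical maintainer handle
    valid_systems = ['perlmutter:gpu']      # illustrative system:partition
    valid_prog_environs = ['PrgEnv-gnu', 'PrgEnv-nvidia', 'PrgEnv-cray']
    build_system = 'Make'                   # ReFrame drives `make` itself
    executable = './main'
    modules = ['cudatoolkit']
    tags = {'checkout', 'gpu'}

    @sanity_function
    def validate(self):
        # Pass only if the expected line appears in the job output
        return sn.assert_found(r'Max error', self.stdout)
```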
A: This test is using the GPU, so it needs to load the cudatoolkit module. Let's go to the next screen, please. Then you have a bunch of stanzas where you can set up the Slurm parameters. These settings translate directly into flags that you would add to your sbatch command: you want to run one task on one node —
A: — use two CPUs, and have access to one GPU. Then you have the sanity pattern, and you're asserting that it must be in the output. Basically, if this text is not in the output, the test failed. What frequently happens with this particular test is that if you can't access the GPUs, it'll still compile and run, but it'll come back with the wrong max error —
A: — which means it did not work. So you need to find this line: the output must contain the expected "Max error" value to be successful. Then there's also a way to have special settings based on the environment you're in — for example, PrgEnv-gnu needs the cpe-cuda module, so if we're not in the NVIDIA environment we need to add this extra module to the list, and that's what this line down at the bottom is doing.
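The sanity check just described boils down to a regex search over the job output, which is essentially what ReFrame's `sn.assert_found` does under the hood. A minimal standalone illustration (the output text below is invented, not real SAXPY output):

```python
import re

# Hypothetical job outputs; the real values depend on the code being run.
good_output = "Running saxpy on 1M elements\nMax error: 0.000000\n"
bad_output = "Running saxpy\nCUDA error: no device found\n"

def sanity_check(output: str, pattern: str = r"Max error") -> bool:
    """Mimic ReFrame's assert_found: pass iff the pattern appears in the output."""
    return re.search(pattern, output) is not None

print(sanity_check(good_output))  # True  -> test passes
print(sanity_check(bad_output))   # False -> test fails
```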
A: This is the same code we saw before. If you take this and run it in the ReFrame example, it's going to produce two scripts, a build one and a run one: name-of-test_build and name-of-test_job, and I'll show them.
A: This is the build one here. There's a bunch of stuff at the beginning to trap errors and push them back up to ReFrame — that's just boilerplate that ReFrame puts in every single one. And you can see it will make one of these for each particular kind of test: if you're on the Perlmutter GPU partition, it's going to make three tests, one each for NVIDIA, GNU, and AOCC.
A: When that succeeds, it generates a job script, and this is the second half of the code that I talked about. You can see it's translating these settings into sbatch commands; then it carries your module loads through, takes your executable, and does the srun. In the background, ReFrame queries the batch system until the job finishes, then checks the output for that sanity pattern and reports success or not.
A: So how do you run this? First you need to check out the suite of tests, and you can just check it out from GitLab — I put the address in there. You change to that directory, and then we have a default install of the ReFrame scaffolding in global common that anyone can use. If you wanted to run all the tests, you would just do this line — reframe plus a couple of flags — then select which tags you want, and you're good to go.
A: There are a couple of common actions. You have to tell ReFrame to do something — an action — so there's the little `-r` to run, or `-l` to list; those are the main common ones. Then you usually want to select tests only with a particular tag (`-t`), usually checkout. And then the `-c <path>` option limits which paths you're going to run.
A: Maybe you just want to run the suite of tests for Python, so you would say `-c python`. (Excuse me, my cat is coming.) The capital `-R` tells it to look recursively in all directories, so if you have subdirectories with tests in them, it'll find those too. There's also some optional stuff: you can specify the system, or keep the stage files, which will keep the output.
A: If a test is successful, it will delete all the output that it makes — all those files that I was just showing — but sometimes you want to keep them, to make sure that everything worked as intended or to look at the scripts afterwards. So you can tell it `--keep-stage-files` and it'll keep them around even if the test succeeds.
B: Yeah, so I collected here a couple of tests that we wrote inside the ReFrame collection, and these are more specialized and more complex, let's say. The main point I wanted to make is that you can feed ReFrame different parameters: if you feed it two sets of parameters, it will try all the possible combinations of the inputs across the tests.
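The cross-product behavior described here can be pictured with plain `itertools.product`: two parameter lists yield one test instance per combination. The library and mode names below are just an example, not the actual parameters in the NERSC tests:

```python
from itertools import product

# Two hypothetical ReFrame-style parameter lists
library = ['hdf5', 'netcdf']
mode = ['serial', 'parallel']

# ReFrame instantiates one test per combination, conceptually like this:
cases = list(product(library, mode))
for lib, m in cases:
    print(f"test instance: {lib}-{m}")

print(len(cases))  # 4 combinations from 2 x 2 parameters
```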
B: So if we — oops, I think it's going to break. Well, I can show you directly inside the terminal. Let me share my terminal.
B: So this test is quite similar to the simple test Lisa was showing with the SAXPY, I think, but this test comes with two different parameters, so ReFrame will try to compile it for HDF5 serial, HDF5 parallel, NetCDF serial, and NetCDF parallel, on several different systems.
D: I have a question about the parameters. I'm familiar with pytest, and there's something called a test fixture — is that kind of similar to this?
B: It does become a combination of the different parameters, yeah. And there are even more complex tests that we're going to see in a second, as soon as this is over. Okay, so we see that it ran all the test cases on Cori. So let me just switch to the editor, maybe.
B: Okay, so there is a lot of extra stuff before the actual test, but as you can see, the only parameter to this test is a single variable, configs, which stores a list of named tuples — so they're just normal tuples with some extra fancy —
B: — shortcuts, let's say. So we are defining a specific set of configurations that we want to pass to the test: we want to run it on a specific file size; we don't want to enforce any class of service, but we expect a certain class as output; and we don't want to specify any system. Each of these four-field configurations represents a single test.
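A standalone sketch of the named-tuple pattern just described — the field names and values here are invented for illustration, not the real test's fields:

```python
from collections import namedtuple

# One tuple per test configuration; each entry becomes one test instance.
Config = namedtuple('Config', ['file_size_gb', 'cos_in', 'cos_expected', 'system'])

configs = [
    Config(file_size_gb=1,   cos_in=None, cos_expected='4', system=None),
    Config(file_size_gb=100, cos_in=None, cos_expected='5', system=None),
]

# Named-tuple fields read like attributes, e.g. when deciding which tags to add:
for conf in configs:
    tags = {'checkout'} if conf.file_size_gb < 10 else set()
    print(conf.file_size_gb, sorted(tags))
```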
B: So right now we're not doing any fixtures, we're not doing any combinations, because we want this specific list of parameters to be tested. When we pass a parameter into ReFrame, we can then access it through the variable that we set here. For example, in this case, if the file size to be tested is below 10 gigabytes, then we add the checkout tag; for bigger tests we don't want the checkout tag, so we don't add it —
B: — so that we don't make CSG wait 20 minutes for this test to go through after a maintenance, for example. So yeah, as you can see, you can access the specific parameter inside each test by using the name that you defined outside: self.conf will be the specific config tuple.
B: That is, one of the config tuples that we are feeding into the test. And this is a bit more complex test because, as you can see, we need to specify a list in a specific variable — we don't want the test to automatically scramble and combine different parameters together; we have a specific set of parameters.
B: Then, yeah, I have another test to show you here. It's a set of commands that we want to test: for example, we do hsi put and hsi get, and we test that HPSS is able to store and serve the files that we send it. In this case we use the keys of a dictionary as a parameter, and then we use the key to access the values in the configuration dictionary. So let's see the tests briefly, maybe.
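The dictionary-keyed pattern can be sketched in plain Python: the test parameter ranges over the keys, and each test instance looks up its command list by key. The commands below are illustrative stand-ins, not the real HPSS test's commands:

```python
# Hypothetical command sets keyed by a short test name (cf. hsi put / hsi get).
COMMANDS = {
    'put_get': ('hsi put testfile', 'hsi get testfile', 'cmp testfile testfile.orig'),
    'list':    ('hsi ls',),
}

def commands_for(test_name: str) -> tuple:
    """Each parameterized test uses its key to fetch its command sequence."""
    return COMMANDS[test_name]

for name in COMMANDS:  # ReFrame would parameterize over these keys
    print(name, len(commands_for(name)))
```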
E: While you pull that up — I had a quick question: does the self.valid_prog_environs look for what's available on the specific system, or do you all have to — yeah, okay.
B: Okay, that's a good point. So yeah, you can specify different programming environments on different systems. For example, you can have the programming environment intel —
B: Yeah, you can have PrgEnv-intel and PrgEnv-gnu on different systems, on Cori and Perlmutter, and ReFrame automatically knows — well, we instructed it to know — that there is no PrgEnv-intel on Perlmutter, so it will not try to combine intel with Perlmutter. Oh, is that the —
B: It's a long JSON-like file, basically, and I can briefly show you — for Cori, for example. We define the systems; we define the QOSes, the queues that the system can run on; and we define the programming environments for each queue. So, for example, this is Cori.
B: We define the stage directory, the output directory, and the modules to load by default for every test that runs on Cori, and then here are the partitions. So you see login — the different programming environments available on login — and the same for the others as well.
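ReFrame's site configuration is in fact a Python file defining a `site_configuration` dictionary, which is what gives it the long-JSON-file look. A heavily trimmed, hypothetical fragment in that shape (all names and values here are illustrative, not the real NERSC settings):

```python
# Hypothetical, trimmed ReFrame site configuration fragment.
site_configuration = {
    'systems': [
        {
            'name': 'cori',
            'descr': 'Cori',
            'hostnames': ['cori'],
            'partitions': [
                {
                    'name': 'login',       # build/run locally on login nodes
                    'scheduler': 'local',
                    'launcher': 'local',
                    'environs': ['PrgEnv-gnu', 'PrgEnv-intel'],
                },
                {
                    'name': 'knl',         # submit through Slurm with srun
                    'scheduler': 'slurm',
                    'launcher': 'srun',
                    'environs': ['PrgEnv-gnu', 'PrgEnv-intel'],
                },
            ],
        },
    ],
}
```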
A: They have — sorry, Alberto — they have a specific callout; do you see this DataWarp stuff? ReFrame has a way to add customized Slurm parameters, because ReFrame doesn't know about DataWarp, doesn't know about the burst buffer, so you have to tell it how to add those parameters to the Slurm script, and they have a mechanism for that. And you can see down below the same thing for Shifter — we've added those in.
B: Exactly. So let me skip, maybe, to Perlmutter. Perlmutter only has these four, so it doesn't know about intel, and it will not try to combine Perlmutter with intel if a test lists different systems and programming environments that are available only on specific systems.
E: Yeah, that's fair enough, thank you. Because I was thinking: we had aocc removed once, right, and for us to notice that it was removed, it would technically need to show up in the module loads rather than in the available list, I guess, for a test to catch it. Okay, sure, thank you.
B: Yeah, so I wanted to show you this test because it's a bit complex. We have a huge dictionary with a single string as a key and then a tuple of commands that we want to run inside the test. For example, there is a test that puts a file into HPSS, gets the file back, and then checks whether the file has changed; then it removes the local file and fetches it again — and all of this long list of commands is run by this script here.
B: Let me show you — for example, here, yeah. We use the key to feed the command into the script, so we fetch the commands from outside the class, and this is the way I found to run this test. So these are all — no, these —
B: Yeah, so these tests run on Cori, Gerty, and the DTNs — the data transfer nodes. For certain tests we can tell it not to run them on certain nodes: these scripts are not available on the DTNs, so we only add the DTNs when we are not running those.
B: So we add the data transfer nodes and other systems when we are not dealing with this test, and we add all the N9 systems only when we are not dealing with Regent, because there is a DNS issue there, for example. And as you can see, we have different programming environments which are not all available on all systems — intel is not available on the N9 systems, so ReFrame won't try to combine the two.
B: And we only add the checkout tag to the other set of tests, let's say, and we use class inheritance to split the two classes. Let's see.
B: Okay, here is the base test, which inherits from the run-only test, so this test should not compile, I think, yeah.
B: It doesn't do a normal compilation, because there is no Make or CMake file, but it will install things with pip, so we have a specific list of commands that we want to run to install these packages. And, as you can see, there is no decorator before the class, so this class won't be executed by ReFrame — it will be ignored, because ReFrame only runs classes that are defined with the decorator — and so we can inherit from this class.
B: The valid systems are only a subset: the previous class will be fine for the KNL nodes, but then in a separate class we add all the other nodes — Perlmutter, Muller, and Alvarez as well — and we also add the checkout tag there. So this is a way to differentiate between the two sets of systems, let's say: the Cori KNL partition versus all the rest of the systems at NERSC.
B: Let me see — there is a question. Yeah, okay. I'm showing some of the more complex tests inside the ReFrame repo just to show you what's possible in ReFrame, but you can start from a very simple script — a very simple test, like the one shown in the slides before — and then move on from that.
B: And I also wanted to show you the run-after-setup hook, because it's something not so common, maybe. Okay, so the IOR test. Once more, in this case we're going to run a fixture of tests: we run on different file systems, either scratch or CFS; we run in POSIX mode or MPI-IO mode on these systems; and we have two different configurations, one with 32 nodes, 32 cores, and 16 tasks per node.
B: So this should be two nodes — yeah, well, yeah.
B: And then different segment sizes — so these are different IOR configurations and different Lustre striping settings.
B: So right now we don't write the programming environment here alongside the valid systems, because there is — it was a —
B: Oh no, yeah, sorry, never mind. We only run on the Cray programming environment, because it is shared among the different systems and we didn't want to run the test twice — these are quite time-consuming, let's say. Since Cray is available on both systems, we run only in that environment on the different partitions.
B: What I wanted to show you is this set-tasks hook: when we run on Gerty or Muller, we actually reduce the number of tasks. This is a trick that we use to keep the same input, let's say, but actually run a smaller test on the development systems, just because we don't have enough nodes there to satisfy the 1000-core request here.
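The trick amounts to overriding the task count once the current system is known, which is what a post-setup hook can do. A standalone sketch of just that logic — the system names, task counts, and scaling factor below are illustrative, not the real test's values:

```python
# Hypothetical scaling rule, mimicking what a ReFrame run-after-setup hook does.
FULL_TASKS = 1024                  # what the production systems would run
DEV_SYSTEMS = {'gerty', 'muller'}  # small development systems

def scaled_num_tasks(system_name: str) -> int:
    """Shrink the job on the development systems; same input everywhere else."""
    if system_name in DEV_SYSTEMS:
        return FULL_TASKS // 16    # far fewer nodes available there
    return FULL_TASKS

print(scaled_num_tasks('perlmutter'))  # 1024
print(scaled_num_tasks('gerty'))       # 64
```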
B: Yeah, and we also use that hook to add all the options to the IOR command line, depending on which of the two systems we're on. So what I wanted to show you last is the performance patterns here. ReFrame is also able to extract details, let's say, from the output, so in this case it's extracting the write speed and the read speed from the IOR output — there is a regex here to extract the numbers — and then we use this as a reference.
B: When you have performance patterns, you can also set certain boundaries: you want these numbers — the write bandwidth, for example — to fall between bounds around a reference value, with thresholds like minus one and plus one, so anywhere from zero up to — what is this — a hundred thousand megabytes per second?
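The extraction-plus-reference mechanism can be shown standalone: a regex pulls the bandwidth out of IOR-like output, and a (reference, lower, upper) bound decides whether the number is acceptable. The output line, numbers, and bounds below are invented for illustration:

```python
import re

# Invented IOR-style summary lines; the real IOR output format differs.
output = "Max Write: 84213.55 MiB/sec\nMax Read: 91200.10 MiB/sec\n"

def extract(metric: str, text: str) -> float:
    """Pull the bandwidth number for one metric out of the output."""
    m = re.search(rf"Max {metric}:\s+(\S+)\s+MiB/sec", text)
    return float(m.group(1))

# Reference in the spirit of ReFrame: (expected, lower_frac, upper_frac).
# (-1, 1) means accept anything from 0 to 2x expected, i.e. nearly everything.
expected, lo, hi = 50000.0, -1.0, 1.0

write_bw = extract("Write", output)
ok = expected * (1 + lo) <= write_bw <= expected * (1 + hi)
print(write_bw, ok)
```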
B: So basically we don't expect any particular performance from these tests — we accept everything — and we store it: we ask ReFrame to send these performance numbers to Elasticsearch. And what I wanted to show you last is that you can inspect these numbers in Elasticsearch. For example, there is an IOR test running on 32 nodes, MPI-IO with a single shared file against Cori scratch, and we can take a look at that.
B: For example, this is the Kibana plot for IOR running daily over the last 30 days, and we can see the numbers for the read performance and the write performance of IOR running MPI-IO on 32 nodes against Cori scratch.
B: To end, these two links: one is the repo of the tests that we already explained, and the other one is a page of the ReFrame documentation where you can find the different ReFrame terms, let's say, and their equivalents on the different batch systems. In our case it's all Slurm, but ReFrame is able to work with different batch systems on its own.
A: Yeah, I just wanted to say that Alberto is a ReFrame test virtuoso — he's figured out how to do some really complicated HPSS queries in ReFrame.
A: The vast majority of the tests are somewhat simpler, but we do have a huge library of tests that you can look at, and if you're trying to figure out how to do a particular thing, you can always ask.
A: Yeah, and I think that was it. Did you have anything else, Alberto?