From YouTube: 09 File Systems and Burst Buffer
Description
Part of the NERSC New User Training on June 16, 2020.
Please see https://www.nersc.gov/users/training/events/new-user-training-june-16-2020/ for the training day agenda and presentation slides.
Storage which you can use: so, as a given, there are things in memory, and actually it's perhaps worth knowing that there is a RAM disk file system mounted at /tmp on the compute nodes on Cori, so you can actually put files directly into memory, although that obviously takes away some of your application's memory.
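As an illustrative sketch of that (the application name and file paths are placeholders, not from the talk), inside a job you could do:

    # /tmp on a Cori compute node is RAM-backed, so anything written
    # there consumes node memory but is very fast to read back.
    cp $SCRATCH/input.dat /tmp/input.dat   # file is now resident in memory
    srun ./my_app /tmp/input.dat           # my_app is a placeholder name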
Next in performance to that is the burst buffer, which you've heard about quite a few times today, which is an SSD-based file system; and then scratch. I'll dwell on these in a moment.
The one that you would encounter the most, if you're running jobs on Cori, is Cori scratch. So this is the Lustre-based file system, which is one of the most mature HPC file systems. One of the most important things that's already been mentioned about this is that files that are not used for 12 weeks are automatically deleted.
If you're using them intensively, they won't be deleted, but if you leave things around, they will be cleaned up. And so this is the schematic of what the scratch file system basically is: an I/O network which connects lots of storage servers. So you can use various tools, which I'll mention, to write files striped across multiple of these servers, and really that's where a lot of the bandwidth performance comes from: spreading things across servers. So just one thing to tell you about, in order to do that, is how to control the striping.
So actually we see that a lot of files are quite small, and so by default we just have things stored on one OST, which makes sense for such small files. However, if you're using larger files, and you're also using this kind of MPI shared-file I/O, for example, then you want to stripe across multiple servers, and one way to do that is these helper scripts that we provide, which just set sort of optimal options based on file size. So here's a little table of the possible sizes and the script suited to each.
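As a sketch of what that looks like in practice (the directory names are placeholders; stripe_small and stripe_large are the NERSC-provided helper scripts mentioned above, while lfs setstripe is the underlying standard Lustre command):

    # Striping is set per directory; new files inherit the setting.
    # The helper scripts pick a stripe count suited to the expected file size:
    stripe_small $SCRATCH/tiny_outputs    # layout tuned for smaller files
    stripe_large $SCRATCH/checkpoints     # spreads big shared files over many OSTs

    # Or control it directly with the standard Lustre tool,
    # e.g. stripe across 8 OSTs with a 1 MiB stripe size:
    lfs setstripe -c 8 -S 1m $SCRATCH/my_run_dir
    lfs getstripe $SCRATCH/my_run_dir     # inspect the current layout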
Okay, so then I'll talk a lot more about the burst buffer. This has been mentioned quite a few times already. It's provided by Cray and is a way of accelerating I/O: it stores things on an SSD-based file system, but it also creates these file systems on the fly for jobs. So this means it isn't one huge shared file system, and so it doesn't have all the metadata contention that comes with one. So this can lead to more consistent performance and also better performance, both for high-I/O-bandwidth applications and for IOPS- or metadata-limited applications.
It's clever in its back end, but once it's presented to you, it's not difficult to use, because it's a file system that you use just as you would scratch or anything like that. So this slide shows the architecture: it sits alongside the Lustre scratch file system. The burst buffer nodes have directly mounted SSDs on them, but these can be seen by potentially all the compute nodes. So here's an example of how to use it. As I briefly mentioned, you use directives to control this in your batch script.
You have these #SBATCH commands that you've seen earlier, and then to control the burst buffer you add #DW directives. The first of these is a jobdw, which means that this allocation will only be for this particular job, not persistent across jobs; there is an option for that, which I'll come to next. The access mode here is striped, so all the compute nodes will see one shared space, whereas there is also a private mode, where a compute node can only see its own space, which is a bit of a local-disk analogue.
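A minimal sketch of such a batch script (the node count, time limit, and application name are placeholders, not from the talk):

    #!/bin/bash
    #SBATCH -N 2
    #SBATCH -t 00:30:00
    #DW jobdw capacity=100GB access_mode=striped type=scratch

    # DataWarp sets $DW_JOB_STRIPED to the burst buffer mount point
    srun ./my_app --outdir $DW_JOB_STRIPED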
Allocations are made in units of granularity, which we have set on the system at 20 gigabytes, so if you request 100 gigabytes, for example, you would be striped across five nodes. So it's actually useful, if you want good performance and good data-streaming performance, to request at least 100 gigabytes, so that you're striped across, you know, a few burst buffer nodes. Then the system also provides these commands to easily stage in data from scratch. Notice again the environment variable here: you provide the full path, and then you can stage in files or directories.
Your job is then not paying for this data transfer time, and it's also actually quite fast, as the stage-in and stage-out are performed in parallel. But if you prefer, you can, as I mentioned, just copy things in and out inside your job. Then this DW_JOB_STRIPED variable is an easy way to find the mount point: it's actually mounted on some obscure path, but this variable points to it, and here you can see it being passed to the executable.
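A sketch of those staging directives (the scratch paths and username are placeholders for your own):

    #DW jobdw capacity=200GB access_mode=striped type=scratch
    #DW stage_in source=/global/cscratch1/sd/elvis/input destination=$DW_JOB_STRIPED/input type=directory
    #DW stage_out source=$DW_JOB_STRIPED/output destination=/global/cscratch1/sd/elvis/output type=directory

    # The data is already in the burst buffer when the job starts,
    # and is copied back to scratch after it ends:
    srun ./my_app $DW_JOB_STRIPED/input $DW_JOB_STRIPED/output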
This assumes that your executable actually takes this kind of argument; it isn't some magic executable that adds it for you, but it just shows that you can then use this path instead of the one you would previously have used on Lustre. As well as in batch jobs, you can also use it interactively. So this is an example of using the interactive queue here, and you can add this --bbf flag pointing at a file in which you put the directives that you would have had inside your batch script, and then it will actually do that setup for you for the interactive job.
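A minimal sketch of that interactive pattern (the node count, time, and capacity are illustrative):

    # bbf.conf holds the same directives you would put in a batch script
    cat > bbf.conf <<'EOF'
    #DW jobdw capacity=40GB access_mode=striped type=scratch
    EOF

    salloc -N 1 -t 00:30:00 -q interactive --bbf=bbf.conf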
Okay, and then there are the persistent reservations I mentioned, which are not torn down at the end of a job. You need to use a #BB directive to create one, so you submit a short job to create it and, similarly, one to delete it. On the right side, just to make sure that it's been set up, there's this scontrol show burst command that shows output like this; here the name isn't the same as this one, yeah, but it shows the name of whatever you have created.
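A sketch of that lifecycle (the reservation name myBB is a placeholder):

    # Short job whose only purpose is to create the persistent reservation:
    #BB create_persistent name=myBB capacity=100GB access=striped type=scratch

    # Later jobs attach to it; DataWarp exports the mount point as
    # $DW_PERSISTENT_STRIPED_myBB (the variable name embeds the reservation name):
    #DW persistentdw name=myBB
    srun ./my_app $DW_PERSISTENT_STRIPED_myBB

    # Short job that deletes it when you are done:
    #BB destroy_persistent name=myBB

    # Check what reservations exist on the system:
    scontrol show burst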
Okay, so then, that's the burst buffer: a sort of very temporary file system. For the things that you really want to keep around for a much longer time, you want to use the Community File System. This is particularly useful for large data sets that you keep for, potentially, multiple years.
It has group read permissions by default, but it's not really meant for intensive I/O. So if you're really, you know, hammering the file system, you should use scratch instead, and then any data you want to keep can be migrated over to community. There's also an easy way to share data externally: you can create a www tree within your community project directory, put your data there, and it will appear at a sort of web address.
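A hypothetical sketch of that layout (the project name and file are placeholders; the exact URL convention is in the NERSC science-gateways documentation):

    mkdir -p /global/cfs/cdirs/myproject/www
    cp results.html /global/cfs/cdirs/myproject/www/
    chmod -R o+rX /global/cfs/cdirs/myproject/www   # readable by the web server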
Data on here is never, you know, purged by us, or at least not regularly; if your project ends, then after a certain period it will be removed, but you manage that. It is snapshotted, so there are backups for it if something goes wrong, and you can manage your own usage: the whole project's quota is managed in Iris, and it can be split out between multiple directories to give different groups that use these different quotas. I'll show an example of that.
Okay, so then HPSS is for things you want to keep sort of forever, maybe, or for a very long term, or for all the things at much larger capacities than you can really keep in CFS or scratch. But it is tape, so it can be very slow to access; it does, though, have a spinning-disk cache for performance.
You know, you want to split your data up according to how you might want to pull it back, because it's a little bit costly to do so.
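For instance (a sketch with placeholder names; htar and hsi are the standard HPSS client tools):

    # Bundle a whole run into one archive member, which is much friendlier
    # to tape than storing many small files individually:
    htar -cvf run42/results.tar ./results

    # List and retrieve later, pulling back only what you need:
    hsi ls run42
    htar -xvf run42/results.tar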
Then, a different file system on the side is this software file system, global common. Why is it that we have this? We saw a lot of variation in library-load performance on the different file systems that we have, and so we created this as something that's sort of optimized for libraries.
What you see is this dashboard, and you can already see something of your quotas here, but you can see a lot more if you go to this data dashboard page. Yours probably won't be as big as mine; mine probably takes a moment to load while all my files are counted. But then here you can see how much of the space is used, and then you can also see who the big users are. So if you find someone in your project is using it all up, you can easily shout at them.
Okay, so that is that demo, and then I'll quickly show the other one, which is in Iris. You've already been shown how to log into Iris. So in Iris you can, you know, search for your projects up here, and then on the right there's the storage tab, and this shows all the directories we have on CFS in this project.