From YouTube: Introduction: Migrating Cori to Perlmutter
Description
Introduction: Migrating Cori to Perlmutter
Presenter: Helen He, User Engagement Group
Training: Migrating from Cori to Perlmutter, March 10 2023
We will upload slides after each talk, and the videos should be available a few days after the training once we do some processing; they'll be available next week. For this training we're doing Q&A in a Google Doc. It is much preferred over Zoom chat, since questions and answers can interleave, and it is also recorded; here's a link.
We have lots of NERSC staff standing by to answer questions for you. We do prefer that the talks are not interrupted; at the end of each talk we may have some time for Q&A.
If you have questions you want to ask live, you can do so then, or you can continue to ask questions in the Google Doc. We also have a session after all the talks just for Q&A.
We hope you can help us with a survey afterwards; we'll remind you at the break and after the training as well. So, about this training.
First of all, this is a rerun of the December 1st migration training, with some minor updates.
So if you missed that one, this is a good training for you now, and also for new users, because we have lots of new users who joined in the new allocation year, which started January 18th. For this training I'd just like to mention what it covers and what it doesn't cover.
It covers Perlmutter architecture, recommended programming models, some programming tips, and the available programming environment. In particular, we will cover building and running jobs on CPUs and GPUs, with a focus on the differences between Cori and Perlmutter.
Here's a brief agenda. I'm giving this quick introduction talk right now; then Jack Deslippe will cover an introduction to Perlmutter architecture, recommended programming models, an intro to the GPUs, some exciting science stories from our users, and how our staff worked with vendors to prepare the programming environment, especially on GPUs, with tools and optimizations. Then Erik Palmer and [name unclear] are going to talk about migrating from Cori to Perlmutter on the CPU side, and migrating from Cori to Perlmutter on the GPU side; the GPUs are brand new.
For the hands-on exercises, this is the GitHub location. We do have reservations today for three hours, and we added everybody to the ntrain8 training project so that you can use the reserved nodes.
Some timeline for Cori. Cori will be retired, as we have announced, at the end of April 2023. It has been here for over six years, installed in 2015; it could be the longest-lasting system at NERSC. We allocated for AY 2023 all based on Perlmutter capability, so your hours allocated for Perlmutter can be used on Cori: you have CPU allocation hours and GPU allocation hours, and the CPU hours can be used on Cori.
We give users time and help to transition. We started the transition effort with things like office hours in November, and we also published a transition web page. We're offering more office hours in March and April. Cori will be retired at the end of April 2023, as I mentioned; we use T for that time in the next slide.
The purpose of retiring Cori is that we want to save on power usage, and we want to give space for the next system. Also, many of the parts on Cori are old, and the vendors are not producing them anymore, so if anything goes bad we won't be able to replace or repair it. Those are some of the considerations behind the Cori retirement timeline.
We have had office hours focused on the transition starting in November. Let's use T here: everything, the bigger system and the bigger parts, the Haswell nodes and KNL nodes, is to be retired at the end of April. However, two parts actually retire early, at the end of March: the Cori GPU nodes will retire at the end of March, and the Cori large-memory nodes will also retire at the end of March. The large-memory nodes are being planned to be moved to Perlmutter.
We don't have a timeline for that move yet, but they will be moved to Perlmutter. Then at T minus one, we will have a reservation made so that new jobs starting from there will not run past T. So every job will finish by T, and at time T we will delete all jobs still in the queue, and no new jobs can be submitted.
No new jobs will be run, of course, but we'll still allow logins for another week, and during that week you can retrieve your files from Cori scratch. The files on all the other file systems are still available on Perlmutter and other NERSC systems, so you don't have to do anything special for them, but for Cori scratch data you have one week to retrieve it. Then at T plus one week, all the login nodes will be closed permanently, followed by disassembly of the system.
How to access Perlmutter: you can ssh to perlmutter-p1 or saul-p1. Saul is the first name of Saul Perlmutter; our system is named after the scientist at Berkeley Lab who won a Nobel Prize. You still use MFA, which is multi-factor authentication: password plus one-time password, in the same way as you do on Cori. We do recommend you use sshproxy to reduce the frequency of authentication; the default lifetime is 24 hours, so you don't have to type your password and MFA again and again within 24 hours.
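The login flow described above can be sketched as shell commands. The hostnames are the ones from the talk; the sshproxy.sh usage and the default key location follow NERSC's sshproxy documentation, and `your_nersc_username` is a placeholder:

```shell
# Get a 24-hour SSH certificate so you only enter password+OTP once
# (sshproxy.sh is the client script downloaded from NERSC)
./sshproxy.sh -u your_nersc_username

# Then log in to Perlmutter with the certificate key:
ssh -i ~/.ssh/nersc perlmutter-p1.nersc.gov
# or equivalently:
ssh -i ~/.ssh/nersc saul-p1.nersc.gov
```

Within the 24-hour lifetime, repeated logins reuse the certificate and prompt for nothing.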
In Jupyter, the configurable GPU option is there for you to use node reservations such as today's. Jupyter has kernels such as Python, PyTorch, etc., and there's also a terminal kernel that you can choose, so you get a terminal within JupyterHub where you can do file access, editing, compilation, etc. I'd like to also mention the file systems and data considerations. As I mentioned earlier, your data on Cori in your global home or Community File System directories is available on Perlmutter.
So you don't need to do anything special for them. However, there is one special thing: we have a symlink called /global/project, the project directories you've been using on Cori. We had a file system upgrade to the Community File System, so the new file system is CFS, and /global/project is actually a symlink to CFS on Cori. We will not be migrating that symlink to Perlmutter.
So on Perlmutter you should use the direct path to CFS. Be sure to remove the symlink path from old scripts if you're moving over from Cori to Perlmutter, and use the direct CFS directory, which is /global/cfs/cdirs.
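As a minimal sketch, the script update can be a one-line path rewrite; the script contents and the `myproj` project name here are made-up examples:

```shell
# Example: a Cori-era batch script that references the old symlink path
printf '%s\n' '#!/bin/bash' 'cd /global/project/projectdirs/myproj/run1' > submit.sh

# Rewrite the old /global/project symlink path to the direct CFS path
sed -i 's|/global/project/projectdirs|/global/cfs/cdirs|g' submit.sh

cat submit.sh
```

After the rewrite, the script points at /global/cfs/cdirs/myproj/run1 directly.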
Also, Cori scratch is not accessible on Perlmutter; Perlmutter has its own scratch file system, and Cori scratch data will be retired with Cori, so be sure to migrate your scratch data. There are ways to do it; we have a link here on how you move your data to CFS or to HPSS.
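For the HPSS route, a minimal sketch using the `htar` and `hsi` clients available on NERSC systems; the directory, archive, and file names are made-up examples:

```shell
# Bundle a scratch directory into a single tar archive stored in HPSS
htar -cvf run1.tar $SCRATCH/run1

# Or copy an individual file into HPSS with hsi
hsi put $SCRATCH/results.dat : results.dat

# List what landed in your HPSS home
hsi ls -l
```

Bundling many small files with htar is generally preferred over storing them individually.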
Or if you want to move the data to Perlmutter, there are actually two steps: you use a Globus endpoint to go from Cori scratch to CFS on the data transfer nodes, and then from the data transfer nodes you use another Globus step to migrate onto Perlmutter. The details are in this link.
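With the Globus CLI, the two-step move could look roughly like this; the endpoint variables and paths are placeholders, and NERSC's documented Globus collections are the authoritative source:

```shell
# Step 1: Cori scratch -> CFS, via the NERSC data transfer node collection
globus transfer --recursive \
  "$CORI_ENDPOINT:/global/cscratch1/sd/me/run1" \
  "$DTN_ENDPOINT:/global/cfs/cdirs/myproj/run1"

# Step 2: CFS -> Perlmutter scratch
globus transfer --recursive \
  "$DTN_ENDPOINT:/global/cfs/cdirs/myproj/run1" \
  "$PERLMUTTER_ENDPOINT:/pscratch/sd/m/me/run1"
```

Each `globus transfer` is asynchronous; the CLI prints a task ID you can monitor.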
Now I'm going to touch upon some of the similarities and differences between Cori and Perlmutter.
Both of them use the familiar Cray user environment with compiler wrappers, lowercase cc, capital CC, and ftn, for compiling C, C++, and Fortran codes. You also see PrgEnv modules; you would use module swap or module load of another PrgEnv to get onto another environment, such as the GNU environment, the Cray environment, etc.
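A sketch of the commands just described; the PrgEnv name follows the usual Cray convention, and the source files are made-up examples:

```shell
# Switch to the GNU programming environment
module load PrgEnv-gnu

# The Cray wrappers invoke the right underlying compiler with the
# right MPI and library flags for the loaded environment:
ftn -o hello hello.f90    # Fortran
cc  -o hello hello.c      # C
CC  -o hello hello.cpp    # C++
```

The same wrapper names work in every PrgEnv, which is what makes switching compilers cheap.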
You will see similar CPU nodes, a kind of standard CPU architecture, no major surprises there, except it's an AMD CPU instead of an Intel CPU. Clock-speed-wise, the Perlmutter CPU nodes have a similar clock speed to Haswell, and they also have a similar number of cores per node to KNL on Cori.
The biggest difference is that modules may not be initially visible: when you do module avail, you don't see all the available modules. You need to use module spider to find hidden modules that have some hierarchy dependencies. You'll hear more about this in Erik's talk today.
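For example, following the Lmod hierarchy behavior described above; the cray-hdf5 module name is just an illustration:

```shell
# Shows only modules compatible with the currently loaded
# compiler/MPI stack, so hierarchical modules can be hidden
module avail

# Searches the full hierarchy, including hidden modules, and
# reports which modules must be loaded first to expose this one
module spider cray-hdf5
```
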
Obviously, the GPU nodes are brand new. Many of the users need to work on migrating to and using specific programming models that can exploit the GPU nodes.
Some users have all the considerations of portability, performance, and GPU compatibility; you may have different versions of GPU code, and you may have CPU-only versions, so exploiting the GPU nodes is a bigger topic. As for compiler versions on Perlmutter: there is no Intel compiler. NERSC is exploring it, so it may be available sometime in the future, but not now. Cori has the Intel compiler as the default, so there are some usages, you know, of libraries you have been using, to consider.
We do have some recommendations in Erik's talk as well. We do have the new NVIDIA compiler, which has the best CPU and GPU support on Perlmutter. As I mentioned, this training is not covering the data analytics parts of the usage, but here are some of the great links in our documentation on Jupyter, Python, Julia, Shifter, workflow tools, machine learning, etc. I also want to point out lots of trainings here.
This set of trainings covers using CPUs and GPUs for traditional simulation, GPU programming models, and lots of data analytics topics as well. The items with a plus sign have the data-analytics-related topics that we want to explore more. All the trainings here have slides and recordings available. The big pink link has the complete list; you can explore more trainings on that webpage as well, not only the ones listed here.
We had new user training, Using Perlmutter training, GPUs for Science Day, AI for Science, OpenMP offload, CUDA training, SYCL training, and using the NVIDIA compilers; all sorts of trainings. This page lists more information and further training opportunities. The Migrating from Cori to Perlmutter documentation covers lots of the topics that we cover in all the presentations today, and more, and we are having more office hours.
We have had 10 office hours since November and met with 150-plus users, with two more scheduled in March. Feel free to come and bring your own codes, or, as some users do, just stop by and listen to other people's questions and answers. We are having another training in early April: the N-Ways to GPU Programming bootcamp, especially geared towards new GPU users. It'll introduce the various programming models, OpenMP offload, OpenACC, CUDA, and standard language parallelism, with hands-on exercises for each programming model, plus a mini challenge.