From YouTube: NUG Monthly Webinar, January 21, 2021
A
Good morning, everybody. Welcome to the January NERSC User Group meeting. We'll get started in just a few minutes.
B
Yes, everybody is very interested to know about that. Before we start on Perlmutter, we'll run through our — what do you call it — regular agenda.
A
Okay, sounds good. I just wanted to have something up there.
B
To start: the meeting is being recorded. We'll post the video of it via YouTube on the meeting web page soon after the meeting. New and special for today — thanks, Rebecca, for working out how to do this — is that we're actually live streaming on YouTube as well, so as well as Zoom, we have that.
B
So, a quick overview of the plan. Those who have been coming to these meetings are a little bit familiar with it, but I think we've got a few people at least who perhaps haven't been as yet. The format of these meetings is intended to be quite interactive. We have a very large group at the moment, so there might be a few...
B
You
know
jumps
and
starts
while,
while
speaking
feel
free
to
speak
up,
though,
if
it
does
get
unmanageably
complex,
we'll,
perhaps
use
the
chat
to
you
know
sort
of
order.
Parts
of
the
conversation
also,
if
you
are
not
already
on
nurse
user
slack.
B
I think we include it — here we go, I have a link to it here; I'll post it in the chat. It's really good to join; it's a great forum for having discussions.
B
...during this meeting and also after the meeting. It has the nice little advantage that it keeps the conversation, so the conversation doesn't just disappear when the Zoom meeting closes.
B
And if you are already — or when you are — on the NERSC Users Slack, we have a channel called #webinars, which is the one we'll have the discussion in.
B
Our regular agenda for these meetings is: we start out with a Win of the Month, followed by a Today I Learned section — we'll explain those as they come up — then a period for announcements or calls for participation. That is not only announcements from NERSC; it's also an opportunity for you, our users, to announce things that NERSC users might be interested in.
B
So, conferences that you're involved in, for instance. Then our topic of the day, which today will be Perlmutter — we'll have about 15 minutes or so to talk about Perlmutter and see what's coming up there — and then we'll finish up with a quick look at what's coming up and at last month's numbers. So our first section is Win of the Month. The idea of this is an opportunity to show off an achievement, or to highlight somebody else's achievement that you know about. This doesn't have to be...
B
...a Nobel Prize. We're kind of about celebrating the small wins, and learning from them as well — things that we've achieved that will be interesting to other NERSC users. Things like having a paper accepted, or it could be that you've been working away at a challenging bug and worked it out and solved it.
B
We're also interested in hearing about innovative uses of high performance computing — that's another HPC achievement award from NERSC. So I'll open the floor. Please just speak up: what did you do that was interesting, and what was the key?
B
Yeah, it can be nerve-wracking to be the first to speak. If you're not sure about speaking, or you're in a position where it's not easy to speak, feel free to write something in the chat or on the #webinars channel of the Slack instead.
B
Yeah, and that one is one that a lot of people made contributions to, to make sure that it went smoothly.
B
It may have been, I guess, a quiet month for a lot of people as well, because of the winter break.
D
Yes, this is Lippy. I'm currently a graduate student at the University of Chicago — I'm just about to finish up and graduate — but this month I accepted a NESAP postdoc at NERSC. So I will be joining NERSC as a postdoc once I graduate, and I'm very, very, very excited.
B
Yeah, NESAP does — as I'm sure a lot of people here who are involved already know — a lot of really interesting projects, bringing applications up to being ready for next-generation systems, and we're almost always looking for new postdocs to join the effort. So spread the word around candidates that you know.
B
So, if people are nervous to talk about achievements — oh, hello, we have one coming in here just before moving on. The LSST DESC project now has a new data portal that uses Spin, based on a Globus modern research data portal. If you look in the chat, there are a couple of links posted to that. I gather it sounds like it was a reasonably big project to implement, and you've had some success doing it.
B
So the other side of the Win of the Month coin is the Today I Learned. This is almost — though not quite — the Lose of the Month. Part of the aim here is to recognize that when things are difficult and don't work, that's actually not necessarily a bad thing, because we can learn a lot from it, and taking several shots to actually succeed at something is normal and expected. It's kind of what research is about.
B
There's always something to learn, and I think as a group we can actually benefit a lot from learning not only from the difficulties that we hit ourselves, but also from the difficulties that each other had — and possibly, even better, from how we got around them. It can also be an opportunity: if you've hit a difficulty, somebody else might have hit it in the past and be able to give a tip as to where to go next.
B
This also helps us at NERSC to identify things that maybe we can improve our documentation on, or find ways to make a little bit easier. And of course, it doesn't have to be things that you got stuck on, either. It can be something interesting that you learned that might benefit NERSC users. For instance, you might have attended a conference and seen a presentation about E4S, the Extreme-scale Scientific Software Stack, that has some great tips in it.
B
That was one of the announcements: there's a new version of that out at NERSC now — it was in the weekly email this week. Would anybody like to volunteer something that they learned, for better or worse?
B
Lots of contemplation here.
E
Hi, Patrick here. I'm just starting to wade into the waters of NERSC, and I had looked into this before, but only in the last week found these new-user tutorials — these videos that were recorded in June of last year.
B
Yeah, it's good to hear that that's pretty helpful to people. Recording the tutorials is a relatively recent addition to NERSC's training, and we're also very interested in feedback and ideas for other things that you'd like to see training on. But yeah, that's a good tip.
B
I see Koichi has posted a link in the chat to the MyNERSC queue wait times as a helpful tip for seeing queue status. I expect that was especially valuable to people in December, when the competition for time on the system can get quite high — everybody's trying to get things finished before the allocation year ends.
F
I'm looking at the current queue under Jobs for Cori on MyNERSC — you can see an estimated start time. How reliable is that? Because I was very surprised at a job — nearly a full-machine job — that I hoped would run within last year's allocation. At some point it showed an estimated start time of January 20, 2022.
A
So, Peter, I think I probably know what happened there. Those start times are a function of what Slurm is telling it, and my guess is that we had made a reservation for the allocation-year rollover, and the reservation was made just arbitrarily long — so probably a year long.
A
No, I would not put a whole lot of faith in it — I would take it with a grain of salt, because so many things can happen between now and when your job would start. But I would say that if your job is starting to show up as actually having a start time, that's a really good sign that your job is one of the top jobs and should start soon, for certain values of "soon".
A
Yeah. So basically what Slurm does is create a schedule for the next 96 hours, and it does that by going down the list of all of the eligible jobs based on their priority level — and it redoes that every five minutes or so. So sometimes the schedule might be different depending on how it can fit jobs in, basically.
G
Yeah, I have a question. I'm a relatively new user of Cori, but I'm a PhD student at Rutgers University working on systems. So I have a question for all the people who use supercomputers in general — it is a high-level question, but let's see if somebody has a high-level answer to it.
G
So the question is: let's say you have a workload, and you're trying to allocate nodes to it using Slurm or whatever resource manager you use. When you use Slurm, you need to say how many cores you want to allocate, how many nodes you want to allocate, and how much memory you want to allocate. So what heuristic does the PI or the researcher use to allocate memory when running their workloads?
B
So at NERSC, for most of the queues you actually request in blocks of a node, and so you get all of the memory on the node — which for Cori is 128 gigabytes for the Haswell nodes and 96 gigabytes for the KNL nodes, minus a little bit that the operating system uses. Where you do have to allocate memory is when you're using the shared queue, but the default there is to split the amount of memory according to the number of CPUs.
B
So, if I recall correctly, the shared nodes are Haswell nodes, so they have 128 gigabytes and 32 cores.
B
So if you need 16 cores, you're kind of aiming at about 60 gigabytes. But you might need a different amount, and that really depends on the application that you're using. The only reliable way to find out is to do some tests — maybe in an interactive session — to get a bit of a sense of how much memory the application actually needs.
G
Right. A lot of times, when you're doing a lot of I/O, you'll probably use a lot of memory that way as well — and this is in addition to what the application allocates for itself — so it might not be very useful to run a smaller test in interactive mode and see how much memory is used. But yeah, that is probably a good heuristic.
J
I can share my difficulty in the last month — it's actually probably shared by many users — which was just about getting through the queue at the allocation-year rollover. By the way, this is Koichi.
J
Yeah, so in my case I've been using Cori KNL, which is usually less crowded than Haswell — but interestingly, that's not been the case in the last few weeks; actually KNL has been busier than Haswell. So that's one thing I noted, and for me it caused two difficulties. One is the premium queue competition that Rebecca and other people mentioned last time — hopefully that will change in the coming allocation year. The other is that the repo...
J
...I've been using also ran out twice. It ran out at the end of December once, but we got recharged, and then it ran out again and we got recharged again. So I was wondering, in general: if that happens, should we scancel and resubmit the same job, or should it automatically run from the queue once the status of the repo changes from a negative to a positive balance? I got some advice offline from the Slack channel that it might be better to cancel and resubmit, but in the end my job didn't get killed.
J
Probably the most popular job category on Knights Landing is 200 to 500 nodes, according to the queue wait times. So one thing I might do is avoid that kind of node count. But do you have any other recommendations? And also, does a negative or positive balance in the repo really affect queue waiting? I'd like to hear about those.
B
So your repo balance generally doesn't affect the amount of time that you wait in the queue, with one exception: if your repo has run out of balance, then you have the option of running in the overrun queue — but when you submit to the overrun queue, the job has a much lower priority, so it starts much further back in the queue. Other than that, everything is equal.
B
Okay — or apart, I guess, from premium jobs, where you can explicitly spend more of your repo on premium. This year we've implemented some new limitations on using premium, because it is intended to be a way to handle your emergency cases — meeting a deadline — rather than just a way to get ahead in the queue in the normal sense of things.
B
Probably our biggest tip for getting through the queue faster is to make the job short, if possible. It's easier for Slurm to schedule even a lot of nodes if the job doesn't run too long, whereas for a job that runs for 24 or 48 hours, Slurm needs to find a gap that's big enough.
J
Okay, cool — thanks for the advice. Have you tried, near the end of the year, increasing the cost of the priority premium queue even more — like four or even five times more expensive than the regular queue?
A
Past
years,
we've
simply
disabled
premium.
Oh
okay,
yeah!
What
we
hope
this
year
is
that
by
making
it
making
premium,
not
something
that
just
anyone
can
get
that
the
pi
has
to
approve
it,
and
also
that
it
that
your
cost
will
go
up
after
a
certain
point,
we're
hoping
that
that
will
help
this
year,
but
we'll
we'll
find
out
how
it
works.
B
So I see there are a few more bits of advice and some questions in the chat — take a look at the Zoom chat. We'll also capture these near the end of the meeting, to save them and take notes. It sounds like a good upcoming topic might be a deep dive into Slurm and how to make the most of it.
B
So we'll take note of that for a future meeting. It's perhaps time to move on to our next topic. Thanks to everybody who shared something that they've learned and asked questions, and hopefully everybody got some learning experience or inspiration from that. On to announcements and calls for participation.
B
So we do have one that I know of in a moment. You should all have received the weekly email — every week there are announcements in it, so be sure to check it. I think the big one for this week is that the new allocation year starts today. So, particularly if you're a PI, you want to make sure that your project is configured as you like.
B
You've
included
the
the
project
members
to
continue
that
you
need
and
that
you've
selected
who
should
use
premium
and
that
you're
generally
be
prepared
for
the
new
year.
Johannes,
do
you
want
to
speak
about
the
new
appointment
type
that
we
now
have
sure.
K
Yeah, thanks. So this is really targeting anyone who might be new to NERSC, or maybe knows someone who's going to be new to NERSC.
K
For example, if you have postdocs or PhD students joining your group regularly, it's good to remind everyone that this new NERSC 101 appointment type is really there to help people get started and ask questions — maybe just because they're new to NERSC, but also maybe...
K
You
have
questions
that
you
were
just
afraid,
too
afraid
to
ask,
and
so,
if
you
go
to
nurse
dot
a
s
dot
m
e,
then
you
you'll
see
the
screen
on
the
the
right
there
and
you
can
see
now
we
have
a
bunch
of
appointment
types
to
choose
from
yeah.
Thank
you
very
much.
B
It
looks
good,
can
you
post
that
link
in
the
chat?
I
think
it
will
be
useful
for
people
sure
you
mean
so
it's
clickable
yeah.
B
Oh, I think we already spoke a little about the AY transition. Does anybody else have any announcements that they would like other NERSC users and NERSC to hear about?
A
Okay, I'll read the question. So: will we have allocation reductions, I guess, if users have not used their allocations? Richard, I see, is here — Richard, can you tell us what the schedule is? I believe we're going to do it only twice instead of three times this year.
L
Hi, yes, Rebecca. So we're going to do it twice this year, and I've forgotten the dates right now — I believe it was sometime in May and sometime in September or October. The idea is to return time that's not being used to the program managers, so they can service requests from people that need more time in their office or their program. And the exact percentages...
L
...I don't recall, but for instance, if it's October and you've only used ten percent of your time — or some number that we will publish — then a certain amount of your time will be taken away and given back to the program manager, and if that project needs it back, they can go to their program manager to discuss it.
L
We'll send out email announcements, and people can ask for exceptions. If you have a need to be able to use all your time late in the year — for example, maybe it's associated with some experiment or submission or something like that — then you can let us know, and we will exempt you from these automatic rollbacks.
B
Thanks, Richard. So the context here, in case people weren't aware of it, is that predicting the future is really difficult. We understand that a project may have requested more time than it turns out to need in a given year, and another project may have requested less time than it turns out to need, and these allocation reductions — or redistributions — during the year give an opportunity to account for those, I guess, unexpected circumstances.
A
Point
don't
think
it's
documented
yet,
but
when
it
is
I'll,
definitely
announce
it
in
a
weekly
email.
Thank
you.
M
I have a quick question related to that. The PI and co-PIs used to have the capability within NIM to transfer time to another project, and we used to do that all the time at the end of the year, when colleagues would need time that we had in surplus, and vice versa.
M
It
would
be
nice
if
that
could
be
restored
or
added
to
iris.
So
this
sounds
like
stefan
yesterday.
L
I can tell you the thinking behind this, at least from my viewpoint: when you write an ERCAP proposal to the program managers, you lay out a research plan connected to, in most cases, a research grant that is funding...
L
Certain
research
and
the
program
managers
may
make
a
decision
based
on
those
proposals
and
then,
if
individual
people
can
transfer
this
time
around
amongst
themselves
or
amongst
their
different
projects,
which
I
think
they
still
can
do,
that
it
is
not
with
the
approval
of
the
program
managers
that
gave
out
the
time.
So
I
I
guess
I'm
drawing
some
sort
of
analogy
of
the
time
has
value
and
analogy
with
the
actual
research
funding
itself,
for
instance.
L
So
you
would
not
be
able
to
say
transfer
money
from
your
grant
to
some
other
researcher
that
needed
it
and
that
that's
the
thinking
behind
that
we
can.
B
Thanks. So I think we've announced this one a couple of times — hopefully people are now aware that there are some changes to the premium QOS charging.
B
Okay, we have a little about the transition project progress, and I think some of these slides may have been inadvertently left in from last month, so we'll skip through them. Now we're up to, I think, the big thing that people are very interested in, which is what's happening with Perlmutter, and we have a little bit of an overview of that. Rebecca, would you like to take over the screen and walk us through what users can expect?
A
Okay — let me know if you can't see that. All right, wonderful. Okay, so we're going to talk about Perlmutter. The number one question I know everyone has on their mind is: when will I be able to get on Perlmutter?
A
So
we'll
talk
about
that
and
what
kind
of
stages
we
have
to
go
through?
We're
also
going
to
kind
of
review
what
promoter
is
going
to
be
like
the
architecture
and
the
environment
and
then
have
some
discussion
about
how
you
can
prepare
for
perlmutter?
A
It has a high-performance, scalable, low-latency, Ethernet-compatible interconnect. It'll have a 35-petabyte all-flash Lustre scratch file system, and it's really our first machine that's designed to meet the needs of both large-scale simulations and data analysis from experimental facilities.
A
It
will
have
about
three
or
four
times
the
compute
capabilities
of
corey,
so
anyway,
we're
really
excited
about
it.
What
does
all
of
this
mean
for
you,
though?
There's
gonna
be
really
a
lot
of
new
and
unique
features
that
I
think
you're
gonna
like,
but
because
everything
is
so
new
and
under
the
current
circumstances
that
we're
all
facing
the
timelines
here
are
kind
of
an
estimate,
because
things
could
change.
A
So
when
will
you
be
able
to
use
this
fantastic
machine?
So
in
2021
the
machine
is
going
to
be
considered
a
pre-production
system,
so
we
will
not
charge
for
compute
time
in
2021..
A
Some other equipment has already arrived, but the GPU cabinets themselves will arrive next month, and our NESAP teams will get access in approximately the second quarter of 2021. Then users will be added in stages as the system matures. We don't want everybody on all at once and running into lots of problems — we want the system to be mature before we get everyone on — but we'll provide more details as the time nears.
A
As
we
know
more
then
in
2022
pearlmutter
will
become
a
production
system
and
we'll
start
charging
again
for
promoter,
and
one
new
thing
is
that
those
gpu
and
cpu
charges
will
be
allocated
separately.
So
usually
you
know
you
just
have
some
nurse
bucks
and
you
can
spend
them.
However,
you
want,
but
but
in
this
case
we're
going
to
have
separate
pools
for
cpu
and
gpu
usage,
okay,
so
here's
kind
of
our
timeline
about
what's
going
on
here.
A
So
this
orange
line
represents
like
we
are
here
so
there's
still
quite
a
few
things
that
we
need
to
do
so.
A
We've
got
to
install
the
machine,
the
phase
one
then
we'll
accept
it
and
we'll
let
our
new
sap
folks
on
and
then
then
you
know
we're
gonna
gradually
add
people
we're
gonna,
update
our
cpus
to
the
more
advanced,
better
cpus
and
then
we're
gonna
have
a
delivery
of
phase
two
and
we're
gonna
have
to
take
it
down
for
a
while
and
integrate
phase
one
and
phase
two
and
then
hopefully
in
the
final
quarter,
we'll
have
the
machine
all
together,
it'll
be
running,
everyone
will
be
on
running
great
and
nobody
will
be
charged
and
then
2022
its
production
resource
and
charging
will
begin.
A
Okay, so some differences between Perlmutter and Cori. The Slingshot network is a pretty cool thing: it allows Perlmutter to be less complex than Cori, because it just has one network for everything. You can kind of see the difference here — this is everything within one network, whereas on Cori there were a couple of different networks. So scratch is on the same network, unlike in the case of Cori.
A
So that's one thing that's the same, though it's obviously different in the way that it's implemented. And then finally, Perlmutter is going to use containerization for better orchestration of services. So when you log in, you're going to log into a containerized service that is provisioned using Kubernetes.
A
So
I
guess
I
gave
that
away
here.
So
it's
a
the
non-compute
services
are
deployed
as
containers
using
kubernetes
for
orchestration.
A
So
here's
kind
of
what's
going
to
happen,
so
we
know
that
the
the
slingshot
network
is
super
cool.
It's
better
than
aries.
It's
faster
has
better
traffic
control
and
it's
ethernet
compatible
in
our
phase.
One
we're
going
to
get
a
non-compute
nodes
which
we've
already
received
they're
already
there
at
nurse
first
20,
login,
nodes
and
and
all
of
the
service
nodes
receive
the
storage
system,
which
is
also
already
there
and
then
finally,
we're
just
waiting
on
these.
Approximately
1500
gpu
compute
nodes
and
the
nodes
have
four
nvidia
a100
gpus
in
each
node.
A
So
then
phase
two
we're
going
to
get
a
four
large
memory
notes:
20,
more
login
nodes
and
about
3
000
cpu
compute
nodes
that
have
the
amd
64
core
cpu.
A
Now,
what's
what's
promoter
going
to
be
like
when
you
get
on
it?
Well,
it's
going
to
be
very
similar
to
corey.
I
mean,
despite
the
fact
that
we've
got
this.
You
know.
Kubernetes
orchestration
container
sounds
really
fancy
and
scary.
It's
just
going
to
look
the
same
to
a
user,
so
you
access
it
using
ssh
you're
going
to
have
the
same
home
and
cfs
file
systems
are
mounted
to
it
and
you're
gonna
we're
gonna
follow
the
same
sort
of
scratch
usage
model.
A
Where
you
have
quotas
where
it
gets
purged,
you
know
it's,
it's
gonna
be
pretty
much.
The
same,
jupiter
will
be
available
for
pearl
letter
and
it's
going
to
be
pretty
familiar
compilers
and
programming
environments.
So
it's
going
to
have
modules
just
like
just
like
you've
seen
before
there
may
be
a
few
minor
differences,
but
but
the
basic
idea
is
there.
A
So
the
programming
environments
themselves
are
going
to
be
a
little
different.
You
may
notice
there's
no
programming
environment
intel
because
you
know
we're
going
to
have
amd
processors,
but
there's
going
to
be
four
different
programming
environments
available
and
I
think
the
best
ones
for
most
users,
especially
users
of
gpus,
is
going
to
be
the
pgi
and
the
gnu.
A
So
how
do
you
prepare
for
pearl
butter?
So
first
thing
is
read
about
it,
so
we've
got
some
documentation
up
about
perlmutter
readiness,
it's
under
the
performance
area
in
the
nurse
documentation,
and
this
link,
of
course,
is
direct
to
it
and
then
get
some
help.
So
we've
got
our
nurse
101
office
hours,
appointments
that
you're
more
than
welcome
to
get
get
an
appointment
with
us
and
talk
to
us
about
promoter
about
preparing,
and
then
we
also
have
some
gpu
nodes
on
cory.
A
We don't have very many, so you have to share with others, but you can request access to these GPU development nodes if you have a genuine need for developing your code on NERSC GPUs. So those are the three ways that you can get prepared.
A
Okay, so I saw one question about NERSC 101 appointments: can we make an appointment for a group of people? Absolutely — you'll just need to coordinate with that group to figure out a good time for your appointment.
C
So
somebody
from
our
application
performance
group
who's
on
might
have
a
ready
answer
for
that
question.
It's
it's
something
that.
B
You
know
we're
certainly
tracking
by
a
new
set,
but
I
have
to
admit
I'm
not
completely
up
to
date
with
what
what
the
numbers
we're
seeing
are.
A
Yeah,
okay,
that
sounds
right
to
me.
Okay,
another
question
is:
will
no
machine
work
the
same?
Yes?
Yes,
there
shouldn't
be
any
difference
there,
how
much
memory
per
node
for
the
cpu
nodes.
A
All right, yeah. Okay: will Shifter be used for containers, or something else? For containers running on the machine, yes, it'll be Shifter. For the container for your login node — I don't know, actually; I don't have any idea.
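For reference, running a containerized application with Shifter today looks roughly like this — the image name, resources, and script are placeholders, and Perlmutter specifics may differ:

```shell
#!/bin/bash
# Illustrative only: the image and command are made-up placeholders.
#SBATCH --image=docker:myrepo/myapp:latest   # image pulled via Shifter
#SBATCH --nodes=1
#SBATCH --time=00:30:00

# Each rank launched through shifter starts inside the filesystem of the
# named container image.
srun shifter ./run_analysis.sh
```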
A
Yeah, I don't think we know. What does the per-node network bandwidth look like? I think we also don't know that yet.
L
All
right,
so
it
is
richard.
Let
me
answer
doug's
question:
if
you,
if
we'd
like
to
hear
about
your
specific
use
case
for
the
outbound
connectivity,
because
a
lot
of
things
are
possible
and
the
final
configuration
may
not
be
quite
set,
so
we
are
interested
in
learning
about
what
your
needs
are.
So
if
you
could,
if
you
could
send
this
an
email
to
say
consultant
nurse.gov
describe
me
on
your
use
case.
That
would
be
great.
A
Are there some nodes with both GPUs and CPUs? Yes — the GPU nodes actually have CPUs in them as well; it's just more interesting to talk about the GPUs, I guess. But it's going to be the same CPUs throughout the whole machine.
B
The
main
difference
is
the
the
number
of
them,
so
the
cpu
only
nodes
have
no
gpus,
obviously,
but
twice
as
many
cpu
cores.
While
the
gpu
nodes
have
four
gpus
each.
A
Okay,
will
corey
eventually
be
phased
out?
Yes,
yes,
we
will
run
quarry,
definitely
through
2022,
but
I
don't
I
don't
know
beyond
that.
We
may
have
to
get.
We
may
have
to
take
corey
out
to
make
make
room
for
the
next
machine.
We
don't
know.
A
Yes,
I
I
believe
you
should
be
able
to
do
that,
because
it
will
just
be
one
slurm
instance
covering
the
whole
machine,
so
it
would
be
like
today.
In
theory.
I
don't
know
that
anybody
ever
does
this,
but
in
theory
you
could
schedule
a
job
that
would
go
on
both
the
haswell
notes
and
the
k.
L
nodes
of
corey.
A
Okay, so yes, it is possible. It would have to be done through a reservation, and right now, the way the QOSes are set up, none of them bridge both the KNL and the Haswell nodes — but we could in theory create one, and then you would be able to schedule a job that would go across both types of nodes.
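For completeness, Slurm's heterogeneous-job syntax is how such a cross-node-type request would be expressed, assuming a QOS or reservation existed that bridged both pools — everything below is a hypothetical sketch, since per the answer above no such QOS exists today:

```shell
# One job with two components: 2 Haswell nodes plus 4 KNL nodes.
# The ":" separates the heterogeneous components.
salloc --constraint=haswell --nodes=2 : --constraint=knl --nodes=4

# Inside the allocation, each srun can target a component:
#   srun --het-group=0 ./cpu_part   # runs on the Haswell component
#   srun --het-group=1 ./knl_part   # runs on the KNL component
# (cpu_part and knl_part are placeholder executables)
```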
A
Okay: is Cori going to be gone soon after Perlmutter becomes available? No, no — not soon after, but it will be, a few years down the road. Will Perlmutter scratch be attached to the Globus DTN nodes?
A
All right: will Arm MAP be provided? It is a very nice performance tool — yes, I think we plan on doing that. Will NERSC support Singularity? I do not know of any plans for supporting Singularity.
A
Well,
the
application
runs
on
compute
nodes,
involve
containers
or
just
bare
metal,
I
think
by
default
they
will
just
basically
be
bare
metal,
but
of
course,
you
could
always
run
in
a
container
wow.
Okay,
we're
getting
a
lot
of
really
great
questions
here.
Okay
and
I
still
killed
it.
I
found
has
his
hand
up
too.
So
maybe
we'll
take
a
break
from
the
comment
question
and
go
to
stefan.
M
Hi
you
mentioned
that
the
korea
environment
wouldn't
support
open
acc.
Is
that
right
or
you're,
just
discouraging.
H
The Cray compiler, from CCE version 9, uses an LLVM backend, so it's more similar to whatever LLVM can support. For the OpenMP part, it does support OpenMP 5 and the offload features as well — especially starting with CCE 11, which has more significant improvements over CCE 10.
H
The Fortran part is still using the Cray classic compiler for Fortran support.
A
Okay,
stefan
that's
a
good
question:
okay,
okay,
what
npi
libraries
will
be
available.
A
Yeah,
I
don't
know
that
I
haven't.
I
have
any
information
on
that
beyond
just
it's
going
to
be
the
gray
and
pitch
based
mpi.
L
In
learning
why
you
need
a
specific
mpi,
so
if
you
need
open
mpi,
you
have
a
specific
need
for
a
specific
version
again
we'd
like
to
know
there
are.
There
are
various
tips
in
the,
for
instance,
the
exit
scale
computing
project,
looking
at
different
flavors
of
mpi,
and
we
may
be
able
to
leverage
that
work
on
perlmutter,
but
we
do
need.
We
need
use
cases
and
we
need
to
see
demand
that
people
need
it
and
we
would
want
it.
A
Yeah
thanks:
okay
does
nurse,
have
a
list
of
perlmutter
ready
or
in
progress
nurse
supported
codes
and
their
current
stamps
other
than
me
right.
So
we've
got
these
kneesap
codes
that
we've
been
working
with
them
and
supporting,
but
are
you
talking
about
more
like
if
you're
talking
about
more
like
applications
such
as,
for
example,
we
always
provide
certain
computational
chemistry
apps.
O
Could I jump in? So we currently do have a private internal Shifter registry that we can upload images to. If you have a specific use case that's not covered by that, then please let us know and we'll work with you.
B
It has occurred to me that we've actually reached the official end of the meeting time. There's obviously still quite a lot of interest in discussion, but what we might do is flick very quickly through the last couple of agenda items, and we can perhaps stay on for a short amount of time for open discussion after that.
B
That's
quite
optional
sounds
good.
B
So our last couple of items are coming up. We won't go too deeply into these, but we are always very interested in topic requests or suggestions, and we'd especially love to hear from some of our users during the topic-of-the-day section. If you'd like to talk about the work that you're doing at NERSC, or using NERSC facilities for, I think that would be a really interesting topic for other users to hear. It's a relatively...
B
We
call
it
a
relatively
light
lift
in
that
it's
only
a
10
to
15
minutes
section.
So
it's
more
like
a
lightning
talk
than
a
conference
talk
but
yeah.
If
you're
interested
in
talking
about
that,
please
let
us
know
either
a
direct
message
on
the
nurse
slap
channel
or
a
comment
in
the
webinars
user.
Select
the
nurse
user
slack
channel
or
send
us
a
ticket
at
help.nurse.gov
with
a
a
request
or
a
self
nomination.
B
Finally,
last
month's
numbers,
so
the
scheduled
availability
was
quite
high.
The
overall
availability
was
somewhat
lower,
and
this
is
because,
as
you
probably
remember,
we
had
a
a
multi-day
about
a
four-day
I
think
outage.
While
we
upgraded
the
power
at
earthness
facility
in
preparation
for
perlmutter
looked
at
on
a
chart,
you
can
see
sort
of
there's
a
few
red
marks
for
unscheduled
outages
and
the
the
big
black
mark,
which
was
the
scheduled
power
work.
B
Beyond the time Cori was unavailable for that, we had a relatively high number of unscheduled incidents, totaling a little over 11 hours. Most of them were very short — a few minutes to an hour — and thus are not very visible in the chart; that's just a function of scale.
B
Cori's utilization in December was up over 95%, so that was great to see, and it's also great to see that Cori is being used for jobs at a scale that just can't be done in most other places: 55.2% of Cori's workload fell under the category of large jobs.
B
More
than
more
than
1024
knl
nodes,
we
had
427
new
tickets,
460
closed
tickets,
so
there's
a
typo
on
that
that
should
say
first
of
january,
so
the
current
or
the
as
a
couple
of
weeks
ago,
backlog
of
tickets
is
a
little
under
500.
B
So
that's
all
of
the
official
parts
of
the
meeting
agenda
we'll
stay
on
for
a
few
more
minutes
for
further
discussion,
because
it's
clear
that
people
are
quite
interested
to
to
talk
more
about
pearl
mata
but
feel
free
to
leave
it
anytime.
So
a
reminder
again,
the
meeting's
being
recorded
we'll
post
the
recording,
also
the
slides
and
and
some
notes
about
the
meeting
on
the
website
sort
of
here.
Shortly
after
it
finishes.
B
Thanks, everybody, for participating, and especially thanks to Rebecca and some of the other NERSC people who presented bits and answered questions. And thank you, everybody, also for asking questions and for sharing your experiences in the various parts of the meeting.
B
It does look like the speed of questions coming through in the chat is slowing down. William's comment, I think, is very valid: the documentation needs a `sed` command run over it — we're going to need to update things due to name changes, and that will gradually happen over the near term.
B
Okay, thank you all, and we'll see you at the next one.