Description
As well as our usual discussions, we look at the annual NERSC User Survey and some outcomes from the most recent one.
A
Okay, let's make a start. It looks like we have a relatively small group of us today, so again it'll be quite discussion-oriented. First of all, welcome to the May monthly meeting. The first heads-up is: we are recording the session, so the recording will be posted on the website soon afterwards, along with the slides. So if you prefer not to be recorded, please turn off your video and so on.
A
We'll follow our normal agenda: win of the month, today I learned, then we've got quite a bunch of announcements today, and for the topic of the day we'll take a bit of a look at some of what we learned in the annual user survey this year, and then a look at what's coming up.
A
So, first up: win of the month. This is an opportunity to show off an achievement, or shout out something that somebody else has achieved, and it can be big or small, from getting a paper accepted to solving a bug, to something that would be a candidate for a science highlight or a NERSC award.
C
Shout out to the NERSC team for getting the Perlmutter CPU nodes available to users. So thank you. That's a win.
A
Yeah, a lot of people have been working very hard to make this happen, and of course there are still tweaks and so on going on. Doug, who just joined, is probably particularly deserving of that shout out, because he's led a lot of the effort. What I think you might have just missed, Doug, was Stephen giving a shout out for getting Perlmutter's CPU nodes up and available to users.
D
Oh, that's very kind. That's great, yeah! It's been very exciting. We've been trying to bring up phase two of Perlmutter very slowly and, while it has inevitably been a bit disruptive, with as little disruption as we possibly can, so everybody can keep using it. So the shout out is very much appreciated.
A
And thank you all, too, for banging on it, testing it out, and letting us know when things need further tweaking, and for the issues and improvements that you're finding.
A
So I have one that's halfway between a shout out of a win and a today-I-learned, so I might use that as a segue onto our next slide, which is the other side of the coin: today I learned. Something that surprised you, that might be beneficial to others. This can range from something you got stuck on, or something you're still stuck on and would like input on, through to something interesting that you watched or saw or stumbled across. The one I have, somewhere between something I learned and a shout out, is that I was fortunate enough to go to the Cray User Group meeting a couple of weeks ago now, which is attended mostly by people who work at the sites that have Cray systems, and there was one particularly interesting presentation in a session I was in.
A
I think it was in fact the author giving the presentation, although I think this particular work was done mostly on another system. She was describing work about using different molecular dynamics and molecular simulation tools, and how some of it applied to COVID research. But the real today-I-learned that jumped out at me was this: NERSC users use a whole range of different software, sometimes for what appears to be the same thing.
A
People here might already be very familiar with these and the differences between them. For instance, materials science is a fairly big area, and we have people using LAMMPS and VASP and Amber and NAMD and a few others: QE, BerkeleyGW, et cetera, et cetera. And in one of the slides in this presentation...
A
The speaker put up a few of these, from LAMMPS through Amber through VASP, and was describing why you would use these different tools: this tool does particularly well on very large, but more coarse-resolution or short-time-span, simulations.
A
This one does very well on the fine-grained end, right down to the quantum sort of level, but because of that you can only simulate a small number of femtoseconds or picoseconds or so. So yeah, I found that really interesting: as somebody who works alongside these things, but not directly with them, getting that better understanding of how the tools are used and the benefits of each.
A
So I'll drop the name in and give a shout out to the person who presented that.
A
So, while people are thinking about it, there's actually one more shout out for an achievement that I'd like to make, which is (oops, back a slide, but also forward a slide)... I see he is online, and he set up some structure and connected people to start a kind of special interest group, a subgroup within NUG, of people interested in or who use WRF. I think that was a really good move, and we're seeing some good interest. So a shout out for that.
E
Oh yeah, thanks. Thank you, Stephen, for mentioning that. So yeah, from our discussion at NUG, we're sort of launching a pilot: a special interest group among NERSC users from the climate science community. We chose one particular, very common application called WRF.
E
Not every user is aware of the best practices to improve their workflow, and this model is quite flexible, so some of us want to share that knowledge. And then we have large input data files, for example global one-kilometer terrain data, that right now individual users are downloading to their own scratch space or project space. So we are talking about sharing those input files and further facilitating collaboration among NERSC users from different projects.
E
So we sent out invitations to a few mailing lists across the DOE programs, and we already got about 20 people signed up. We also got inquiries from outside, asking, hey, is this just unique to NERSC users? This sounds like a great idea. And then I also heard from another software engineer, maintaining a compute server for the observational data sets of the DOE ARM program; they are sort of keen on what's happening in this group, so it's getting nice momentum.
E
Oh, can I ask just one quick question? I've been stuck for two months now on getting this particular program to compile: a sort of legacy climate model, CESM 1.5, working with Helen. The model suddenly complains of a memory allocation error in the model's memory space, and this happens when we change the compiler from version 18 or older to version 19 or newer.
E
We get this error no matter what we do. The usual solution is to use -mcmodel=medium or -mcmodel=large, which means, I think, that we have to build the model code as a shared library or a dynamically linked executable, but it's not really helping us, and Helen and I are pretty confused. This model stubbornly tries to make statically linked libraries of its own. But I'm just curious:
E
...if anybody knows what changed between version 18 and version 19 of ifort. From Wikipedia, version 19 started to include some Fortran 2018 features, but that's the only thing I can see; at version 18 they covered full Fortran 2008 support. I really don't have much idea, but just in case anybody knows of any huge change between ifort version 18 and version 19.
A
That's a good question. Has anybody, I guess, come across other issues moving from 18 to 19?
E
I knew about it, actually. I haven't tried even newer ones. And this particular bug or problem is not just unique to the NERSC environment; I heard of a similar error happening on the computer system at the National Center for Atmospheric Research. So it's not unique to Cori by any fault, or to any one Intel environment. But yeah, I might just give the latest version available right now at NERSC a try and see what happens.
A
So I think it is still possible to use version 18.
A
Yeah, I think there is still a way to get it, so we can talk offline about how to do that; it might be one kind of workaround. Another thing that might be worth a try, that I've found, is Allinea Forge, or Arm Forge as it is now.
A
Specifically, the DDT tool has a memory debugging option, and I've found that can actually be pretty helpful if it's genuinely a bug in the code, as opposed to something the compiler is doing wrong. It can be pretty good for finding that.
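As an aside for anyone hitting the same allocation error: the -mcmodel workaround mentioned above usually has to be paired with dynamic linking. This is only a hedged sketch, not the model's actual build recipe; ftn is the Cray compiler wrapper (assuming PrgEnv-intel is loaded), and a real model's build system would set these flags through its own configuration.

```shell
# Sketch: building a Fortran code whose static data exceeds 2 GB with the
# Intel compiler. -mcmodel=medium or -mcmodel=large requires linking against
# the shared Intel runtime libraries, hence -shared-intel.
ftn -mcmodel=large -shared-intel -o model main.f90

# On Cray systems, the whole build can also be switched to dynamic linking:
export CRAYPE_LINK_TYPE=dynamic
```

If the model insists on producing its own static libraries, setting the link type for the whole build is often the less invasive of the two approaches.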
A
So, some interesting challenges to solve there. Next up, unless somebody has something they want to add: we have quite a bunch of announcements and news at the moment, so there's quite a few in the latest weekly email. Hopefully you've seen that, or can at least find it buried in your inbox easily enough. I think the big exciting one, already alluded to, is that all NERSC users now have access to Perlmutter, and Perlmutter has CPU nodes up as well.
A
One important change that is worth knowing about: before the phase 2 CPU-only nodes were integrated, the default programming environment was PrgEnv-nvidia, because the system was very much oriented towards building and running for GPUs. However, Perlmutter is now both a GPU and a CPU machine, so to reflect that, there are different ways of using it.
A
The default is now PrgEnv-gnu when you log in, and you can choose whether you prefer to use PrgEnv-gnu, PrgEnv-nvidia, or, I think still in the process of being built out and tested, some of the others, such as the AOCC compiler and the Cray compiler as well.
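For reference, switching between programming environments is done with modules. A minimal sketch (the PrgEnv module names here are the standard Cray ones; check what is actually installed with module avail on the system itself):

```shell
# List the programming environments installed on the system
module avail PrgEnv

# Swap from the default (PrgEnv-gnu on Perlmutter) to the NVIDIA environment
module load PrgEnv-nvidia

# The cc/CC/ftn compiler wrappers then invoke the selected compiler
ftn -o app app.f90
```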
A
Yeah, I guess the important thing is that now you do need to remember to change across to the programming environment you want, if the default isn't ideal for your situation or your particular code. That said, GNU quite often does a pretty good job on most things. The other element of Perlmutter news, and this is, I guess, one level deeper, but I think it's interesting news, is that the CPU nodes are set up with a newer version of the interconnect.
A
You might have seen the names Slingshot 10 and Slingshot 11 bouncing around; so that's the difference. The CPU nodes are on the newer interconnect, and we're still tweaking the settings. So as you run things, we're interested in hearing your experiences, particularly around performance and so on.
A
Doug, while we're on the Perlmutter topic, is there anything else you'd like to tell people about?
D
I can give a little bit of a sneak peek of where things will be going over the next few weeks. Clearly there is no real timeline that we can commit to, and that's just because we're relying on things getting done as soon as they can. But just in general, the expectation is that at the end of this exercise we're going to have 3,072 CPU nodes, you'll notice.
D
You can also expect... we're doing a lot of other network-related work, and we're getting tons of updates, feedback, and collaboration from HPE. As a consequence, we're likely going to have weekly maintenances on Perlmutter of one kind or another, in order to make sure that we're getting these corrections and updates out as fast as we can, because we have to keep our test systems, which are very, very far in advance of where we're at, settled and functional. Where things will get really interesting, and we have sort of mentioned this already...
D
There's been a lot of curiosity around it, but I'm happy to speak to it a little bit: it's going to be as we move the GPU nodes from Slingshot 10 to Slingshot 11. So certainly we can't guarantee that everything is going to...
D
...you know, that everything in the Mellanox-based software stack is going to continue to work as it has; that's rather the point. What we will be doing is building a fairly significantly sized test resource right now, which will have, at the end, 256 GPU nodes on our Alvarez test system. To that end, we're going to be working with HPE to get all of our critical software...
D
...and the codes that we use to validate the system working as well as possible, before we start rebuilding the rest of the system and bringing that forward for the users. So what I'm trying to say is that change is coming, but we're doing what we can to make sure that what we deploy, when you see it, will be reasonably mature and quite useful.
C
Yeah, I really appreciate that the Perlmutter system has had the debug and interactive queues from the beginning; that's really useful for kicking the tires on it. I was wondering whether you're planning on eventually having a shared queue on Perlmutter as well, you know, if you need access to just one GPU, not all four, or just a subset of the cores on a CPU node. Is that planned for the future?
D
I'm happy to speak to that. Actually, that has been one of our key plans all along, and something that we really would have liked to have delivered earlier. We found some pretty significant technical issues, which I don't really want to go into, but they've prevented us from enabling that functionality.
C
Okay, that's good to know that it's in the works. From the end-user side it's not that big a deal while we're not getting charged for the nodes anyway, but I anticipate in the future there will be times when I only need a fraction of a node, so it'd be good to have that option in the queue. Thanks.
A
So I think that's basically the current news on Perlmutter. We have a bunch of other announcements about other things too. One, you might have seen in the weekly email: there's a survey out at the moment for NERSC users of machine learning, that machine learning and AI type stack of tools. We're looking to work out the best way to arrange, optimize, and design future systems for ML capabilities and performance. Where we're going with this, and we'll actually see it pretty shortly when we talk about user survey results, is that we're seeing a distinct increase in interest in and usage of ML-related approaches, so planning for that is important.
A
There are also various calls for participation. These all have more details and links in the weekly email, and it's probably easier to look there than to post a great deal of information here. Summer internships are starting soon, so if you have, or are, a student who's interested in doing an internship at NERSC, I think there's still time to apply and get into that.
A
The SC conference is coming up. I have forgotten the dates, but there's a broader engagement program as part of it, and it's looking for participants at the moment. We also just recently announced, and this is organized in part by Zhengji and some people at NERSC as well as others, the Third International Symposium on Checkpointing for Supercomputing, so that has a CFP out at the moment too. And there's a whole mixture of training events and webinars coming up.
A
So, was it next week? The ECP webinar series has a webinar on how to be a great mentor. There are some great webinars in this series; it's worth a look.
A
If I remember rightly, the crash course in supercomputing was initially set up to help new interns get on board and new students come up to speed quickly. So particularly if you're working with students over the summer, that might be a good option for them. And then also in June, the next IDEAS-ECP webinar is on normalizing inclusion by embracing difference; that's almost a month from now.
A
That wraps that up, so we can go on to our main topic of the day, which is the NERSC annual user survey. Every year, usually around October or November, you probably start seeing emails from NERSC, or in more recent years from a group called NBRI, asking you to participate in the annual user survey.
A
We only do it once a year, to try to minimize the load on you, but it also gathers really valuable information for NERSC. So, a little bit of an overview of what it is, why we do it, and what we're learning. We've been surveying users each year since at least 1998, and in fact I think earlier than that, but 1998 was when we moved to LBL, and I guess the web pages changed.
A
There have been a few changes over the years. Go back several years and the survey was very long; a couple of times now it's been made a little bit shorter and simpler.
A
So it's down to about 20 ranking questions plus three or four free-form questions now, and in the last two or three years we've actually had the survey run by a group called NBRI, a dedicated survey organization, a move that came with some changes made on their recommendation about current best practices for surveys.
A
Whereas for most questions we used to have a seven-point scale, which meant there was an answer in the middle for neutral, current best practice is actually to use a six-point scale and have participants in the survey choose whether they fall on the positive or the negative side.
A
From year to year this is not a terribly obvious difference, but it does give us some interesting challenges when we're comparing year to year, because things change. So, what the survey consists of at the moment:
A
In the most recent survey we had 18 ranking-type questions, where we ask people to select where they fall, on a scale from very dissatisfied to very satisfied, on a number of aspects of NERSC services and resources. And then there are three free-form questions, which are really there to try to cover the bits that weren't covered in the ranking questions, and to give survey...
A
...respondents an opportunity to talk about things that either they really like, or they really don't like, that might not have hit our radar yet. One of them, of course, is "other comments", which is really a catch-all: is there anything we missed in the questions so far? This most recent year we also added a new category question, which is: how do you primarily use NERSC services?
A
So there are a couple of reasons for the survey, really two primary things we're looking for when we run it. We as NERSC care about the services and the resources we're providing to our users.
A
We want to make sure that we're heading in the right direction, detect the things that are on users' minds, and identify areas to improve as well as areas to keep on doing what we're doing. So one important aspect is to help NERSC identify our users' needs, and the other aspect is that it's part of our reporting to the Department of Energy. Each year NERSC produces an annual report, and what I should have done here was include a link to it.
A
You can actually find these on the NERSC web pages; I think it's under www.nersc.gov, in the For Users section.
A
The annual reports for each year are published there, and the user survey is not, by a long way, the only thing that goes into the annual report, but it does inform a fairly large and important section of it.
A
So who fills it out? This is always an interesting challenge, because we actually have quite a lot of users. In 2021 we sent the survey out to 8,776 NERSC users, and we have to make some judgment calls on what counts as an active user, because allocations happen year by year.
A
The projects that use NERSC the most heavily are the ones that are most impacted by things, so we generally try to get responses from 10% of NERSC users overall, and for the responses to represent, between them, 50% of NERSC hours used. You can see this year we didn't quite get to 50%.
A
Part of that, I think, is because the distribution of usage is a little flatter.
A
This year we did manage to hit 10% of users, which is good. One of the things we were seeing was that it was getting increasingly difficult to drum up participation, and getting NBRI on board helped a bit with that, both on the survey-best-practices side, making the survey more tractable and easier for users to fill out, and in encouraging people to fill the survey out, gathering the results, and doing analysis on them.
A
This chart shows just the fraction of users, and the actual number of users increases year to year, because we're getting an increasing number of people using the resources, which means the number of responses we're looking for is increasing gradually as well.
A
So, this new question that I talked about, and a little bit about machine learning and data-analysis-type usage. Go back a few years, and I suspect what you would find, if you looked at the NERSC user community, is that it would be almost entirely people who run simulations, especially fairly traditional types of simulations.
A
As time goes on, the way that science happens is expanding. We're getting a lot more science happening through data analysis as well, and we're also seeing a wider variety of science domains within the offices. So we're interested in how many of our users are using NERSC resources in, I guess, a more traditional way, compared to newer use cases that are more data-oriented, and other things as well. So we asked the question of how...
A
...how do you primarily use it? And it was "primarily", so you only get to pick one, because I know a lot of people could quite validly say "all of the above" on this. What we found was that more than half are running simulations, but only a little more than half, and nearly a quarter are primarily using NERSC resources to analyze data. So the use of data and data analysis mechanisms is increasingly important.
A
We also have a fairly large chunk, almost 15%, who are actually developing or supporting software as their main role at NERSC. My interpretation of this is that most projects need to use some software, a lot of projects are essentially developing their own software and running simulations, and those projects have people doing different roles. So we have some people who are developing the software and supporting the software.
A
Just out of curiosity, as a show of hands for people here: in terms of, I guess, developing software versus doing science with the software, what does the split look like? If you're mostly developing software, why don't you raise a hand.
A
Okay, so I see three who are more on the software side. So the split here is probably pretty similar; there are on the order of 15 people here. We also have a number of users who are on NERSC mostly for the sake of managing their team, and some "other", and we didn't drill down for today into what the "other" was.
A
So I think it will be interesting, watching over the next few years, how this chart changes: is it fairly stable, or are we going to see an increase in one area or another?
A
So what else did we learn from it? The good news, from NERSC's point of view, is that overall NERSC users, at least the ones who respond to the survey, rate NERSC quite highly, which is encouraging. It means that we must be doing something right here.
A
There are two groups of questions: some large-scale, overall-type categories, as well as a bunch of more detailed questions. The two bars here are comparing the most recent survey with the previous one, and we have a target, which is set...
A
I think it's about three quarters of the way up in terms of average scores, and it's nice to see that our ratings are consistently above target. When you get down to the individual questions there's a little more variation, but they're still all well above target. Something I didn't manage to do for this presentation was a sample of what the actual breakdown per question looks like, but it is quite skewed.
A
We see the vast majority in the "very satisfied" category, with a few in "moderately satisfied", but then a longer tail of people reporting less satisfaction.
A
And if we look historically (the key here is a little bit too small to read, but this goes back to 1998, and it's a bit of a sample of one): the first question asks participants for their overall satisfaction with NERSC services and resources, and it's been consistently fairly high, and well above target, for NERSC's history.
A
Where we start to get some additional interesting outcomes is in the free-form questions, because these can help pop out key themes that we might not have asked about in the ranking questions. There are really two free-form questions that are the key here; the third one, about other comments, tends to contain a mixture of both.
A
Cori is an interesting one. I guess Cori being the primary resource means that that's where people's experience of the resources sits.
A
I think a theme that jumps out here is queue time, getting jobs through, and the amount of resources.
A
That is worth digging into a little bit more, and this is another advantage of having a dedicated survey company do some of this: they can do statistical analysis that might not be beyond us to do, but it's certainly a different specialization and would take quite a lot of effort. So we're able to get a good analysis by basically pulling in the experts.
A
So one of the things that they do is look at which themes in the free-form answers correlated...
A
In fact, I think both the free-form and the ranking answers were correlated against differences in the overall, top-level scores, and they found there were several themes, but probably four top ones that jumped out. One was computational resources: everybody wants more, of course, but people are generally positive about it, and it was found that comments about computational resources generally correlated with higher overall satisfaction.
A
The interpretation here, I think, is that NERSC documentation is generally considered to be quite thorough and high quality, but there are areas where we could still improve it: there are areas where people would like more documentation, and sometimes navigating it isn't straightforward.
So
that's
kind
of
an
interesting
area
for
from
this
perspective,
something
for
us
to
look
at
you
know.
How
can
we
tweak
it
further
and
and
how
can
we
sort
of
do
more
with
what
we've
got
here?
A
We're heading in a good direction, and there's certainly more that we can do. The other interesting one was queue time. Queue time got a lot of comments in terms of things that people would like to see improved, so people feel the pain of long queue waits, but it was actually only very weakly correlated with having a lower overall satisfaction. I think the interpretation is that queue time definitely is a pain point, but our users also understand it.
A
It is actually an area of very high interest to NERSC as well, and one of the things that we are fairly constantly working on is ways to improve utilization in particular. The less time that compute nodes sit idle, the more time they're actually working on people's jobs and getting them out of the queues. So doing things to tweak the queues to fill in those gaps, and to most efficiently pack the machine, is a perennial...
A
...interest. Sorry, I got to the end of that one rather suddenly. So that's the high-level overview of the survey and the things we found in the survey for 2021, the most recent one, which happened over the winter.
A
Hopefully that was at least an interesting overview of NERSC's perspective on finding out how things are going and where to focus next.
A
So, our normal last couple of items: a quick look at last month's numbers, some metrics, and what's coming up. Last month, for those of you who joined us, we had an interesting discussion about what metrics are interesting, and I think the key outcome I took from that was that the post hoc metrics are kind of interesting, but what's much more interesting is a way to easily find out...
A
...what the state of the machine is right now, and not just in a black-and-white, is-it-up-or-down sense, but how is it performing? How is the file system performing? Things like that. Unfortunately, building out the capability for that is going to take a little longer.
A
So for the moment we've just got the simplified version, really just a highlight of what the scheduled and unscheduled outages were. On Cori we did have a couple of unscheduled outages last month; one was a Slurm issue. This was an interesting one, because, it turns out, it was triggered by a large enough command line.
Topics coming up: we have a few things in the pipeline that we're looking at for monthly meeting topics. One is HPSS interfaces, and another is data citation and DOIs; these have been mentioned by people as topics of interest. We're very interested to hear what else people are interested in hearing about, and better still, what people are interested in speaking about. This topic-of-the-day slot is a good opportunity to show off work that you're doing.
A
If you have a topic that you'd like to nominate, or better still present, we have a Google form at this address; I'll paste it into the chat.
A
So that's all we have on the agenda for today; we've actually gotten in before the top of the hour.
A
...available, pretty close to that; there are a few that are set aside for certain queues and so on. We do have a docs page on it. It's quite a large number.
A
I forget what the number is, but I'm pretty sure it's well more than 8,000 nodes before you start to need a reservation, so you're fine for a couple of thousand nodes. In fact, if you're using more than one thousand nodes, you get a large-job discount.
B
So right now, the queue time, my student told me, is usually two or three days in the queue before the job runs, which tends to be long. But even for a larger run, let's see, my rough calculation is roughly one thousand... 280. Okay, 1,280: that is the total cores, just the cores, on regular CPU.
B
The hourly charge is going to be very high, right?
A
For large jobs, so for jobs over 1,024 nodes, there is a discount of...
A
Yeah, so a large job on KNL is when you're using over a thousand nodes. 1,024 nodes of KNL gets you close to 70,000 cores, I think, because there are 68 cores per node. So that is quite a big job, even for a fairly large job. I think we found up to about a quarter of the machine, so a couple of thousand nodes...
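The core-count arithmetic above checks out directly (68 cores per KNL node is from the discussion; the discount percentage itself was cut off in the conversation, so it isn't reproduced here):

```shell
# 1024 KNL nodes at 68 cores each gives the "close to 70,000 cores" figure
nodes=1024
cores_per_node=68
echo "total cores: $((nodes * cores_per_node))"   # prints: total cores: 69632
```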
A
If you can get the wall time down to chunks of around four hours or less, it'll probably get in pretty quickly. When you start using longer wall times, they tend to correlate with longer queue wait times, and 48-hour wall times pretty much wait until everything in front of them has run. So it helps if you can do things like checkpointing to break the job into shorter chunks, or if the job can scale higher and use twice as many cores for half as much time.
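A hedged sketch of that chunking idea, assuming the application can checkpoint and restart itself (the --restart-from flag is a hypothetical application option, and the QOS name and node count are placeholders, not prescribed NERSC settings):

```shell
#!/bin/bash
# chunk.sh: one four-hour slice of a longer run. The application must write
# a checkpoint before the wall time expires and resume from it when rerun.
#SBATCH --qos=regular
#SBATCH --nodes=1024
#SBATCH --time=04:00:00

srun ./model --restart-from latest.chk
```

Slices can then be chained with Slurm job dependencies: submit the first with `sbatch chunk.sh` and each follow-on with `sbatch --dependency=afterok:<jobid> chunk.sh`, so a 48-hour run becomes a series of 4-hour jobs that each wait far less in the queue.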
G
I've got a small question. When running on Perlmutter, say I want to run a couple of probing jobs, like lscpu or nvidia-smi: will those be rounded up to the nearest unit, or how does it work? Does Slurm account for fractional usage?
A
Do you mean, as in, you're specifically selecting a node but just running lscpu on it? Yes. So, for charging purposes, Slurm doesn't look so much at what you're running as at how many nodes, and for how long, you are occupying them.
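In other words, the charge depends on the nodes held and the wall time used, regardless of what runs on them. A tiny worked example of that rule (plain node-hours for illustration; actual charge factors vary by architecture and QOS):

```shell
# A job holding 4 nodes for 2 hours is charged 8 node-hours,
# whether it ran a full simulation or just lscpu.
nodes=4
hours=2
echo "charged node-hours: $((nodes * hours))"   # prints: charged node-hours: 8
```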