From YouTube: NUG Monthly Meeting - March 2023
Description
The NUG Monthly Meeting for March 2023: we looked at NERSC Science Highlights, short vignettes showcasing work done by NERSC users using NERSC resources, and talked about where to find them and how to submit your own work for inclusion in the collection.
A: So we'll post it later, so just a heads up, I guess, that that's happening. Welcome, all, to the monthly meeting for March. So yeah, just a heads up that we're recording; it will all be posted on the meeting page afterwards. So if you prefer not to be recorded, then it's probably best to keep the camera off and maybe limit questions to the chat. That said, we normally go for a pretty interactive format here, and we will do the same today, and because we're a reasonably small group I think you can just unmute and speak up when you have something to say. We'll go through our kind of usual pattern (there's something missed in editing in the slide there) of a win of the month and a today-I-learned; the user community survey was an artifact from last month's discussion.
A: The win of the month is a chance for users to show off an achievement, or to shout out somebody else's achievement that you're aware of, and it can be big or it can be small: getting a paper accepted, solving a bug, or...
B: I don't know if this will be in the announcements later, but I heard Perlmutter stopped charging until two weeks from now.
A: That will be in the announcements; we'll talk a little bit about that quite shortly.
A: And to give a shout out to one that I know happened, and I saw there was an announcement about it in the NUG Slack yesterday: kudos to Shahzeb, Justin, and a few others for getting the latest, or nearly latest, the 22.11 release of the E4S stack deployed on Perlmutter. It's quite a large stack; there are literally hundreds of software packages in it. It's all under a `module load e4s`; there are some docs on how to use it. This latest build has been built for multiple compilers, and there's also a CUDA-based variant of the stack available. So yeah, that was quite a significant effort and a good outcome.
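For reference, using the deployed stack looks roughly like this. This is a sketch based on the `module load e4s` mentioned above; the exact version string and the package name are illustrative, so check the NERSC E4S docs for the real ones:

```shell
# Load the E4S metamodule on Perlmutter (version string assumed)
module load e4s/22.11

# E4S is Spack-based: list what was deployed, then load a package
# into your environment (hdf5 here is just an example)
spack find hdf5
spack load hdf5
```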
C: Yeah, I was going to share that I got hired at NERSC as a staff member, after being a postdoc for a year and a half, which is very exciting. So I'm going to be part of the user engagement team, working with Steve and Rebecca and others. So hopefully you'll see more of me during these meetings and in the community. My focus is going to be on user community building, which you've been hearing a lot about, and then one of the things I want to start working on is user onboarding: Rebecca is going to have me start looking at how users get onto the system, how long it takes them to start using the system, to get set up, so they can be productive, and what resources we can make to decrease the amount of time that takes. So if those are things you've ever thought about, or you have ideas or thoughts on them, you're more than welcome to reach out to me. Hopefully you'll see me around more. I'm very excited, because I love NERSC; I started as a user way back in 2019, while I was doing my PhD. I'm not a computer science person, I'm a physicist by training, so it's a little surprising to me that I work at an HPC facility, but I love it, so I'm very excited. So that's a big win.
A: Yeah, that is great news and great to hear; congrats, Lipi. From the inside, of course, I already somewhat knew that was coming, but what I didn't know was how long it takes to get through the HR process here. So it was really good. Yeah.
C: So I haven't technically started yet either; I think they're still doing some work, so technically I'm still a postdoc right now, until the end of the month, but my mindset has definitely moved on.
A: Yeah, I'd say there's a couple of congrats in the chat as well, so that one's going to be a hard one to top. But has anybody got anything else that they'd like to show off?
F: I don't have too many wins of the month, because I'm still trying to compile different models on Perlmutter, but thanks to previous tips, whenever I get a problem I just throw in `-fallow-argument-mismatch` and `-std=legacy`, and it works. So that's been quite smooth, the last few weeks, for me. And I/O, again, is just incredibly fast; it's really helpful. It takes 15 to 30 seconds on average to write out 62 gigabytes of files, so it really helps us to shorten the simulation time and spend more time thinking about the science.
A: Yeah, so actually that seems a really good segue into the today-I-learned, because there were a couple of tips there. In a way this is partly the flip side of the win of the month, which is that not everything goes smoothly all the time; but when things don't go smoothly, generally there's something to be learned out of it, and that's kind of what science and research is about: getting things wrong until we understand things well enough to get them right.
A: And I guess that a corollary to discovering things the hard way is also stumbling across interesting information, and Koichi, you mentioned one there, which I think we now have a tip about in the docs pages: the GNU Fortran compiler has a flag, `-fallow-argument-mismatch`, if I remember correctly, and that makes a big difference when you're compiling. It's quite useful even for things like MPI, where potentially the same call can be made with different types of arguments, depending on whether the message you're passing is going to be integers or reals or whatever else, and a very strict Fortran implementation such as GNU Fortran will complain about that. So `-fallow-argument-mismatch` loosens the rules a little bit on that.
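As a sketch of what that looks like in practice (assuming gfortran 10 or newer, where these mismatches became hard errors; the file and routine names here are made up):

```shell
# Two calls to the same implicitly-interfaced routine with different
# argument types -- the classic pattern in old MPI-style Fortran code.
cat > mismatch_demo.f90 <<'EOF'
subroutine show(buf)
  integer :: buf
  print *, buf
end subroutine

program demo
  integer :: i = 1
  real    :: r = 2.0
  call show(i)
  call show(r)   ! type mismatch with the call above
end program
EOF

# gfortran 10+ rejects this with a type-mismatch error:
gfortran mismatch_demo.f90

# Downgrading the error to a warning lets the legacy pattern build:
gfortran -fallow-argument-mismatch mismatch_demo.f90
```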
A: And the other one you mentioned, Koichi, was the performance of scratch, and yeah, that's really good to hear; it's serving a good purpose. So the Perlmutter scratch is... I'm having a blank on the name, but it's not spinning disk, it's flash; it's an all-flash file system. So yeah, we're still...
A: ...you know, shaking out Perlmutter's performance, and so yeah, we have seen occasional times when the file system performance isn't quite where it needs to be just yet, but that is being worked on; and when it works, it really does work well.
E: I had an interesting one that I learned, and helped a user learn: just because your loops have OpenMP `parallel` on them does not necessarily mean that you can just slap `target` on there and offload it to a GPU. Because it may be calling other code that works just fine under `parallel` on the CPU, but your compiler may not know that it needs to compile that other code to also run on a GPU, and exactly how to do all the memory mapping and things like that. So if you're thinking, hey, I've got these `parallel for` or `parallel do` loops, and if I throw a `target` on them they should just go over to the GPU: that's not always the case.
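A minimal sketch of the situation being described, with made-up names, assuming an offload-capable Fortran compiler such as NVHPC's nvfortran: a routine called inside a `target` region needs a device version too, which is what `!$omp declare target` requests.

```shell
cat > offload_demo.f90 <<'EOF'
module helpers
contains
  subroutine work(x)
    !$omp declare target   ! without this line there is no GPU version of
    real :: x              ! work(), and the target region below fails to
    x = x * 2.0            ! compile or link for the device
  end subroutine
end module

program demo
  use helpers
  real :: a(1000)
  integer :: i
  a = 1.0
  !$omp target teams distribute parallel do map(tofrom: a)
  do i = 1, 1000
     call work(a(i))
  end do
  print *, sum(a)
end program
EOF

# -mp=gpu enables OpenMP target offload with the NVIDIA compilers
nvfortran -mp=gpu offload_demo.f90 -o offload_demo
```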
F: That's exactly the reason I gave up, sorry. Yes, I have one Fortran code where the only parallelism it's using is just OpenMP `parallel do`, so after I attended one of those Perlmutter workshops last year, I thought, hey, maybe I should throw in the flag and it would run on the GPU; and no, it's only one loop, you know, a very long one, and it's calling several things elsewhere in the code. So yeah, right now I've just given up, and am trying to learn more about the code itself.
A: Did you stumble across, or discover, any indicators, I guess, that you've got that problem, or tips for finding it or resolving it, during your...
E: ...debugging? So far, the way we found it: it started off as just an internal compiler error. It was a big Fortran code with a whole bunch of `parallel do` loops, and the user just threw `target` on all of them, and then, well, the compiler just stops with an internal compiler error. So it took us a little while of pulling some things apart, and then eventually we ended up with a link-time error, because that code hadn't been compiled for a GPU, so the device version of the procedure didn't exist. So it did take some digging to kind of realize that that was what was happening.
A: Yeah, ending up with an internal compiler error is an interesting outcome. I wonder if the compiler's attempts to pull it all in just used too many resources.
E: Something along those lines is my suspicion, because it was all in one file, so in theory it can see everything: oh yeah, I need to compile this for the GPU, etc. It just ran out of whatever it was using to do all the analyses of what needs to go where, and when, and how to map things over, and all that. So yeah, that's my suspicion.
B: Plus, if you have a huge loop like what Koichi described earlier, it might run out of variables, based on register space, if you just blindly add an OMP `target`. Yeah, and I found a new one: by default, `-mp` only enables OpenMP multithreading on the CPU. `-mp=gpu` is not the default, so you have to be explicit about the `=gpu` to enable OpenMP target offload.
B: This is coming from helping a user who is used to OpenACC, where `-acc` automatically means GPU. That's ACC: there is a `-acc=gpu`, but it is the default; `-acc` usually means `=gpu`. There is also `=multicore`, and `=host` or something like that, I think.
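Summarizing those defaults as compile lines (flag spellings as in the NVHPC compilers; the source file name is made up):

```shell
# OpenMP: host threading only by default; GPU offload must be explicit
nvfortran -mp      code.f90   # OpenMP multithreading on the CPU only
nvfortran -mp=gpu  code.f90   # OpenMP target regions offloaded to the GPU

# OpenACC: the GPU is the default target
nvfortran -acc           code.f90   # effectively the same as -acc=gpu
nvfortran -acc=multicore code.f90   # ACC parallelism across CPU cores instead
nvfortran -acc=host      code.f90   # serial host execution
```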
B: The NVIDIA compiler doesn't like `-g` at higher optimization levels. Sometimes it just gives an internal compiler error trying to put debugging information into certain subroutines; I've seen this twice with the NVIDIA compiler. So the trick, if it's a complex build system, is to put `-g` into just the stuff that you want to debug; not, like, putting it in the makefile so that `-g` is on everything, because it turns out that crashes the compiler. I don't know, maybe this is already fixed in the newest NVHPC version; I think they already released 23.1, and maybe they are on the way to 23.3 soon.
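One way to apply that trick, sketched with made-up file names: keep the optimized flags global, and add debug information only on the file you are investigating.

```shell
# Most of the code: optimized, no debug info
nvfortran -O2 -c part1.f90 part2.f90

# Only the suspect file gets symbols; NVHPC's -gopt asks for debug info
# without changing the optimized code generation
nvfortran -O2 -gopt -c suspect.f90

nvfortran -O2 part1.o part2.o suspect.o -o app
```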
A: I think the latest we've got at the moment is either 22.7 or 22.9, but hitting an internal compiler error like that is probably worth having a ticket for, if you haven't already, because that generally means some sort of a bug in the compiler.
E: Yeah, any internal compiler error is always a bug, even if your code was invalid. But I will say that the combination of optimization and debug symbols is always a tricky one, because when it inserts the debug symbols, it may be trying to do so for code that it had already optimized away, right? So...
B: ...no matter what. So it claims to... sorry, I have a leaf blower outside. So `-gopt` in the NVIDIA compiler claims that it doesn't change the assembly, the generated optimized code, but it inserts debugging symbols. I forget if it happened with `-g`, or with `-gopt`, or both, but it happened at `-O1`.
D: I do want to push back a little bit on the statement that any ICE (sorry, that's short for internal compiler error) is always a bug to report. If it's happening due to I/O errors on the NERSC scratch file system, then that's not truly a compiler bug; that's the compiler just not knowing how to deal with a faulty file system, so reporting those is probably going to be ignored by the vendor. In theory, on all of their I/O calls they should check for, you know, short writes and short reads and error returns; that's not practical, and they're not going to put effort into that. So I've had plenty of, you know, big compiles, where there are hundreds of files and such, where I just have to run make five times before it finally completes, on days when the scratch file system is having a bad day.
B: Plus, I don't like doing bug isolation on this particular code, because that particular file has like 4,000 lines of Fortran code or something.
A: So we should probably start to move on to our next segment, announcements. We have a few at the moment, and there's a couple that are, you know, especially current. The first one is that you hopefully saw the email that went out yesterday about Perlmutter's charging holiday.
A: So there are a few things going on here that are worth sort of knowing about, and this is all in the email, so you can read a little bit more detail there. But yesterday, during the afternoon, we changed a setting to temporarily disable a feature of SS11, Slingshot 11; that's the network, the interconnect, that Perlmutter uses. What this feature provides is performant GPU RDMA communication, that is, remote direct memory access.
A: It's useful in inter-node communication, but there is currently a critical issue in there, which is being worked on, that was leading to some node failures. So we have, for the moment, disabled it, and there are a couple of side effects that you should see from this. One is that if you were seeing jobs failing with a node-fail status, particularly GPU jobs, they may well have been hitting this bug, and so this workaround should allow them to get past it.
A: That should prevent that from happening. However, it does substantially affect performance, so you might see that code that uses GPU-aware MPI, that sort of thing, could run significantly slower. We're anticipating that we'll be able to remove this in around about a week's time, at next week's maintenance. But partly because of this, and partly to, you know, help us shake out some of these issues, we'd like to encourage people to still run; and we recognize that if things are going slowly, that's, you know, not ideal for your allocation, so we've declared a charging holiday for the next two weeks. All jobs on Perlmutter will run free of charge for the next two weeks.
A: So please don't be daunted by the fact that performance on some jobs will be a bit slower; submit the jobs, take advantage of the charging holiday, and help us to shake out other issues, because of course we're quite keen to get issues shaken out as soon as possible, so that we can retire Cori; it's, you know, reaching end of life as well. And if you do hit, you know, problems, let us know by opening a ticket. Before we move on...
A: Sounds like all clear. The next kind of big, current one: you hopefully have heard by now, via the weekly emails and various announcements, that we have an updated Appropriate Use Policy and Code of Conduct, and we need everybody, basically, to read and agree to the updated documents. We still had, at least as of a day or two ago, quite a large number of users who haven't read and agreed to them, and reading and agreeing is very simple: the first time that you log into iris.nersc.gov, it basically won't let you do anything until you've gone through this dialogue.
A: It'll pop up a dialogue and present it to you. We sent out an email earlier this week to all users who have not yet signed the doc, so yeah, if you didn't receive the email, that probably means that you signed the doc and possibly forgot about it; but it doesn't hurt to log into Iris and, you know, see if you get the dialogue. If you got that email and then, when you log into Iris, you don't see the dialogue: we've had a couple of examples of that happening, and there are a couple of things that could cause it, but if you find any difficulty with it, open a ticket, so that we can get, you know, visibility on that.
A: The big one is that Cori's retirement is approaching soon. We talked about this in quite a lot more depth in last month's user meeting. Cori has basically reached end of life in terms of its components, and because of that we need to retire it, and so, as you're aware, we're working pretty hard to get, you know, Perlmutter working well enough, and to prepare our users all to move off Cori and onto Perlmutter.
A: As part of that, we've had a few office hours sessions, and there's another one coming up in a couple of weeks' time. We also had a day of training last Friday on migrating from Cori to Perlmutter, and this training was actually recorded and is available online. The recordings and the slides are available at this web address, which you can get to by pointing your smartphone, or possibly, you know, a desktop tool, at this QR code. So yeah, if you're not running on Perlmutter yet, please give it a shot.
A: Even if your code isn't GPU-ready, Perlmutter's got, yeah, over 3,000 CPU-only nodes, each of which is, you know, significantly more powerful than a Cori KNL node. I think the CPU-only nodes have 128 cores each, two sockets of 64 cores, and they're full-strength cores.
A: Other things coming up: summer internships are available. If you are a student, or have a student who's looking for an internship, jump to this page on our www, under Research and Development, Internships; we've got a list of projects that NERSC is looking for interns to work on over the summer. There is a Research Software Engineer Association conference coming up at the end of this year in Chicago, and submissions are open on that; there's a web address here for it. And there is a DOE cross-facility workflows workshop in just about a month's time, in April; I don't actually have a link to that.
A: Maybe check the weekly email; I think there's a little more information about that in there. Also, NERSC is hiring. We have great news in that we've filled one of our positions, that being Lipi's, but we have quite a few other positions that we're looking to hire for. So, you know, we've found that NERSC users tend to make great NERSC staff as well, so yeah, take a look; there's a link here, and we'll post these slides on the meeting page afterwards, so you can just click.
A: There's at least one other, because Brad was telling us before about a NERSC Fortran users group. Do you want to, yeah, tell us more about that?
E: So I dropped a little bit of information and some links into the chat, but we're starting a new Fortran users group, the Fortran Users of NERSC, so, FUN. The first event we've got planned is for April 4th; it's posted on the NERSC events calendar. We're just going to kind of have, like, a Fortran office hours, and initial discussions about, you know, what do people want to get out of the group, that kind of stuff. So there's a mailing list, there's already a Fortran channel on the NERSC Users Slack, and I also posted a form, a link to a survey; so if anybody's interested in providing feedback, that's another way to do it as well. And you can always reach out to me via email directly, or find me on the Slack or the mailing list or what have you; feel free to ask questions, introduce yourself, and all that fun stuff.
A: That sounds great. So that was April 4th for the office hours, and I guess, for getting more information, the Fortran channel on Slack is going to be a good first place to go. Thanks, great. Yeah, Fortran is, I think, widely used and quite an important language, and despite it seeming to get a lot of dissing out there, it's actually a really nice language; it's quite user-friendly.
E: I always forget where the original quote comes from, but rumors of Fortran's demise have been greatly exaggerated. Yes.
A: We'll go on to our next segment, which is our topic of the day, and today we're going to talk a little bit about NERSC Science Highlights, starting out with what they are. So every month in this meeting we start out with our win of the month, give a little spiel about what it's about, and mention that something could be a candidate for a science highlight, which kind of does raise a fairly valid question of, you know, what actually is that? What does it look like? Why does it matter? What sort of thing counts as a science highlight? Where can I find them? And how do I get my work showcased like that?
A: So, for what it looks like: traditionally they've taken the form of a single slide that was presented, you know, in DOE reports. It usually has some sort of a visual, some sort of a scientific achievement, and a little bit about the significance and impact, and you can see in this example at the top, yeah, it's very brief, very concise, kind of eye-catching; it just points out the key details and has a few links to more information. More recently, we've been writing up some of the science highlights in kind of a longer article form, and we have a page here, which we'll share a link around to shortly, with an example of one of these articles, and so there's a more detailed...
A: ...you know, Science News type of piece. So it can take a couple of forms, and basically what it does is describe, in non-domain-expert terms, a scientific achievement and the significance and impact of that achievement; and particularly, of course, it's centered around NERSC users. So this is highlighting work that used NERSC resources, either NERSC compute systems or NERSC storage systems; there have been science highlights around, you know, data collections using HPSS, for instance, and collaborations with NERSC staff as well.
A: So then, why does it matter? Actually, it's kind of pretty important, really. If you look at the NERSC mission here: the mission of NERSC is to accelerate scientific discovery at the Department of Energy Office of Science through high performance computing and data analysis. Which is to say that the science is the reason that we're here, and so, you know, paying attention to it in the form of highlights is valuable, both in terms of showcasing the work that's being done and, you know, helping to keep our attention and our focus, yeah, on our mission. So these get presented to DOE in some regular reports and get used in our annual reports. It's good for visibility for NERSC. It's also good for visibility for, you know, the NERSC users producing this science; it's a good way to get your actual work seen.
A: So yeah, I thought it'd be nice to go through a few sort of recent examples and just talk a little bit about them, and the caveat here is that, yeah, I'm not a domain expert in any of these, really.
A: So this is the most recent one, published just earlier this week: "New math methods and Perlmutter HPC combine to deliver record-breaking machine learning algorithm." The summary of this, I guess, is that NERSC users working in mathematics for experimental data, and in Earth and environmental science, came up with an approach for Gaussian processes that solves one of its limitations, which was that once the covariance matrix gets large, it becomes a little bit unmanageable. They distributed it over, you know, a large number of Perlmutter GPU nodes, and so now the individual sub-matrices are small enough to be processed within a single GPU, and this suddenly makes, you know, much larger problems tractable. And then, to demonstrate the approach...
A: ...the Mathematics for Experimental Data group worked with the Earth and Environmental Science group to actually apply this method to a large climate data set. It goes into the details here; I think it was a data set around temperatures, basically. But yeah, they demonstrated the ability to use this method to deal with, you know, a huge volume of data coming out of your newer climate science work, and there is a very large amount of data coming out there. And that's as well as just being able to deal with the sheer volume of data...
A: They also got a really significant speed-up by breaking it down into sub-matrices that could fit on the GPU; they saw something like a 25-times speed-up over Cori, which is great. Twenty-five times is a really good result, and this work was just recently published in Nature Scientific Reports. Yeah, the article goes into a lot more detail, but it sounds like, to this non-domain expert, instead of pre-guessing what the sparsity pattern is, they're able to use this to actually discover the sparsity.
A: Another one, from a little earlier this month: "Shining a light on electrons' role in energy transfer among 2D materials." So this was an interesting combination of experimental work and simulation work. This group is working with 2D materials, essentially single-atom-layer sheets. So they've got a couple of layers, and one of the challenges is heat dissipation through those layers, and they discovered experimentally that, by directing certain wavelengths, perhaps, of light at one of the layers, they could increase the heat dissipation by a factor of more than 100; like, really significant, a couple of orders of magnitude. And so the next step was to develop an actual understanding of what was going on and, you know, build up a theory of this, and to do that they ran a bunch of ab initio simulations using Cori.
A: If I remember rightly, I think... I know this one wasn't one of the standard packages; this was a custom code. So yeah, basically they were able to, you know, use quite a lot of processors on Cori to run simulations to actually understand what was going on, and see the role of electrons in, you know, greatly increasing the energy transfer between the layers. And this one also got a write-up, in Nature Nanotechnology.
A: This is an older one, from last year, and this is in the highlight slide format and design. That earlier slide was a snapshot; you can see this one's actually an animated slide, and a few of them are using animated graphics like this. This was some really interesting work using artificial intelligence. Yeah, I'm not enough of an expert to know the specifics of how they did this, but I guess it's what's called a physics-enabled deep learning model.
A: So yeah, they're able to use, basically, a neural network to predict, with pretty good accuracy, some kind of key weather variables up to 10 days in advance. And one of the great things about, sort of, you know, deep learning and GPUs is that it scales really well, and this is partly why it's taken off so well; so yeah, they were able to scale this thing over, you know, 4,000 GPUs of Perlmutter. And then, once the training is done, the actual inference, you know, making the forecast using the deep neural network, is significantly faster than doing it with, you know, conventional numerical approaches; so they saw something like a 44,000-times speed-up by taking this sort of combined approach.
A: So yeah, that was some great work. Another one: this one actually involved NERSC's own Kevin Gott, who isn't here at the moment (he's, you know, in a different meeting); he was involved in some of the work on this WarpX code, which won a Gordon Bell Prize. So, in high performance computing, the Gordon Bell Prize is an annual prize, basically for, you know, work done at really high scale, that sort of pushes the limits of HPC and advances HPC in that sense, but while still doing useful scientific work. So this is a code called WarpX, which is being used in the development of plasma accelerators, which are a kind of promising development for shrinking the size and cost of particle accelerators.
A: So WarpX is an ECP project. It uses a technique called adaptive mesh refinement, and it's built on top of, you know, an AMR library called AMReX, which has been used pretty heavily on Perlmutter, and we've seen some sort of good results there. And the team was able to run at scale on four of the top 10 supercomputers in the world at the moment: Frontier, which is the number one at the moment; Fugaku, which I think was previously the number one, in Japan; Summit; and our own Perlmutter. And yeah, these results got the 2022 Gordon Bell Prize.
A: So another one we've got here, actually on Perlmutter: you know, some NERSC users were possibly the first, or at least close to it, to break the exaop barrier. So we've been working towards exaflops for a while, and in fact I think the exaflop barrier was recently broken too. So "exa" is 10 to the power of 18, and "flops" is floating-point operations per second, and tends to mean, you know, full 64-bit, what do you call it, precision. But, you know, that sort of level of precision is kind of expensive, and yeah, we use it a lot because we need the numerical stability, but it's not always actually necessary; and so this team showed that for certain problems they could use mixed precision, and use a lower precision, even FP16.
A: So, 16-bit floating point for some of the calculations, and higher precision for other parts of the arithmetic; and actually there's a little bit more about this in the next slide, which is, you know, a more detailed write-up of it. But they sort of achieved two things here. One was that they used a sub-matrix method to break up a very large, sparse matrix and distribute it over, again, lots of Perlmutter GPUs; and by using mixed precision, they were able to do a bunch of the calculations in FP16, which meant that they could use Perlmutter GPUs' tensor cores, which gives, you know, an enormous amount of parallelism. And so, running on, you know, 1,100 nodes, so 4,400 GPUs, on Perlmutter, they managed to crack the exaops barrier.
A: So yeah, that's just a small selection of some of the science highlights that have come out in the last six months, and they get published reasonably frequently, every few weeks. Where you can find them is on the www.nersc.gov page, under Science; there's a subpage called Science News, and you can point your phone at this QR code and jump directly to it and see the list. This is just the most recent list as of a day or so ago, including a couple that we looked at just now; of the ones that we looked at, you know, some of them were published in Nature journals, and some of them were quite big achievements.
A: There's also, you know, a lot of highlight-worthy science that's in progress or coming up as well, so some of these articles are talking about the work that's being done: for instance, the superfacility project, which, you know, allows compute facilities such as NERSC to integrate really tightly with experimental facilities such as particle accelerators, and provide near-real-time processing to, you know, support experimental work being done with computational analysis.
A
F
A
So I think it's one of the options, but it's not the primary one that we use to find science highlights. And actually that's a good segue to jump onto the next slide, oops. Mostly the way that we find science highlights is by talking with users, and, you know, sometimes by hearing rumors; basically, hearing news from NERSC users about work that they've been doing. And we are very interested in, you know, hearing more of these, so we've got a form, so yeah.
A
We would really like to encourage you to submit work that you've done as a candidate for a science highlight. Yeah, it's a good opportunity to showcase your work. It's also valuable to NERSC to, you know, be able to showcase our users' work and show that, you know, NERSC users are making significant scientific advancements using these resources.
A
F
Yeah, so that's kind of it. Maybe my second question: I'm just looking through this initial form, and I wonder, for me personally, because it's a NERSC highlight, I might be more interested in what language they used to write their model, or what libraries they used for that model. Oh.
C
F
What kind of workflow they use, any particular special queue, or how many nodes, or did they do any parameter search for the, you know, optimum configuration of the number of nodes and number of parallel processes, and those things, particularly for those big computation jobs. And if this form has optional sections like that, that's actually maybe more interesting for NERSC users.
F
You know, it's also some kind of relevancy, or more connections, because it's interesting for me as scientific curiosity, just reading other fields in, you know, non-domain-scientist language, but it's still even more interesting to me. For example, yeah, I saw some of, you know, those really nice works in machine learning that moved to the GPUs, and I'm just curious how they used Python, or if it's Python, which library did they use. Those kinds of details also help me in my work as well.
F
If something they already used scales in a really big way, then maybe I could follow some example. So, just, you know, yeah, in the sense that those kinds of items might be interesting to consider including in the submission form.
F
A
That's a really interesting idea, actually: as well as the science highlight, you know, that presents the outcome, yeah, some details that other NERSC users can use to…
C
There is a part of this submission form where people can optionally provide us information about what their workflow was, and some more details on the NERSC side of things. So, you know, you yourself, and, you know, anyone who's here.
C
If you share this with your students and collaborators, maybe encourage them to fill that part of the form out if they're submitting something, because then, you know, if their submission is able to be highlighted on our website or whatever, we could maybe, with their permission, share how they did it in another format, maybe via Slack or something else, another platform. So yeah, that might be really helpful to other users, to be able to understand how they achieved the thing that they did.
C
Yes, that's right, Koichi, as you found, there's a part in there that's not required, if people don't want to share or don't feel it's necessary for whatever reason. But if you do use this form, or if you ask your students or collaborators to use this form, maybe encourage them to at least give some information in that part as well.
C
I will encourage people. We can also always reach back out. I mean, once we have a contact with that person, it opens things up for a dialogue. So, you know, most papers have a corresponding author in case someone is interested in what they've done in more detail. So, you know, that's also just helpful.
A
So we had, and possibly still have, I think it was connected to when we were moving to KNL in the early days of Cori, a set of case studies of, you know, projects that had migrated to KNL. In fact, we might well have them for Perlmutter as well, in our docs.nersc.gov. I'm not entirely up to date with it, but yeah, that could be sort of a, what do you call it, a corresponding…
A
C
A
So, and I saw that, Libby, you pasted the direct link into the chat here. The…
C
The link is in the chat, and I'm also going to put it into the general channel for the NERSC User Group Slack, and maybe we can just have it float around every once in a while, in case people lose track of this link.
A
Yeah, that sounds like a great idea. So we're actually getting quite close to the top of the hour, so before winding up we should take a brief look at what's coming up. The next couple of topics we have lined up: in April, NERSC's Johannes is going to talk a little bit about using Julia at NERSC.
A
So in case you haven't encountered it yet, Julia is a fairly new language, and its syntax, I think, is actually a little bit inspired by Fortran, amongst other things. But it's a dynamic language that compiles down to native code, so it can produce very fast code, and it uses multiple dispatch. You know, it's a really nice language, actually, and we have some support for it here. And then in May, tentatively…
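Multiple dispatch, mentioned above, means the runtime picks which method body to run based on the types of the arguments. As a rough illustration only (this is Python, not Julia: the standard library's `singledispatch` is a simpler cousin keyed on the first argument's type alone):

```python
from functools import singledispatch

# Base implementation: used when no more specific registration matches.
@singledispatch
def describe(x):
    return "something else"

# Registered overloads, selected by the runtime type of the argument.
@describe.register
def _(x: int):
    return "an integer"

@describe.register
def _(x: list):
    return "a list"

print(describe(3))       # -> an integer
print(describe([1, 2]))  # -> a list
print(describe("hi"))    # -> something else
```

Julia generalizes this idea to dispatch on the types of all arguments at once, which is a large part of what makes generic numerical code in it both composable and fast.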
A
Rollin will give us a bit of a walkthrough of NERSC's JupyterHub. So you may already use jupyter.nersc.gov; it's a really convenient and powerful platform for accessing NERSC. In fact, it can provide, you know, pretty much complete access; you don't actually need to SSH
A
into NERSC systems. Depending on, you know, your workflow, you can do, yeah, a bunch of stuff through JupyterHub, and it includes a terminal capability. So yeah, we'll have a bit of a walkthrough there. And as always, we are interested in more topic ideas, and especially in NERSC users presenting some of their work. So if you have a topic idea, or have some work to present, we'd love to hear from you, and there's a form here, and a QR code for it, to submit through.
A
That kind of ties up today. We'll stop recording.