From YouTube: NUG monthly meeting June 2021
Description
Our topic of the day was GPU Programming Models and Interoperability. GPU experts from NERSC's Advanced Technologies and Application Performance groups joined for a discussion about GPU programming models and interoperability, plus our regular sections, including some handy tips about how to find your screen session on Cori.
A: Thank you, and just a heads up that the meeting is being recorded; we'll post a link to the recording on the meeting website shortly after the meeting. If you go to the meeting website, the slides are available now, if you wanted to click through.
A: So let's just paste the address to that in the chat, in case you're not already on it. It's a good place to continue the conversation; in fact, it's probably a better place for typed chat than the Zoom chat, because the Slack chat will remain visible after the meeting and you can continue the conversation afterwards. There's a #webinars channel in Slack, which is a good place for discussion.
A: So our agenda will follow our typical agenda: we'll start out with a session for the Win of the Month and Today I Learned, we'll have a few announcements and calls for participation, and also an opportunity for people here to announce other things that you know about that might be interesting to NERSC users. Then we'll spend a while on our topic of the day, which today is GPU programming models and interoperability, and we'll finish up with a little bit of a heads up.
A: A component of the meeting is to show off an achievement or shout out somebody else's achievement. It can be something small, it can be something big: you may have had a paper accepted, or you might have solved a bug that had been giving you grief for a little while, or that had something particularly interesting or challenging about it, especially if it's something that can be a good tip for other NERSC users.
A: You know, maybe you achieved a major scientific result, something that would be a good candidate for a science highlight, or a High Impact Scientific Achievement award, or an Innovative Use of High Performance Computing award, if you've used NERSC facilities in a way that is a bit different from the traditional usage of HPC and exposes new capabilities that we're very interested in, or you are innovatively making use of the resources available.
A: Just for my sanity, the audio was working, right? Somebody give a sign to indicate.
A: So I guess, on our side, the last month's exciting news was the Perlmutter dedication, which is an important step towards Perlmutter becoming available for users. We're still in the process of getting the environment ready for early access users to come on, but that should happen hopefully soon.
I think Paul sent an email around just the other day gathering applications. All right, maybe we'll jump across to the other side of the coin, which is the Today I Learned. This is an opportunity to share experiences with something new or interesting or surprising.
A: That is, something that might be beneficial for other users to hear about as well, and it can be something that didn't work. You know, we do research, often cutting-edge stuff, and that always carries a risk: we try something and it turns out that it doesn't work. It's an important element of the science, discovering what doesn't work as well as what does, and learning from that. As well as that, it can be just something interesting that you came across.
A: Yes, there is what you could call a less publicized address for each login node, although in most cases, if you need to get to a specific login node, probably the easiest way to do it is to log in as normal and then, from the login node, you can just ssh to cori04, for instance. That can be useful if you're, for instance, running screen in the background. William, was there a particular use case that you had where the 224 address...?
A: Yeah, so for that you don't actually need to remember the different address, so long as you can remember which login node you are running on.
A: So, if you're like me and tend to forget that sort of thing, putting a little README note or a screen.txt in your home directory saying which node you're running on can be a helpful thing as well, because your home directory is shared across all of the nodes.
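That tip takes only a couple of lines; here is a minimal sketch (the ~/screen.txt name follows the example above, and cori04 is only an illustrative node name):

```shell
# Record which login node the screen session lives on; home is shared,
# so this note is visible from every login node.
echo "screen session started on $(hostname) at $(date)" > "$HOME/screen.txt"

# Later, from any login node: read the note, hop across, and reattach.
cat "$HOME/screen.txt"
# ssh cori04    # then: screen -r
```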
A: We've got a comment that my audio might be breaking up a little. That is possible, unfortunately; the limitations of telework.
A: So something that I discovered in the last month was somewhere between a Today I Learned and a win, which was using Shifter to do things that are easy in Ubuntu but not necessarily easy on a system like Cori. In this particular example there was a question about running LaTeX, and typically that's something that you would do on a home system, but there are certain use cases where it's handy to be able to run it on Cori.
A: But LaTeX has a lot of dependencies and it's evidently a pretty complex thing to install. It turns out that there's a Docker image for LaTeX, for TeX Live, and with a simple Shifter image pull of that Docker image, and then shifter --image, you can run it.
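Spelled out, the workflow is roughly the following. The image name and tag here are an assumption (check Docker Hub for the TeX Live image you actually want), and paper.tex is a stand-in file name:

```shell
# Pull the Docker image into Shifter's image gateway, then run a
# containerized pdflatex against a local .tex file on Cori.
shifterimg pull texlive/texlive:latest
shifter --image=texlive/texlive:latest pdflatex paper.tex
```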
A: Oh, a little bit more about using screen: there are some tips from Robert and William about how to find your screen session.
A: And that's a fairly neat script, William! That might be a good one to actually post as a clip in the NERSC Slack, because it's one that people can put in their profile as a quick way to find their screen session. I'll try to capture that by the end of the meeting.
A: All right, announcements and calls for participation. We have a few, and the details of these can be found in the weekly email from this week; some of them, I think, may also have had individual emails sent around. One kind of important one coming up, though still a few weeks away: there will be a power outage at NERSC on the weekend of July 9 through 12. It's essentially a triennial thing: every few years we need to do some maintenance on the power system.
A: That system serves LBL, and this particular maintenance, in July this year, will impact the building that hosts NERSC and holds things like Cori and a lot of NERSC systems. We're anticipating that our backup power and cooling should be able to keep most of the auxiliary systems running, but Cori will be down during that weekend, and we'll piggyback Cori's regular maintenance on top of that, so the maintenance for July will be shifted to happen in tandem with that power outage. You can check the weekly email for details.
A: So that's just a heads up to be aware of. Another important heads up is that in the next few months, and I think August is the target, we need to update Cori's operating system. It's a medium-scale update: we anticipate that executables that are statically linked will need to be re-linked against newer versions of the static libraries; stuff that's dynamically linked should just work. So be preparing for that as well.
A
Some
calls
for
participation,
there's
a
parallel
applications
workshop
on
alternatives
to
mpi.
That
will
happen
at
sc21
and
also
supercheck,
which
is
a
checkpoint,
restart
workshop.
It
will
also
happen
at
sc21
and
in
the
weekly
email.
There's
some
links
to
more
information
about
that.
A: Yes, that is a good tip.
A: And I think that email mentioned that, so if you're already part of a NESAP project, that's kind of already worked out. There's an email, and maybe you can correct me if I'm wrong, Kevin, requesting applications for early access to Perlmutter outside of, you know, NESAP and, I think, maybe ECP.
B: If you think your code is close to ready: NESAP, ECP and Superfacility projects are already on the list, they're already checked off. So if you're one of those groups, you don't have to fill that out. If you're any other group, fill it out.
A: Yeah, thanks, Kevin. So yeah, look out for that. We've got a few training events coming up. Oh, actually...
C: Were there any other announcements that somebody wanted to make? I think I saw somebody... Hi Steve, it's Rob Ryan. I just wanted to ask you, regarding the Parallel Applications Workshop on Alternatives to MPI+X: do people ever discuss Coarray Fortran at these workshops? It's an alternative to MPI+X. I'm just curious, because I tried it for the first time and found it quite useful.
A: I don't think I've actually been to the PAW-ATM workshop in the past, so I haven't scanned through the list, but that does sound like it's within scope, because it's parallel. Coarray Fortran, I think, is part of the Fortran standard now; it's reasonably well supported, or at least somewhat supported, so I would anticipate that the answer is yes.
B: It's definitely possible. It really depends on what the topics are that year: if there's a heavy theme, say, on GPUs, then they might not talk much about Fortran, so you might want to look in a little bit more detail at what that workshop is doing this year and what their plan is. It's definitely in the purview, but it's hit or miss as to whether coarrays come up.

Okay, all right, thanks.
E: So this is Sayan from PNNL. I usually attend this workshop, and there is always discussion on Coarray Fortran, because Damian Rouson usually attends it, and I remember, a couple of years back, there was a very lively discussion on Coarray Fortran. So yeah, there is always some discussion on CAF, and there are CAF people who usually attend, I think. And as you know, Damian Rouson has, I guess, recently joined LBL, so I think there would be participation.
A: Interesting; it's quite the competition for attention there. So, we have a few training events coming up as well. Next week, on June 22, we'll have a session on Lmod for Perlmutter. You may have used Lmod before on a different system; it's kind of the follow-on from the TCL-based environment modules that we use on Cori, and on Perlmutter we're going to shift to using Lmod, but for the most part it's the same.
A: It's the same user experience: you type module load, module avail, module unload; but there are a few little differences in how it works that are useful to know about, and also a few extra capabilities that are quite helpful. So we'll have a training session on that next week.
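As a rough sketch of that point (module names below are placeholders; module spider and ml are examples of the Lmod-specific extras being alluded to):

```shell
# Day-to-day usage is unchanged from TCL environment modules:
module avail                # list currently visible modules
module load cudatoolkit     # load a module
module unload cudatoolkit   # unload it again

# A couple of Lmod extras worth knowing about:
module spider cuda          # search everything, including modules hidden by the hierarchy
ml                          # Lmod's short alias for 'module list'
```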
Also, in a couple of weeks' time, we have a training session scheduled for CI at NERSC.
A: So if you're using something like GitLab CI, or Jenkins, or other things like that, and you would like to use that type of workflow at NERSC...
A: ...we'll have some tips about that coming up. There's an ECP IDEAS webinar also coming up in a couple of weeks' time; those can be really interesting for picking up tips and tricks about scientific software development. There's also a CUDA multi-threading with streams workshop, presented by NVIDIA, coming up really soon.
E: So this is Sayan from PNNL again. I just wanted to point out that the SIAM Annual Meeting is next month and the early registration deadline is July 5th, and there are some minisymposia on parallel programming models; specifically, one that I'm arranging with a colleague at PNNL is on PGAS APIs and languages.
E: So, folks who are planning to attend, please try to find the Practical and Efficient Partitioned Global Address Space Support for Data-Intensive Applications minisymposium. Thank you.
A: That's good. Do you think you could paste a link into either the chat or the Slack chat, Sayan, to where people can find out more and register?
A: All right, so our topic of the day is GPU programming models and interoperability. This is a fairly broad topic, so we have a few tips and an overview here, and we have a few people online from NERSC.
A: Let's see: we should have, I think, Ronnie, who's part of our Application Performance group, and I think Chris is also on, who's part of our Advanced Technologies group, and they have a good bit of experience with some of the different GPU models. I think there are a couple of other NERSC people as well, and we're very interested in the input and experiences from the...
A: ...other NERSC users in the room here. So we'll go through a quick overview and then we'll go into a discussion section. Right up front: between different programming environments, interoperability tends to be sort of limited and difficult, and this is not just for GPUs, this is for everything. If you're calling a C library, or working in C, things are relatively easy: C has a pretty universal ABI, and so your C libraries tend to be usable...
A: ...from any compiler, in any programming environment. Fortran, for instance, uses its own name-mangling rules, and there's a standard, bind(c), to map, for a given Fortran implementation, the correct binding to get between C routines and Fortran routines. And C++ talks to C fairly natively, so calling stuff that's in C is reasonably easy. Beyond that, though, going across programming environments tends not to be straightforward.
A: Both C++ and Fortran use APIs and name-mangling rules that are implementation-specific, so C++ code compiled with one compiler or programming environment probably can't talk to C++ code compiled with a different one. So, as a general rule for these higher-level languages, sticking to one programming environment is generally best.
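To make the C-as-common-ground point concrete, here is a small sketch; the saxpy_c routine and its interfaces are purely illustrative, not anything from a NERSC library:

```c
#include <stddef.h>

/* A plain C routine: its symbol name is not mangled, so it is callable
 * from any compiler or programming environment.
 *
 * C++ callers declare it as:
 *     extern "C" void saxpy_c(size_t n, float a, const float *x, float *y);
 *
 * Fortran callers bind to it with (illustrative):
 *     interface
 *       subroutine saxpy_c(n, a, x, y) bind(c, name="saxpy_c")
 *         use iso_c_binding
 *         integer(c_size_t), value :: n
 *         real(c_float), value :: a
 *         real(c_float) :: x(*), y(*)
 *       end subroutine
 *     end interface
 */
void saxpy_c(size_t n, float a, const float *x, float *y)
{
    for (size_t i = 0; i < n; ++i)
        y[i] += a * x[i];
}
```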
A: A fair bit of work has gone in, both from NVIDIA's side as well as Cray's side, HPE, to make sure that this programming environment works pretty smoothly in general, and it has actually got the best support for interoperability of the different programming environments as well.
A: There are a couple of caveats to that, admittedly. It sounds like I'm having audio drops; what I might do is stop my video in the hope that it frees up a little bit more bandwidth.
A: So your code, the CPU code as well, will also need to be able to be built with the NVIDIA stack, and it is possible that other compiler suites might give better performance for the CPU code. But we sort of anticipate, particularly in Phase 1 of Perlmutter, which is very GPU-oriented, that the GPU code will be doing the bulk of the heavy lifting in terms of the computational work. So for most cases we expect that the NVIDIA programming environment will be the one to use.
A: Mixing models tends not to be easy, so in general, if you can avoid mixing GPU programming models, things will go more smoothly. Using higher-level models can help, because it's easier, for instance, to use Kokkos to call a CUDA library than to mix OpenMP and OpenACC together.
A: So this is a fairly complex illustration that gives a kind of overview of the NVIDIA programming environment case.
A: When you module load that programming environment, you kind of get everything built in: you've got CUDA support; you've got standard Fortran, which has things like do concurrent and Coarray Fortran; and you've also got OpenACC and OpenMP support, which extends to GPU offload as well as the more traditional OpenMP multi-threading.
A: So OpenMP and OpenACC include a degree of support for interoperability.
A: Cray MPICH also has a degree of GPU support, which really comes down to being able to send things from a buffer that's in GPU memory, rather than in regular CPU RAM. And so, from these different models in the NVIDIA programming environment...
A: ...you get access to all of the normal things that you're familiar with, so Cray MPICH, libraries like FFTW and HDF5, but also the CUDA math libraries, cuBLAS and cuFFT. Just a tip, though: there's the NVBLAS math library that gives you things like cuBLAS, but with the more typical BLAS interface; it's a little bit easier to use.
A: As an example, this is a little snippet of a mini-app called GAMESS, which uses OpenMP target offloading but can call CUDA libs underneath: it uses use_device_ptr to obtain pointers to the things that are in the GPU's memory, and then makes the calls with those.
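The slide's snippet isn't reproduced in this transcript, but the shape of the technique can be sketched. This is a hedged illustration, not GAMESS's actual code: scale_vector is a made-up name, and the cuBLAS call is left as a comment so the sketch compiles, and simply falls back to the host, without the CUDA toolkit:

```c
/* Sketch: data is mapped and kept on the device by OpenMP; inside a
 * use_device_ptr region, 'v' holds the *device* address, which is exactly
 * what a CUDA library expects to receive. */
void scale_vector(double *v, int n, double alpha)
{
    #pragma omp target data map(tofrom: v[0:n])
    {
        #pragma omp target data use_device_ptr(v)
        {
            /* Here 'v' is valid to hand to a CUDA library, e.g.
             *   cublasDscal(handle, n, &alpha, v, 1);
             * (shown as a comment so this compiles without CUDA). */
        }

        /* OpenMP kernels can keep using the same mapped data: */
        #pragma omp target teams distribute parallel for
        for (int i = 0; i < n; ++i)
            v[i] *= alpha;
    }
}
```

Without an offload-capable compiler the pragmas are ignored or target the host, so the same source still runs correctly on a CPU-only build.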
A: Using other programming environments is likely to be a little bit more complicated. The GNU compiler suite has some GPU support; it's a little patchy and the performance is generally not as good. The Cray programming environment has quite good CPU performance, generally.
A: An interesting, relatively new development is that Cray and HPE are moving towards using an LLVM/Clang-based back end for at least the C and C++ compilers. They found that the Clang- or LLVM-based Fortran compilers aren't quite there yet for their purposes.
A: In those environments CUDA is not built in, so there'll be a separate module load cudatoolkit that you'll need to do to get access to the CUDA libraries. So the picture looks a little bit similar, but a little bit different: you've got an extra step here for GNU to get CUDA, but you've still got access to kind of the same things, and again you've still got your higher-level libraries like Kokkos and RAJA, which can compile down to these things.
A: Cray is similar again, with the slight extra complexity that the Fortran and C/C++ sides are sort of split out, and I think at the moment Cray is only officially supporting OpenACC from Fortran.
A: I don't think they've announced any commitment to supporting OpenACC for the C/C++ stack, so if your code uses OpenACC in C or C++, this is probably not the programming environment that you need.
A
All
right
so
as
a
summary
interoperability
is
possible,
but
we
don't
recommend
relying
on
it
as
as
much
as
you
can.
Keeping
within
one
programming
model
is
is
generally
a
good
tip
and
the
the
higher
level
frameworks
are
worth
looking
into
yeah,
particularly
if
you
know
if
your
code
is
c
plus
plus
so
you're
doing.
You
know
new
development
take
a
look
at
things
like
cocos
and
raja.
A: They can give you more flexibility, and they aim at performance portability as well. Both OpenMP and OpenACC have support for interoperability; OpenMP 5.x especially, I think, has explicit support for interoperability, but how well that's supported by the compilers is still kind of in progress.
A: There are a few tips here; there have been some training sessions that NERSC has hosted over the last year or so that go into more detail about these, and if you grab the slides from the meeting website you can just click through these and find the detail a little bit more easily.
A: I can see some questions about Perlmutter specifically; okay, we might bring those up in the Q&A. But just before going to the Q&A: Chris or Ronnie, do you want to say anything in terms of general comments?
H: I think you gave a good overview, Steve. I think our general advice is: do not unnecessarily mix these programming models; try to stick with one programming model. That's going to give you the maximum portability to different DOE systems.
H: Here at NERSC we're big advocates for things like OpenMP target offload, which will be portable across all DOE machines. And if you have particular interoperability use cases that are important to you, please let us know, and then we can actually speak with the vendors directly to ensure that they are supported, because there are, in a way, a million different interoperability use cases you could imagine, so it's very helpful for us to be able to prioritize things.
I: Yeah, just one other thing, I guess: the most performant case is what you'd see if you're sticking to one programming model and one programming environment, which is, I think, what Chris was leading to as well. So unless you need to, don't mix things. We might end up having different CUDA versions available that are more recent than what's in, let's say, the NVIDIA programming environment that Steve showed before.
I: So if you want to do that, sure, go ahead, and it would still work; there won't be any problems there, but there can be some performance hits. But if you feel like there is a use case, please let us know, and that's how we'll be able to alleviate some of that. I can take one of the questions that I think came up as well and might be related: whether there is CUDA-aware MPI.
I: Yes, there is CUDA-aware MPI: we'll have Cray MPICH and UCX on the system that you can use for MPI. There won't be, I believe, any Open MPI on the system; Steve, correct me if I'm wrong.
A: So the native MPI for the system will be Cray MPICH, right, but there's nothing fundamentally stopping somebody from building Open MPI, and it will be able to use the Cray network via the OFI, OpenFabrics Interfaces, layer.
I: We just won't have it built in and provided, right? Yeah, we will not have an Open MPI build, but you're more than welcome to have your own, and we're going to be able to assist you. I did not quite understand the question about where the problem is: HIP is mostly used for the GPU side of things, for AMD.
F: Yeah, I'm just thinking, you know, five or ten years down the line, will CUDA still be around, or will we have gone towards a language that's more interoperable on more hardware? So there might be something to be said for looking at HIP.
I: That's, I think, a fair point, but at the same time HIP is currently developed by AMD, so there are performance implications, even if you're using HIP to run on an NVIDIA GPU, which is what OLCF is doing, right: you can run that on Summit and you can run it on Frontier as well. But I agree that having that option would be something nice; I just don't believe it's there at this point.
A: So, if I understand correctly, HIP comes in the form of essentially a template library that can be...
D: I think it will only be important for ECP teams once they get access to Frontier, because Frontier uses HIP.
A: Yeah, in a way, I guess, HIP and CUDA are both fairly low-level programming models, so there's sort of a lot to be said for using higher-level ones. If you're starting a new project and deciding whether to use HIP or CUDA, maybe it's worth considering using neither, and using perhaps OpenMP target offloading, or Kokkos, or RAJA.
H: Yes, the CUDA runtime and the NVIDIA OpenMP runtime are completely interoperable, so you could, for example, allocate data using CUDA runtime API functions and then use that data in your OpenMP target offload regions. Yeah, it works. Do you have any particular...
A: There is a caveat there: OpenMP target offload and OpenACC target offload can play together to a certain degree, and I think it's related to being able to interoperate on the, what's the word for it, the use_device_ptr data management side, but not necessarily for the compute side, the scheduling and offloading. So a degree is possible, but not complete, whereas it sounds like NVIDIA might have a little bit deeper support for that.
A: Yeah, I don't recall off the top of my head whether the caveat with the Cray compiler was for a single source file or even across source files, but it was about the data location, rather than about mixing the OpenMP and OpenACC directives that actually do the offloading and turn a kernel into device code. So I guess the summary version is: interoperability might be possible, but it's not trivial, so avoid it where you can.
A: I'm curious: have any of our users online had any experiences, either on the Cori GPU nodes or on other systems, with codes where they need to mix different programming models?
A: There's a comment coming in from Robert about mixing MPI and CUDA together; that should be somewhat more straightforward, in that the programming models are a little bit more orthogonal.
A: So yeah, it sounds like the most common use case is CUDA plus a different model.
G: This is Eric's turn. So we use Kokkos, and on a machine that has multiple GPUs we have to use MPI to, you know, reduce the results from the different GPUs, and that gets tricky: some machines have MPI configured to make it easy, and some machines...
G: So the machine where it eventually worked was actually a PowerPC node with four GPUs, reminiscent of one of the OLCF machines, and it was about configuring MPI. I wasn't responsible for doing this, but the people working on that machine had to do something to make it eventually work. And then on a different machine, which was an Intel machine with multiple GPUs...
G: ...the people there, different people, were not as responsive, and they didn't make it work in the end. So I guess I don't have a good tip for NERSC, except that I'm hoping you guys are better than the people we had.
A: It should be possible to do. Yeah, I don't remember if we have any kind of ready-made tests for that, but it's definitely one to keep in mind. So, we're getting close to the top of the hour, and it sounds like the rate of questions and comments is slowing down. Does anybody have any final questions, comments or tips before we move on to our next section?
A: If not, then thanks again, Chris, Ronnie and everybody else who participated; there was a lot of interesting content and learning there.
A: So, flipping back through to our last couple of sections: coming up, we're always looking for topic requests and topic suggestions, or, even better, if you'd like to show off your work, this is a good forum for it.
You
know
10
or
15
minute
slot
time
for
some
q
a
time
for
a
quick
overview
of
what
you're
using
nurse
for
and
tips
and
tricks
that
you've
learned
or
you
know
gotchas
that
might
help
or
you
know,
simply
interest
other
nurse
users.
A: It might also be an opportunity to make contact with people in quite different science areas who, it turns out, are doing similar things computationally to what you're doing.
A: If you would like to participate in this way, to give a bit of a lightning talk and tell us about what you're doing, we're very interested to hear it. You can post something in #webinars, or send me a DM or an email if you like, and we can set something up; choose a meeting that's convenient for you.
A: So our final section is a quick look over last month's numbers. For May we had generally pretty high availability. We did have a couple of unscheduled outages, due in one case to essentially a hardware issue with the power supply, and another brief outage due to some Lustre issues, where the user experience was scratch hanging. We had a normal scheduled maintenance; other than that, things ran reasonably smoothly.
A: We tally the really large-scale usage, and we've seen that quite high, actually, for quite a while now on Cori; it's been sitting up over 30 for, I think, at least the last year. Our ticket incoming and outgoing rates seem to be sitting reasonably steady; tickets are going out at about the same rate that they're coming in, and we have a current backlog of somewhat over 400 tickets.
A: So that's our current state, and that's all that we have for today's meeting. Thank you all for joining; continue the chat in the #webinars channel. I'll capture the chat on Zoom and post a few of the links that people posted there, as I think there are some good tips, and the recording should arrive, with a link to it from the meeting website, fairly soon.