From YouTube: NUG meeting, April 2023 — Julia
Description: The NUG monthly meeting for April 2023. We talked about Julia at NERSC.
A: I think we could probably get started. Yeah, it didn't go away yet — okay, well, hi everyone, welcome to the NUG meeting this month. Thanks for being flexible with the changes. Go ahead, Charles.
B: Okay, well, first off we'd like to welcome all of you for attending. Please feel free to participate, speak up, ask questions, and contribute as you see fit. If you have not joined the NERSC Users Slack, then please be sure to join it as well, to stay up to date with any news and meetings.

To start off today, we'll do a little bit of introductions, introducing myself and Libby, who will basically be facilitating and overseeing the meetings moving forward, and I can provide a brief introduction about myself and Libby too. I am actually a new consultant at NERSC — I joined last month.
B: As part of the User Engagement Group, I am a science engagement engineer and HPC consultant, working with the community to create a community of purpose and foster more interaction and collaboration. That will be the goal that both Libby and I will be working on. A little bit of background about me: my background is in performance modeling and optimization of scientific applications and energy-aware computing. My PhD is from Texas A&M University, and my background in industry includes development work with IBM for their parallel platforms, as well as Oracle, and I'm located here in Atlanta.

In the past three years, I've been working in the startup scene as a consultant, working with more than a dozen different startups as a technical advisor and consultant. Before joining NERSC I was not new to the DOE labs, as I previously also worked at Oak Ridge and Livermore as well, but I'm glad to be working with everyone at NERSC and all of our users, and I'm looking forward to working with everyone moving forward.
A: I'll go ahead and introduce myself. Yeah, so I think some of you may have seen me before. My name is Libby. I was, up until recently, a postdoc at NERSC, and I've now transitioned into a staff position, also as a science engagement engineer, so Charles and I are going to be working very closely, which is very awesome — I'm glad to have him here with me.
A: My background is actually in physics. I did my undergraduate degree in physics at Cornell, I did my PhD in physics at the University of Chicago, and I also did some research at SLAC National Lab, which is where I became a NERSC user. Up until that point, I didn't know what NERSC was and I'd never done any type of high-performance computing, and then once I became a NERSC user, I just became really interested in scientific, computational work, and that's how I ended up joining NERSC as a postdoc.

So I have a lot of familiarity with the resources, and I'm hoping to bridge any gaps and also just make the process of using NERSC as scientists a little bit easier. So yeah, that's a little bit about me. I'm also remote — I don't live in California, I live in Oregon, in Corvallis, Oregon. If any of you are familiar, it's where Oregon State University is. And yeah, I think that's kind of it, but I'm looking forward to working with all of you and with our users.
B: I'm so happy to be working with Libby. Already we've come up with a couple of great ideas, so we're looking forward to bringing those to light moving forward. And also, although technically I am new to NERSC as an employee, I'm not new to NERSC as a user, because it is the first supercomputing center that I had access to, as a graduate student at Texas A&M.

So it feels good to be working with our community of users, just like I had guidance when I first got into HPC and parallel performance modeling as well. We'll continue on today with just an overview. Here you can see our plan for today, with the win of the month and what you learned, as well as a few announcements about the Cori retirement and calls for participation, and then we have our topic of the day, which will be Julia at NERSC as well.
B: Okay, and so for this, what we want to do is have anyone who wants to share something that they have achieved over this month, related to anything in your research or using the platforms. It could be that you had a paper accepted, or you're working on a paper, or you're presenting a poster or attending a conference, or you had a milestone breakthrough in solving a bug, or any other achievement that you would like to share. So, do we have anyone who would be open to sharing?
D: Hi, this is Robert Ryan — I could mention something. Hello from Santa Fe, New Mexico. I'm an accelerator physicist, and last week I was on Cori and managed to simulate, from first principles, a process called self-amplified spontaneous emission. That's something that happens when an electron beam in a free-electron laser passes through an undulator and a coherent signal grows out of noise. Self-amplified spontaneous emission has been simulated for many years, but I don't believe it had been done from first principles before.
D: And I have put some things online in the project workspace, if anyone is interested, and I've also been posting about this on LinkedIn. So if anybody looks me up on LinkedIn, you can see what this is about.
B: Okay, awesome — oh, that's great, thank you so much for sharing. And you know, my background is in performance optimization, so I would basically work with a researcher or physicist like you, who does accelerator work, and help optimize your algorithm for, you know, better parallelization or even better execution on a GPU. So if anyone's having issues like that, I would love to get together and have a discussion, so we could see what might be going on — that's basically what my PhD work was.

It was optimization of scientific applications, from weather simulations to gyrokinetic codes, improving them for predicting performance, reducing execution time, and energy-aware optimization as well. So there are lots of areas where something like what you just made a breakthrough on could be applied for improvement. So, awesome — that's definitely
D: A win! Oh, and in fact, this code is so far still CPU-only, but the next natural step is to move it to the GPU. So yeah, I'm happy to hear you're there and able to assist — I mean, help make the transition to GPUs, definitely.
E: Oh yeah, this is Koichi, from Vienna now. Hi — I just got one paper accepted. This is a paper about climate models, and it's not really a scientific paper; rather, it discusses the technical aspects of this particular series of simulations we produced. Basically, we provide enough information to reproduce our simulations, as well as enough details for users of the data to be aware of. For example, you know, we compare the model simulations to observations — we know where, out of the earth's climate, this model has weaknesses, what we call biases, that one has to be careful about.

But probably the most interesting aspect of this paper is that we describe some challenges of running this particular model code on recent HPC systems. This was on Knights Landing at that time, so it's not about GPUs — it's more about vectorization and the specific memory usage on those heterogeneous CPUs, you know, connected to different memory units. And we found that a very common kind of model — it's called CESM — really struggled on Knights Landing.
E: There have been efforts that took some sub-models — for example, a single column — and optimized that small kernel, but that has not propagated into the main branch of CESM. So we're trying to draw attention to those issues. In particular, code written by domain scientists is often very poorly written in terms of, you know, good ordering, vectorization, and memory management.
E: And then also, in the supplement, we describe another, artificial, challenge: the production-queue waiting time. With help from Steve Leak — actually, he's a co-author of this paper — we plotted the 2021 average queue wait time on Cori Haswell and Cori KNL as a function of requested hours and requested number of nodes, and discussed what kind of workflow would make it easier to run climate-scale simulations, particularly within a three-year funding cycle, which is getting more difficult if the code runs slower.
E: I put a link to the preprint in the chat. Actually, I presented this story about a year ago in this same meeting, and now it's finally published, so it might still be interesting to some of you. But the next challenge is obviously how to use GPUs for those kinds of model codes, and that will be the next thing to tackle, anyway. That's all — thank you very much.
B: Awesome, awesome — well, congratulations! So, you were able to incorporate alternative algorithms that work better on the system than the previous version that you had?
E: Not as a group — I am not a core developer — but yeah, we've been aware that one of the issues is that CESM has very poor memory scaling: each task allocates unnecessary global arrays, so as you go to higher resolution, memory is used less and less efficiently.
E: Another subtlety is that the model already has a GPU-enabled version of a slightly older code base, so the next step for me is to really get that GPU-enabled version working for our model and then see how fast it can run.
B: Awesome, awesome — well, thank you for sharing. To make sure that we stay on track with time, we're going to go on to our next area, but wow, those are already two great wins. If you have any others, then please feel free to go ahead and share them in the chat too.
B: Another area is: did you learn anything today, or this month? In our wins of the month, people were actually able to share some of what they learned from modifying and updating their codes. Does anyone else have anything? It could be anything small, a milestone-based win that you achieved or something that you learned — or is there anything within the NERSC documentation that helped you, or do you have any recommendations for improvements as well?
F: Yeah, I actually have a question. We used to have a code to bind our CPU processes to cores — has numactl changed in the latest operating system? I hadn't been running for a while, and when I tried using that recently, I got a complaint about having a dash between a range of CPU cores.
B: And what was the exact error that you got?
B: Okay, maybe we could communicate offline and make sure we're using the right flags for it, because that could just be the problem there. But you should still be able to do physical binding to the CPU, as well as memory binding for the allocated memory.
B: Yeah, thank you for asking. Anyone else — anything that you learned, or a tip?
C: A quick follow-on to that — yep, NUMA is still pretty important on Perlmutter, and we do have some pages talking about affinity. I'll pick up some links to them and add them to the chat afterwards. Good catch: if the numactl syntax has changed, we might need to tweak some documentation.
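As an editor's sketch of the kind of affinity check being discussed here — the flags below are standard Slurm and numactl options, but the task counts, core ranges, and the `./my_app` binary are placeholder assumptions, and the NERSC affinity docs Steve mentions are the authoritative reference:

```shell
# Show the NUMA layout of the node (numactl is the usual tool for this).
numactl --hardware

# Launch with explicit core binding and have Slurm report the binding mask,
# so you can verify where each rank actually landed.
srun --ntasks=4 --cpu-bind=cores,verbose ./my_app

# numactl-style binding to a range of cores uses a dash, e.g. cores 0-15
# on NUMA node 0:
numactl --physcpubind=0-15 --membind=0 ./my_app
```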
B: Awesome, thanks, Steve. Okay, all right — well, we'll continue on. I think next we have our announcements, then. Libby, yeah?
A: Awesome. Okay, so this is information that's all available in the weekly emails that have been going out, as well as the announcement emails, but just as a reminder to everybody: Cori is now scheduled to be retired on May 31st at noon. Please be aware that you'll still be able to log in on the login nodes, and you'll be able to access the scratch system for one week after that, until Wednesday, June 7th.
A: But it's important that you move your data over to Perlmutter or to another place where you have access to it. If you need to move from Cori scratch to Perlmutter scratch, you can use Globus, and there's information about which endpoints to use, and also how to use Globus if you haven't yet — you can find it in the docs pages.

We suggest that you use the Community File System (CFS) for any data that you retrieve kind of regularly. If you have large amounts of data that are not retrieved as often — potentially they can still be accessed pretty frequently, but it's more like long-term storage — you can use the HPSS tape archive. There's a lot of information in the weekly email, as well as in the announcement emails, about the best way to do this transfer, so I would recommend that you either reach out — if you need help, you can submit a ticket and ask for help with moving your data — or check out those documentation pages for the best way to move large amounts of data.
A: So again, if you have any questions or need help, the best way is to submit a ticket through ServiceNow. You can also attend the Cori-to-Perlmutter office hours — there are going to be several in the month of May, but the next one is on May 2nd, and you can find all of those on the NERSC public events calendar.
A: If you need help, or if you can't find those, you can also submit a ticket to get information about that. As per usual, please check out the docs pages for information about how to use Perlmutter. These Cori-to-Perlmutter office hours are also a good option if you've been relying on Cori and things aren't working as you expect on Perlmutter — please go to the office hours to get help. Great, okay. So I guess: does anyone have any questions about this before we move on?
E: Yes, just curious: how does the scratch space on Perlmutter compare with the one on Cori? Is it larger in overall size?
C: Yes — so everybody has a 20-terabyte scratch allocation on Perlmutter, and scratch is faster there; it's an all-SSD system. We don't yet have the ability to grant quota increases on Perlmutter like we do on Cori — I think that's planned eventually, but it's not in place yet. So if you have a larger-than-normal quota, you may need to back up some data to HPSS; I wanted to remind people about HPSS.
E: Okay, so I have some kind of follow-up, related questions. For the first question — I asked because Perlmutter is faster, you know, in runtime, and has faster I/O, and produces even more data more quickly, so I tend to run out of my scratch quota more quickly than on Cori. So I was thinking to increase my quota on Perlmutter. And the other question is: when we move data from Cori scratch to Perlmutter scratch, does that bring the right striping setting over to Perlmutter automatically?
A: So, to answer your first question: you can request an increase in your quota via a ticket, so you can submit a ticket to do that. I don't have the answer to your second question — I'm not familiar with that. Maybe Steve or Rebecca, if they're on, will have more.
F: Create the directory where you want the data to go — it could be your top-level scratch directory, it could be a subdirectory of it. Then immediately do `lfs setstripe -c`, the number you want, and the name of that directory, and then get into Globus and start moving the data.
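F's recipe can be sketched like this — the directory name is a placeholder, and the stripe count of 8 is an arbitrary example value you would choose based on your file sizes:

```shell
# 1. Create the destination directory on Perlmutter scratch.
mkdir -p $SCRATCH/transfer_dest

# 2. Immediately set the Lustre stripe count, before any data lands;
#    new files created inside the directory inherit this striping.
lfs setstripe -c 8 $SCRATCH/transfer_dest

# 3. Verify the setting took effect.
lfs getstripe $SCRATCH/transfer_dest

# 4. Then start the Globus transfer into that directory.
```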
C: No, I think I raised my hand to say something earlier, but I think I already said it. Oh.
A: If you need more help, please submit a ticket — that's really going to be the best way to get the help you need. Okay, awesome, all right. So, just to highlight a couple of upcoming seminars, events, and trainings: the Codee training is happening currently — you can get more information on the trainings website at nersc.gov; this is the link to it. There's day two tomorrow, so if you missed day one, you might be able to catch day two. There's a PHP update that's coming up soon.
A: So if you use PHP for any of your applications, please make sure that you update it and check that it works. If you need help with that, there's some information at these links — again, this is from the weekly email, so you can find these links in the weekly email, or you can submit a ticket if you need further help with that. There is also an ECP seminar coming up.
A: You can find out what that is, and whether it's helpful for you — please find the link in the weekly email. And just as a reminder, the weekly email is jam-packed with events, trainings, and seminars, so these are just a couple that are coming up soon, but you can find even more in that email. Then also, these are some calls for participation. There's a Student Cluster Competition info session — the competition takes place, I think, at SC, but as for the info sessions, this one probably passed already, but there's another one on Thursday.
A: So if you have questions, or you want to find out more about what the Student Cluster Competition is, please make sure to check this out — again, the link is in the weekly email. And the Fortran User Group is looking for some feedback about LLVM Flang, so they have a survey here; this link is also in the weekly email. And lastly, NERSC is looking for some feedback about the message-of-the-day website. This is a webpage on nersc.gov where we post the NERSC status.
A: It's where you can find out if Cori is up, or Perlmutter is up, and various other systems, and we're looking into making that a little bit more accessible. So please use this survey if you have thoughts on how to make the message of the day better and more useful for you — that would be really helpful.
A: No worries, no worries. Johannes is — well, I guess I don't know what your official position is — I've just known Johannes as a fixture at NERSC since I joined, as an all-around expert, but in particular as a Julia expert.
A: He's been doing a lot of work getting Julia up and ready for people to use at NERSC, and he's kind of the expert at it here. So he's going to tell us more about the language, what it's used for, and maybe, if we're lucky, do a demo. Yeah, awesome — did you want me to show your slides, or...?
H: I am going to share my screen — you should see my screen here, right? Yep, all right, brilliant. Well, thank you — I was a bit too flattered there. So yeah, I'm just going to talk informally about Julia, and Julia at NERSC in particular. I'm keeping an eye on the time, so there's no danger of running over.
H: So if you have any questions, please just ask — it's better than just putting something in the chat — and if anyone sees a question in the chat that I should be answering but am not seeing, please let me know.
H: I always like to show some introduction slides about the Julia language, because I think it's one of those languages that still needs a little bit of introducing, especially in the HPC community — languages like C, Fortran, and Python don't need much introducing. And I always like to start the story with a slide from Cray, from HPE. What they've done is taken a little survey — it's mainly intended to promote their language, Chapel, down here, but it accidentally also promotes Julia. They've plotted, they've compared, code size against execution time, and the idea is that if you're down here in this corner, it means you need very little code to produce very fast programs. So you can see we've got a variety of languages here, and maybe the general trend is, you know — languages like... I'm trying to find —
H: The other thing — and I think this is maybe even more important than the verbosity of a language — is the community, and this is the main reason why I actually advocate for Julia for HPC: the community is HPC-aware, more so than, for example, Python's. So I think it produces a high-performance, high-productivity language with a community that is able to engage with you very readily. When you go and say, well, I'm running on Frontier, or I'm running on Perlmutter, and I'm seeing weird behavior, you'll probably get, in the forums: oh yeah, I'm —
H: — also doing this, and here's what I did to make it work. Now, to switch a little bit more toward the technicalities of language design: if you start to look at some of the discussions around high-productivity, high-performance languages, you'll frequently see this sort of issue levied against Python, which is that it's great until you can't work within the restrictions of the package that you're using. And this is really a problem of front ends versus back ends.
H: It often requires a lot of work to become a back-end developer, and so if you, for example, want to create your own really unusual loss function in PyTorch — one that you can't just express with the PyTorch packages that are available — you'll find it's very hard (oops), very hard, to do while keeping performance. Another benchmark that we've worked on is just the cost of switching between a front end and a back end. So here are different function —
H: — signatures, and if you use something like pybind11, or just the Python C API, then just making this function call actually costs some time. Whereas if you use a language like Julia — because it is actually using a JIT compiler behind the scenes, your Julia code produces LLVM intermediate-representation code, and that code just uses the C ABI to make the function call — you can see you've got pretty much a native call.
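The near-zero-cost foreign call Johannes describes is visible directly in the language: calling into a C library is a Julia primitive (`ccall`), not a wrapper layer. A minimal, runnable illustration using the C standard library:

```julia
# Call libc's floor() directly via the C ABI. No glue code or binding
# generator is involved; the JIT emits an ordinary native call instruction.
c_floor(x::Float64) = ccall(:floor, Cdouble, (Cdouble,), x)

println(c_floor(2.7))    # behaves exactly like the C function

# Inspecting the generated code shows a plain call, with no interpreter
# boundary to cross:
#   @code_llvm c_floor(2.7)
```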
H: Now, you might say: well, function calls aren't that big a deal, right? I make one and that's it. But once again, it locks you into a framework where you use the high-productivity language really only to coordinate your work, and then, once you're in the C substrate, so to speak, you're kind of trapped there, because every time you jump between the two you're paying a penalty. And in case you're thinking, oh well, nobody uses Julia — I want to point out that we've been working on this paper here, which, as I like to show, has many different co-authors from many different centers around the world, and if you just poll the folks at NERSC, you can see that about half of NERSC users have the intent to actually use Julia. Well —
H: — half of the people that responded to the survey. But still, there were a lot of responses, so I think it was as representative as a survey can get. And finally, I always like to point out: I love the Julia language itself. It has some neat features — when you start, you might think, you know, what the hell is this, but then, as you get involved deeper, you realize: oh no —
H: — this is really neat stuff. And I've actually had this experience myself: when I dive into something like a Python application, you realize that it's sometimes not designed with HPC in mind. I'm not dissing Python, by the way — I realize I am, but I'm sorry, I didn't mean to. What I'm saying is that different applications have different strengths, and for HPC code, I strongly suggest that if you are looking for a high-level language, give Julia a try. All right.
H: So — I'm not seeing any chat, by the way; okay, good — some news about Julia at NERSC. We are aware that the current Julia modules on Perlmutter are a bit broken. This is because HPE are changing a lot of things behind the scenes, but also the Julia ecosystem is moving very quickly, which means that the model of providing pre-built environments isn't really working, even though the Julia language and API are stable.
H: Just the fact that you have to rebuild Julia environments all the time makes it very difficult to keep on top of it, and therefore we are moving away from having a NERSC-provided environment, and instead we'll just make it very easy for you to do the right thing when you're setting up your own environments.
H: Okay, and so, along those lines, there are basically two pull requests — one of them has already been merged; the other one is on the verge of being merged.
H: So very soon — in fact, if you use the branch in the pull request here — you can just run MPIPreferences' use-system-library function, and you say vendor equals cray, mpiexec equals srun, and it will automatically find all the right things to do, and your MPI install will work perfectly. And then also, for getting Julia itself, I've been working with some of the juliaup developers, and juliaup —
H: — the tool is now HPC-friendly, which allows you to very easily pick the version of Julia that you would like and have it work well in a new environment. Okay, I'm just double-checking the messages.
H: Okay — the comments about JAX versus Julia, maybe a little later, but I do have thoughts.
H: I don't know that capability off-hand, but actually I think I have a link to it at the end. There's something called KernelAbstractions.jl, and that's a — yes, oh yes, yes, yes — oh yeah, so you can write portable GPU code. In fact, there's a Gordon Bell submission this year that does exactly that: a single Julia code that runs on AMD and NVIDIA GPUs. And yes, Julia —
H: The way that Julia is designed — I might make another presentation later about differentiable programming; look for Zygote.jl — the Julia language itself is built in such a way that differentiable programming is very, very easy, and in fact I have examples of this in some of the links I'm showing. Oops — I'm just going to go back. And then, finally, I just wanted to point out, in terms of news —
H: This is still a work in progress, but right now, another way that a brand-new supercomputer likes to mess up your plans is that it will expose a completely new way of talking to the network — and this is true for Slingshot as well. So right now there's what I'd consider an awful hack, where you need to tell Julia which adapter — which NIC — to bind to, and this won't be necessary for much longer.
H: We are working on it — and if you're interested in actually helping us with that, that would be great — our approach at the moment is to use hwloc to bind to the right NIC for the high-speed network. Oh, and we already mentioned some nice capabilities — I want to point out that this year, for the ECP Community Days, there was a tutorial and a BoF on Julia.
H: Everything from that is available on the following GitHub page, including some slides on things like automatic differentiation and automatic GPU offloading. I didn't highlight an automatic GPU offloading example; instead, I wanted to highlight something that's a little bit less well known — an example for the folks that enjoy the way Fortran does arrays: Julia does it in the same way.
H: So one of the presentations at that tutorial was about integrating machine learning into a Fortran application, and what they actually found is —
H: — it was easier for them to port their Fortran code to Julia and then use the Flux.jl Julia package to do machine learning in situ in the application. I really like the slide, because it shows, you know: here's your Fortran loop structure — it's a triple-nested loop, as we frequently tend to write, and it's pretty much a finite-volume conservation-law stencil — and here we have the same thing in Julia, and you can see it's pretty much the same thing.
H: You just have to change your array index parentheses to square brackets, and you don't have to worry about the line breaks as much — but Julia arrays are Fortran-ordered and they start at one. So in that sense, it's very much the same deal.
H: All right, just making sure — yeah. So I think I can start the first demo. When people start working in Julia, they frequently overlook the package-management infrastructure that Julia has, and I wanted to just demonstrate that — and what better place to do so than Jupyter.
H: So I'm just going to stop the screen share here and find it in my many tabs. You should see my Jupyter screen now — is that right?
H: Brilliant. So, if you were on Jupyter, you can click on any of these kernels — as I said, we are going to add juliaup to this, and then you can just come with your own favorite version of Julia — and then, if I start one of these kernels, in this case Julia 1.8 —
H: First thing, I'm just going to create myself a temporary directory, go to that temporary directory, and print my working directory, just to make sure where I am. So here, this is a temporary, empty space — I can even confirm that it's empty. A semicolon runs a shell command in Jupyter when using the Julia kernel, so I can just run the tree command, and you can see there are zero directories and zero files.
H: I'm going to import the Pkg package, and I want to create a blank package. I want to highlight the fact that if you're building an application, you should be working within a package — it shouldn't just be a bunch of source files thrown together in a directory. Packages allow Julia to marshal precompilation correctly.
H: So I'm going to run Pkg.generate, and I'm going to look at my tree again, and you can see it's done several things: it's created a MyPackage directory, it's put a TOML file in there, and it's put in, you know, just a little source file. We could even go and have a look at it — that would be my package's source.
H: MyPackage.jl is really kind of boring — it's just an empty module with a hello world — and then you could start editing, right. So it did two things: it created a self-contained environment for your source code, but it also put this Project.toml alongside it. So let's see what one of these does.
H: I'm going to create another empty temporary directory — you can see there's nothing there — and I'm now going to use Pkg.activate (oops — and this joinpath is unnecessary, by the way; I can just go and activate the package at my current location, which is what this `@__DIR__` does). I'm going to add two packages to this environment — so this is a little bit like pip with a virtual environment, if you were to use Python.
H: This takes a moment, and it says: I'm going to add all of these, plus dependencies. I take a look, and now I've got a Manifest.toml and a Project.toml. Looking at these TOML files, you can see what they're for: they keep track of the precise versions of all the dependencies that your environment has.
H: So a package is basically source code plus an environment, and that way you can put all your dependencies together with your source code in one package, which avoids confusion later down the line. I also want to point out some HPC considerations at this stage: if you want to use Julia at scale, I highly recommend that you look at the PackageCompiler.jl project. It can take your package and turn it into a single shared-object —
H: — file called a sysimage, and if you then combine that with podman-hpc, you can create an image that has all the dependencies and all the pre-compiled code, together with your Julia application, inside that image. That's the recommended approach. If you can't pre-compile everything, you know, this might not be feasible.
H: Okay — I'm going to end my talk very soon with a list of cool packages to look at. I think plotting isn't one of them, but I'm going to mention where to find plotting — and people are already answering in the chat, great. Now, I really want to show this example. How much time do we have, Libby, by the way?
A
I would say another five to seven minutes, and then we'll just wrap up.
H
Brilliant. Oops, this is the wrong one. Yes, so that lets me show the thing that I really care about in Julia, which is, as someone already mentioned in the chat, that it has a lot of applied-math stuff in it natively, but it also has parallelism.
H
So this is a bit of a contrived example, but we don't have much time. So let's say now I've added eight workers to my pool of workers. If I actually query the workers, you can see I now have an eight-element array.
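What was just typed can be sketched with the Distributed standard library (running locally, not on a cluster):

```julia
using Distributed

addprocs(8)        # add 8 local worker processes to the pool
@show nworkers()   # number of workers: 8
@show workers()    # an 8-element vector of worker ids
```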
H
Now, let's say I want to do something kind of stupid, like tossing a coin many times. Here I have a serial implementation of the number-of-heads function and a distributed implementation. You can see that @distributed will distribute your for-loop contents, and everything inside the brackets here is your reduction function. So if I define these and run them, and I'm going to let those run in the background while I explain, basically these are now running, first the serial and then the distributed
H
Example, and it's creating these benchmarks. This is what the BenchmarkTools package does: it shows you a little histogram of the runtimes and gives you some statistics. In a moment we'll see that using eight workers instead of one for this conveniently parallel example gives you almost a ten-times speedup. Now, we're not done yet.
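A self-contained reconstruction of what this demo presumably looks like; the function names are guesses, the talk uses 8 workers and BenchmarkTools' `@benchmark` where this sketch uses 4 workers and `@time`:

```julia
using Distributed
addprocs(4)   # the demo uses 8 local workers; 4 keeps this sketch light

# Serial implementation: count heads in n fair coin tosses.
function count_heads_serial(n)
    c = 0
    for _ in 1:n
        c += rand(Bool)
    end
    return c
end

# Distributed implementation: @distributed splits the loop range across
# the workers, and the (+) argument is the reduction that combines the
# partial sums from each worker.
function count_heads_distributed(n)
    @distributed (+) for i in 1:n
        Int(rand(Bool))
    end
end

n = 10_000_000
@time serial_heads = count_heads_serial(n)
@time distributed_heads = count_heads_distributed(n)
```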
H
Okay, so I can now go and define a SlurmManager, and it's going to give me 128 workers. I'm going to give it these arguments: I'm going to submit to the debug queue using the CPU partition, and I'm going to grab two nodes. Start that. And actually, in the meantime, I'm going to open a terminal and show you what's happening with sqs, and you can see I've just queued my job from Jupyter. It's going to take a moment. Hopefully.
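Defining the manager presumably looks something like the following ClusterManagers.jl sketch; the flag names and the queue, partition, and walltime values here are site-specific assumptions, not verbatim from the demo:

```julia
using Distributed, ClusterManagers

# Request 128 workers through Slurm; extra keyword arguments are passed
# through as srun options (queue, node type, node count, walltime).
addprocs(SlurmManager(128);
         qos = "debug",
         constraint = "cpu",
         nodes = 2,
         time = "00:10:00")
```

The call blocks until Slurm grants the allocation, which is the wait visible in the demo.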
H
It's okay. But what you should be seeing is that eventually, when it's done, it's going to release control, and then you can run your job on multiple nodes using Distributed. And I don't have a GPU example. I think GPU code is also fairly portable, but you know, GPU is a little bit more difficult, so maybe in the future I will also give an example of GPU programming. While we're waiting... oh yeah, JuliaStats is wonderful, and I.
H
Thank you for fielding some questions. Okay. Actually, last time this really didn't take very long, and maybe... oh no, here we go, we've got our nodes. Brilliant. Okay, now we've connected to our workers. Yeah, we can query each individual worker using the @spawnat command, and then we can see what the hostnames are. I'm actually also going to rerun my parallel example, but now I'm going to rerun it on two nodes using 128 workers.
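The hostname query can be sketched like this, with local workers standing in for the Slurm ones:

```julia
using Distributed
addprocs(2)   # stand-ins for the remote Slurm workers

# @spawnat runs an expression on a given worker and returns a Future;
# fetch waits for and retrieves the result.
hosts = [fetch(@spawnat w gethostname()) for w in workers()]
@show hosts   # on a real cluster these would be the compute-node names
```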
H
In the previous example I only used a few workers, so this is actually... I maybe shouldn't have changed my demo too much, but still.
H
Yes. Okay, so it's going to take a moment. I'm kind of surprised; maybe there's an issue. All right.
H
This is what happens when you change a demo on the fly. I just put 10 before, and now I thought.
H
Oh well, I can do 128; it's a much nicer number. Okay, this is taking some time to really spread the work across the network, and then... oh, here we go. So it's collected the hostnames from every worker, and you can see we've got nids repeated until we've got 64 of them, and then we've got another 64 here. And now it's running the BenchmarkTools, and here we
H
This is what I want to show you. Now, because we've got two nodes really going full pelt at it, we've decreased our runtime by another factor of 10, basically from 48 down to about five. All right, and that's pretty much getting to the end. I will just leave this up; these slides are shared anyway. So for all your HPC needs, I suggest you look at these packages here. For plotting,
H
I'd say the recommended package is called Plots; it's just that. And for AI work, I recommend you look into Flux. Flux is in flux at the moment, so you might need to debug some of the examples, but it's actually a really cool tool. And with that, I yield my time. Thank you very much for listening.
A
Thank you. Thank you so much. Feel free to continue asking some questions, and if you have time, maybe... Johannes, are you able to stick around for a couple minutes? I.
H
Am. And I'm going to ask you how I should... I'm happy to share those notebooks, but I don't know how to add them to the presentation. Yes.
A
We... let's work on that so that people can get access to those. Thank you. Okay. Well, let's wrap up. So... oh, sorry, slideshow. Okay, so we have several more of these; we have these meetings every month, and we've got some good topics coming up in the future.
A
That'll include a presentation on JupyterHub at NERSC, some more NERSC tips and tricks, and we're going to have a presentation from security at some point to tell us more about how security at NERSC works. So please stay tuned for those. And in general, we'd love to learn more from you all. If you have things that you'd like to present, we'd love to work with you on that. So if you have any thoughts, please feel free to submit those to us via this QR.
A
So the QR code goes to this form here that will take you to a place where you can nominate or suggest a topic. And then also remember, if you are working on some cool science and you have anything you'd like us to highlight, or want to make us aware of something you did at NERSC, we have this highlight submission form. Please spread this to your students, your colleagues, your collaborators. We always want to hear about what people are working on at NERSC. And Charles, did you want to add anything?
B
You've got everything pretty well covered. We're just looking forward to an engaging and interactive year, so.
H
Right. So there is a monthly Julia... so there's a Julia HPC working group that meets monthly, and I think it is a very nice cross-section of different data centers. At some point I want to make a little map, you know, a world map. From Europe we have... really, the only place we don't really have much
H
Representation is Central Asia, so maybe if we find someone at KAUST or something. But yeah, we have many data centers where HPC is traditionally done, and the science that's being done there is simulation and data analysis. Applied math is one of those areas that took to Julia fairly early on because of the way it does arrays, and AI research is actually also starting to grow because Flux is reaching maturity.
H
So it's very diverse. Basically, I can't think of any topic that I work with here at NERSC where I don't know someone who's also using Julia on it. So, for example, there's one group, a cosmology group at Berkeley, and they're using Julia to solve PDEs, but also to analyze data. So it's very diverse in the science.
H
Glad to hear it. And if you have any other questions, you know, let me know.