From YouTube: NUG Monthly Meeting, July 21, 2022
A: All right, everybody, welcome to the July NUG meeting. Normally Steve Leak is your host; I'm filling in for him today. So let's go ahead and get started.
A: Okay, so first, let's just talk about what we're going to do today. Remember, this is an interactive meeting — I'd love to have you participate. You can raise your hand in Zoom by going to Reactions and choosing Raise Hand, or you can just speak up. Remember also that in the NERSC Users Slack we have the webinars channel, so you're welcome to start some discussions in that channel.
A: Okay, so for our agenda today: we're going to talk first about our wins of the month — if people have wins, we'd love to hear about them. Then Today I Learned: things that you learned that might help other users. We're going to have some announcements and calls for participation, and then our topic of the day is HPC workflows for scientific facilities; my colleague Björn Enders will be joining us for that.
A: Then we'll talk about upcoming meetings — any topic suggestions or requests — and finally, last month's numbers at NERSC. Okay, so let's talk about our wins of the month. If you've not been to one of these meetings before: we just want you to show off an achievement, or give a shout-out to someone else's achievement. Have you had a paper accepted? Did you solve a bug?
B: [We did a] weak-scaling study of our code, Infix Exa — it's a combined Eulerian–Lagrangian particle unresolved CFD code — with Slingshot 11 on Perlmutter, which is being installed right now. We're seeing about a 20% overall improvement in time to solution, and for the particles, which are the most communication-bound part of the code, upwards of almost 40 percent. I was curious if anyone else has done similar studies with their codes.
D: Hi, this is Richard here at NERSC. What scale were you running at, and what was the code again?
E: Yeah — can you hear me, right? Yes, okay. So I have a paper that was accepted recently in Advances in Water Resources that used NERSC resources.
E: We had a high-resolution simulation model that was trained to understand river water intrusion into one of the sites here at Pacific Northwest National Lab, called the Hanford Site, in which we used DOE codes such as PFLOTRAN and E4D to generate the simulation data, and used that data to train machine learning models. We used a combination of mpi4py and TensorFlow to run on more than 400 nodes, to develop scalable AI models, train those models, and invert for the permeability field.
E: So that paper was recently accepted in the journal, and I just wanted to highlight that and thank NERSC for the resources.
A: Wow, that is awesome. Thank you so much for sharing that. I think that's a major accomplishment that we would love to find out more about. If you'd be willing to send us an email, we could maybe even do a science highlight. I mean, that's amazing. Awesome.
B: Yeah, I have just a small bit of progress I can share that's well timed with the topic for this meeting. At General Atomics we've been given access to the real-time queues on NERSC and are developing a workflow for equilibrium
B: analysis for the fusion experiment DIII-D that is on site. The goal is to develop high-fidelity reconstructions that can be used to assist with experiment monitoring in the control room, using NERSC resources in real time. I just tested it out this morning with some changes that were made during the downtime, and I'm seeing jobs launch very rapidly — a hundred processes sharing three nodes, real-time execution — which is a great starting point for the development we're trying to do. We're now seeing if we can get enough progress together to submit a paper to the upcoming Supercomputing workshop.
D: Hey, Torrin, that's great — that's great news! That's exciting! I wonder: is your workflow predictable or scheduled, or is this something that's going to be kind of an ongoing, unpredictable time of need?
B: It's a little bit mixed. I think it'll be mostly predictable, in that the experiment generally has pretty regular runtime hours — start-up around 9:00 a.m. until generally around 4:00 p.m. for the last shot — but we've had some extensions, and our facility calendar tends to be pretty variable month to month throughout the year. So it's a bit of both.
A: All right, I'm going to take that as a no, so we'll move on to our next topic, which is Today I Learned. Again, if you've not been to one of these meetings: we're talking about something you learned that was surprising, that might benefit other users to hear about. Something you got stuck on but then figured out — it'd be nice to give others the benefit of your experience — some kind of tip for using NERSC, or anything else you learned that might benefit other NERSC users.
E: It's a CPU version — you can hear me, right? Yeah, so it's a CPU version; I'll share the link here right now. On NERSC Cori we used the Intel Haswell as well as the Knights Landing nodes for the simulations, to generate the data specific to the field site. We used CPUs for training the deep learning models; we also used CPUs for hyperparameter tuning and also for inference.
E: We are planning to port that to GPUs — Perlmutter GPUs — using packages called Piper and Ray. That is work in progress, but whatever we did so far was for CPUs. Did I answer your question? Yeah? Yes, okay, thank you.
A: Okay, well, I'll give you a Today I Learned. I learned — not today, and not really this month, but recently — about the sacct command, s-a-c-c-t (I believe I spelled that right; I'm terrible at spelling out loud). The sacct command can tell you about your job history, so you can use it in various ways, but one thing I learned recently is that you can also use it to look up a particular node.
A: So let's say you had a job and it failed, and you want to see if maybe other jobs that used that node also failed. You can use that feature of sacct to list all of the jobs that were on that node during a certain time period, or whatever. It's quite handy. So anyway, I guess we will move on to the next topic.
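That node lookup can be sketched roughly as follows; this assumes Slurm's `sacct` with pipe-delimited (`--parsable2`) output, and the node name and date range shown in the comment are hypothetical examples, not from the meeting:

```python
# Sketch of the sacct node lookup described above: list jobs that ran on a
# suspect node and keep the failed ones. Assumes pipe-delimited sacct output
# (--parsable2) with a JobID|State|NodeList header row.

def jobs_on_node(sacct_output):
    """Parse 'JobID|State|NodeList' lines into (jobid, state) pairs."""
    jobs = []
    for line in sacct_output.strip().splitlines():
        fields = line.split("|")
        if len(fields) < 3 or fields[0] == "JobID":  # skip header / short lines
            continue
        jobs.append((fields[0], fields[1]))
    return jobs

def failed_jobs(sacct_output):
    """Keep only jobs whose state starts with FAILED or NODE_FAIL."""
    return [(j, s) for j, s in jobs_on_node(sacct_output)
            if s.startswith(("FAILED", "NODE_FAIL"))]

# The query itself would look something like (node name and dates hypothetical):
#   sacct --allusers --nodelist=nid001234 \
#         --starttime=2022-07-01 --endtime=2022-07-21 \
#         --format=JobID,State,NodeList --parsable2
```

Feeding the captured output through `failed_jobs` then shows at a glance whether other jobs on that node failed in the same window.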
A: So the next topic is announcements. Richard, would you like to give this announcement?
D: Sure. So Cori — phase one — was first installed in 2015, and I was just trying to look it up this morning; going into its seventh year, I think it is actually NERSC's longest-lasting system ever, which is pretty amazing to think about. But in any case, Perlmutter — which will have, when fully configured, a CPU partition equivalent to all of Cori, in addition to its large GPU partition — should be fully operational for 2023.
D: So our current plan, our current expectation, is that we will retire Cori at the end of this allocation year. We have made provisions that if something unexpected arises with Perlmutter in the meantime, we can extend that date if needed, but we don't expect anything to happen there. So allocations for 2023 will be based on Perlmutter's capability. And then the ERCAP season is starting in less than a month — I guess we'll talk about that at the next NUG meeting.
A: Thanks, Richard. Anybody have any questions?
D: And so we are asking users to let us know if they think there's something unique or special about Cori that they're relying on. If they're uncertain whether something might work or not work on Perlmutter, please try it now — but also let us know, and we'll try to address that in the meantime.
E: Quick question, Richard. I see the Perlmutter CPUs are free for testing, correct? So how long can we expect that to be true? Is there any hard deadline after which the Perlmutter CPUs will no longer be available free of charge?
D: So right now they're still free of charge. We are kind of watching the allocations and usage and evaluating when, or if, this allocation year we might have to charge for those CPUs. It's still undetermined — the system is still being configured and tested and so on, so when the full size of the system is realized, that might play into it too.
D: So I don't have a firm answer for you yet, but in the meantime I do encourage people to get onto those CPUs, try them, and take advantage of them while they're free now.
E: Okay, thank you. Yeah, so we can do scaling studies or simple studies on that free of charge, right? — Yeah, for the time being. — Yes, for the time being. Okay, thank you so much.
A: Okay, any other questions about the Cori retirement?
A: Alrighty, so we'll move on. All of these have been announced in the weekly email, but I'll just re-announce them here. We have a lot of opportunities to participate in workshops. We've got the Parallel Applications Workshop, Alternatives to MPI+X (PAW-ATM) — their call for papers is due July 29th — and the Workshop on Accelerator Programming Using Directives, whose papers are due August 5th.
A: We've also got some good conferences and workshops coming up. Week after next we've got the OpenACC and Hackathons Summit, August 2nd through 4th — and it's more than just about hackathons, or even OpenACC; I think they have a really interesting program going on there.
A: We've got the Beyond DFT: Electrochemistry with Accelerated and Solvated Techniques (BEAST) workshop, which is going to be August 15th through 16th; NERSC is providing some training accounts and reservations for that workshop, although I don't think we're really involved in any other way. And then there's the CONFAB22 meeting — this is ESnet's first annual meeting — taking place October 12th through 13th, and you can register for that.
A: If that seems like an interesting meeting — which I think it will be. Okay, so then, as for training, we've got these upcoming trainings. We've got the E4S at NERSC training on August 25th; E4S is a curated set of scientific libraries that are all curated to work together, so I think that's a good workshop if you're interested in using libraries. Our Spin-Up training is going to happen on August 10th — it's not too late to sign up for that. And then we've got this AI for Science Bootcamp coming.
A: That's going to be August 25th and 26th. We had originally scheduled it, I think, for spring of last year — maybe; I can't remember — but it didn't work out then, because Perlmutter wasn't stable and we really wanted to have it on Perlmutter. So it's going to be August 25th and 26th.
D: Rebecca, can I say one quick word more about E4S? I don't know how many people are familiar with what it is, but it is a set of software packages that were developed under the Exascale Computing Project (ECP), and what they have done, among other things, is ensure that the software packages are compatible.
D: So you don't have namespace collisions and all that kind of stuff. But they've also done a lot of tuning, optimization, and development for GPUs and GPU-like systems — exascale systems and beyond — and they have packaged them up in such a way that you can install them yourselves, like with the Spack installer; I think they have containers as well.
A: Thank you, Richard. Yeah, I would second that — I'd really encourage people to check out this training. You can participate online, or you can actually come to Berkeley Lab and see us all in person, if you want.
A: Okay, so our next topic is HPC workflows for scientific facilities. I'm going to stop sharing and let my colleague Björn Enders, from the Data Science Engagement Group at NERSC, talk to you all about this topic.
C: Hello, hello, everyone — Björn here. Thanks, Rebecca, for introducing me. So here, in this next slide: I will talk a bit about two user facilities that have workflows running at NERSC, both also part of the superfacility effort, and I'll give very brief introductions to each facility, so everyone knows what they're doing and what the technologies are that they used. All right, take a look — these are actually facilities that are close to home; they're both on our campus. The first facility:
C: It has about two thousand users annually who come to the synchrotron to run their experiments — about 50,000 [unclear] — 200 staff, and 40 beamlines. Now you might ask yourself: what on earth is a beamline? A synchrotron, basically, is an electron storage ring — a circular arrangement where the electrons go at almost the speed of light in a circle. Every time they get deflected, they emit a light pulse — just very basic electrodynamics here — and these lights:
C: These light pulses can be tuned, by the magnets that are deflecting the electrons or by fixed, specific wiggler magnets, so they can create light of very high purity and tune its properties very well. This makes them ideal probes for lots of different purposes — materials in general, chemical processes, specimens, et cetera. And one of the beamlines that's very common at a synchrotron is a tomography beamline.
C: You essentially have a full flat beam, with a specimen in there that you rotate, and then from the radiographs at the different angles you can reconstruct the three-dimensional volume. I do have some nice slides — I don't know if I can paraphrase very well what's happening here, but this is actually a sample that they were imaging with this beamline.
C: You can also do frivolous things — one of the videos they show a lot is scanning a gummy bear — but the point of showing this video is that these scans they're doing are actually quite fast, so they need feedback about the results of their measurements in a timely manner.
C: Right. So the other facility is also close by, here on campus: it's the National Center for Electron Microscopy (NCEM), part of the Molecular Foundry, and they provide electron microscopes — state-of-the-art instrumentation.
C: As I said, it's part of the Molecular Foundry, one of the facilities there, and they also do tomography. Here on the lower right you see an electron tomography result, where they actually split it into different parts — I won't say much about this, but they really do amazing stuff:
C: High-resolution electron microscopy, tomography experiments, and 4D-STEM. 4D-STEM is actually the technique that we have an engagement with them on. STEM stands for scanning transmission electron microscopy: essentially, you're scanning your electron beam across the sample, and it's called 4D because, instead of just aggregating the electron count behind the sample at each point, you instead take a diffraction pattern.
C: You see at the lower part of the image that there is a pattern forming, and they have a very fast detector that can record these patterns. So they scan in 2D and they take two-dimensional images at each point — that's why it's a 4D technique — and I very much encourage you to read the NCEM announcement about it.
C: So what does this have to do with HPC — why do they need to work with us? For NCEM it's quite obvious: they take thousands of images per second, each image a couple of megabytes, and this means a very, very high data rate — I think it's 400 gigabits per second, the raw data rate off the instrument. But the synchrotron's many beamlines also have detectors at each beamline, and they're getting upgraded.
C
They
need
they
get
better
and
better
light.
That
means
they
can
make
more
and
more
data,
and
so
they're
actually
going
into
the
they're
actually
having
this
they're
waiting
for
this
freak
event
in
the
future,
where
they
actually
can't
take
up
with
it
anymore,
so
that
they're
starting
to
put
their
workflows
on
hvc
machines
in
order
to
you
know,
get
a
hold
of
the
data.
C: "I want to move the data here and analyze it" — that's the main impulse. All right, so it's not really news, because these are our neighboring facilities and we've known about this for a while. But we've also known about it as part of the superfacility project, where we've been asking all the different programs in the Office of Science: what do you do with NERSC? And a high fraction of them actually identified:
C: "Oh, we're doing data analysis," essentially. So it's getting a hold of the data, moving data to it, and analyzing the data — that's the main draw for these facilities to work together with NERSC. And to make it a bit more structured, we created the superfacility project, where we actually try to match how networks, HPC, and experimental facilities can work together, and that created one of these nice:
C: Nice plots, where we can actually see: on the left side we have all the different partnering DOE facilities, and in the columns we have all the different technologies that they could potentially use. We made this chart to see how we can make the development more structured. And here on the left side, ALS and NCEM:
C
They
have
like
a
few
technologies
in
common,
for
example
the
api
and
spin,
but
there's
a
whole
bunch
of
other
things
like
data
movement
data
in
the
dashboard.
I
want
to
highlight,
though,
the
the
spinning
the
api
here,
what
was
instrumental
for
the
success
and
if
you
haven't
heard
of
spin,
I
mean
probably
it's
been
before
and
rebecca
said:
there's
spin
training
next,
so
you
should,
you
should
go.
It's
been
really
really
useful.
C: So Spin is our platform for services, and it can be used for science gateways as well as for databases and other network services. From within Spin, whenever you deploy something, you can access our HPC file systems, and you can use public or custom software, which is really useful. Essentially, all our science engagements with NCEM deploy their software on Spin, and here on the right side is a selection of what other projects
C: have been using Spin for. The other interesting technology that we're offering is the Superfacility API; that's relatively new. The API is a unified programmatic approach to accessing NERSC: essentially, we want each endpoint to effect a certain action you can do at NERSC, so you don't actually have to go on Iris or actually log in anymore — this should be a thing of the past. Instead, you get a client and talk with our REST API.
C: The authentication is very standard, very modern. We have extensive documentation — just follow this link here — but we also have interactive Swagger documentation. I'm going to quickly move over here to the right: if you just go to api.nersc.gov, API version 1.2, you can see all the different endpoints that we offer. For example, the status endpoint can be used to get the system status. Actually, let's just try this once here for you — I think, of course, it works.
C: It does, all right. So you see, I requested the general system status, and you get all the different systems that we provide — for example, you see Perlmutter is currently active and Cori is currently active. This is the API reporting, but there are also other endpoints for other stuff: the compute endpoint, which you need to place jobs, or the accounting endpoint to look at how much compute you've used, organize your groups, et cetera — really quite helpful. So I encourage you to take a look. Also, please interrupt me anytime.
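As a rough sketch of that programmatic access: the base URL below matches what the talk names (api.nersc.gov, version 1.2, a status endpoint), but the JSON field names used here (`name`, `status`) are assumptions for illustration, not confirmed from the API's schema:

```python
# Hedged sketch of reading the Superfacility API status endpoint described
# above. The response-record fields ("name", "status") are assumed here only
# to illustrate machine-readable system status; check the API docs for the
# real schema.
import json
import urllib.request

STATUS_URL = "https://api.nersc.gov/api/v1.2/status"  # endpoint named in the talk

def active_systems(status_records):
    """Return names of systems whose reported status is 'active'."""
    return [rec["name"] for rec in status_records
            if rec.get("status") == "active"]

def fetch_status(url=STATUS_URL):
    """GET the status endpoint and decode the JSON body (network required)."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        return json.loads(resp.read().decode())
```

This is the same information the Distiller app (described later in the talk) uses to decide which system a workflow should target.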
C: All right, I'm going to get to the nice bits and explore a bit of the use cases that NCEM has been using most. I've been talking about NCEM having electron microscopes: on the left side, this block here is the electron microscope, and the detectors of the 4D-STEM camera feed a bunch of FPGA modules. Each of them records at 100 gigabits per second to a set of four receiver PCs, and they push the data into a local flash server.
C: They usually capture 650 gigabytes of data in 15 seconds, and the old workflow was to push it, in those 15 seconds, to the RAM of the receiver PCs, then move it over to the flash store, which takes 140 seconds. You can't really change that much — it's kind of a fixed time in their workflow — so the aggregated bandwidth here is 80 gigabits per second. And then they would do an operation called counting.
C: Essentially, counting is reducing all the detector data into electron counts — saying, you know, here an electron hit the detector at this pixel, and that is an event — so turning areas of detector data into event data, essentially. That's the counting procedure, and that's why sparse HDF5 is the output.
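The counting step can be pictured with a toy sketch: threshold each detector frame and keep only the pixel coordinates where an electron landed. Real 4D-STEM counting at NCEM is far more sophisticated; this only illustrates why the output ends up sparse:

```python
# Toy illustration of the "counting" reduction described above: turn dense
# detector frames into sparse electron-event coordinates. The threshold and
# frame contents are made up; real 4D-STEM counting is much more involved.
import numpy as np

def count_frame(frame, threshold):
    """Return (row, col) coordinates of pixels above threshold (the 'events')."""
    rows, cols = np.nonzero(frame > threshold)
    return list(zip(rows.tolist(), cols.tolist()))

def count_scan(frames, threshold):
    """Reduce a stack of frames (one per scan position) to per-frame event lists."""
    return [count_frame(f, threshold) for f in frames]

# A dense 4x4 frame with two bright pixels reduces to just two events —
# the reason a sparse format (e.g. sparse HDF5) pays off at scale.
frame = np.zeros((4, 4))
frame[1, 2] = 50.0
frame[3, 0] = 80.0
events = count_frame(frame, threshold=10.0)
```

At thousands of frames per second, storing only the event coordinates instead of full frames is what makes the data rate manageable.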
C
Now
they
built
an
app
called
distiller
on
spin
and
then
and
distiller
talks
now
with
the
api,
for
example,
if
it's
compute
jobs
and
in
these
compute
jobs
they
actually
instruct
nurse.
C: That's where the arrows are thinner here — there's less data that eventually lands in the community file system. So there's a direct-copy workflow and a reduction workflow, and they used that. The beauty of this is that they get a very nice interface, and they improved their turnaround speed with this setup.
C: And this is essentially how it looks — it's a catalog app. On the right side you see all the different data sets that they take; they can rename them and they can perform actions on them. On the left — maybe a little small — right under the image you see a Transfer and a Count button: Transfer just transfers to the community file system, and Count transfers to scratch and then counts afterwards. And I do have a video showing this in action.
C: I think this is a preview that's automatically generated locally, and now they're taking a different data set and pressing the button — yeah, the Count button.
C: And, sped up, you see the state — there's a check mark there, so they're done. It's kind of unassuming, but that's also what we want. We want the API to be able to be integrated into your apps, so that the user sitting at the experimental station doesn't actually have to know that this is happening.
C: This is the architecture for Distiller. You see, when you start building, this gets a bit more involved, and if you really want to know more, I think you should get in touch with Chris Harris at Kitware, who stands behind it. But essentially, in the lower part of this there is a single-page web app that is deployed in Spin; it talks with a wrapper around this entire interface.
C: They have their own wrapper — a FastAPI wrapper — for their services, and there's a split architecture: there are some processes that run on the local machine, which check if new data is coming in and report these events to a Kafka message bus, and they have a server that runs on Spin, which actually initiates commands and services of the API and also talks to the Kafka message bus. So they're actually using Kafka very extensively.
C: But once everything is aligned, they just issue a job that does a bbcp copy, or an scp copy, from the server onto NERSC.
C: All right, so what are the key features that NCEM was using? They're in a lucky situation — they have a direct link into the NERSC network — but you don't need that; you can also pull from other places outside the NERSC network, it'll just be a bit slower. They're using the real-time queue to actually run these jobs, so they know the turnaround is quick. Specifically for Cori, because Aries is a different network — it's not Ethernet:
C: There are border nodes that have to translate the transfer — translate the data packets — into Cori, so the transfers have to go through those border nodes, and there's a Slurm plugin that lets them balance the pull across the border nodes. They use Spin to host Distiller; they use the API to talk with NERSC for everything, really. And what you didn't see here: they do have a switch in the app where they can say:
C: "I'm using Cori" or "I'm using Perlmutter," and that switch is actually informed by the status API. They have a drop-down menu with a little greenish or reddish indicator that says, you know, Cori is online or Perlmutter isn't. So they can decide, basically based on which system is actually available, where they're going to run the workflow. I think that's pretty cool. And, of course, the whole software stack is containerized and runs in Shifter.
C: Doesn't seem to be the case — you can also ask later. All right. So NCEM focused on having that one instrument: they really don't want to lose the data, they want to put it at NERSC and have their analysis run. It's kind of a tighter focus, but they made a very integrated product, together with Kitware. ALS had a different focus.
C: Their focus was to build centralized data services for all their users. Essentially, they're starting with one beamline, but they actually want to scale it out. It's kind of a confusing graph on the right, but essentially they want to give the full-circle experience: you collect data, you might do some edge computing, you put it into NERSC or any other HPC facility, and then, once the data arrives:
C: What does the pipeline look like? It's a more or less linear pipeline, what they have implemented so far. They have the beamline — at least one of them, their micro-tomography beamline — and there's a data mover app that actually copies the data onto NERSC using Globus, and then it sits on one of our storage systems, and then they have a separate service that sits in Spin.
C
That
takes
a
look
at
the
data
that
has
arrived
and
ingests
that
in
a
catalog
app
that
color
gap
is
based
on
cycad.
It
has
been
customized
for
ls
and
it
runs
on
spinner,
of
course,
in
order
to
ingest
the
metadata
it
has
to
have
access
to
those
file
systems
and
then
once
it's
there,
you
know
the
idea
is,
you
have
to
you
can
view
your
data
in
this
catalog
app
and
then
you
can
switch
over
to
analysis
tool
to
actually
do
something
with
it
and
then
re-ingest
the
data
back
into
the
catalog.
C: So you have a SciCat app to find your raw data sets; you switch over to, say, a Jupyter workflow and do something with it — you can do analysis — and that gets ingested again into your catalog. I do have another short video for this; I'm just going to show the beginning.
C: This is the SciCat user interface, and here you see a whole bunch of data that has been collected. What happens here is that he looks up the place where the data actually sits, in the catalog.
C: I'm just going to jump ahead: he enters it into a Jupyter notebook, pulls the data into an interactive Jupyter notebook, does something with it, and puts it back. There you go. That's one aspect — but they don't only want to be able to pull the data out of this catalog app; they also want to give a very customized Jupyter experience. They have a huge number of users, and essentially those different user groups have different needs.
C: They also don't want to share their data with other user groups, so they want to silo each of their user groups — assign each of the user groups its own environment — and that's why they want to use this entrypoint feature that we have in Jupyter now, and essentially run their own JupyterHub experience, with all their own apps added to it.
C: It has access only to their folders, and in this case they can essentially collect data, and then for each work group the beamline scientist instructs them: you can now go on Jupyter and pick this particular entry point. What spawns for them is a hub — a Jupyter notebook — that essentially shows only their data, and then they can work on it and do the analysis themselves.
C: And they're guided through it from the ALS login page — it's also pretty cool how they integrated it. They have a remote access, control, and compute interface, and that guides them to the local services and the NERSC services. So what ALS was using for its success: they use file transfers with Globus — they use the Globus collab endpoints, so they can send data to NERSC and it arrives in the name of the collaboration account, which is very useful if you want multiple people to have access to that data.
C: Their software is containerized and deployed with Shifter and Docker, and they also use HPSS extensively to archive the data, because they have a whole lot of it. All right, that was everything — thank you so much for your attention. There are two links I want to point out. One is the superfacility case studies, where, essentially, for the superfacility project:
C
We
made
a
project,
a
final
project
report
where
you
can
read
everything
a
super
facility
has
done,
and
it's
linked
in
the
top
of
this
page
in
our
docs,
and
here
was
I'm
pulling
out
some
of
the
information
from
it
and
augmenting
it,
and
essentially
what
you
can
do
is
you
can
go
through.
You
can
read
it
up
and
you
can
follow
the
links
that
should
direct
you
to
the
relevant
technologies
of
our
nurse
dog
pages.
C
So
you
can
essentially
just
read
this
and
say:
oh
I
like
this
and
click
on
the
links.
It
should
guide
you
to
the
right
point,
and
hopefully
that
gives
you
some
inspiration
from
what
you
can
do
yourself
and
if
you
want
your
own
story
published,
maybe
you
can
can
reach
out
and
then
it
would
be
interesting
for
the
users.
You
know
if
you
have
more
case
studies
up
and
the
other
link.
C: The other link in the slides is the general superfacility page, and there you can see all of the activity of the superfacility project linked — demo videos, slides, everything — so I really recommend taking a look there too. All right, that was everything. Thanks so much for having me. I'm going to stop my share — or does anybody have a question before I stop sharing?
A: Yeah, thanks, Björn — that was really, really interesting; really appreciate you sharing that. And the good news is, this meeting is being recorded, so we're going to post it on our YouTube page, and if anybody has any other questions, they might reach out to you then. Okay, so next I'm going to share my screen again, and these are the last things that we're going to talk about today.
A: So, coming up, we have a few plans for our upcoming NUG meetings, but of course we're really interested in anything that you all might want to offer — you know, some lightning talks about your research, or whatever.
A: So in August, our topic is going to be ERCAP, since ERCAP will have just opened at that point. In September we'll probably talk about the annual meeting — what's going to happen there and how cool it's going to be. And then in October, instead of having this meeting, we will have the NUG annual meeting; it'll be the last week of October, I believe.
A: We'd love to hear from you all if you have some lightning talks you might want to give about the research you're using NERSC for — we would love to hear that. Okay, so, last month's numbers. I'm afraid I was not able to get the Cori utilization number, but the large-jobs number — that's jobs that were running on at least one-eighth of the machine:
A: That's 1,024 nodes or more. We had 41.71 percent of all hours go to those types of jobs. We had 642 new tickets from you all last month, and we closed 643 tickets, so now our ticket backlog is at 590.
F: Rebecca, I have one question: when you say "large jobs," what is considered a large job there?
A: Okay, so we never know what you do on the node, right? But ideally, what we hope you're doing is running a big job that uses MPI across the nodes. And, surprisingly, when I've looked at these types of jobs, most of them are fairly short — you know, a couple of hours tops; they don't tend to be like a 48-hour job.
A: We see more jobs that use few nodes but are very long in wall time. So, okay, yeah.
A: Yeah, it could be anything, I think. Maybe there's just not as much that you need to do while you're using 1,024 nodes — you're probably using it to capture that much memory in your job, or you're using it to just get stuff done quickly, right?
A: So if you run a job that's 1,024 nodes or more, you also get a discount on the charge — I think it's a 50 percent discount, so half off if you run on 1,024 nodes or more. So don't run on 1,023; run on 1,024, right?
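As a back-of-the-envelope sketch of that charging rule: the 50 percent discount at 1,024+ nodes is as described in the meeting, but the plain nodes-times-hours formula below is an illustration, not NERSC's exact accounting model:

```python
# Illustrative sketch of the big-job discount mentioned above: jobs on 1,024
# nodes or more are charged at half rate. The plain nodes*hours formula is a
# simplification for illustration, not NERSC's exact charging model.
BIG_JOB_NODES = 1024
DISCOUNT = 0.5  # "half off", per the meeting

def charged_node_hours(nodes, hours):
    """Node-hours charged for a job, applying the big-job discount."""
    raw = nodes * hours
    return raw * DISCOUNT if nodes >= BIG_JOB_NODES else raw

# A 1,024-node hour charges less than a 1,023-node hour,
# hence "don't run on 1,023, run on 1,024".
```

The crossover is exactly why the advice above is phrased the way it is: at the boundary, adding one node halves the charge.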
A: It is going to be hybrid, so you can either come in person or attend virtually, and we're going to hold it at Berkeley Lab, because we know what kind of AV facilities are available there — in order to make it a quality hybrid meeting. — Okay, thank you.
B: Yeah, this kind of goes back to the announcement that Cori's retiring. What I've seen NERSC do in the past is that, generally, a system is retired to make way for something new. Can you say anything about what that is, or what that timeline would be?
A: I can't say much about NERSC-10 — that'll be our next system. Just for people counting along: Cori was the eighth system we procured in the history of NERSC, Perlmutter is our ninth system, and the future system is NERSC-10. And so, yeah, I can't say all that much about it, except that:
A
We
have
been
already
working
on
nurse
tan
for
several
years
and
and
yeah
we're
gonna
we're
gonna,
remove
corey
in
part,
so
that
we
will
be
able
to
support
nurse
10
when
we
finally
get
it.
We
certainly
don't
have
enough
power
or
space
to
have
three
machines
at
once,
so
we
definitely
need
to
take
down
corey
and,
like
richard
said,
corey
is
a
pretty
old
system
at
this
point,
which
makes
me
feel
like.
I
am
officially
a
long-timer
at
nurse
because
I
started
when
corey
was
just
being
commissioned.
A: I'm going to take that as a no. So thanks, everybody, for joining us, and we'll see you next time.