From YouTube: NUG Monthly Webinar, March 18, 2021
C
Yeah, so we are live on YouTube now. Let's see, we've got 15 participants. So, Steve cleverly set up some automated emails to go out, so let's just wait maybe another two minutes or so, just to see if we get some more folks. Right, sounds good.
A
Yeah, I think it might have been. There was an update, and after the update, if someone starts talking (maybe it's the type of view I have on the screen), it won't scroll to whoever is doing the talking. It won't put them up in the main five on the screen, or whatever, until I hit the arrow and go hunting. If I go hunting, and not even onto the next slide, then it'll pull them all the way up. Then it's like: oh, he's looking, and pulls them up.
A
Pretty good. So hello, everybody, and welcome to the NUG monthly meeting for March. Hopefully you've had a good month; I believe NERSC has had a pretty good month, and we'll go over some of the details here today. I believe the things to say are on the first page here. Yes, so this is a very interactive meeting.
A
If you have ideas or questions or statements or things you'd like to share, please feel free to do so. That's the whole point of this meeting: it's not meant to have, you know, one person talking, but to get a feel for the community. There are a lot of things I'm also looking at right now in terms of just collecting data about what you might like or want, or your next big issues, so definitely speak up about anything that's bugging you, or anything like that.
A
This is a pretty standard agenda: a couple of segments where we talk as a group and hopefully learn some things and share some things, announcements, and then we have a speaker of the day. This time it's Parallelware, and we have Manuel and Javier here, who will present on Parallelware. Then suggestions for upcoming topics which you might be interested in hearing about in some lightning talks, and then some numbers, which sadly are not all up yet, so of all the cool numbers we only have a couple of them. But oh well. All right.
C
So before we get too far, I want to point out that I've enabled the live transcript. You can click on that button at the bottom of your Zoom screen, Live Transcript, and select whether you want to see subtitles or a live transcript on the side.
D
I've accomplished getting logged in and getting my allocation.
F
Awesome, great news. Where's it at? It's in JINST, the Journal of Instrumentation, I believe.
A
That goes for anyone else too. Oh, if you're interested in that, I didn't highlight it too heavily: you could throw something in the webinars channel, and then there's the users Slack. Actually, Rebecca, did you put... can you put the link to get into the NERSC Slack in the chat window? Oh yeah, let me do that; I'm sharing my screen, just a sec. So if you ever want to contact us about something like that, definitely just go to the NERSC Slack and throw it in.
A
I don't think I can recall anything, something simple like: I was able to get people to actually document things. Yeah, that could be a real pain.
A
Okay, so if we're through with wins, does anybody have any lessons that you have learned in the past week that you'd like to pass on, something that might help people out? One big tip: I've heard from many, many sources, even other supercomputing centers, that people look at the NERSC documentation. We're proud of that, and we'd actually like to keep that going. So especially mention anything that we've done wrong in the documentation, or something we could add to it, or some tip that you have that we could add to it.
A
It sets up Emacs to behave like vi, and now there's a gigantic debate going on in one of the Slack channels, with all the Emacs people trying it out and really proving that vi is wrong and all that. But I thought that was really interesting, as I'm a vi user and Emacs does have some interesting characteristics, so it might be a good way to try out Emacs, or at least to be able to do something on a system that, you know, heavily uses Emacs. Look up Evil; it's really easy to remember, too.
D
Learn anything? Well, I have a question for you guys, with OpenACC. I find a lot of times, even if I'm using -Minfo, it'll just go ahead and decide: you know what, I'm not going to put that on the GPU, but I'm not even going to tell you that I'm not going to put it on the GPU, and it just runs on the CPU. Have you guys dealt with that type of problem, and what do you do?
A
Interesting. I haven't actually heard of that one before. We could probably put you in touch with some OpenACC people to dig into what's happening there, but I've never heard of that occurring before.
D
Not really, it's just weird. It seems like, if I do certain things, it just says: you know what, I'm just going to do it on the CPU, and then it doesn't do it on the GPU at all. And the other problem I've had a lot of is that, at least using the PGI compilers, not in managed mode, the data sometimes doesn't get copied over correctly, even if you say copyin, copyout and all that. It's just like it never got there, and it's doing the work on the GPU, but just on garbage data, and then it doesn't come out right. So I don't know, it's probably just newbie problems. I honestly don't know. The last one sounds like a synchronization issue.
B
In C, when we prepared some courses for NERSC last year, we experienced some problems with data movement, because in C, in OpenACC and OpenMP, when you pass a pointer it is assumed that the compiler has the capability to determine the actual data range to be transferred, and it makes assumptions and continues the execution.
D
Yeah, what I first tried to do is, I said: you know what, my function takes these arrays, and it's going to output this array.
D
So then I just simply said: okay, I'm going to do a data... I'm going to do an OpenACC data enter, and I pass it all of my copyins and copyouts, and then I say pragma acc kernels and just say: you figure out what the hell you want to do here, I really don't care. And then at the end I delete, you know, the temporary stuff, and that doesn't seem to work very well.
D
And the weird thing is that if it gets into a weird situation, a lot of times it just simply says: I'm just going to throw my hands up and not do it, and also, by the way, I'm not going to tell you I didn't do it in the -Minfo output. It just says: okay, I did all the work you wanted me to do, good luck. And then, when I run nvprof, it's like: oh yeah, it didn't do any of the work on the GPU, we just did it all on the CPU.
B
Yeah, that could be a good explanation, because with kernels you are relying on the compiler to really determine if the code can safely be executed in parallel or not, right? If you use the directive parallel, you are instructing the compiler, saying to the compiler: I know what I'm doing, I know it is parallel and it is safe to execute it. So the compiler will be very, very conservative: if you are using kernels, it will probably default to CPU execution. That might be an explanation, I guess.
D
Yeah, well, the thing is, the problem I always have is that it doesn't tell me this; it just does it, right? And then I find out, hey, it looks like it's working now, and then I look at it and I'm like: no, it didn't do anything. It copied data over to the GPU, did nothing with it, did it all on the CPU, and then it copied data back but didn't use it, because of my data copyin and data copyout. So I'm still playing with it.
A
Well, I was just mentioning that what you might want to do is follow up in the NERSC channel; I can find someone who's an OpenACC expert. My expertise is primarily CUDA and HIP and DPC++, so I don't know the ins and outs of the nitty-gritty of the OpenACC directives very well, but if you throw something up in the Slack channels, either general or webinars or something...
A
So when you jump in, you should be in the general channel, and...
C
...the NERSC Users Slack, and it's sponsored by the NUG. It's not an official channel, but it is a good way to, you know, talk with other users, and also some of us staff do sometimes participate. But we don't have any guaranteed SLAs for responding to your queries there, or anything like that.
C
If it's an old message, it may not be valid anymore. Yeah, that's what it says. I put it in today's chat here in the Zoom. Okay, follow that; it's a password-protected page, so that we can make sure it's really NERSC users who are getting in. You follow that, and then there's a link in there that you just click on to join our NERSC Users Slack.
D
Yeah, it'll use your Iris password and stuff. Oh, and then it gives me the link. I got it, okay, so now I can come in here and... okay, cool. I appreciate that; that's good to know.
A
So there is an OpenACC channel, but there are only three members, so you might want to put it in something like general. Or, if you put it in webinars, I can ask around in our Slack channels and see if I can find someone who can follow up with you. Or you can feel free to put your question in a couple of different places and see who responds.
H
Good. So I have a code that uses modern Fortran and MPI, and maybe people have heard this already, but if you use the very latest gfortran compiler, it has such tight type checking that if you have multiple MPI sends or receives with different data types, it throws an error. So, for instance, in our code at one point we send real data and at another point we send integer data, and both times we use the MPI_Send library routine, and it throws an error.
C
Very pedantic, yeah.
H
Yes, hold on, I'll tell you. I'll put it in the chat in just a second. Interesting.
A
Wow, interesting, okay. Well, let's see, if that's all for today for those types of things, a couple of announcements before we get into our talk. I'll start at the bottom, because these are easy. The next scheduled maintenance is April 21st; it's a standard one-day maintenance, so mark your calendars.
A
The trick I have for maintenance is to find a paper that I don't need to read right now, put it on my desktop somewhere and just leave it there, and then I always have something to do on maintenance day. So in case that helps anybody: have a maintenance-day task. That's usually what I do. I always forget when maintenance happens, but I always have something to do when it occurs. There's a little bit more of a jarring maintenance coming up.
A
HPSS is doing some big upgrades for Perlmutter, so it will actually be down from Sunday to Thursday, April 11th to 16th, and everything will be unreachable.
A
So if you need data, or you regularly put data into or pull data out of HPSS, plan for that and be aware. Pick something to do the, what, Friday or Saturday before... wait, no, you should go farther back than that. With HPSS, the week before you might want to make sure you pull out all the data that you could possibly need, and anything on scratch that may go away soon you might want to package up and push over extra early, because you will have no place to put it during that week.
C
Yeah, now is really the time to start preparing for that outage. And also, if you need a temporary quota increase on scratch or on the community file system, we're happy to offer those for the duration of this maintenance.
A
Oh, okay! Yes, so if you do need those... Questions? How do you request one? Yeah, just put in a ticket. Just send us a ticket; there's actually a form, I believe, for increased quota requests.
C
There is a form, yes, so you could use the form. You could also just send in a ticket and say: hey, look, this is what I need to have; how can I best do this? And we're happy to help you figure out how best to put your data in the right place.
A
Either way will work. What they might do is direct you to the form, or they might go to the form themselves to look up all the stuff they need to do to fulfill the request. But either way, just send in a ticket and we should be able to help you out with that.
A
Okay, so there are three training events coming up. There's the ECP community day, or days, so anybody involved with ECP, or who is part of that community, feel free to join them. I'm actually looking forward to that one; it should have some interesting stuff. There's a training on HPCToolkit, especially with Perlmutter. HPCToolkit has a lot of interesting and unique things that should be very helpful to the general scientific community in terms of profiling and all that kind of stuff.
A
So if you're interested in trying out different profilers, or seeing what information is out there (profilers and debuggers and that kind of stuff), sign up for that. It's March 29th and April 2nd. So two weeks... yeah, I think it's two weeks; my plus-seven math is not quite on par today.
A
Well, how convenient! Oh, wish her a happy birthday for me. And for anyone interested in Spin, and containers and things like that, we have a new... not a new, another Spin workshop.
A
I found the Spin workshops are helpful even if it's just something you might be interested in: throw it on in the background and listen to it, and just kind of put into your head the kinds of things you might want to do with Spin, and then, whenever that time comes up in the future, Spin is actually something you think about. So Spin's there; feel free to sign up for that workshop, it should be very useful. And a couple of other quick ones. Number one, the easier one:
A
The reduction dates this year have been decided: they are May 4th and September 7th. So you have about, what, two weeks... no, two months now before a reduction date comes in and your allocations might go down. So you have a little bit of time, but be aware of when they are. You don't want to lose a bunch of hours that you are actually going to use just because you were waiting on coding and things like that.
A
Okay, so this is mostly for PIs and for repos. What users tend to do, or groups tend to do, is: when the hours get close to running out, they use them all. So early on in the year people don't use a lot of the hours out of the repo, and they stock up and stock up and stock up, and the problem is, if you get to the end and 70% of the repos are trying to use all of their hours...
A
There's no time; the system gets crammed and you can't do any work. So a couple of times throughout the year, if the repo has (I think it's 10 or 15 or 25 percent; it changes from year to year and gets adjusted) way more hours than it should have if the hours were used evenly throughout the year, they will cut some of them off to prevent that from happening.
C
So, Rishi, there's information about this in the NERSC weekly email, and then there's a webpage; there's a link in the email that you can follow to find out more. But yeah, Kevin's right. Basically, if you haven't used your allocation, then we're assuming that means you're probably not going to use it, and we're going to take that time from you.
D
So is it possible to specify your allocation schedule so that, how do I put this, it's more in line with what you're actually going to use?
D
Because, honestly, the way I look at it is: initially I'm not going to use my allocation heavily, because I'm doing debugging and just getting set up and whatnot. I have, like, a nine-month project, right? So months zero through three I'm going to use it very little, then months four through six I'm going to use it a little, and then I'm going to use it heavily months seven through nine.
C
You could have told us about that in the ERCAP submission, and then also you can apply for an exemption. So just send in a ticket that says: hey, I'd like to be exempted from the May 4th reduction, and here's why, because I have this plan of how I'm going to use the time.
A
Exemptions happen all the time. This particular policy is meant more for multi-year or continuing sorts of repos that carry on over a longer period of time; for something like a nine-month one, yes, just send something in and you'll almost certainly get exempted. That shouldn't be a problem. Okay, and Rebecca stole my thunder at the end.
A
Here I was going to say: find the details of absolutely all of this by checking this week's weekly email by Rebecca. So yeah, if you want the details of literally any of these things, just go to the weekly email, look it up, and you can follow up. The other, final cool one, which I'm hoping I get good use out of, is that there is now a new compile queue being tested.
A
A single Haswell node has been put aside purely for compile-heavy groups, so you have to ask for access to this, and it can't just be that you compile every once in a while and want to use the compile queue to compile. It has to be that you have a compile-heavy workload.
A
You actually compile a whole lot in your development, and if that's the case, you can apply to get access to this compile queue, which should give you much quicker turnarounds on doing compiles, and you don't compile on a head node or something like that and jam things up. So if you are interested, or you think that applies to you, find the details in the weekly email and apply to join the compile queue. It should be a rather interesting little thing.
A
Are you guys here and available?
B
The Parallelware tools have been available at NERSC for one year, or a year and a half, and we have a tool that can help you speed up your applications, or parts of your applications, both for multi-core CPUs and for GPUs. We have created materials with Rebecca and Rebecca's group to really understand how to apply them in practice, and used the tools to accelerate some of the codes that you can see here in this listing, from something very simple like HEAT or MATMUL up to something more complicated like the particle-in-cell method. Okay.
B
So this is just here for your reference, to take a look at the materials you already have on the NERSC website. You can look at the videos, the slides, the working examples and the quick start that we prepared as part of those trainings, so it may be interesting at some point for users to take a look at that. Okay, so today you will see a demonstration with a different use case, from a different field.
B
The Canny algorithm is an image processing algorithm for edge detection in images, and essentially it's a very interesting example because it contains more than 30 loops, with a wide variety of loops with different properties, that are somehow similar to the problems you can experience with the other examples that we have addressed in trainings, from PI, HEAT and MATMUL up to LULESH, CG and ZPIC. Okay.
B
So what you will see in the demo is also a proposed pathway. Let us explain briefly what you can see here. We have experiments for Clang, GCC and ICC, the three main compilers that users typically use, on the CPU side at least. And here what you can see is what you can get out of the box by enabling the maximum optimization level of the compiler, typically -O2 or -O3, and the difference between disabling the auto-vectorization capabilities and enabling those capabilities.
B
In this example you can see that you can get a five percent reduction in the runtime just by enabling these flags. So the question is: can you get more out of the compiler, out of the system that you already have, by making changes in your code that enable it to exploit SIMD, the parallelism inside your threads? And the thread can be a single-threaded, sequential application, or one thread within an OpenMP application, for instance, or an OpenACC application on the CPU side, not on the offloaded path.
B
And with this, we have been working on that, and you can see that you can reduce the runtime by up to 50 percent just by using the SIMD capabilities available in your systems. Okay, and these experiments were conducted mainly on Cori, some of them, and others on a different cluster, but you can expect similar results on Cori and the typical machines that you have available at NERSC.
B
And, of course, if you combine these with multi-threading capabilities or offloading capabilities to GPUs, you can expect, if the workload is big enough, to increase the reduction in execution time to sixty percent or even beyond that for compute-intensive kernels. Okay, so this is more or less to provide you a pathway for how you can benefit from GPU offloading, the power of GPUs, instead of starting with the GPU.
B
That may be a bit intimidating or difficult in the beginning; how you can start with something to make it more incremental, with SIMD and multithreading, and at some point decide which parts are ready to be offloaded to the GPU because it makes sense from the performance point of view. Okay, so going to the Parallelware approach: we base this on two pillars, and you can see in the chat I have shared one link, with a recommendation, or a defect, in the Appentra catalog. This catalog is free, is open and will always be open.
B
We envision that at some point the community will maintain this set of defects and recommendations, because it's a great resource for learning, for experimentation, and also it opens the opportunity to create tools that can automate checking big codes, or big code bases, to find recommendations or defects that appear in the catalog. Okay, and so that you can have an understanding of what information you have there: we built this by working with the centers, customers and partners, and you can see examples of defects; for instance, the one that we copied here is this one.
B
And if you take a look at this, you can see why it's an issue, why it is relevant for parallelism and for software acceleration, which are the actual actions that you really need to take to get rid of this problem, and code examples, illustrated typically with OpenMP or OpenACC. Okay, because part of this knowledge base has been created while preparing the courses that we did with Rebecca's workplace at NERSC. Okay, so you have all of this available, and we will be very happy to hear about limitations, problems...
B
So what you will see in a few minutes in the demo is two tools: the Trainer tool, which is a graphical user interface, and the Analyzer tool, which is a command-line interface. All the capabilities are there. The UI is intended to work on hotspot loops that you have already identified, to quickly explore how to make the offloads to the GPU using OpenMP or OpenACC, to provide you with an example.
B
The Analyzer is intended for you to get started with a big code that you don't know, that you are not aware of, where you don't know which are the critical parts, and to help you, to guide you through the process of focusing on and finding the issues that you can and need to fix to really make the code run faster, and correctly, on your target system. Okay, you will see those tools in the demonstration, so I think that this is a good moment; I will not go into the details of all of these.
B
You have explanations in the video recordings of the course we organized with Rebecca in October, but just so you have an understanding of how the tools look before seeing the demo: the Analyzer is a set of four command-line tools. pwreport provides you metrics; it is the entry point. pwcheck provides you an optimization and vectorization report of things you need to do; from the catalog, you will see a listing of the defects and recommendations that the tool has found in your code.
B
Okay. pwloops focuses on opportunities, on loops that you can parallelize, and pwdirectives provides rewriting capabilities, so that you can ask the tool to insert OpenMP or OpenACC pragmas in your code, and of course you have the final word in controlling which changes are made to the source code. That's the way the tools have been designed.
B
So this is how the tools look; you will see them in the demo. This is pwreport, this is pwcheck, this is pwloops. You can see the report of the functions and the loops that you can find, there is information about them, and rewriting capabilities are provided by pwdirectives. And this is the graphical user interface of the Trainer, where you can see different kinds of icons that relate to the opportunities, defects and recommendations that you see in the catalog, and when you click on these nice buttons...
B
You open a dialog where you can select how you want to parallelize the code that has been recognized and detected by the tool, and then the Trainer includes the rewriting capabilities to annotate your source code with OpenMP and OpenACC pragmas. Okay, so I will stop here. I have tried to keep it as short as possible, to give a very short overview of what you will find there. Just a reminder: you have the recordings on the NERSC website. I will stop sharing now so that Xavier can share his screen.
I
Okay, so let's get into the demos. I guess we are pretty tight on time, so please let me know at any time... just feel free to stop me. So I connected to Cori via NX, and you can see that we have both the Trainer and the Analyzer available there. We are working on updating them to the latest versions; we are on 1.6, and on Analyzer 0.17.
I
This should be happening in the following days, so you can go to Cori, just load these modules and start using the tools. Okay, today I will use my machine, to use those latest versions. I will start with a hopefully quick demo of the Analyzer first, which will be analyzing the Canny image processing algorithm. It's an edge detector that performs this highlighting of the edges of the image that you can see on the screen, and that code is over here in this code editor.
I
It is less than 1000 lines in just one single file, and we know from profiling that this gaussian_smooth function is the hottest spot. We have profiled the code and determined that this function, which contains these two loop nests, is the hottest point. So, like Manuel said, Parallelware Analyzer is four command-line tools. We always recommend starting with pwreport; that's the go-to, entry-level tool that provides a high-level overview of what's in the code.
I
So
if
I
run
that
for
kanye,
you
see
that
from
the
code
coverage,
the
analysis
did
not
succeed
and
we
get
a
suggestion
that
you
enable
their
reporting
with
issue
failures
flag.
So
if
we
do
that,
then
we
get
a
classic
error.
That
is
that
you
forgot
to
add
the
header
directory
that
your
code
requires.
If
you
see
here,
the
project
is
just
one
single
file,
but
it
has
this
couple
of
headers.
So
we
can
do
that
following
this
recommendation,
which
uses
gcc
and
clan
syntax
to
add
the
header
directories.
I
So
it's
pretty
common,
and
this
is
something
that
you
always
have
to
do
with
the
static
code
analysis.
We
need
access
to
the
source
code
and
it
must
be
valid
and
compilable
for
the
analysis
to
succeed.
So
now
we
have
a
decent
code
coverage
that
you
can
see
here
and
in
the
metric
summary,
you
see
the
different
actionable
items
that
are
available
in
analyzer.
So
first
you
have
the
facts
and
recommendations.
I
These
relate
to
the
catalog
presented
by
manual.
Defects
are
backs
such
as
databases
that
should
fix
right
away,
refer
to
optimization
opportunities,
how
to
enable
a
loop
to
be
vectorizable
or
parallelizable
or
how
to
follow
best
practices
for
different
technologies
as
a
openmp,
for
instance,
then
you
get
suggestions
on
how
to
proceed
next.
I
So,
for
instance,
if
we
are
interested
in
the
defects
and
recommendations,
we
can
invoke
pw
check
I'll
just
do
that
by
copying
this
command
here,
and
you
see
that
you
get
a
very
structured
report
following
the
function,
definitions
and
the
different
loops.
So
you
can
relate
these
to
the
structure
that
you
see
on
the
right
and
in
each
level,
for
instance,
here
in
the
first
level
of
the
loop,
you
see
the
different
recommendations
that
they
are
there.
I
You
can
add
more
detail
with
level
three,
which
is
the
more
variables,
and
you
see
that
you
get
an
extended
output
where
you
get
the
whole
loop
code
lines,
more
verbose
message
with
the
offending
line
and
an
explicit
suggestion
on
what
to
do
in
this
case
move
the
variable
x
to
the
beginning
of
the
loop,
for
instance,
and
there
is
a
lot
of
information
that
you
can
extract
from
pw
check,
which
is
the
most
classic
static
code
analysis
uses
that
verifies
a
set
of
rules.
I
In
this
case
the
defects
and
recommendations
analysis
provide
output
in
csv
and
json
formats.
So
it's
easy
to
integrate
them
with
other
tools
and
going
back
to
to
the
initial
pw
report
output.
Then
we
can
go
following
these
opportunities
for
parallelism
that
they
are
here
so
for
that
we
recommend,
or
the
tool
recommends
using
pw
loops.
So
we
do
that
and
you
see
a
listing
of
all
the
loops
that
they
are
in
the
file.
I
So
the
information
here
in
this
table
is
the
loop
whenever
it
could
be
successfully
analyzed
a
classification
which
is
key
in
the
parallel
world
technology,
the
static
codonized
technology
that
we
have,
which
is
classify
all
the
variables
using
the
loop
into
different
compute
patterns
and
based
on
this
information,
we
can
then
classify
the
loop
if,
if
it
qualifies
as
an
opportunity
of
a
given
type
for
instance,
here,
the
outer
loop
is
a
multi-threading
opportunity.
This
loop
over
here
and
the
innermost
is
the
same.
I
So what I will do is perform what we call a hybrid parallelization for these two loops, where we will parallelize the outermost loop using multi-threading and the innermost loop using SIMD. For that we use, as printed here in the last suggestion, the pwdirectives tool. So I just pass, as instructed there, the line number where the loop is located, the first one.
I
I will do the same for the second one: okay, 862, that's the second one. I'm pointing here at the outermost ones, and I tell it: okay, please generate OpenMP directives for multithreading plus (and this is where the hybrid comes in) SIMD. So, if there is any opportunity in that loop nest, also vectorize the loop; don't only do the multi-threading parallelization. I tell it to edit the original file and also add the required header.
I
So
upon
doing
that,
you
get
a
bunch
of
information
here,
but
on
the
right,
you
can
see
what
happened
here.
So
you
get
the
automotive
parallelized.
You
get
the
directive
for
multi-threading
and
also
the
innermost
loops
here
and
down
here,
parallelized
or
vectorized
in
this
case,
using
open
cindy
and
the
required
reduction
clause
for
these
two
reductions
that
happen
over
here.
I
Okay, so this was a very fast crash course in how to do a hybrid parallelization and how to check for defects and recommendations using the Analyzer. pwloops has many other sub-analyses, and pwdirectives has many other options for parallelization, using SIMD, compiler-specific directives, different paradigms, or even OpenACC. Now I will switch real quick to the Trainer to see what this looks like in the Trainer, which is kind of a different beast, and to do an example of offloading from there.
I
So yeah, let me launch the Trainer here. What you have here is an integrated development environment. You can open any folder, or install the bundled examples that you have here (it will ask for a directory) and then open any folder. You don't need to do any prior setup. So in this case I have opened the HEAT folder, which comes with those examples. If I double-click, you will see that now you have some red and green circles next to some loops. These constitute opportunities for parallelism.
I
So if I click one of them, you get a parallelization dialog with all the different options that you have to parallelize. The idea with Trainer is that you try different parallelizations. You have a version manager, you will see that in a moment, and you switch back and forth between different versions, execute right from this graphical user interface, compare the performance of your different parallelizations, and then take away your work.
I
So if I, for instance, wanted to do offloading with OpenACC, I select the corresponding options, and you see that the correct fragments will be produced here. It is missing just this array range; in some cases we cannot infer all the information.
I
This is one such case, so I will go back and restore the original version. Like I said, there is a version manager, so it automatically generates a new version for you every time you change the code; you don't lose anything, and you can restore any of the versions at any time. So I go back here, select the same options, and I enter the missing information here, which was that t goes from zero to nx. Okay, that should make it; so now we have the complete information.
I
Like I said, you don't require any prior setup of your project, but when you want to build and run from here, you will get asked for the build command. So in this case I put in the invocation to nvc, the old PGI compiler, and how to run the code.
I
So we can just click on run and it will do everything. You get the output from nvc on the build output console, and then you get the execution information here on the execution output. In the Parallelware output you will have some information regarding what options there are for the parallelization and anything that you need to be aware of, and you have some terms that are underlined, which you can click to get information embedded in the product.
I
Okay, to finish before the questions, you can also see what the actual checks look like here; let me find what I'm looking for. So, for instance, here is a code that has some defect.
I
You will get this red triangle and, for recommendations, this yellow triangle. In this case it is that this code has a data race, because the j is incorrectly shared; that is here, in correlation, sorry. So if you were, following this recommendation here, to add j to a private clause, this would be fixed right away.
I
You could also have prevented this error by following these recommendations: adding default(none), or declaring explicitly the scope of all the variables there.
So that's it. I know it's been pretty fast, but pretty tight, and I wanted to show a lot of things. So if there are any questions, feel free to fire away. Thank you.
I
D
I
Well, they recommended it for me, yeah. I think what they recommended is NX; that's what we have been using in the courses, and this is what you are seeing right now on the screen. It runs incredibly well. I can try to load the... I mean, it's really fast, even if it's a remote connection; way better than X11.
D
A
So NoMachine is a virtual machine that is sitting at NERSC. What you do is you log into NoMachine, and NoMachine will run all the visualizations literally right there against Cori. So all the complicated graphical stuff is done locally, and all it does is share what the screen looks like back to you. It massively increases your interactive ability with this kind of graphical interface, and it allows things to run really well: the Nsight tools, the NVIDIA stuff.
A
This is the only way I ever even attempt to run it at NERSC, and it runs extremely well. There's a little bit of delay, but not enough to really hinder anything anymore. So if you search for NoMachine NERSC, or NX NERSC, or something like that, the information will pop up. It's pretty easy to set up and get into, and yeah, definitely use it for stuff like this. Okay.
A
I believe that's the current way, yes. There's been a bit of history with it: there was a web-interactive part where you could do it, but it wasn't working well and presented some security issues, so we didn't do that; it got suspended. But I believe, yes, the current recommended way is just to put in a client. It's a pretty lightweight, no-big-deal client, and yep.
A
So, one statement from me and one question. Number one: the OpenMP had default(none) in it, so you guys are already awesome; you get massive brownie points. That's amazing, yeah!
A
Second, while you were presenting, I looked up your website with the, was it examples on it, or the possible questions or tips, I forget what you call it. One that I would think would be interesting would be inlining of GPU device functions. For most of the cases that I've seen, inlining gives you a gigantic performance improvement, well, it has the potential to, because the number of registers you use is severely reduced.
A
That's where it gets tricky, or interesting. Remember, my background is CUDA, so for us we would just label all the functions inline, and if you missed one it'd be a big issue; you'd have to go find it, and it's a pain to find. If you're doing it through OpenMP, sometimes you might have to manually label things and sometimes you won't. I don't know exactly where the cutoff is, but that would be kind of the point.
A
We can wrap up for the day. All right, so to wrap up: does anybody else have any useful or interesting ideas for talks, or things that they would like to hear during these NUG meetings? I have a couple of examples here, topics that I am currently working on. So, for example, a Perlmutter overview.
A
What's going to be in there, how users might need to think about things. Or a GPU porting guide: what language to go with, or what tools to use to help you figure things out, or something like that. Is there anything else anybody would be interested in hearing, learning some more about, or digging deeper into? Or do these sound good; would making these a future NUG talk be useful to anybody?
J
Something special about the file system, and how it can read directly, what was it, TCP/IP or something like that? What's that going to do for the users, for example?
A
I know that acronym; I don't remember it, though. Oh, all right, yeah, I know what you're talking about. Yes, all right, that would make... okay, I will put that down as a future talking point. Anything else? Anything that people will have questions about, or would like to dig into and hear something deeper about?
A
I find it a bit odd that we wrap up with the numbers, but hey, I actually like numbers and plots, so I'm not complaining. So this is last month's numbers from NERSC. Looks like, overall, we had very good utilization, so thank you all for using the system, and actually, you know, taking this resource and making science out of it on a regular basis; you'd be surprised how complicated and difficult that is from time to time. The outages were actually very good this month.
A
Besides our scheduled maintenance, we had a couple of small issues: two Slurm issues and a cscratch issue, which seems to be our ongoing issue. But I think we had a cscratch issue yesterday, so next month's will be even more interesting. Whatever, it all worked out pretty good. Hopefully you guys took advantage of this and got a lot of good work done. And 94, looks like we got two sets of numbers; I think this is the accurate one, so we had 94% utilization.
A
Of course, well, I would like to, what was that, there's the chart that I always like to see, of wait time. Maybe we should put wait time in here; I find that interesting. So our utilization was 94%, also very good, and looks like our tickets are doing about normal: we closed about 600, and our backlog is about the same as it is month to month. So that's very good, yeah. That's actually a really good sign for me that a lot of good things are happening with getting stuff done.
A
So if anybody has one of those backlog tickets and it's been sitting there for a while, please feel free to just ping that ticket loudly and remind us to go and do it. But otherwise, looks like you guys are interacting with us very well, and hopefully we're answering a whole bunch of important questions for you and helping you get science done on the system. All right.
A
D
I have just a general question, if you guys have any idea on what to do. So, a lot of times in my codes I'll do this thing where I copy from one data buffer to another one, where basically I'm doing some kind of transpose or something, so that on the next step I can parallelize and get sequential usage. And so what I do a lot of times is I'll copy from, like, my dat to my temp, but then outside of that I'll
D
just swap the pointers so that the rest of the code works properly, right. But when I go to OpenACC, what I kind of want to do is say, on the GPU: here's my data, and the temp is just temp, I don't really care, but I want to kind of swap the pointers on the device, not locally, because I just basically want to say, like: hey, keep using these and do what you have to do; don't copy it back locally and then copy it back.
D
So I'm wondering, is there a way to not copy the data off the GPU, but just somehow tell OpenACC: hey, what used to be data is temp, and what used to be temp is data? And it's not so simple that I can just change other pieces of the code, because, you know, the other pieces of the code expect the right data set, right, and depending on how many iterations I've done, it could be either way.
A
Yeah, I'd imagine there should be a way to do it. Does anybody have any ideas?
F
A
That sounds like another question for the NUG Slack channel, yeah.
D
You know, then it'll all stay on the GPU; I don't have to copy it back to the CPU because I swapped some stupid pointers. Right, I mean, the problem is the interaction between OpenACC and C, right? So, like, if I was doing this in CUDA, it wouldn't be a problem at all. I'd be like: oh yeah, here's my pointer that I want to use; I'm going to use this one this time, now use that one, now use this one.
D
A
It sounds like, that seems simple enough that there should be some capability and way around it, yeah.
D
A
D
A
D
I mean, I'm wondering if it's as simple as just making the data section larger or something like that, so that it's got the parallel loop inside, but you're not doing the copy in there; you're doing it higher up or something like that.
A
Okay, well, thank you guys very much for participating in this users' meeting. Come back next month, and feel free to come with all kinds of interesting things. If this is your first meeting, you don't know the kind of things we talk about, so be sure to have some questions, have some things up front that you could discuss and bring to light for the group. We hope to see you next month.