From YouTube: 01 Welcome and Introduction
Description
Part of the NERSC New User Training on June 16, 2020.
Please see https://www.nersc.gov/users/training/events/new-user-training-june-16-2020/ for the training day agenda and presentation slides.
First, I just wanted to show you all what our schedule is going to look like and tell you about some logistics. We've muted you upon joining Zoom because we have so many attendees. We'd appreciate it if you could change your name in Zoom to be your name followed by your username in parentheses.
The slides are all available, and we're recording these sessions right now; they're going to undergo some editing and then we'll upload the videos. After that, we have a Google Doc that you can use to ask your questions, and what will happen there is that we'll read the questions to the speakers, and the speakers can answer them orally. Then, at the end, we would really appreciate it if you would take our survey to let us know how we did and what we can do better.
Okay, so our agenda: for the first half hour I'm going to welcome you and introduce you to NERSC. Then we'll learn about accounts and allocations and how to connect to NERSC using SSH. We'll have a little break, and then we'll talk about the programming environment and compiling your codes, running jobs, and debugging and profiling tools, and then we'll have a lunch break.
Then, after lunch, we'll have an overview of the data ecosystem: we'll talk about workflows, NERSC file systems and the burst buffer, data transfer best practices, and I/O best practices. After a little break to give you some time to think, we'll talk about Python and Jupyter, Shifter, and deep learning, and then that'll be the end of the day.
So today I'm going to provide you with a little overview of NERSC, so let's go ahead and get started. As I said before, my name is Rebecca Hartman-Baker. I lead the User Engagement Group here at NERSC, and I'm going to provide you with an overview. First we're going to talk about NERSC: what is NERSC? Then we're going to talk about the hardware that we have here, the software that we have here, our guide (of sorts) for interacting with NERSC, and user responsibilities and expectations.
So let's start with an introduction to NERSC. NERSC is an acronym: it stands for the National Energy Research Scientific Computing Center. We were established in 1974 as the first unclassified supercomputing center. Our original mission was to do computational science as a complement to magnetically controlled plasma experiments, so we actually had a slightly different name then, but today our mission is to accelerate scientific discovery at the DOE Office of Science through high performance computing and extreme data analysis.
NERSC is a national user facility.
So from DOE's perspective, they give out time on our machines, and this chart shows the percentage of hours on the supercomputers that were used per office in the 2019 allocation year. There are these different offices; you can see they used different amounts, and the big one here is Basic Energy Sciences.
We have about 7,000 users with 800 projects and 600 different codes that people run on the machines, and hundreds of users on the machines every day. Our allocations are primarily controlled by DOE: 80% of our allocations go through the DOE annual production awards process, called ERCAP, and those awards typically range from tens of thousands to tens of millions of hours. These are proposals that you submit to DOE program managers, and they're the ones who select which projects should be awarded.
Then another 10% goes to the DOE ASCR Leadership Computing Challenge, which is kind of a high-risk, potentially high-payoff sort of computing, so that's also a very interesting way to get onto our machines. And then the remaining 10% is our reserve, which we use for our own special projects, for overhead, for staff use, and things like that.
So I mentioned that we have over 600 codes. This is an analysis from 2018, but it's still pretty current. The top 10 codes make up 50% of our workload, and the next 10, and then 10 more after that, bring it up to two-thirds of our workload. If you look at this pie chart, you can see this: VASP, our number one code, uses almost 20% of all hours, and you can see the rest from there.
So our big focus is on science. Our users produce and publish more than those of any other center in the world; we think about 2,500 articles per year in scientific journals. In 2018 we had 14 articles in Nature, 31 in Nature Communications, and 82 in other Nature-family journals. We had 11 in Science and 31 in PNAS, and we also have 6 Nobel Prize-winning users, so we take great pride in that as well.
Right, so if you have any questions, please go ahead and put those in the Google Doc. Thank you. Okay, so now we're going to talk about NERSC hardware. We get a new system every couple of years. Since you're new users, you probably don't remember Edison, which was a great machine; we got it in 2013 and we decommissioned it last year.
We got Cori in 2016, and Cori is our current machine. Then, starting at the end of this year, we'll be getting Perlmutter, which is our next system. And then, after that, we'll get another machine; we don't know what it will be called, but right now we call it NERSC-10. So we're always getting a new machine every couple of years that is more powerful and, at the same time, more energy efficient, so that we can deliver more science to our users.
So let's talk a little bit about Perlmutter. Perlmutter is going to be a system that's three to four times more powerful than Cori. It's going to be our first system designed to meet the needs of both large-scale simulation and data analysis from experimental facilities, so it's going to include NVIDIA GPUs and AMD CPUs, and there are going to be some nodes that have GPUs and some nodes that have only CPUs on them.
It's going to have a really fast network, called the Cray Slingshot network, and it's going to have an optimized data software stack that'll really help with analytics and machine learning at scale. Another unique capability is that it's going to have an all-flash scratch file system, and flash, you know, is a lot more performant than spinning disk, so it's going to be super fast. We also have a readiness program for simulation, data, and learning applications and complex workflows that's currently going on.
Some of you may be involved in that; it's called the NESAP program. Phase 1 is going to come at the end of this year, so we're super excited about the machine and we can't wait for it to be here. We're naming it after Saul Perlmutter. You may have noticed we had Edison, which was named after Thomas Edison, and we have Cori, which is named after Gerty Cori.
His project, called the Supernova Cosmology Project, was a pioneer in using supercomputers to combine large-scale simulations with experimental data analysis. So we asked Saul Perlmutter, hey, can we name our machine after you? And he said sure, but my condition is you've got to make it so that people don't have to type my really long last name in order to log in. So you have to make it so that you can ssh to saul.nersc.gov; it's much shorter and people won't misspell it.
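In practice, logging in is a single SSH command. As a minimal sketch: "elvis" below is a placeholder for your own NERSC username, and saul.nersc.gov is the short Perlmutter alias from the story above (hypothetical until the system arrives).

```shell
# Log in to Cori, the current system, with your NERSC username
# ("elvis" is a placeholder):
ssh elvis@cori.nersc.gov

# Per the anecdote above, Perlmutter should be reachable via the
# short hostname, so nobody has to spell out "perlmutter":
ssh elvis@saul.nersc.gov
```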
Cori has nodes of the KNL many-core architecture, and then it has about 2,400 nodes with Xeon Haswell cores. So it's got these two different architectures, and you can see we have a lot more of the KNL many-core nodes than we do of the Haswell nodes.
So, in addition to Cori, which is of course the major thing... oh, I should mention Cori has this burst buffer, which is an all-flash file system for optimal performance. If you're doing something that uses lots of data I/O, lots of reading and writing, you could consider using the burst buffer for that. And it has a 31-petabyte scratch system.
So, in addition to Cori, we have a big archive, the HPSS archive, and we'll learn more about that later. Then we have the community file system, we have our home file systems, and we have, of course, auxiliary systems like our data transfer nodes, Spin, and science gateways, things like that, all connected through our Ethernet and InfiniBand networks. And it's all connected to ESnet, which is the Energy Sciences Network, a fast network connecting national labs and other research facilities in the United States, and actually over to CERN as well.
Okay, so I mentioned that on Cori we've got these two different types of nodes. We've got these Haswell nodes, and the purpose of these, really, is throughput: they're set up so that people who are doing things like data analysis can get their jobs through. So we have some queues on there that will even allow single-core jobs, so you don't have to use a whole node.
You can just use a fraction of the node, and we have longer wall-time limits in support of these smaller jobs. Unfortunately, it's a very popular resource, so it tends to have very long queues. The KNL nodes, on the other hand: we have almost four times as many of those nodes, and they are really great for performance. If you can get your code performing really well, they're perfect for that.
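A small single-core job like the ones described above might be submitted with a batch script along these lines. This is a sketch, not the official template: the QOS and constraint names follow common Slurm/NERSC usage, and the executable and input file are placeholders.

```shell
#!/bin/bash
# Sketch of a small single-core job on the Haswell partition using a
# shared QOS, so you occupy (and are charged for) only a fraction of
# the node rather than the whole thing.
#SBATCH --qos=shared
#SBATCH --constraint=haswell
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=1
#SBATCH --time=01:00:00

srun ./my_analysis_tool input.dat   # placeholder executable and input
```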
So, for our file systems: we have the home and community file systems, which are global file systems; we have some local file systems, which are the scratch and burst buffer file systems; and then we have a long-term storage system, which is HPSS (this is actually a picture of HPSS on the right). So, for our global file systems: you've got your home directory, which is permanent, relatively small storage for you. We give you a 40-gigabyte quota in your home directory, and we don't change that.
A
That's
what
you
have
a
home
is
mounted
on
all
platforms,
but
it's
not
tuned
to
perform
well
for
parallel
jobs.
So
in
fact,
what
what
we
really
want
you
to
do
with
it
is
just
use
it
for
storage
of
small
data
like
your
source
code
or
your
shell
scripts.
We
don't
want
you
to
actually
use
it
in
your
parallel
jobs,
so
it
has
what's
called
snapshot
backup.
So if you accidentally deleted something in your home directory, and you had it yesterday, then you're good: you can just go into those snapshot backups, up to seven days back, and retrieve that file.
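A restore from a snapshot might look something like the following. The `.snapshots` directory name and its date-stamped layout are assumptions here, not confirmed in this talk; check the NERSC documentation for the exact path on your system.

```shell
# Hypothetical: list the available daily snapshots of your home directory
ls ~/.snapshots/

# Copy yesterday's version of an accidentally deleted file back into
# place (the snapshot directory name and file are illustrative):
cp ~/.snapshots/2020-06-15/thesis_notes.txt ~/thesis_notes.txt
```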
We've also got the community file system, which is also permanent, larger storage; it's larger than home. It's not on all platforms, and it offers kind of medium performance for parallel jobs: it's not great, but it's much better than home. We can increase your quota on community, and it does have the snapshot backup capability.
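To see where you stand against these quotas, NERSC has provided a quota-reporting command; the name below is an assumption based on common NERSC usage, so confirm it against the current documentation.

```shell
# Show your usage and limits on home, scratch, and the community file
# system (command name assumed; consult the NERSC docs or `man myquota`):
myquota
```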
So we really want you to use your community file system directories for sharing data within your research group. If you have, say, a big dataset that everybody needs to use, put that one in your community file system storage so that everybody can look at it.
Okay, so next we'll talk about the local file systems; these are just local to the machines. Scratch is a large, temporary file system. It has what we call a purge policy, which means that if you have a file that has just been sitting there, not being used, for 12 weeks, then we reserve the right to delete that file to make room for other files.
The scratch system is optimized for read/write operations and not optimized for storage. It is not backed up, so if somehow your file on scratch got corrupted, or it got purged, then it's kind of too bad; there's really nothing we can do about it. So scratch is a really great place to stage your data, perform your computations, and read and write from during your job. That's what scratch is really great for, and that's how we want you to use it.
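The stage-compute-save pattern described above can be sketched as a batch script. This is illustrative, not an official template: `myproject`, the file names, and the executable are placeholders, and `$SCRATCH` is the environment variable NERSC sets to point at your scratch directory.

```shell
#!/bin/bash
# Sketch of the "stage onto scratch, compute, save results" pattern.
#SBATCH --constraint=haswell
#SBATCH --nodes=2
#SBATCH --time=02:00:00

# Stage input data from the community file system onto scratch:
cp /global/cfs/cdirs/myproject/inputs.dat $SCRATCH/

# Run the job, reading and writing on scratch:
cd $SCRATCH
srun ./my_simulation inputs.dat

# Afterward, move results you want to keep off scratch (it is purged!):
cp results.dat /global/cfs/cdirs/myproject/
```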
Okay, and finally, we'll talk about the burst buffer.
On the burst buffer you can have temporary, per-job storage on a high-performance SSD file system. It runs on solid-state drives rather than spinning disk drives, which means it's a lot faster, and your I/O pattern doesn't matter as much as it would on spinning disk. The burst buffer is exclusive to Cori, and it is really perfect for getting really good performance if you have a code that is constrained by I/O.
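On Cori, per-job burst buffer storage is requested through Cray DataWarp directives in the batch script. The sketch below shows the general shape; the capacity, node counts, and file names are illustrative, and later presentations in this training cover the details.

```shell
#!/bin/bash
# Sketch of requesting a per-job burst buffer allocation via a Cray
# DataWarp (#DW) directive; sizes and paths are illustrative.
#SBATCH --constraint=haswell
#SBATCH --nodes=4
#SBATCH --time=01:00:00
#DW jobdw capacity=100GB access_mode=striped type=scratch

# The allocation is mounted at $DW_JOB_STRIPED for the life of the job:
cp $SCRATCH/big_input.h5 $DW_JOB_STRIPED/
srun ./io_heavy_app $DW_JOB_STRIPED/big_input.h5

# Copy results back before the job ends; the allocation goes away with it:
cp $DW_JOB_STRIPED/output.h5 $SCRATCH/
```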
Okay, so finally we've got our long-term storage system, which is HPSS. HPSS stands for High Performance Storage System. It's an archival storage system, and it's kind of where you put data that you don't need very often. It's a hierarchical thing: first it ingests your data onto some disk arrays, and then after that it stores it in the back end on tape. Again, we'll see more about it in later presentations.
So before we go on, though, I want to give you my favorite little analogy. I like to liken our file systems, and really our whole ecosystem here, to a giant kitchen. Okay, so let's say that NERSC is like a giant shared kitchen: it has all the latest gadgets and super cool stuff. Computing is kind of like baking in our kitchen: the input is, you know, your baking ingredients, and your output is, let's say, a cake.
Alright, so NERSC itself, the supercomputer, is like an oven. Home and CFS are kind of like your pantry and fridge, where you store a lot of ingredients that you might use fairly frequently. HPSS is like a freezer, where you store the frozen blueberries or something else that you don't use that frequently. And scratch is like your kitchen counter.
So when you bake, you stage your ingredients from your pantry, your fridge, and possibly your freezer onto the kitchen counter. Likewise, when we're computing, we want to stage our data and executables onto the scratch file system; that's our countertop. Okay, so after baking, you want to clean up after yourself. It's alright to let your cake cool on the counter, but ultimately we've got to leave our space clean for the next user.
So on a Cray supercomputer, the operating system is a version of Linux; it's kind of an optimized version. We have compilers that are provided on the machines, and we have libraries on the machines: many of them are provided by Cray, and others we actually build for you. And we also have some applications.
These are software packages that we provide for our users, and there'll be more details in a later presentation on how all this works. We have all kinds of chemistry and materials science applications that we provide, because these are very commonly used and we'd like for optimized versions to be used on our platforms. So that's why we provide those.
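This provided software is exposed through environment modules. As a sketch, the names after `load` and `swap` below are examples; `module avail` shows what actually exists on the system.

```shell
# Discover and load software provided on the system:
module avail            # list available software modules
module load vasp        # load an application, e.g. VASP
module list             # show what is currently loaded

# Swap the default Intel programming environment for GNU
# (environment names follow Cray conventions):
module swap PrgEnv-intel PrgEnv-gnu
```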
A
We
have
a
new
software
policy,
so
the
default
gulps
are
consistent
for
one
allocation
here.
So
we
use,
we
have
the
same
Cree
programming,
environment
software
available
as
default
for
the
whole
allocation
year,
except
in
cases
where
we
might
have
security
issues
or
major
operating
system
upgrades,
and
we
may
not
be
able
to
continue
supporting
the
same
versions.
Then, for the software that NERSC provides, we have four support levels: priority, provided, minimal, and restricted. At the priority level, we provide it, we take it very seriously, we keep it at a high priority, and we perform regular functionality and performance testing. If it's at the provided level, we'll provide it and we regularly make sure that it works, at least. If it's minimal, we may or may not provide it to you; it's a pretty low priority and we don't perform any testing.
So here is the consulting and account support team. When you send in a ticket, these fine people will answer it, or triage it to be answered by someone else. In 2019 we handled 7,825 tickets from 2,709 unique users, and you can see the different areas that we handled tickets in. We got a lot of account support tickets; running jobs is always a big one; software; data and I/O; you name it, we answer questions about it. So yeah, that's what we do, and we promise that we will reply to you within four business hours.
Now, a business hour is obviously time that we are open for business, so that's Monday through Friday, 8 a.m. to 5 p.m. Pacific time, except for holidays. We'll help you resolve your problem and keep you up to date on it.
We'd like for you to help us to help you, so when you send us a ticket, it's helpful if you can tell us: What is the problem? What machine? When did it happen? What modules were loaded? How did you try to fix it or work around it? If you just send us a ticket that says "my job didn't run," that's not particularly helpful. We need to know what your job was, what you were trying to do, here's your script, things like that.
Otherwise we can't really help you very easily; it'll just take longer. Okay, so next let's talk about our operations staff. We have operations staff on-site 24/7, 365 days of the year, Christmas and Thanksgiving and all the days, to supervise the operation of the machine room. They are some really smart people; they know how the machines are doing, and they can help you with some tasks, like killing jobs that really won't die, or changing a reservation that's already running, stuff like that. But generally we ask you to avoid contacting operations except in case of an emergency.
Okay, so next is the NERSC User Group. This is our community of NERSC users, and they are a great source of advice and feedback for NERSC; we listen to what they have to say. There's an executive committee with three representatives from each office in the Office of Science, plus three members at large, and if you're interested in joining that, we have elections every year. They hold monthly teleconferences and they have a Slack channel, and you can join it at this URL.
You just need to log in in order to get there. And please also join us for the NUG annual meeting, which is going to be online on Monday, August 17th. Okay, finally: user responsibilities and expectations. Here's what we ask of you. Please be kind to your neighbor users, so don't abuse the shared resources. We sometimes have problems with people who will kind of overuse the login nodes, and then everyone else who happens to be logged into that node suffers. So please don't do that.
Use your allocation smartly, so pick the right resources for your job and your data. Small jobs, as an example, work really great on the Haswell nodes, but maybe not so well on the KNL nodes. So just try to think about where you want to run your jobs and how you want to do it. Back up your data, especially stuff that's on scratch, which has that purge policy. And please acknowledge NERSC in your papers; acknowledge us so that we can stay in business.
If the Department of Energy thinks, well, there are no papers, there's no science coming from NERSC, then we might have to go out of business. So acknowledge us so that we can stay in business and you can keep using our resources. And then, finally, pay attention to security. Don't share your account with others, and don't hesitate, if you're not sure, to let us know if you think maybe something's happened, maybe your account's been compromised. We don't mind false alarms; we prefer false alarms over not finding out about something that happened.
Questions: some of the questions in the Google Doc have already been answered, like where does the NERSC-9 name come from, where does the Haswell name come from, and where are the slides available, but there are a few others I think you could help to answer as well. People also asked about how we count those publications, but we could also hear what you say about it.
It's a very hard problem, actually. We rely in part on self-reporting: if you produce a publication and you want to tell us about it, we'd love to hear about it. In fact, we might even use it as a feature in one of our science articles, or in things that we submit to DOE to talk about NERSC. But we also perform searches. We probably aren't seeing all of the papers that are coming from work performed at NERSC, but that's basically how we do it.
Okay, that's a good question. So when I was talking about small jobs, what I was really thinking of was jobs that use maybe one or two or a couple of cores, because we have this shared queue that you can use. You can take advantage of that, and then you don't have to pay for the use of an entire node; you can just use part of a node and only pay for what you're using, basically.
Yeah, I mean, you can run single-node jobs too, which is totally fine, and you can probably get pretty good throughput from that as well. But yeah, what I was really meaning was jobs that don't use any MPI, that don't even necessarily use any threading. Those jobs are really better on the Haswell nodes rather than the KNL nodes.