From YouTube: Natural source EM SimPEG meeting
A
And I know that there are a few new faces to introduce around, so maybe we could just do a quick round of introductions if that works. Fernando, I'll maybe be asking you to go last, so hopefully by that time…
G
Mostly, I've been doing a lot of natural source stuff in the past maybe year and a half with the UBC GIF codes and working with our sponsors. So I've had a lot of interest lately in kind of developing workflows and really seeing what the practical needs are for people who start with a dataset and would ultimately like to invert it. So that's kind of my involvement with this.
F
So hi, for the new faces: my name is Fernando Perez, I'm faculty at UC Berkeley. My training: I actually did my PhD in Colorado, not at Mines but at CU Boulder, in the physics department, and I was trained as a particle physicist. I've had a weird, long, winding path that has mostly involved a lot of software. I've been very involved with the scientific Python ecosystem from the start; IPython was my PhD procrastination
F
project. I grew it into Jupyter, so that's going to come up a good deal today. I'm faculty in stats, and my interest, for the context of this discussion, is really that I've explicitly chosen to focus the domain aspect of my research around the geosciences. I probably should have done a geophysics PhD instead of a particle physics PhD.
F
If I have called things right, and so that's exactly how I connected with Lindsey a couple of years ago, and how she came to me. It was an explicit intent, basically, focusing on the intersection between physics, as represented in problems in the geosciences, and questions in data science and machine learning, without losing sight of my interest in open source and open science and the development of open source tools for scientific research. And so last year we were awarded a grant from the NSF for a project that we thought…
F
The way we framed it back to the NSF was that we would work on having a couple of domain use cases in the geosciences, one of which would be exactly the kind of geophysical problems that SimPEG focuses on, as well as work in hydrology and work on the analysis of large-scale climate model data, and to couple that with the development of infrastructure in the Jupyter and Pangeo ecosystems. For those of you who may not be familiar, just in a couple of words:
F
Pangeo has been very successful, it has grown well, and our grant is kind of the second grant on the Pangeo thread of ideas; in fact, one of our co-PIs is one of the original Pangeo folks. So I'm here, and I may not stay for the entire two hours of the meeting, but my intent is sort of understanding some of the scientific use cases that you folks have in mind and how we can use that and connect it with this project.
F
Beyond this: Lindsey is at this point actually well connected with the Pangeo team, and so am I, so I'll be mostly listening in, but I'm happy to answer any questions if anybody has any. And it's a pleasure meeting the new faces. I've heard about all of you folks, the ones I didn't know, so it's great to put faces to the names.
A
So I guess, to kind of kick things off, maybe we can think through: I mean, we've got this document running that's like some of the needs and things like that for development in SimPEG, but maybe it would be helpful to start with kind of use cases, and what folks are most excited about getting involved in, and then we can prioritize some of these things.
A
Yeah, I mean, so a couple of you spoke to focus areas already; I might just ask the rest of you for a bit of background. Thank you, so yeah. So as Fernando mentioned, I guess from my perspective for this project, in terms of data, I've been in touch with Paul Bedrosian; we actually wrote him into the grant at USGS. They have some interesting data over some of the Aleutian Islands, and they're really, really struggling to invert it with ModEM. So, for Fernando: ModEM is sort of the standard go-to package in MT. And so yeah, they were really struggling with that, so there have got to be some interesting scientific questions. I mean, part of what he thinks is the problem is they actually have stations on the ocean floor as well as on land, and they can't seem to be successful in inverting all of them simultaneously.
G
Yeah, that's actually a pretty good one. We actually have a lot of examples that are already packaged for GIFtools-related stuff. So there is the L-block: forward modeling and inversion of the L-block. There's also a synthetic TKC example, so you can find that in the GIFtools cookbook. And I acquired real MT and ZTEM data; so I know we're not working on ZTEM right now, but real data from Cloncurry, and did a walkthrough of an inversion for that. So.
A
That would be great. Just at the bottom of the document I started an examples list; if you have that link handy, I could just drop it in there. That would be great, yeah.
A
So I know both John and Joe, you've been doing a bit of a dive into the code. I don't know if you want to just kind of give a bit of, perhaps, an update of where you think things are at, what your priorities are, and where you perhaps would want support or somebody else to sort of take a piece. Yeah.
C
Yeah, I'm looking at the code right now, just trying to experiment with how we can optimize it a little bit. Like, I started taking things out so that you can calculate the frequencies on the fly, because every time you go to do Jvec or Jtvec you're going to be going over the frequencies anyway. So you're going over frequencies to calculate all the fields, and then you're going and calculating the Jtvec. So maybe by putting it all together we'll save some time there.
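The fusing described here, computing the fields for one frequency and immediately applying the transpose-Jacobian rather than looping over frequencies twice, can be sketched with a toy model. Everything below is invented for illustration; `solve_and_jtvec` is a hypothetical stand-in, not SimPEG's API, and the "physics" is just arithmetic:

```python
from concurrent.futures import ThreadPoolExecutor

def solve_and_jtvec(freq, v):
    # Toy stand-in: "solve" for the fields at one frequency and immediately
    # apply the transpose-Jacobian, so the fields never need to be stored.
    field = 1.0 / freq   # pretend field solve at this frequency
    return field * v     # pretend per-frequency J^T v contribution

frequencies = [1.0, 10.0, 100.0]
v = 2.0
with ThreadPoolExecutor() as pool:
    # each frequency is an independent task; field + Jtvec happen together
    parts = list(pool.map(lambda f: solve_and_jtvec(f, v), frequencies))

jtvec = sum(parts)  # contributions from independent frequencies just add
```

Because each per-frequency contribution is independent and the total is a plain sum, fusing the two loops changes nothing about the answer, only how much intermediate state has to be kept around.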
C
And I keep solving them all in parallel, kind of thing. Okay, you have to get the fields for that certain frequency, and then go through and calculate the rest of Jvec, Jtvec there. And then I was thinking about trying to introduce xarray instead of having the dictionary passed around; yeah, just little things like that, and then maybe even just storing the fields. Those are just little things that we tried with the DC code, yeah, right now.
A
…thing that we could sketch out, because I think there are potentially a couple of different ways we want to parallelize over frequencies. So we could try and do it internally, or we could also try and do it at sort of the data misfit level, more like the global problem idea. So, Fernando, when I'm solving the MT problem, we're potentially doing a sweep: we're solving Maxwell's equations at a bunch of frequencies, and each of those solves is completely independent.
A
So what we could do is basically break that up internally and parallelize internally, or we could try to do it at the top level and basically treat it like a joint inversion. Even though, well, it doesn't really matter, but we could break it up at the level of each data misfit term, because that's basically equivalent. And so I don't know which makes the most sense to do. Does that make sense, John, what I'm… yeah.
E
No, I mean, I think I kind of like that approach better, so that we don't worry about modifying the MT code too much. Basically the idea is we would just be breaking it up into a bunch of different surveys; we'd break it up by sources, right, the source frequency. So we'd make sub-surveys: say one had like 100 frequencies, they'd each have like ten or something, but…
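The splitting being proposed can be sketched in a few lines. The misfit function here is a made-up stand-in, not SimPEG's `L2DataMisfit`, and the chunk size of ten is just the number mentioned in passing:

```python
# Hypothetical sketch: a 100-frequency survey becomes ten 10-frequency
# "sub-surveys", each carrying its own independent misfit term.
frequencies = [float(f) for f in range(1, 101)]

def chunks(seq, size):
    return [seq[i:i + size] for i in range(0, len(seq), size)]

sub_surveys = chunks(frequencies, 10)

def sub_misfit(freqs):
    # stand-in for the data misfit contributed by one sub-survey
    return sum(1.0 / f for f in freqs)

# The global misfit is the plain sum of the independent terms, so each
# term can be shipped to a different worker without changing the answer.
phi_d = sum(sub_misfit(s) for s in sub_surveys)
```

The key property the discussion relies on is that the global objective is additive over the sub-surveys, so the decomposition is purely an implementation choice.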
A
And I think what was kind of nice about that is, thinking then about using a different mesh for each frequency. At that point it's a little more transparent: using a coarser mesh for lower frequencies and a denser mesh for higher frequencies. I mean, we'd need some utilities for that, but the octree is actually really nice for that, because it's a really natural way to coarsen and refine, which would be pretty cool.
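One common heuristic for tying cell size to frequency (not spelled out in the discussion, but standard in EM) is the plane-wave skin depth, roughly 503·sqrt(ρ/f) metres; lower frequencies diffuse deeper and vary more smoothly, so they tolerate coarser cells. A minimal sketch, where dividing by 4 is an arbitrary illustrative choice:

```python
import math

def skin_depth_m(resistivity_ohm_m, frequency_hz):
    # standard plane-wave skin-depth estimate: delta ~= 503 * sqrt(rho / f)
    return 503.0 * math.sqrt(resistivity_ohm_m / frequency_hz)

# tie target cell size to a fraction of skin depth per frequency,
# so low frequencies get coarser meshes and high frequencies finer ones
cell_size = {f: skin_depth_m(100.0, f) / 4.0 for f in (1.0, 10.0, 100.0)}
```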
A
Okay, what are the other questions? And I think, Joey, I'd appreciate your thoughts, 'cause I know you've been looking at the code quite a bit, with respect to integrating the natural source forward solve with the frequency domain solve, and having basically, what I would like to see,
A
I think, is that we basically don't actually have a separate MT forward simulation; all of that is handled by sources and receivers, and we're always just relying on the same forward solve, because it is just frequency domain. But that then has the subtlety that, right now, the frequency domain code assumes that a source has only one source term associated with it, whereas with the natural field we need both polarizations. So, Fernando…
E
For MT, I don't anticipate any problem with that, with combining them. That was something that I went about when we were looking at it, and when I was creating the getA function for the simulation I noted parts where it's like, okay, well, we need to update. We just basically need to update a lot of those implicit functions so that they arbitrarily accept matrices as input as well.
A
Then that's kind of cool, because then we'll actually have played around with boundary conditions, actually getting boundary conditions implemented in the frequency domain code, which I think opens up, I don't know, potentially some fun things to play with even when it's not natural sources. I see Dom has jumped in; how was your bike ride?
A
So, Dom, to bring you up to speed: we're just working through it. We've got some notes in the document we sketched out, Devin kindly put some links in for some of the examples we can look to, and then we were just chatting through some of the main elements. But actually, if you don't mind, I think it would be helpful for everyone here if you would be willing to just kind of share what sorts of things you want to get out of this and are interested in working on along the way.
H
Yeah, I think, I mean, it's not that I have much insight into how the MT code works, but I think if we can do this thing that we did with the DC code: basically, you want to be able to scale it up, and industry really wants to have an MT code they can work with, at industry-size things. And the UBC codes are great, I'm sure, but there are so few people who are willing to invest the tens of thousands of dollars to do it.
H
I think they're more willing to pitch in money for development, in terms of adding developers and having it be open code. So on the minerals side, both the consultants and the consortium of companies all say the same thing: go ahead, basically spin it as much like ours as you want, push your code so that we can get our hands on it.
H
So, basically, that's where I'm at, and I think that document that you started putting together, I think all the ideas are pretty much in there: you could store sensitivities to disk, that would probably help out with being able to scale, and the tiled-mesh decoupling approach. I mean, all of it is technically there, right; it's just a matter of figuring out the efficiency holes in the code, and I think, John, you've already spent a ton of time on those, so I'm not, I'm not there.
A
John, one thing that might actually be helpful for context: so we were just talking about how to think through parallelizing the code, and whether we actually do it at the internal level or think about it more like at the data misfit level. Could you share sort of the high-level structure of how the global problem works right now?
H
Okay, exactly, exactly. And so I think that the misfit is natural there, to send those misfit functions to different nodes. If we have like a distributed memory problem, then it would make sense to send each one out, because Pardiso can be run separately on different machines. You know, we don't need to worry about cross-talk, because the Jvec and Jtvec require Pardiso, or any solver, and all of that happens there. So that part makes sense to be there, on a node's own memory. Does that make sense?
H
The forward is done on their individual meshes, right. So let's say you have either blocks of data or blocks of frequencies; they'd all have their own meshes, but then the tiled map converts, does, you know, the transpose of, like, a Jvec on a local mesh back to the global, for the gradient stuff. Does that make sense? Well…
H
And no, right, because the meshes should be nested. So in theory, all forward meshes should cover the same extent. And yes, you can have numerical precision differences, they will vary from mesh to mesh, but I think in terms of connectivity and interpolation they should all see the same global stuff, okay.
H
Yeah, and I'm also not an expert in parallelization, so I'm probably not using the right words for it, but basically what I have in mind is at that misfit level, you know: each local forward problem knows its own mesh, and this can be sent to a different node that has its own memory allocation, to do its own thing, and then once it's done it's just sending the result back. So, whatever protocol we're using to parallelize, basically, we have two levels.
H
What I pictured in my head, like, we need a lot of things. Yeah, I know, I guess, I see, but because in my head MPI is a distributed memory thing, but I might be wrong. That's how the UBC codes are doing it: they're using MPI to split it across nodes and OpenMP for local parallelization. But I'm sure MPI can be used differently; I just don't know what I'm talking about.
F
It depends whether there was any more to it, in the sense of parts of it being implemented where you're actually doing a large amount of message passing with MPI. You can use MPI to do something like that, where you're simply managing tasks and your root node just collects, gathers and collects things, and that's it. Or you can use MPI in the way it's used in most PDE codes, where there's a very large amount of internal communication, and potentially with a complicated topology, right. So…
H
No, so then, that's something that we currently don't have any experience in, and someone like you and your group will probably help a lot, 'cos we don't really have a good handle on how to do that: to, you know, basically tell dask how to split the problem, like do shared memory for that part versus distributed memory for that part. And so I think that's something that, with this MT project, we should try to push, to get a good handle on how to do that well.
D
At a higher level, what basically would be done is: the first thing is the matrix-vector product, so that's sort of a very basic level of parallelization that we want to do. The second level is almost like an embarrassingly parallel one; we just want to loop over, like, one large simulation, or even several smaller, relatively fast simulations. I mean, those are the two key items, and we're trying to minimize the talk in between, so it's almost very close to the embarrassingly parallel problem.
G
So I guess I'm wondering what conflicts might happen if you're trying to, it's like parallelizing within parallelizing, am I kind of on the right track here? So are you planning to partition the work like a different frequency for each node, and then on each node you would partition up all the locations? Or do you want to partition up a chunk of locations, and for that chunk of locations you would go and solve the problem at all frequencies? Yeah.
H
It chunks that out, that's dask's job, and then we just delay all those operations for later, and then when we call dask, all it does is just send this to this computer and that to that one, kind of run it all down, and just stack it all up and give you back just a normal vector. So you don't have to worry about, like, indexing; you're just delaying your operations, yeah.
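The delay-then-compute pattern being described can be mimicked without dask itself. This toy version uses plain zero-argument thunks to show the shape of the idea; dask's real `delayed`/`compute` add task graphs, scheduling, and distribution on top of the same concept:

```python
# Minimal stand-in for the dask.delayed pattern: build a list of thunks
# (nothing runs yet), then a single "compute" step evaluates them all and
# stacks the partial results back into one flat vector.
def delayed(fn, *args):
    return lambda: fn(*args)

def solve_block(freqs):
    # pretend forward solve for one block of frequencies
    return [1.0 / f for f in freqs]

tasks = [delayed(solve_block, block) for block in ([1.0, 2.0], [4.0, 5.0])]

# "compute": run every task and concatenate, as dask would when stacking
result = [value for task in tasks for value in task()]
```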
H
Yeah, and then it's dask's own job to just, like, schedule which parts of the vector it wants to compute at what time, you know. Okay, and that's all done for us; we don't need to worry about that part. At least, sorry, it's for shared memory that we don't have to worry about that part. It's the problem if we're going to start doing distributed memory; then we need to worry about which jobs we're sending to which pool of processors, that kind of stuff, and…
H
I don't even think so, Joe, because technically, if each block has its own mesh, then they're going to have their own solver, right. They're going to solve their own systems internally and then just send back the vector, and yeah, you don't even need to share the A matrix; we can split before that.
D
I mean, that's another pathway that I mentioned, like just simply parallelizing the matrix-vector product. You can save, like, say, in switching to iterative solvers, for instance, you can actually massively parallelize, because if you think about it, a solver is literally just matrix-vector products. So if you can parallelize that part, of both, like, getting a solution or in the inversion iteration, then yeah, you can massively parallelize that part, which will pretty likely speed things up.
D
Pardiso is fine, and that's pretty fast, but once you move on to, let's say, 10 million cells and solving a curl-curl system like Maxwell, I think you can't even use Pardiso. So there's a bit of a split; it's a choice that we need to make. So let's say the problem that we're dealing with for MT is like ten million cells; then, I mean, you cannot even use Pardiso, we must go to the iterative solver. So…
A
I mean, I would be inclined to start with that scale first, because I think the most commonly used scale for MT is still something that can fit into Pardiso. So that seems to me like a reasonable place to start, and then to compare. I mean, also, you'd be able to compare with the UBC code on all of these examples that we've already got, knowing that we're using the same solver, and that sort of thing; hopefully a benchmark of where things are. Well…
D
If that's the pathway that we're going, I actually quite like the idea of breaking it apart in terms of locations as well. As long as we're nested, we can set the same boundary condition. What I'm a little bit worried about is that coarsening part for the EM problem; that interpolation, I'm not sure how accurate it is. So we'd probably need to test that a little bit more rigorously, like whether we're getting the right solution, what our error level is. Otherwise, to me that's kind of okay; I'd actually really like to break that apart.
D
Even, when you think about it, you can break them apart by sounding. So an MT sounding could even have a single mesh; in such a way you can break it up like that. You can reduce the size of the problem you're solving a lot, so that's like an ideal, but I'm not sure how well that's going to work out.
A
Then, once we have that machinery in place, as Dom said, it shouldn't be too hard. I mean, the extension to doing something more sophisticated like that should actually just be at the level of creating utilities, rather than in the guts of the code. But the parallelizing over frequencies that I started getting at is, I think, probably a first key step to unlock some of these next things, right.
H
I like creating this as just a wrapper: you know, you give it a full survey and then it just slices it into smaller chunks and creates your global misfit functions, everything. One thing that we'll need to consider, and we should put it in the notes, is that if we start creating a large number of meshes, we should start thinking about not storing the full mesh in the simulation, but only storing, like, the operators that you use.
D
One thing is that I haven't played that much with dask. The only way that I could get good scaling with dask, like for a simple embarrassingly parallel problem, was basically storing everything to hard disk and minimizing the message passing in between, 'cause I'm thinking about, let's say, even a vector,
D
something like that. So I'm not sure where the exact calculation is happening, but in my sort of imagination, when we're computing the matrix-vector product, or Jvec or Jtvec, we do that in the local process and then basically send that vector back to the global, or parent, and do the inversion operations there. That's what I was thinking; maybe I'm wrong. Well…
A
One quick question following on that: so, at least with the things that I've played around with, sort of, the vector was never the concern, but passing an object potentially was, so the serialization/deserialization step is actually potentially subtle. Have you run into any of those sorts of problems yet, or…
A
In this case, if you are parallelizing, like, the global problem, we're then passing a data misfit, which has the mesh and the simulation, all of that stuff sort of locked on. Have you encountered any problems, like, sending the data misfit objects? So not in the retrieval of results, but in actually the dispersal. Well…
H
Currently, everything that I've done was all shared memory. So yeah, that's good, that's a good point. That being said, I don't think the misfit objects contain anything more complicated than NumPy vectors, but…
A
It does contain, I mean, it contains the simulation, the mesh, so if you sort of naively serialize that… I mean, this is something we should play around with, and I think we might be able to, this is where the properties library could come in handy, figuring out what the right ways are to serialize and deserialize these things. Yeah, 'cause I wouldn't be surprised if we hit this once we actually try and scale up to a cluster.
F
This will be something that you're going to have to spend time on, because it does become the hard part. In object-oriented Python codes this often becomes a pretty tricky point, because if you naively try to serialize container objects, then naive tools will treat them as black boxes, and they will recurse down the object's internal hierarchy and they will do stupid things that end up being very expensive, because it's hard to be smart about that problem of serializing a complicated object in an opaque way.
F
If you can't make any assumptions, and the tool doesn't know what it is serializing, it has to take a very conservative approach, and those tend to produce actually enormous costs. So, I mean, IPython had kind of a precursor to this, which was the IPython parallel machinery, and we spent years on it; we spent a huge amount of time on machinery for custom serialization. I was just reading through the dask serialization docs, and it's the same story:
F
okay, now I'm going to send this array. All right, well, grab it and put it where I'm going to send it; well, that act could cost you a huge amount of memory, right. And you actually may end up with multiple copies, because then you hand it over to the library, and the library has network buffers where it will copy things again. So you can easily have three or four copies of your arrays without noticing; it's extremely easy to fall into that without noticing it.
F
So you will probably profile the living daylights out of that, and you will be implementing custom serializers, and that's what that machinery is provided for: to give you the hooks for you to say, okay, first of all, I'll only present this and this and that, because I know I can reconstruct the rest on the other side very easily, or I'm not going to need it for that operation, so I don't need to transmit all of these scalar things, fine, do whatever. And then for these:
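One standard Python-level home for exactly this kind of custom-serializer hook is `__getstate__`/`__setstate__`, which pickle (and tools built on pickle) will call. A minimal sketch with an invented `Simulation` class, not SimPEG's, where an expensive cached factorization is dropped before transfer and rebuilt on arrival:

```python
import pickle

class Simulation:
    """Toy container whose expensive cache should not cross the wire."""

    def __init__(self, mesh_size):
        self.mesh_size = mesh_size
        self._factorization = [0.0] * mesh_size  # stand-in for a big cache

    def __getstate__(self):
        # custom serializer hook: drop the cache, it is cheap to rebuild
        state = self.__dict__.copy()
        state.pop("_factorization")
        return state

    def __setstate__(self, state):
        self.__dict__.update(state)
        # rebuild the dropped cache on the receiving side
        self._factorization = [0.0] * self.mesh_size

restored = pickle.loads(pickle.dumps(Simulation(5)))
```

Only the cheap attributes travel; the receiving worker pays a local recompute instead of a network transfer, which is usually the better trade for large factorizations.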
F
these are NumPy arrays, and I know they're going to be NumPy arrays that I can then hand down to the fast-transfer networking code, which will do the best it can to effectively tell the actual networking library, for example, in our case it was ZeroMQ and C++, you end up saying: okay, you are the one who gets an actual pointer, right?
F
A memory address, and you are told how to walk that memory, so that you only copy whatever has to happen at a very low level from a networking perspective, like copying into the networking buffers and whatever has to happen down there. But you never say, "here's a 4-gigabyte array, please make a copy of it to send," right?
F
So dask has the hooks for that, but there's always a little bit of your own work that has to happen in helping it understand the structure of your objects, and to do these things intelligently, because otherwise you just end up wasting a huge amount of time, and potentially it can be anywhere from a nuisance to a kind of showstopping bottleneck.
A
I was just going to say: the naive approach that I took, just to run a bunch of simulations, and this isn't how we want to do it, but it worked well enough, is, I mean, we have all the object serialization for the most part in place through the properties library, which just serializes it to JSON, and we can send that to whatever. What I did was serialize it to JSON, which is then just a string of stuff that I then passed around, and that was lightweight enough.
A
That was good enough to get me going. I mean, we don't necessarily want to have users be doing that, but it was encouraging to see that most of the serialization stuff that we have in place already should facilitate this. And I could see, for the TreeMesh, doing something like, instead of actually serializing it, providing the ability to send the function that was used to create the mesh, rather than the full mesh. That's not something that we have yet, so…
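That send-the-recipe idea could look roughly like the sketch below; the `MESH_BUILDERS` registry and the `"tensor"` key are invented for illustration, not an existing SimPEG or discretize API:

```python
import json

# hypothetical registry of mesh constructors known on every worker
MESH_BUILDERS = {"tensor": lambda n: list(range(n))}

# ship a tiny JSON "recipe" instead of the constructed mesh object
recipe = json.dumps({"kind": "tensor", "args": [4]})

# the receiving worker rebuilds the mesh locally from the recipe
decoded = json.loads(recipe)
mesh = MESH_BUILDERS[decoded["kind"]](*decoded["args"])
```

The payload stays a short string regardless of how large the constructed mesh is, which is the whole appeal of shipping the constructor call instead of the object.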
F
Now, one thing to your advantage is that dask obviously has, like, you're not going to have to implement the low-level management of NumPy arrays; dask has machinery to do that. So it's more a matter of helping it understand the structure of your objects and letting it know where it's going to find NumPy arrays. Once it sees that, it'll do the right thing, and it's been optimized for that. I see that they have an object…
F
they have a function called register_generic, which seems to fit, if your objects fit that pattern, which they probably do. You might actually be able to get away with just, basically, a small amount of help with those tools, to just let dask understand your objects, and that may be enough. Because, basically, they say: if your objects are mostly things that reconstruct without a very complicated custom constructor, and they're mostly simple attributes and NumPy arrays,
F
then, if you use this kind of generic thing and tell it how to traverse your objects, it will do the right thing. So I would do a few tests, and one important thing is you want to see, with a benchmark, what the actual slowdown of the computation is. I mean:
F
do you end up, basically, with your nodes all of a sudden becoming effectively idle, because they're just waiting for this stuff to go back and forth, in terms of, basically, being idle on floating point, right? Because what you don't want to be, you don't want to be busy either on a lot of…
F
You also don't want to be busy with CPU, but now it's the CPU running serialization and deserialization code. You don't want to do either of those things, right. So you're going to want to try to find a good way to understand the impact of both the serialization CPU time and the transfer time against your raw, kind of optimal, floating-point CPU time.
E
There are nice profiling tools that work in Python, though they're different with dask, that essentially measure how long it takes to do each line: okay, it took this long to do this line. With dask, is that really going to work? Because it's just going to say: oh, it took all the time to do this compute, right. We want to…
C
Actually, I haven't gotten to the xarray stuff. I didn't get to it last night, but I'm at the step where I'm going to start testing that. But yeah, the fields object is difficult to pass around. That's what I was wondering, if we can make it easier, but I haven't figured out how deep it goes, like how much of the code we're going to have to fix if we change the fields object altogether. I have a feeling that, yes, we maybe should, but just initially here, this is what I was going into, right:
C
now I'm just trying to see, like, start off simple, keeping the fields object, and see what we can do by parallelizing that, and then add in the xarray. That's what I'm going to do now, just to see if we get any benefits that way. But yeah, my first impression is we might have to change the object, yeah.
A
We are pretty inefficient about how we set it up; there's a lot of reshape operations that happen when we first construct it, which is really expensive when you start to get even moderately large. Whereas the data object, I think, we've done a much better job of: basically creating a smart dictionary that just knows how to index properly, so rather than reshaping things, you can still call it like
A
you would think to call an array, but all that it's doing is then just grabbing the right indices, so we're not doing any reshaping operations. So I think there's potentially that, which I think would help, but I think it's totally valid for us to look into xarray and see if that's the right tool for this. But I don't know at this point, so.
F
Really good question, Lindsey. So when you talk about basically having smart sort of indexing that helps you kind of write natural code, and having the indexing that knows what parts of the array to look into: are you thinking of this restructuring for distributed compute?
F
Do you imagine that fields object eventually becoming a distributed object, where now, effectively, sub-arrays live on different nodes? Because if that's the case, those kinds of smart indexing approaches may end up with the issue that those indices actually map to multiple places, and then all of a sudden you have kind of cross-node data communication issues that get a little delicate. So I don't know if, in this restructuring that you're thinking of for parallelization, it does involve effectively cutting up those arrays in ways that will eventually end up across address spaces. So…
A
In this case, I don't think we'll encounter that, at least in the first pass, because if we parallelize and send each data misfit to each different node, the data misfit is the thing that we're indexing in. So it is actually localized and associated with that given object, and so all of the machinery to orchestrate anything from higher up, we would have already needed to deal with that.
A
So I don't think there's any indexing that we should be doing that thinks it should be grabbing data from one place but actually should be grabbing it from another. So yeah, I'm not concerned about that at this stage.
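The scheme described here — each node owning its own local data misfit, with the orchestrator only ever combining scalar values — can be sketched like this (a toy illustration with made-up data, using threads in place of a real cluster):

```python
from concurrent.futures import ThreadPoolExecutor
import numpy as np

def local_misfit(dobs, dpred):
    """Each misfit owns its own local data; indexing never crosses workers."""
    return 0.5 * np.sum((dpred - dobs) ** 2)

# Hypothetical per-source chunks: in a real run, each would live on its own node.
chunks = [(np.zeros(3), np.ones(3)), (np.zeros(2), 2 * np.ones(2))]

with ThreadPoolExecutor() as pool:
    parts = list(pool.map(lambda c: local_misfit(*c), chunks))

phi_d = sum(parts)  # the orchestrator only sums scalars, never full arrays
print(phi_d)  # → 5.5
```

Because only scalars (and, in practice, gradient vectors) travel back, no cross-node indexing is ever needed.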
E
Thank you. So, just kind of a broad thought: the fields object is nice when people want to interact with it, right? It's nice for having other people interact with, but it's definitely not set up to be accessed quickly or anything like that. So, thinking broadly about the fields object — or just kind of an out-there thought — would it be better to move a lot of those operations, like the get-E one, elsewhere?
E
H
E
A
Yeah, no, I think that's a totally fair point. I guess I hesitate to do that before we try refactoring the fields object a bit, because I actually think it is a useful construct. It's just that right now, because we're so inefficient in the way we set it up, it is actually pretty painful. I'm wondering, if we solve that —
A
— whether that actually simplifies things enough that it still is a useful construct in the code. I don't know; maybe it won't be, and maybe at that point we should totally think through where those functions should live. But at this point I would hesitate to get rid of it before we try to make it more efficient. That's —
E
C
A
D
D
A
So, well — I mean, we can keep going along this line, but I'm wondering if we could maybe pull back and come up with some big-picture next steps: what's a constructive example to start from, and then how do we actually, practically, want to start moving forwards?
A
If you're willing to play around with xarray and see what sorts of improvements we might be able to get, whether it's appropriate or not — I'm not entirely sure one way or another. Yeah, yeah, okay. And then it sounds like, with respect to the idea of the global problem, that's something we should probably think about trying to get upstreamed a bit, because we were saying it's really only implemented for potential fields at the moment, no?
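For reference, labeled indexing in xarray — the kind of thing that could replace manual reshape bookkeeping in a fields container — looks roughly like this. The dimension names and sizes are invented for illustration; this is not SimPEG code.

```python
import numpy as np
import xarray as xr

# Label the axes once, then index by name instead of tracking reshapes.
fields = xr.DataArray(
    np.random.rand(4, 2, 3),
    dims=("edge", "frequency", "source"),
    coords={"frequency": [1.0, 10.0], "source": ["s0", "s1", "s2"]},
)

# No manual index bookkeeping: select by label, the scalar dims are dropped.
e_at_10hz = fields.sel(frequency=10.0, source="s0")
print(e_at_10hz.dims)  # → ('edge',)
```

Whether this is the right tool depends on how the solver wants to touch the underlying array, which is exactly the open question above.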
H
A
That sounds good. Okay, so those are sort of two low-level things. Then I guess we start to get into some more of the details of the EM side of things: thinking through, I guess, the natural-source EM survey structure, and seeing if we can also get rid of the NSEM forward simulation, so that all of the primary-secondary is —
E
A
G
I'd like to be confident about having really good forward modeling with the boundary conditions implemented, having that work on the simulation branch, and making a tutorial from it. Because I have a feeling that there's sort of the, you know, how do we solve the problem — and maybe it's not the most efficient way and we don't use parallelization — but for the end user we've basically gotten to the point where the syntax for running the most basic implementation does not change.
G
So that can be finished, and then, once that's in the simulation branch, we start going in and saying: how can we make this more efficient, how can we do this next-level stuff? I guess I'm just thinking about potential users and people who are just getting started: I'd like to be able to have them model stuff and learn quickly with a basic implementation, and then be able to go and do some very fancy high-level things once they're more knowledgeable. So —
H
A
E
A
So there are two sorts of things I'm thinking about with respect to how we move forward: one, what's an example that we all want to be working on, and where do we work on that; and two, within SimPEG —
A
E
E
A
A
G
E
G
G
G
E
H
A
B
That actually sounds really appealing, because now we've got some serious boundary conditions to think about, and, you know, it's kind of like Iceland perhaps, but at a smaller scale. And then the other one, which we've talked about for a long time and I will follow up on, is with Daniel in Chile — that was the geysers example, and that was extreme topography. So I think those are nice end examples that we can keep in mind, and then maybe we can think about —
C
G
We have the L-block for all three codes with ZTEM, and then there's also the TKC, which has ZTEM involved in it. And then for that comprehensive workflow, I have a real data set that I inverted with ZTEM as well. So there's also a lot there for ZTEM when we get to it. Excellent.
A
G
And if we're going to use pretty significant topography, then — I mean, if you've read the manuals for the GIF codes — you have to do a different implementation of the boundary conditions. So we can always start with flat topography, get that working, be happy; okay, now we do a new sort of numerical scheme, and then try to compare that against GIF.
A
That sounds good. Okay, so if we start with the L-shape example: where do we all want to work? Do we want to start a GitHub repository, or Dropbox, or — what's the easiest way to be able to share notebooks and scripts and things like that, just so we can iterate on ideas?
A
C
H
If the team is okay with this, I would start working on the global problem, but just apply it to the static simulation problem, because in terms of time it's more pressing for me, and at the same time it's an easier problem — we'll probably be able to figure things out a bit quicker on simpler physics. So that would be my contribution, if you guys think it's — yeah, yeah.
A
For sure, Dom, I think that would be great. I mean, just having one implementation that we can then make more general — that's a great starting point.
B
B
You know, a number of different problems that would arise in practice. And it might also be a place to think about, you know, various geometries or various kinds of geologic structures that could pose a problem, and at least a place to consolidate and jot down the ideas that people have — you know, "at the end of the day I really want this code to be able to do this," or "it's got to have this kind of functionality."
B
G
Well, I mean, for that I guess a good reference is that MT comprehensive workflow. The whole design behind doing that was being able to address those kinds of issues — like, oh, I want to use a finite-length wire; I want to actually define the length of my dipole for electric field measurements; when I discretize, I need those to actually be on the surface of my discretized topography, because right now they might be hanging out in the air.
G
A
A
A
Yeah, we could think about that. The thing I've always found challenging is that somebody then needs to take ownership of keeping track of these things — and I always start with ambitions of doing that, and they quickly wane. But is that something that you'd be interested in doing, Devon? We —
G
A
So what you can do is create GitHub issues. For example, dealing with a finite-length wire in the receivers for natural-source EM: we could create that as an issue on the SimPEG repository, and then Devon would file it under the list of to-dos for the natural-source EM project. So it's basically a layer on top of GitHub issues.
E
G
So I guess everyone sees my screen now? Yes? Okay. So you would make some kind of task for the project under "to do" — you could add something, and then somebody could decide to assign themselves to it (I need to re-familiarize myself); then it goes into "in progress," and once you're finished — you can always move these things over to the next column.
G
So you basically set up some kind of task and what it is, and then a person can be assigned to it, and you can see where they are in the process. Then it moves along to being ready for testing, somebody can give their comments, if changes need to be made it goes back, and then finally it ends up in a tutorials section. The one that I'm showing you right now is a bunch of tasks
G
we had for potential fields. When I was developing tutorials and working a little bit with Dom, I had a list of things that we should be able to simulate — forward simulations, basic inversion, applying iteratively re-weighted least squares — and we went through and just ticked off the boxes. So, you know, you can put things in like this and say, yes, we've done this portion of this task. Anyway, I can be in charge of organizing this, and you can get at it through GitHub, under Projects.
G
B
G
B
A
One thing that we could try to implement: right now we're deploying the website — or deploying the docs — through Google App Engine, whereas it would be nice if we switched that over to GitHub Pages, which Bane did for us for the discretize docs. I think that makes it a bit easier to have multiple versions of the documentation, so you can look at what is master versus what is simulation. Right now nobody actually sees what the simulation docs are, which is unfortunate.
E
H
The idea: if you have a wish list of things you would like the MT code to accomplish, we could create — okay, create a tab, a wish list — and then you put all your ideas in there and we slowly chip away at it, you know, move items to "in progress" when someone wants to take one on. But I think it would be logged somewhere.
H
G
You can link it to issues, which is nice too — you can say, okay, this task has an issue made for it, and then when that issue is closed, the task ends up being completed. It's a way of combining all the good things about the Google Doc with the good things about keeping track of stuff on GitHub, plus a visual set of tiles to say where everything is. Okay.
G
A
G
A
I think for this one let's start with one on the SimPEG repository, because most tasks are going to be development tasks on SimPEG. Then, as we need to organize stuff around the examples, we can start to think about one for the examples repository. But at least for the stuff that's basically "we need to go in and tinker with SimPEG code," we can keep that in the SimPEG repository. Yeah.
G
E
A
A
One hesitation I guess I have with forks is that it's a little less fluid for sharing and asking somebody to jump on and iterate exactly off of what you've done. A fork is much more like: as an individual developer, you go off and do your thing and then come back, right?
E
A
A
Could you just check out my branch and see what we're working on here, and then you and I just work together on a branch? At least in my experience, that's been way easier on branches than on forks. But I don't know if anybody else — I mean, I think it's totally fair if we're just going to go off and play around, but if we want to be working together, I personally would prefer to be on branches.
H
E
E
E
E
A
G
I thought maybe one more thing: I was going to pick Dom's brain about pseudo-3D inversion with MT. I think you guys are much more advanced in terms of the octree and the 3D modeling, but I think there's some value in having the 1D modeling in there. So I was going to do that, and I thought we could maybe do a pseudo-3D inversion. Joe's — yeah, I think Joe's right about the recursive solution: it's much faster, much more stable, so I'm going to implement that as my 1D scheme, I think.
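The recursive 1D solution mentioned here is typically the impedance recursion for a layered half-space; a sketch is below. This is not SimPEG's implementation, and sign/time conventions vary between references — the sanity check is that a uniform half-space must return its own resistivity.

```python
import numpy as np

MU0 = 4e-7 * np.pi

def mt1d_impedance(sigmas, thicknesses, frequency):
    """Recursive 1D MT surface impedance for a layered half-space.

    sigmas: conductivities top-to-bottom (last entry is the basement
    half-space); thicknesses: one fewer entry than sigmas.
    """
    omega = 2.0 * np.pi * frequency
    k = np.sqrt(1j * omega * MU0 * np.asarray(sigmas, dtype=complex))
    z_intrinsic = 1j * omega * MU0 / k
    Z = z_intrinsic[-1]                      # start from the basement
    for j in range(len(thicknesses) - 1, -1, -1):
        t = np.tanh(k[j] * thicknesses[j])   # propagate up through layer j
        Z = z_intrinsic[j] * (Z + z_intrinsic[j] * t) / (z_intrinsic[j] + Z * t)
    return Z

def apparent_resistivity(Z, frequency):
    return abs(Z) ** 2 / (2.0 * np.pi * frequency * MU0)

# Sanity check: a uniform 100 ohm-m half-space must return 100 ohm-m.
Z = mt1d_impedance([0.01], [], 1.0)
print(apparent_resistivity(Z, 1.0))  # → 100.0 (within floating point)
```

Because each frequency is a handful of scalar operations, this recursion is far cheaper and more stable than solving a discretized 1D system, which is presumably the appeal noted above.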
H
G
I think the thing I just wanted to know was — I mean, basically, in the forward simulation, only cells or areas under the different receivers are going to produce data. So the sensitivity of a lot of cells in the 3D mesh is actually zero. If you just think about it, it's kind of like the number of parameters that directly impact the data is different than the number of model parameters that we are applying a regularization to, yeah.
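The point about zero-sensitivity cells can be made concrete with a small projection sketch (toy numbers, hypothetical names — not SimPEG's mapping machinery): whole zero columns of the sensitivity mean the data only constrain the active subset of parameters.

```python
import numpy as np

# Toy sensitivity J: column 1 never influences the data (a cell no
# receiver "sees"), so the data only constrain the active columns.
J = np.array([[1.0, 0.0, 2.0],
              [3.0, 0.0, 4.0]])
active = np.any(J != 0, axis=0)   # which model parameters matter to the data
P = np.eye(3)[:, active]          # projection: 3 model params -> 2 active
J_active = J @ P
print(J_active.shape)  # → (2, 2)
```

The regularization still acts on all three parameters, which is exactly the mismatch described above.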
H
We can go over it another time, but basically the idea is that the 3D mesh doesn't exist — it only exists if you want to see it in 3D. Otherwise it's a 1D array; it just belongs to your regularization, so you're not seeing a 3D mesh, basically. But we can go over it. Okay, right, yeah — don't worry about this stuff. Once you are really happy with the 1D inversions, after that it's just mechanics that we already have for that. Yeah.
G
D
H
B
H
E
D
Right, so for now, actually, I have a question for you. I'm thinking about scaling up: let's say we have thousands of nodes, and each node would want access to something on a hard drive, like a Zarr file. Okay, do you think that's actually possible to scale? I'm not even sure where we're going to store it, so —
E
D
E
Ideally, each of those things would be stored on the local machines. So if you're looking at your HPC clusters, there are details on how — if you want to write to the local machine, you can do just that, and those kinds of things should be written to the local machines. That way you don't have to deal with all the network communication of syncing up a bunch of files and pulling them over the network; you don't want to do that.
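A minimal sketch of resolving a node-local scratch path, as suggested above. The environment variable is scheduler-dependent (`TMPDIR` is only one common convention, e.g. on many SLURM clusters), and the file name is hypothetical.

```python
import os
import tempfile

# Resolve a node-local scratch directory so each worker writes to its own
# disk rather than to shared network storage; fall back to the system default.
scratch = os.environ.get("TMPDIR") or tempfile.gettempdir()
path = os.path.join(scratch, "misfit_worker_0.npy")  # hypothetical file name
print(path)
```

Each worker resolves its own `scratch`, so writes never cross the network even when the job is spread over many nodes.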
H
D
E
F
Nothing is ever going to beat local storage over network cross-mounting, and that's the job of a good cluster or HPC administrator: to configure things so that you have easy ways of uniformly distinguishing between what is shared storage that all the nodes can see — but that typically is expensive to access, especially if you access it simultaneously, because you create these nasty storms on the network backplane that clog everything up — versus paths that are guaranteed to resolve locally, with guaranteed no network traffic.
F
Often those are going to be either SSDs, or even RAM disks if you're really getting fancy, or Optane or something super expensive, but they're going to be very, very fast to access. And you can know for sure that if you write into that path — so you configure your codes so these quantities always go to these paths — then whatever machine it is, however the job is distributed, it's always going to keep you local. Now, the catch is you need to find out that information.
F
H
F
However you want to call it — because when you start working with large-scale data in the cloud, you actually don't want to be thinking in terms of file-system paths. You need to be thinking in terms of basically the kind of storage primitives that cloud systems offer, which are typically object stores with bucket addresses: these generic things that don't exactly look like a file-system path, right?
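The bucket-plus-key addressing described here looks like this (the bucket and key names are made up; in practice libraries such as fsspec accept both this URL form and plain local paths):

```python
from urllib.parse import urlparse

# An object-store address: scheme + bucket + key, not a filesystem path.
url = urlparse("s3://my-bucket/surveys/site42/data.zarr")
print(url.scheme, url.netloc, url.path)  # → s3 my-bucket /surveys/site42/data.zarr
```

Code that only understands filesystem paths cannot resolve this kind of address, which is the collision of philosophies discussed next.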
F
The HPC world, on the other hand, comes from a tradition that is kind of a continuous outgrowth from desktops and clusters into that HPC machine, which is very, very explicitly path-oriented. And those two philosophies collide a little bit, and it's a bit of a question: how do you want to architect a code? Do you want to architect a code to allow the user to have the experience of explicit file management, which feels very natural and very comfortable and very familiar, because it's what everybody is used to?
F
It works all the way down — it works from the laptop to the supercomputer — but when you start accessing remote, arbitrary data that is behind a cloud object store or whatnot, that model doesn't work as well anymore. And there are some advantages to that, because then, all of a sudden, if you can accept that you have to cross into that other mode of data management, then global data is okay: data that is actively stored in different data centers can resolve. Actually, that's what the Gateway project is about.
F
It's about a way of basically giving you remote task workers that can live co-located with your data in any data center on the planet, across Amazon regions, right? But it's a different way of thinking, a different way of structuring your code and whatnot, and I'm not going to pretend to tell the SimPEG team what to do if you folks aren't in that mode of thinking yet.
H
F
F
I need to step out in a second, so I'll just say my thank-you to all of you — Lindsey, for inviting me, I greatly appreciate your invitation — and to all of you for being welcoming and patient with my questions. I hope whatever I said was at least marginally useful. It's been a pleasure meeting you all, and I hope, through this project that involves Lindsey and Jupyter, we'll continue connecting. Thanks.
H
A
I mean, just — if there's anything anyone else has to add before we wrap up... I guess one quick housekeeping thing: I was thinking, for communication, just using the EM Slack channel for this — does that make sense? Yeah, okay. Doug, you looked like you were about to say something, but you were muted — oh okay, you're still muted, by the way — perfect. Okay, thanks everyone, I think this has been pretty productive. I'll get some GitHub repos going, and then we can make some next steps from there.