From YouTube: SimPEG Meeting February 10th
Description
Weekly SimPEG Meeting from February 10th, 2021
A
Okay, again, welcome everyone. We should probably get right into it this week. The notes are up again; if you have some quick reports, please add yourself there so we know to hit you up. A couple of quick agenda items: as we talked about last week, after the meeting I'll be hosting a code session. My goal is to at least look through most of these PRs with people that are still around.
C
There we go. Yeah, just a quick update: I got dropped a big full-tensor airborne mag dataset, and it motivated me to get the tiled inversion hooked up for MVI. So that's working now, and then I made an example as well and added it to the examples list. And I'm doing a pretty large-scale DC survey too, and forming J is being a little bit costly. It's kind of interesting, though: right now, with about a million data, saving it as float32, I've got it at about 850 gigabytes. So it's under a terabyte of RAM, but it's still pretty costly. However, using the preconditioner as well, it converges a lot faster, so in the end the inversion is happening a lot faster than, say, what we would get with the tensor code.
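As a back-of-the-envelope check on that figure (a sketch, not from the meeting; the cell count is an assumption chosen to match the quoted ~850 GB):

```python
# Memory footprint of a dense sensitivity matrix J stored as float32:
# n_data x n_cells x 4 bytes.
n_data = 1_000_000       # ~a million data, as quoted above
n_cells = 215_000        # assumed number of active model cells
bytes_per_value = 4      # float32

size_gb = n_data * n_cells * bytes_per_value / 1e9
print(f"J is roughly {size_gb:.0f} GB")  # ~860 GB, under a terabyte
```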
D
Yeah, right, that's where the tiling is going to come in handy. But your problem is so hard to tile, right, because you survey all the receivers every time, so you can't really block it.
D
I'm not sure if we looked into the compression of the Zarr store, though, right? Because there are different compression methods, so it might be a bit slower, but it takes less disk space. Yeah, I forgot about that. That was a while ago; we never did a good test. It does have different compression styles.
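For reference, a minimal sketch of picking a Zarr compressor (not from the meeting; the array shape, chunking, and compressor settings are assumptions):

```python
import numpy as np
import zarr
from numcodecs import Blosc

# Illustrative stand-in for one block of a sensitivity matrix.
J_block = np.random.rand(10_000, 5_000).astype(np.float32)

# Blosc + zstd: a bit slower to write than raw storage, much less disk space.
compressor = Blosc(cname="zstd", clevel=3, shuffle=Blosc.SHUFFLE)
z = zarr.array(J_block, chunks=(1_000, 5_000),
               compressor=compressor, store="J_block.zarr")
print(z.info)  # reports the achieved compression ratio
```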
C
The cell count is actually a little bit lower; I got it down to about 600,000. Or actually, I think it's even less; it might be about half a million.
C
Yeah, the thing is, it does take quite a bit to form that, because, like you said, it's literally doing a solve for every receiver. So as you start adding those on, it does add up, unless you can break that into parallel pieces too. So yeah, you can cut time by increasing the number of tiles, and the accuracy doesn't seem to be degrading too much. I broke that DC domain into 10 tiles.
F
Yeah, like you said, you're measuring all your receivers for all your sources. Do you keep all of that, or do you do a subset or subsample when you do the inversion?
C
Pretty much, I break the tiles up by the sources. Every source is in a different location, but the receivers still translate all the way through.
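A minimal sketch of that grouping (the helper name and usage are assumptions, not SimPEG API): split the survey's source list into contiguous tiles, one sub-problem per tile, with each source keeping all of its receivers.

```python
import numpy as np

def tile_sources(source_list, n_tiles):
    """Split a list of sources into n_tiles contiguous groups; each group
    would back one sub-survey in a tiled inversion, and receivers stay
    attached to their sources."""
    return [list(chunk) for chunk in np.array_split(source_list, n_tiles)]

# Hypothetical usage, matching the 10-tile DC example above:
# tiles = tile_sources(survey.source_list, n_tiles=10)
# sub_surveys = [dc.Survey(sources) for sources in tiles]
```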
F
Yeah, I don't know if Mike is in the meeting today, maybe not, but it was just making me think of Mike's problem, where he was in an underground setting, and at the end he could have recorded millions of combinations of electrodes.
F
So a big part of his thesis was to, oh hi Mike, good to see you here, was to choose a subset of the data to collect and to invert. So I'm not sure if that's something that you're interested in looking at. That's just one thought that I had.
C
Yeah, actually I have looked into it, and I've got pieces at every level, even at the processing level, like creating, say, smart dipoles: you create your larger dipole sizes at far offsets, because you want to increase the signal. I've looked at that, and then I've also looked into what Loke did years ago, using the resolution matrix to narrow down which dipoles to use, but nothing concrete. I don't know, Mike, have you got a comment?
G
Oh well, I mean, I guess mostly what I did is I kind of got a good idea of the region that I wanted to be able to image, and then I took a little test block and basically moved it through that region of interest for a given source, and I just tried to figure out which of my sources did the best job of exciting that block in each of its positions.
G
Well, that's interesting, yeah. And I mean, for the DC stuff, a reasonably good way of doing it seemed to just be looking at the charge build-up on that little test block that I was moving through the region of interest. So if you want, I can, well.
G
We have a draft of the paper; it's kind of been stagnant for a while, but I could definitely send you either the draft of that paper that I put together or my thesis. Yeah, that'd be great, that's cool. I don't know how much of it would be useful, but I might get some ideas from it.
E
And yeah, what is the point of actually forming that huge sensitivity matrix anyway? Well, that's the thing for you, I guess, because you're increasing the computation cost by increasing the number of data in that case, and that's sort of the bottleneck. That's why we kind of like not forming J: you're free from that part, so you can use as many data as you want, because we don't have that cost when we're not forming J.
C
And the sheer size of it is definitely driving it towards, okay, we don't need that much data. Even in the testing that we've done, the first data set was just 60,000 and it was defining a lot of the same features. This one is already reduced: it was actually a 1.5-million data set, and now it's just under a million. So you probably don't need everything, but which ones do you want?
E
But isn't it hard to sell that to your company? Because I thought the pitch of a lot of these multi-array systems is actually having lots of data, rather than being limited to some small subset.
E
Yeah, there should be a good trade-off, I guess, but I wasn't sure what would be a good way to move forward.
C
Yeah, and that's kind of what's driving all this: we'll find out what is best. Because, you know, you can just collect one dipole size, but you are going to add information as you do increase the dipole size here and there. There should be an intelligent way to make enough data that will get you your result, without all this redundant stuff and the crazy amounts of time that it takes.
A
Very good idea. I'll go quick here as well. The stuff I've been working on is still the boundary condition stuff, but I believe I've gotten to a nice spot with it, as far as keeping it generalized across meshes and giving people a general structure if they want to use it. The idea that I've settled on is making it mimic the same sort of steps we take to do inner products, to represent those inner products.
A
We have discretize averaging to cell centers for everything right now, so we need to define averaging from faces to the boundary faces, and then we essentially need boundary outward normals defined, as well as selection matrices that pick out the boundary elements. Those kinds of things are what it takes to make it generally structured. I can have something worked up to explore at next week's meeting that I'd like to show everyone, but the idea is to mimic the same way that we are representing the inner-product matrices and the inner-product integrals, with the boundary integrals, the boundary surface integrals.
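For context, the identity behind pairing volume inner products with boundary surface integrals is integration by parts (standard vector calculus, not a formula quoted in the meeting):

```latex
\int_\Omega \phi \,(\nabla \cdot \vec{j})\, dV
  = -\int_\Omega \nabla \phi \cdot \vec{j}\, dV
    + \oint_{\partial \Omega} \phi \,(\vec{j} \cdot \hat{n})\, dA
```

Discretizing the two volume terms gives the usual inner-product matrices; the last term is the boundary surface integral that the outward normals and boundary-selection matrices are needed for.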
H
So you're basically saying that you're going to actually create almost like a surface-integral matrix, a piece that you're adding, instead of before, when you called it, what was it, the cell-grad matrix, but it was actually the inner-product matrix times D transpose plus something; it was all one thing.
A
Like, you would need to come up with some way to represent your values at the boundaries, so we can have a few default conditions. If it's a Neumann condition, then we can generate those matrices pretty easily, or if it's Dirichlet, obviously, it's pretty simple: you just essentially take these matrices and multiply by them. But keeping it in this kind of generalized format allows even more complex boundary conditions to be made use of.
H
Yeah, because that made a lot more intuitive sense to me, instead of having something called cell-grad with boundary conditions, but it's actually got the inner-product matrix times D transpose plus some other piece already; it's not really a gradient operator.
A
Yeah, so it's about building and ensuring that it has those necessary building blocks that we can use to create them. I would still like to have those default matrices and default options in there, because they come up so often, but also, making sure that all these things are defined can lead to the point where, okay, somebody wanted to, like I was talking about last week, if somebody wanted to design a channel and only simulate on the channel, then it's the same.
B
Yeah, not too much, but Seogi generated a pull request for geosci-labs, which prompted me to also look again at the testing there, which was relying on Travis, which no longer works, and so I got GitHub Actions up and running for that testing. So if anyone needs to change repositories over from relying on Travis to something else, there's a bit of an example there, and I'm happy to answer any questions on that. It was pretty straightforward, so that's kind of nice. That's about it.
A
I haven't looked into GitHub Actions too much yet. What does it offer? Like, parallel builds, similar to how, you know, Azure and Travis used to?
B
Yes. So I haven't looked into the details of how you can split up your testing matrix and things like that, because the tests we have for this repo are pretty straightforward, but I'm still testing on two different versions of Python, for example, and those all run in parallel. So I imagine that you can get the whole testing matrix, if you've got something more complex, all plugged in there.
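For anyone migrating off Travis, a minimal GitHub Actions workflow with a test matrix looks roughly like this (a sketch; the Python versions and install/test commands are placeholders, not taken from the geosci-labs repo):

```yaml
# .github/workflows/tests.yml
name: tests
on: [push, pull_request]

jobs:
  test:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        python-version: ["3.7", "3.8"]   # each entry becomes a parallel job
    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-python@v2
        with:
          python-version: ${{ matrix.python-version }}
      - run: pip install -e . pytest
      - run: pytest tests
```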
I
Yes, yeah, you can have it run in parallel, and you can also split it up by workflow file: you can make one for the docs, one for linting, and one for tests, and then within the files you can have jobs run in parallel too. And I think it's pretty much the same as Azure Pipelines.
E
Lindsey, why did you have to change to a GitHub Action? Was there some sort of end of service for Travis, or what was your motivation to change to GitHub Actions?
A
I can't really hear Lindsey if you're talking, but I can go ahead: at the end of last year, they ended the free testing on travis.org. travis.org has basically gone away and it's all on travis.com now, which is the paid service they transitioned to. I see, gotcha.
H
Sure, yeah. I mean, I have a couple of projects that are sort of hanging around. One is the discretize tutorials project; I think Doug had committed some time to offering a review, and I think you and Lindsey have also done some review on it. I just want to remind everyone.
H
It exists, but since, Joe, you're doing so much fundamental work with discretize, I'm kind of okay with it hanging out for a little bit, and once that sort of full suite of implementing any boundary conditions is done, then I think there are some parts of the tutorials that can get improved and finished.
H
So I guess I'm not very worried about it being in the state that it's in now. But then there's also the EM1D project. We did sort of the first step of getting this into SimPEG, and I think there is a fair amount of interest in trying to get it to master, because people want to use it, and I think we discussed what the next steps were going to be.
H
So if nobody's done that, I might raise an issue related to it and I'll put down some things. It'll be, you know, an incomplete list, and anyone's welcome to look at it, but yeah, it would be nice if we could continue moving forward on it. So this is just sort of my polite nudge to everyone to be involved.
H
So, okay, so you've actually, because you said you've gone in and outlined some items on where you think the next stage is going?
H
Okay, yeah. I guess it would be helpful if you made that available, and I can take a look at it, and at some point we can make a game plan for what to do next.
I
They compared two different ways of computing the curl, and where the difference was too big, they refined there. So it's kind of expensive to compute; if you were only doing forward modelling you would probably not do it, but if you have big inversion projects, it could be an interesting way to reduce the number of cells needed in your computation.
I
So yeah, I posted the link for the report and the repo. I mean, they were only there a couple of months and had never heard of geophysics and inversion before, so take that into account, but I think it could be an interesting thing to start from. There are some adaptive meshing approaches around, but I think it's a topic we talk about again every now and then: how can we make our meshes more easily? It's kind of like what Lindsey did for our comparison paper, which was gradient-based, so based on the model; this one is based on the fields.
I
No, nothing surprising: obviously around the source and around the receivers it tried to make it finer and finer, and any big contrast closer to the source was weighted much more heavily. So if you had an object close to the source, it mostly refined there. What they did was just iteratively refine the five percent of cells that got the worst result.
D
What's often the case is that the algorithm is going to refine the neighbors in x and y, but we have corner cells that don't get refined. I don't know if that's something that can be fixed, or could be changed, right.
A
Yeah, I mean, it's something that we can do, to ensure that that refinement level is appropriate; right now it was sufficient to just ensure that those direct neighbors get refined. Yeah, that's what I figured.
D
And actually, since you're really good at that and you have your hands in the C code: if we could get the addresses of those neighbors too, right, to be able to get a cell's neighbors, not just the up, down, east, west, but also the cross neighbors, that would be really useful for me down the road. So, basically the address.
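Until something like that exists in the tree code itself, one workaround is composing face-neighbor lookups (a sketch; the `cell.neighbor(direction)` call is a hypothetical API, loosely modeled on discretize's tree cells, not an existing method):

```python
def cross_neighbors_2d(cell):
    """Collect the diagonal (corner) neighbors of a quadtree cell by
    composing face-neighbor lookups: step east/west first, then
    north/south from the cell you land on.

    Assumes a hypothetical cell.neighbor(direction) returning the face
    neighbor in 'east'/'west'/'north'/'south', or None at the boundary.
    """
    corners = set()
    for ew in ("east", "west"):
        side = cell.neighbor(ew)
        if side is None:
            continue
        for ns in ("north", "south"):
            corner = side.neighbor(ns)
            if corner is not None:
                corners.add(corner)
    return corners
```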
E
It would be fun, if you have a couple of good examples in that example, to actually run that and see how good the refinement criteria we've been using are. That would actually give us a good idea of how good or how poor a job we're doing discretizing, because usually we just assume, okay, it's good enough and it matches our half-space solution.
E
...mesh on the boundary nodes, whereas the octree had to have lots of cells to get to the accuracy that you got from that tensor code anyway.
D
So I can just show this one plot here. Right now, for the FPM, the runtime is not even on the same scale, right? If we're not storing it, it takes so much more time to run.
D
So this is a single Gauss-Newton, and all the models are starting from the same data; I'm just doing one Gauss-Newton but changing the number of CGs, you know, from 2, 4, 8, 16 and then 32 CGs, and also showing the slices of the model for each one. So with two CGs we're solving almost nothing, right; we're not really taking a big step at all. And then this is four, and this is eight, 16, and then 32, right.
D
So even at 32 CGs the model is still changing, so we would have to let it run, I think, and go to like 100 CGs or something, and I'm guessing eventually we'll stabilize to a steady model. But the runtime was just, yeah, not comparable, right; it takes hours to run without storing J.
D
It's the total runtime for what you saw. This one was a full inversion, but now I'm running just one Gauss-Newton, okay, for the runtime. But everything's going to scale the same; I have the numbers running right now, and, you know, it's going to scale.
D
Basically the same. This is without storing: a thousand seconds for one Gauss-Newton, then 600, 300, 200. Whereas if I store J, then it's like 50 seconds, and they're all flat at that time, because the CG solves for that problem are so short when you store the J matrix. So really, all that overhead time is just to store it; after computing it, doing the Gauss-Newton term is almost free.
E
There's a counter utility, yeah, so you can actually count how many Jvec and Jtvec calls have run throughout that. So if you run the inversion code, you can actually count; it actually gives a summary of how many times you ran them. It might be interesting to get the numbers, because it's kind of hard to see in the runtime what's actually happening.
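A sketch of using that counter (the import path and the attribute wiring are assumptions from memory of SimPEG's utils, so treat the exact names as approximate):

```python
from SimPEG.utils import Counter

# Attach a counter to the simulation; methods instrumented with the
# counting decorators (Jvec, Jtvec, fields, ...) increment it on each call.
counter = Counter()
simulation.counter = counter   # `simulation` is your DC/EM simulation

# ... run the inversion ...

counter.summary()  # prints how many times Jvec / Jtvec were called
```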
E
So if you just count how many times you run Jvec and Jtvec without storing J versus with storing J, it'll kind of explain why that's the case, because I'd expect you may actually need many more Jvec and Jtvec calls when you're not storing J compared to storing J, in your case probably.
E
No, the number of runs to form J is usually much more, that's the usual thought. But if you've got lots of CG iterations running Jvec and Jtvec, that probably gives you more runs. I'm not sure, because usually forming J is thought of as the expensive process, since you need to run as many solves as the number of data, for instance; but in the CG, you probably need to run more than we expect.
D
So you want, you would like a test for a natural source, then?
E
With J or without J. So that'll give us, it's supposed to give a much bigger number when you're not forming J, because that's what your runtime tells us. But I was kind of curious, actually, how many times you had to run Jvec and Jtvec.
E
You just need three forward simulations, right: one forward simulation to predict the data, one simulation to calculate Jvec, and another simulation to calculate Jtvec. So you only need three simulations; even if you do thirty iterations, you just need, like, 90 simulations. But what we're getting as a runtime seems much, much bigger, so that's why I'm a bit confused. Whereas, computing J, forming J, let's say you've got an 80-data set.
E
You need to run the simulation 80 times, and then you actually need to run that simulation eighty times per iteration, so by a simple calculation that seems like the much more expensive way, but what we're getting is quite the opposite. So I wanted to have some numbers like that, if I could.
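The bookkeeping being argued here, as a back-of-the-envelope sketch (the counts are the ones quoted in this discussion, not measurements):

```python
# Solve counts for one inversion, following the numbers quoted above.
n_iterations = 30   # Gauss-Newton iterations
n_data = 80         # data count (one receiver per source, as noted below)

# Matrix-free: 1 forward solve to predict data, plus one Jvec and one
# Jtvec solve per CG step inside each Gauss-Newton iteration.
for n_cg in (1, 32):
    print(n_cg, n_iterations * (1 + 2 * n_cg))
# n_cg=1  ->   90 solves (the "three simulations per iteration" estimate)
# n_cg=32 -> 1950 solves

# Stored J: roughly one solve per datum to form J each iteration, after
# which the CG steps are cheap matrix-vector products, no solves at all.
print(n_iterations * n_data)   # -> 2400 solves, but none inside the CG loop
```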
A
I still think the graphs you're showing make sense to me right now, because of that reason. It takes the same number of calls to the solver to construct J, because it's, you know, one receiver per source, less of them. I would like to see this compared to, like, pure MT, right: many, many receivers per one source, because that's, I think, a bigger difference.
E
That was my expectation, but when we were testing the DC code, it actually was faster, because DC is the obvious case where we've got lots of receivers for each source, but yeah. So that's where I was a bit confused, and I thought actually running Jvec and Jtvec is much more expensive than actually running the forward. That was my conclusion, but anyway, that's where I'm still kind of curious why that's the case.
F
Yeah, just, yeah, so I was helping Eric Johnson on the SimPEG Discourse. He's working with Paul Bowman and Geoscientists Without Borders on a groundwater project, again in Uganda, and he's trying to use SimPEG for that project, so it's very similar to the landmark project that you've been involved in. So I asked him if he was in contact with you; I don't know, he didn't reply, but yeah, he was having a couple of issues.
F
That was one of the examples that forms the survey explicitly within the script, and actually these scripts assume that your survey file is ordered by sources, and that was not the case for this file. So that didn't work: when he was importing, he had something like 800 data, and because it was not sorted, it was taking a huge chunk of the file each time, and he was ending up with something like 10,000 data when he only had 800 measurements. So that was obviously not working.
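A minimal sketch of the pre-sort that avoids that failure mode (the column names are assumptions for a generic ABMN text file, not Eric's actual file):

```python
import pandas as pd

# Generic ABMN file: one row per measurement, A/B are the current
# electrodes (the source), M/N the potential electrodes (the receiver).
df = pd.read_csv("survey_abmn.csv")

# Scripts that walk the file grouping consecutive rows into one source
# need all rows sharing the same A-B pair to be contiguous:
df = df.sort_values(["Ax", "Bx"]).reset_index(drop=True)
df.to_csv("survey_abmn_sorted.csv", index=False)
```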
F
The other issue he has is that each survey is a mix of dipole-dipole and the Wenner array, and the pseudo-section function just crashes on that. There are a couple of things happening: at one place, I think, there is an if statement we were not testing, so instead of appending something to a list, it was replacing it. So that was an issue, but then the result I got after the fix...
F
...is actually not good, so I need to dive more into that. And he's kind of in a valley, so he has big topography, and the pseudo-section function does not blank out above the topography. So when you have this valley, the apparent resistivity is actually interpolated within the air, within that thing, so the pseudo-section you get out of that is just useless.
F
So he gave me, so he is also using RES2DINV, and he sent me the output of the pseudo-section from that software, which is much nicer, so I'm going to try to get something as good as that.
H
So yeah, if I recall correctly, when I made some improvements to the utilities, there were multiple functions that could be used to create pseudo-sections, and there was a whole slew of things to read and write data, and I made a utilities place where I was kind of trying to put the final form of all that stuff.
F
He had an ABMN file format, like CSV or whatever, just a text file, and in some of the examples in the gallery there is an example that downloads such an ABMN file, and within that example there is a for loop that creates the source list and the...
H
I think I made it, and I don't know if it was the examples gallery, but it's in the tutorials, maybe both.
H
Yeah, so for the tutorials, when I created those, I wasn't overly satisfied with what we had for IO at the time, and knowing everything about my survey, and wanting to just show how you would work with everything once you've created your survey, I just read it in and wrote it out. It's very tutorial-specific; I'm not using general functionality.
F
And there was no mention anywhere that the survey was supposed to be ordered, so he could not figure out why it was not working for his case. So I'm just adding a warning to that script that the input is supposed to be ordered by sources for that one, and we have an example with the IO utility. So I asked him to take a look, because it's true that a lot of people have this ABMN type of file format, and they, in the io utils...
F
He was like, I did not know about that, luckily, so maybe, yeah, maybe it's worth contacting him. I mentioned that you worked on something like that with Myanmar; I sent him the different SEG abstracts you put out last year, but I did not actually mention the app, I had forgotten about that. But yeah, he's on the Discourse, so yeah.
H
If you go to SimPEG utils, io utils, I made a space where I was starting to actually put finalized functionality for reading and writing input files. So one of them was for, I think, DCIP octree and DCIP3D.
H
Basically, if you want to read in something that is formatted for this code and you'd like to bring it into SimPEG, or write it out to that format, you know exactly what function you're supposed to be using.
H
So in that place I would probably want to put, like, a, you know, load dcip xyz, or something like that, because right now we did some general functions, and the naming could have been better, and the options that you can use could have been better, and what exactly they can be used for could have been better.
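A sketch of the shape such a function might take (hypothetical: `read_dcip_xyz`, its column names, and its return values are illustrative, drawn from the discussion above, not an existing SimPEG function):

```python
import pandas as pd

def read_dcip_xyz(filename):
    """Hypothetical reader for a general ABMN-style xyz text file.

    Assumes columns Ax, Az, Bx, Bz, Mx, Mz, Nx, Nz and V (observed data).
    Rows are re-sorted so measurements sharing an A-B pair are contiguous,
    since downstream survey builders assume source ordering.
    """
    df = pd.read_csv(filename)
    df = df.sort_values(["Ax", "Bx"]).reset_index(drop=True)
    locations = {k: df[[f"{k}x", f"{k}z"]].values for k in "ABMN"}
    return locations, df["V"].values
```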
H
Yeah, so I would say that there's a lot of good functionality in the resistivity portion of SimPEG for reading and writing stuff, and I didn't want to just delete it all, but there is actually a space where we're doing these kinds of finalized functions; I would consider putting something there.
H
Thibaut, if you ever want to ask me about any of the stuff that I developed for these kinds of things, I'd be happy to, because I did think about this stuff and I did put some time into it, but I don't think I got to the end of where I wanted to be.
F
Okay, no, I would be, I think I'll put you in the reviewers. I think right now I just want to fix the bugs and the issue, but there is definitely a longer-term thing to think about with the DC io utils, because that's the entry point into SimPEG for many people that want to input data, and it's true that it's not our strongest module.
F
It doesn't give the true image. And, like, to give you Eric Johnson's point of view: because he could not plot his data, he thought that SimPEG could not invert it. But actually the inversion code is able to invert any type of configuration; because the plotting was wrong, it was like, oh, SimPEG cannot handle all those data, and he was going to look at something else. So that's how bad that can be.
H
Yeah, well, I guess, if you're going to make any IO utilities for DC and IP...
H
Then please look at SimPEG utils, io utils, and see how that's organized, because, okay, yeah, that's the final place, I think we decided months ago, where we were going to put this. Just take a look at what we have there, and how the functions are named, and how the docstrings are done, and it'll make a lot of sense; it's going to be a clean place where I want to send everybody to do any input.
E
One comment, Thibaut: I wasn't actually sure, I did recognize that we cannot plot up sort of mixed types of data, whether Schlumberger and dipole-dipole. I wasn't actually sure how to generalize it, because it uses a slightly different rule of thumb to plot the data, and without knowing which portion of the data is dipole-dipole and which portion is Wenner, I thought it's pretty hard to actually generalize. So if you can generalize that, that would be very...
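The "rule of thumb" here is the pseudo-depth plotting convention, which differs by array type. A sketch of one common convention (the separation-over-two factor is an assumption; references vary, and this is not the SimPEG implementation):

```python
def pseudo_location(a, b, m, n):
    """Pseudo-section plot location for one measurement, with current
    electrodes at x-positions a, b and potential electrodes at m, n.

    Common convention: plot at the midpoint between the source midpoint
    and receiver midpoint, at a pseudo-depth proportional to their
    separation. Mixed arrays break tools that hard-code one such rule.
    """
    src_mid = 0.5 * (a + b)
    rx_mid = 0.5 * (m + n)
    x = 0.5 * (src_mid + rx_mid)
    z = -abs(src_mid - rx_mid) / 2.0   # separation-based pseudo-depth
    return x, z
```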
F
Nice. Well, I was actually going to poke you about it, probably, Seogi, later, because I did not generalize it either. As I said, there was an actual bug in the function: you had a list of things where you were appending, and there was an else statement where, instead of appending, it was replacing. So it was causing an error message at the end, so I fixed that by adding back the append instead of the replace.
F
So you have that huge gap, which is kind of weird, and I don't know how RES2DINV actually generalizes that. But he sent me pictures of the pseudo-section from RES2DINV, and that is much nicer, so I don't know how they did it, but at least it gives us kind of a goal of where to bring that function; for one, it removed the topography.
F
I think it's important, because, yeah, the io utils are the entry point for many people, and because the io util was not working for him, he was like, oh, SimPEG is not going to work to invert this data, while the SimPEG inversion code is actually much more advanced, and this survey is easy to invert and you can visualize the model fine. But visualizing the data is the first step for so many people.
H
It's breaking, and why it's breaking is because at some point, in, I believe, the survey object, there's a property or an attribute that says what type of electrode configuration it is. So when you're doing a lot of this functionality, it looks and says, give me the survey configuration, and it's pole-pole, pole-dipole, dipole-pole or dipole-dipole, and there isn't a property for mixed, and that's probably why it breaks in a lot of places.
F
Actually, that section was fine, because the pole-pole, pole-dipole, dipole-dipole attribute just refers to the type of your sources and receivers; it doesn't mean it's actually a dipole-dipole survey. In this case both the sources and receivers were dipoles, so that was fine. But somewhere it assumed that it's a dipole-dipole survey, so that the source and receivers are outside each other.
F
And that's the issue, and so the if statement was between this configuration and that configuration, and for that configuration, that's where we had the bug, I think. And it's true that I haven't seen any tests in SimPEG that tested for a Wenner type of array; we were always testing on dipole-dipole, so that's probably why we did not catch that error.
H
...extreme topography, but basically you'd have all your points: you can define a plane and some distance from the plane, and it'll just plot up all your points in 3D, and if you take a 2D slice and you've got data near that plane, it plots it up geometrically like a really nice 2D pseudo-section in 3D space, and it works really well.