From YouTube: SimPEG Meeting January 27th
Description
Weekly SimPEG Meeting from January 27th, 2021
A
Exciting. So let's get right into this week's meeting. One quick thing: I believe Peter was going to give us a command-line interface tour of his package, so we can run through the quick reports and then move on to that afterwards. Does that sound good? Okay.
A
So I see Devin is listed first on the quick reports here.
B
Yeah, sure. I've just been working on finalizing the EM1D stuff. I talked to Seogi a little bit and we came to a consensus: we decided to make a waveform class. We're just putting everything in its final place, and I'm updating the tutorials and tests. So progress is being made, and I'm hoping to finish that off for a final review pretty quickly, hopefully in the next week, if nothing else gets in the way.
B
We didn't really want to combine the 1D and the 3D stuff in any way, but there is sort of a 1D utilities package under the utilities. So does that kind of answer your question?
B
They're similar, I guess, in that you have a discrete waveform.
B
I think, for one, for your 3D problem you're going to be discretizing things based on the time stepping you've chosen, whereas for the 1D stuff you put something in and it actually gets discretized according to the times it needs for the convolution. And I guess I didn't like the idea of creating classes that you could use for both the 1D problem and the 3D problem but that just had the properties of everything, because then it might not be as obvious which properties are being used for the 3D and which for the 1D.
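The separation described here can be sketched roughly as follows. All class and method names are hypothetical, not SimPEG's actual API: a 1D-style waveform is sampled at whatever times the convolution needs, whereas a 3D time-stepping code would evaluate the waveform only at the solver's chosen time steps.

```python
import numpy as np

class Waveform1D:
    """Hypothetical 1D-style waveform: defined continuously, so the
    simulation can sample it at whatever times the convolution needs."""

    def __init__(self, times, amplitudes):
        self.times = np.asarray(times, dtype=float)
        self.amplitudes = np.asarray(amplitudes, dtype=float)

    def eval(self, t):
        # Sample the waveform at arbitrary times (zero outside its support).
        return np.interp(t, self.times, self.amplitudes, left=0.0, right=0.0)

# A simple linear ramp-off from t = 0 to t = 1 ms.
wave = Waveform1D([0.0, 1e-3], [1.0, 0.0])

# The 1D code picks its own convolution times...
conv_times = np.linspace(0.0, 2e-3, 9)
samples = wave.eval(conv_times)
# ...whereas a 3D time-stepping code would instead evaluate the waveform
# only at the time steps chosen for the PDE discretization.
```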
B
You might not intuitively know which functionality and utilities are going to be used in solving a 1D problem versus a 3D problem, because the 1D problem is organized so that for each location you solve all the time channels, whereas in the 3D problem you do the time stepping and then project to all the receiver locations. It's kind of the opposite, in a way.
B
That's true. I guess let me ask you this question, then: do you want a receiver class that is used for both the 1D and the 3D problems? Say I'm doing the 1D problem and I'm looking at the attributes or the properties of my receiver.
B
I just think it's really confusing to have this heavy class that you would be using for all of these different simulation types, because if I want to learn what this object can do and what methods are associated with it, I'd like to be able to work in the space that's relevant to the problem I'm trying to solve, as opposed to making a source class, a receiver class, a waveform class that's trying to be applicable to every type of, say, frequency-domain simulation.
D
But okay, say you're a user and you've got a 3D data set, and ultimately you want to do 3D, but part way through, perhaps at the beginning, you want to just do 1D inversions, or you want to do something that's connected with 1D.
B
Maybe I'll just show you. Things are a little bit disorganized for me here, but I can show you how it's partitioned. I won't do anything that's going to take a long time; I'll just show you the biggest partition here.
B
The goal, I guess, is to keep the code clean and to have objects whose properties are only relevant to the simulation you're choosing to use, rather than objects with lots of irrelevant properties you're not going to be using.
E
One comment I have: I sort of understand why Joe is asking for a consistent structure, and in the end I think that's the way to go, but we need incremental steps to get there, because how the current PDE solver solves a problem and how the 1D code solves it are quite different.
E
So there are details that need to serve those purposes. For instance, time stepping: it's almost an art how to set the time steps in the PDE code. It's still in question how we really handle that; it's even a research question. Whereas in the 1D code it's pretty robust, because it doesn't really matter how you set the time steps. But it still has a cost: if you use a lot of time steps, it's still costly, even in 1D.
E
So what I would suggest is that we make an incremental step and, as Devin suggests, keep them separate for now. But as we keep developing our PDE code, we can go in a direction where we can merge things, so that in the end we have a common class that can handle both. Yeah.
B
Yeah, well, the way that you go in and define sources and receivers, their locations, assigning things a waveform: on the surface those are handled exactly the same way.
G
Is there a way that we could, and I don't know if this is possible, have one class like Joe's talking about that covers everything, but then, when you're actually using it, as Devin says, if you were creating a 1D simulation...
G
...when you then go to tab-complete on the 1D simulation to find all the methods associated with it, it only shows you the 1D ones, not everything from the full class? Because if we could do that, we could have one single class that still has everything in it, and we wouldn't have a lot of replication in the code, where things have been copied and pasted to different areas with just small tweaks, because I can see how that would be a problem.
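A minimal sketch of how that could work in plain Python, with hypothetical class and method names: `dir()` drives tab completion in most shells, so a shared class can override `__dir__` to advertise only the names relevant to its dimension while keeping the rest available.

```python
class BaseSimulation:
    """Hypothetical shared class holding both 1D and 3D functionality."""

    _relevant = ()  # subclasses list the solve methods they advertise

    def solve_1d(self):
        return "1d solution"

    def solve_3d(self):
        return "3d solution"

    def __dir__(self):
        # Tab completion is driven by dir(), so advertising only the
        # relevant names hides the rest without removing them.
        common = [n for n in super().__dir__() if not n.startswith("solve_")]
        return common + list(self._relevant)

class Simulation1D(BaseSimulation):
    _relevant = ("solve_1d",)

sim = Simulation1D()
# solve_3d still exists on the object, but tab completion won't suggest it.
```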
G
That just makes it harder to manage the package, right? If you have to go in and refactor something or make a change, instead of making it in one place, you now have three or four copies of the same code with different flavors added for the different dimensions. I can see how that becomes a bigger job. But...
B
Yeah, I guess the biggest reason I did it this way is the 3D problem.
B
If
you're
in
the
frequency
domain,
the
frequency
is
a
pro
is,
is
a
property
of
the
source,
I
believe
so
when
you,
when
you
say
you
know,
I
have
this
this
source
and
it's
operating
at
this
frequency,
the
the
yeah,
the
frequency
is
a
property
of
the
source,
object
and
you're
going
to
solve
a
forward
problem
for
every
every
frequency.
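A minimal sketch of that pattern, with hypothetical names: the frequency lives on each source object, and forward solves are grouped by unique frequency, since all sources at one frequency share the same system matrix.

```python
import numpy as np

class Source:
    """Hypothetical FDEM-style source: the frequency lives on the
    source object, as described for the frequency-domain simulations."""

    def __init__(self, location, frequency):
        self.location = np.asarray(location, dtype=float)
        self.frequency = frequency

sources = [
    Source([0.0, 0.0, 0.0], frequency=1.0),
    Source([0.0, 0.0, 0.0], frequency=4.0),
    Source([10.0, 0.0, 0.0], frequency=1.0),
]

# One system factorization per unique frequency; all sources at that
# frequency reuse it.
unique_freqs = sorted({src.frequency for src in sources})
solves = {f: [s for s in sources if s.frequency == f] for f in unique_freqs}
```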
B
So you could think about the 3D problem this way: the actual data vector output in 3D is organized by, I guess, frequency, then source location, then receiver location, whereas in 1D it ends up being source location, then receiver, then frequency, which maybe we do want to change in the future. That's just the most efficient way to solve it.
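The two orderings can be illustrated with a toy data vector; the labels are illustrative only. Converting between the layouts is just a permutation of the data vector.

```python
from itertools import product

freqs = ["f1", "f2"]
srcs = ["s1", "s2"]
rxs = ["r1", "r2"]

# 3D-style ordering: frequency varies slowest, receiver fastest.
order_3d = list(product(freqs, srcs, rxs))

# 1D-style ordering: sounding by sounding, frequency varies fastest.
order_1d = list(product(srcs, rxs, freqs))
```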
E
It doesn't really matter that much, because in 1D each computation is so cheap that the order doesn't really matter. It's probably just what I chose at one point, and I'm happy to change it. Yeah, it doesn't really matter.
E
No, I think that's sort of my point: both codes need some upgrades. The 3D code is not perfect, and actually I think it needs a lot of work to be usable on a decent-sized problem. So my point was: we actually have a lot of work to do on the 3D code.
B
Yeah, I would support that, because otherwise we're just in a perpetual state of development and we're not bringing new tools to people. So I would like that idea, Seogi: we finish things up the way they are now, but definitely, if it can be improved in the future, we should do it.
A
I would just say that my opinion is that it should go where we think it will end up living in the end, because if not, it gets harder for people to update code; if it keeps changing locations, it's a pain. I think that if you wanted to match the 3D structure, like the 3D sources, eventually, the source and receiver, however you want them to match...
A
...we should just make it match now and worry about efficiency later. I personally wouldn't want to have to keep updating my code every time, like, "oh, the simulation MT or the simulation 1D EM stuff got moved around again, I've got to change my code again." That's kind of what I want to avoid. Yeah.
B
It kind of snowballs into a bigger question of the handling and organization of 1D versus 3D simulations.
I
We've got a bit of a structure for working with multiple different dimensions of simulations, so my two cents would be to take a look at that and try not to introduce a new mental model or a completely distinct structure between the EM, the DC and also the MT. We've also got a structure for 1D, 2D and 3D, so that's just what I would add, to echo Joe's comments about trying not to move...
I
...you know, even if the organization isn't perfect off the bat, try not to have it moving around too dynamically.
A
Those are just my thoughts. I'm not sure we need to come to a consensus this second; we should have more of this discussion on the pull request.
J
Yes, so this last week John and I have been running a large DC problem, just to test; the parallelization seems to hold up pretty nicely. Then I got a request pretty recently to start looking into parallelizing the natural-source code. So that's what I'm on now. Because of your great work, Joe, it's going to be very straightforward, because it's basically the exact same thing as the airborne EM.
J
So it's just a matter of double-checking things at this point, but that's what I'll be doing this week. One thing you can look at is the frequency example that Devin had written in the tutorial part: on the natural-source simulation branch, this forward simulation doesn't output the same results as before, so it would have to be looked at.
J
Yeah, so basically, if you run Devin's 3D forward tutorial, you should see a bullseye, and right now the target seems to be sort of off to the side, or I'm not sure what's happening. So just double-check that and then we should be good to go, because I already did the natural-source merge onto the tile simulation; I'm guessing natural source is going in first and I wanted to start testing. That worked, it was pretty easy. That's it for me. Cool.
C
When you do the test on the cross gradients, it's the real component: it just seems to be giving slightly different values when you do the dpred for that calculation. But then, yeah, I started pulling out the fields, and the fields do look different from what we had in master. Yet when you actually run it, it seems to work most of the time. There's something there that, like you said, is just subtle.
J
Although, for the tutorial example, the data looks nothing like it did before, so there's something...
J
...that we're not doing the same as before, yeah. Maybe that's a good entry point: once you can reproduce that tutorial data, we're on the right path, right?
C
Yeah, I'll go down that road later this week, slowly picking away at it. And yeah, I did some tiled inversions on some cluster nodes, and Zarr seems to be staying on the node, so things are looking good.
A
I have a few quick updates on the 2D MT work. I've been able to go through and get the formulation working correctly with the boundary conditions for all the formulations, for both the TE and TM modes. It works pretty well, actually.
A
Even at this point it has a really good solution at the surface; it's only off by maybe, I don't know, 0.1 percent with the conductivity distribution and values we were using for the mesh you had before, even at the edges: it goes right up to the left and right sides. It's nice like that.
I
Oh, sorry, Joe, I was just going to ask if you've tried a quarter-space solution with the 2D yet.
E
For TM mode. I thought the TE mode is kind of fine with a Neumann condition, but for TM, because the electric field crosses the boundary, I thought, when I was reading a couple of other papers, the authors had to do something special on the boundary for TM mode. Just curious.
A
Just at the, no, at the left and right sides of the model; that's where we're concerned. At the top it's a Dirichlet condition, where we're just...
A
The
formulation
I
have
is
I
I
say
it's
like
it's.
You
know
technically
correct,
however,
meaning
it
results
in
the
correct,
a
matrix
like
the
correct
solution
matrix,
but
the
the
way
it
gets
to
it
internally
is
not
quite
theoretically
like
mimics
the
theory,
because
it's
using
right
now,
like
the
face
divergence
and
the
phase
divergence
operator
to
do
it
in
2d,
whereas
it
should
just
be.
A
Oh, and then I'm also working on hooking up the 1D receiver objects and 2D receiver objects to make sure they work properly with the MT code: things like making sure that the impedances, apparent resistivities and phases are calculated similarly to how they are in 3D, on the receivers. I don't think they should be fields; they shouldn't be part of the fields object. They should be on the receivers, and the receiver is the one that calculates the impedances.
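A sketch of what a receiver-side impedance calculation could look like. The class is hypothetical and the formulas are the generic MT conventions (Z = E/H, apparent resistivity |Z|²/(ωμ₀), phase of Z), not SimPEG's actual implementation.

```python
import numpy as np

MU0 = 4e-7 * np.pi  # magnetic permeability of free space

class ImpedanceReceiver:
    """Hypothetical MT receiver that derives impedance, apparent
    resistivity and phase from E and H, so these quantities live on
    the receiver rather than on the fields object."""

    def impedance(self, ex, hy):
        return ex / hy

    def apparent_resistivity(self, z, frequency):
        omega = 2.0 * np.pi * frequency
        return np.abs(z) ** 2 / (omega * MU0)

    def phase(self, z):
        return np.degrees(np.angle(z))

rx = ImpedanceReceiver()

# Half-space check: for a 100 ohm-m half-space at 1 Hz the impedance is
# Z = sqrt(i * omega * mu0 * rho), with a 45 degree phase.
freq, rho = 1.0, 100.0
z = np.sqrt(2j * np.pi * freq * MU0 * rho)
rho_a = rx.apparent_resistivity(rx.impedance(z, 1.0), freq)
```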
K
I can see that, but the command line itself is kind of boring, because you're just going to see two or three commands. So I thought I'd quickly run you through the notebook, how we would run it in a notebook, and then also show the equivalent of what you can now do on the command line, if that makes sense, just to get a handle on how I run things. So, I think everybody here probably knows emg3d.
K
So it's a very specific solver; it uses the multigrid method to be a bit lightweight on memory and runtime. We use it in the current consortium work: it's used in the joint inversion framework, where we currently try to combine tomography, gravity, lab data, well data and CSEM. This particular case is in the Barents Sea, a region with a lot of salt domes, where we try to improve the knowledge of the area.
K
Many things are similar to SimPEG's surveys and simulations, but it's not the same, for the main reason that I can often write things much more simply, because I just have one method. I don't have to take care of all the possibilities you have to take care of: I just have one type of source, an electric source, and there is just the electric receiver.
K
So I can simplify many things, but, as you know, I use discretize, so you know this part. What we usually do is load a model, from an h5, a json or an npz file, that has, in this case, a mesh and the corresponding model. This mesh has 140,000 cells; that would be the inversion model, the current state of the inversion, let's say. We can see the model here: it's defined on log10 conductivities, isotropic, and it has 100 by 94 by 15 cells.
K
It uses xarray under the hood to store the survey. So we see here my data set: the raw data has four frequencies, 374 receivers and 14 sources. It's actually the other way around: we use reciprocity, so there were 14 receivers on the seafloor and the source passes over them; you bin it, and this gives you 374 source positions, let's say. You can have a look at this. Well, this is the full data; let's go to a smaller data set.
K
I select here just a few sources, to make it smaller and faster to run, and I just choose two frequencies. You can look at this: we have the sources named four, six, eight and ten.
K
We have a whole bunch of receivers, from zero to 307, these two frequencies, and then the noise floor and the relative error; that's just how you define it. Here it's a frequency-dependent noise floor: 2e-14 for the one-hertz data and 2e-15 for the four-hertz data.
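A common way to combine a frequency-dependent noise floor with a relative error is the root-sum-square below. Whether emg3d combines the two exactly this way is an assumption; the numbers are the ones mentioned above.

```python
import numpy as np

# Frequency-dependent noise floor, as described, plus a relative error
# (the 5 percent here is an illustrative assumption).
noise_floor = {1.0: 2e-14, 4.0: 2e-15}
relative_error = 0.05

def data_std(freq, data):
    """Standard deviation per datum: noise floor and relative error
    combined in quadrature."""
    d = np.abs(np.asarray(data, dtype=float))
    return np.sqrt(noise_floor[freq] ** 2 + (relative_error * d) ** 2)

# Strong datum: relative error dominates; weak datum: floor dominates.
std = data_std(1.0, [1e-12, 1e-15])
```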
K
It's relatively shallow, about 200 meters of water, so you still have a huge influence from the air layer. We have these four sources and this receiver line that goes over the salt domes, and then, similar to SimPEG, we define a simulation: a simulation takes a survey, a mesh, a model, and then it takes gridding options.
K
The colored curves here are the measured data, which we probably have to cut a bit more; we haven't done a lot yet here. The gray lines behind are the synthetic data we computed. You can also compute the gradient and then save it to an h5 file. So that's how you would do it in a notebook, but we said we're going to do it in the terminal, so let me make that a bit more visible.
K
So this is such a configuration file. You can define here the names of your survey and your model; you can put in simulation parameters, like whether you want a single grid for all data or source-specific gridding; how many workers; solver options, like you can do for PARDISO; the gridding options; and you can also do data selection. And then we can run it.
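Such a configuration file might look roughly like the sketch below. The section and key names are illustrative assumptions based on the description above, not emg3d's exact schema.

```ini
; Hypothetical emg3d-style CLI configuration (names illustrative only).
[files]
survey = survey.h5
model = model.h5
output = results.h5

[simulation]
; one shared grid for all data, or per-source gridding
gridding = single
max_workers = 10

[solver_opts]
; solver settings would go here

[data]
; data selection: a few sources and two frequencies
sources = 4, 6, 8, 10
frequencies = 1.0, 4.0
```

As described above, the tool also takes a `-d` flag for a dry run, so file paths and write permissions can be checked before committing to the full modeling.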
K
It has a -d flag for a dry test run, so you can check whether it can find all the files and write the output files, so you don't run the whole modeling only to find out it fails because it couldn't write the file. You can see it loads the survey and the model, it creates a simulation, and then here would come the forward modeling; because we did the dry run, it doesn't do the forward, and then it saves the result to your defined outputs.
K
And if we take out the dry run and actually run it, then again it reads the data, takes the survey and the data you selected, creates the internal computational meshes, and now it does the forward computation; at the end it writes everything to your file. That's how, at the moment, this joint inversion framework runs with the partner companies.
K
It just uses Python's concurrent.futures: it puts them in a queue for concurrent.futures and runs them on ten threads, I think. Or what do you mean? They are then completely independent.
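The pattern described, independent forward solves queued through concurrent.futures, can be sketched as follows; `forward` is a stand-in for a real solve and the task names are made up.

```python
from concurrent.futures import ThreadPoolExecutor

def forward(task):
    """Stand-in for one independent source-frequency forward solve."""
    source, frequency = task
    return (source, frequency, f"field({source}, {frequency} Hz)")

# Each source-frequency pair is an independent task.
tasks = [(s, f) for s in ("Tx1", "Tx2") for f in (1.0, 4.0)]

# Queue them on a pool of workers, much as described (~10 threads).
with ThreadPoolExecutor(max_workers=10) as pool:
    results = list(pool.map(forward, tasks))
```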
E
Right, I was asking: did you parallelize your code? Is it going to separate out your different frequencies, or...
K
So that's a good point: when I farm it out in parallel, each solve starts from scratch. I did some testing when I did time-domain modeling and needed a lot of frequencies.
K
If you go from one frequency to the next and use the previous frequency as a starting point, it can speed things up, but it can also slow them down. It can be beneficial at low frequencies, because the field doesn't change a lot, but when it comes to the region where the field becomes highly oscillatory, you might even start at a worse point than starting with a zero field. So it only helps at the beginning.
K
It's funny, because when I really dug into this multigrid solver at the beginning, there are so many different choices: do you loop through in a W-cycle or an F-cycle? How many pre-smoothing and post-smoothing steps do you use? Do you use line relaxation, semi-coarsening, tricks to make it faster? And everything works in some cases and not in others, and in the end I see most people...
K
It can do both. You can either use it as a solver on its own, or you can do one multigrid step and then choose any of the SciPy Krylov-subspace solvers, like biconjugate gradient stabilized or conjugate gradient, and use the multigrid cycle as a preconditioner.
A
Let's see here: does your implementation automatically use source-receiver reciprocity to reduce the number of sources?
A
There's a little bit of that thinking in the DC problem, and it somewhat translates to this problem as well. At some point you count: the crudest way is just to count the number of receivers and the number of sources and use whichever is smaller, right? That's the crudest way of doing it, but...
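The "crudest way" just described can be written down directly; the helper below is hypothetical, and the numbers come from the emg3d survey mentioned earlier (374 binned source positions versus 14 seafloor instruments).

```python
def cheaper_orientation(n_sources, n_receivers):
    """Crudest reciprocity check: one forward solve is needed per source,
    so swap sources and receivers if there are fewer receivers."""
    if n_receivers < n_sources:
        return "swap", n_receivers
    return "keep", n_sources

# 374 binned source positions, 14 seafloor instruments: swapping via
# reciprocity reduces 374 solves per frequency to 14.
decision, n_solves = cheaper_orientation(n_sources=374, n_receivers=14)
```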
K
Yeah, at the moment the thing is, the way you have it implemented, the source can be an arbitrary shape: you can even make a loop, and then you have a magnetic source; you can make a line source, a finite-length source. The receivers, as they are implemented at the moment, are just point dipoles, so you interpolate the field at one point in your space. So you would have to do that by hand.
K
Well, I don't actually have magnetic receivers, because I don't have different formulations: emg3d just does an electric source, and you get the electric field; then you just use Faraday's law to obtain the magnetic fields and interpolate from there. But if I then want to back-propagate the field to create the gradient, I need magnetic sources, and yes, I do that by just making square electric loops, which gives me a magnetic source, a dummy one.
K
But yeah, it's mainly because these other companies don't really use Python, so this way makes it easy: they're used to the command line and they have all their run scripts and their init scripts. You could also do python -c and then your command, but in the end it's similar.
K
So if they have an issue, I can often solve it just by having them send me the log file, and that definitely helps. I don't have to tell them "run this command and tell me what the output is"; I can just say "send me the log file", and that makes things quite a bit easier for client support, or how do you call it, diagnosis.
J
A good point for us: I was looking at the directives to save results, and we have about four directives that each do sort of a hack job. It would be great if we had one directive with switches on it to decide "save this, not this", and just consolidate everything we have, because right now it's a mess.
A
This Friday afternoon. This week we've been doing them around four o'clock, so I think that's still four o'clock Pacific time, so it should work well. Feel free to stop by; if it goes past normal working hours, or we decide that we're done coding, we'll play some games or do something else.
K
Sounds cool too. Later.