From YouTube: SimPEG Meeting May 26th
Description
Weekly SimPEG meeting from May 26th, 2021
A: Nice to see you guys. All add yourselves to the quick reports if you've got something; there's not really too much on the agenda for me today, or things to talk about.
E: Yeah, I have been tackling merging main into the simulation MT branch right now, ultimately to get the tiled simulations all brought in. So I'm just picking away at that slowly here. I also went on a little bit of a tangent and looked at that PyAMG solver, the multigrid solver, for the big problems. It doesn't take up too much RAM, but it's still much slower, over a couple of orders of magnitude slower than Pardiso.
E: So I don't know, maybe there are some options you can play around with; mine was just a quick-and-dirty implementation. But yeah: good on RAM, but not much faster. Interesting, yeah.
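A minimal sketch of that kind of quick-and-dirty PyAMG experiment, assuming a smoothed-aggregation setup; the Poisson matrix here is only a stand-in for the actual simulation matrix:

```python
import numpy as np
import pyamg

# Illustrative sparse SPD test matrix, standing in for the real system.
A = pyamg.gallery.poisson((100, 100), format="csr")
b = np.random.rand(A.shape[0])

# Multigrid keeps memory low: no factorization is stored, unlike Pardiso.
ml = pyamg.smoothed_aggregation_solver(A)
x = ml.solve(b, tol=1e-8)

print(np.linalg.norm(b - A @ x) / np.linalg.norm(b))
```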
E: And then I got a request at work to look at tiled tensor meshes. I haven't really been able to sell everybody on the octrees fully; they like the speed of the tiling, but they kind of want the tensor mesh stuff, so I'll maybe be picking away at that over the next little bit.
E: If it's possible, yeah. I might bug somebody if I get lost in the tiled code, just to see.
E: I guess it's when the cells start growing: there always just seems to be a distinct boundary between the layers, or at the octree levels. I've tried, or I haven't tried too hard, but I'm sure if you discretize finely enough you can minimize that effect. Everybody that I got to play with it, though, they're just like: oh.
E: You can always see this boundary where the cells change. That's mostly on DC, though; I've had really good luck with the MVI stuff. No one really complains about the octree on MVI, but on DC it seems to be a little bit of an issue.
E: I think Dom and I did a test set together here, or you were involved there, Thibaut, on that monitoring DC one, and those octree models looked pretty good, I thought. It didn't really seem like there were too many boundaries at the octree levels. But I guess when people play around with it enough, they're just like: hey.
G: My feeling is that they're always pushing the misfits too hard, and that's when you start seeing the actual changes that make sense. Yeah, I mean, I can tell you right away that the tiling on tensor meshes is pretty good. It's going to be messy because you can't nest anything, right, so you start doing partial volumes and all that. I don't know, yeah.
B: They were not impressed by the savings, you know, because of where those corners are in the mesh: in an octree they end up not very populated with cells, but when it's tensor you get these long skinny prisms. And I think also, for EM, you kind of run into that aspect ratio of the cells, and you might not be solving it as accurately there. Somebody correct me if I'm wrong, but I thought there was... I found...
F: I think I've said it before, but the UBC DCIP octree code, the commercial one, cannot handle non-cubic cells. You can create a mesh with non-cubic cells, but it's going to create massive artifacts. I think there is maybe something in the gradient, I don't know what's really happening, but with the UBC code, when you put in cells that are non-cubic, you create...
F: In my experience it created massive artifacts that disappeared when I used cubic cells.

B: How non-cubic did you go?

F: Oh, a ratio of even 1 to 0.5, some ridiculously small ratio, and it was like: yeah, no. I could not use it with non-cubic cells, basically.
B: I might try out some testing on that, because we made some improvements to that code, nothing related to what you're talking about, but I'm kind of in it right now, so I might be able to just try something. Was there a particular geology? Was it real data? Was it synthetic?
F: Yeah, it was real data, and I inverted with cubic cells. It was a very big survey, that's the thing too. My electrode spacing was 50 meters and the line spacing was 150 or something like that, and I was trying to get a bit higher resolution in the across-line direction, to gain a bit, because with cubic cells I could only go so fine, even on the cluster.
F: That inversion was taking like two weeks to run, so it was a big one. So I was trying to increase it a bit; I think I went to 12.5 by 20 meters, something like that, and suddenly massive conductors appeared between the lines. Those features were not in the cubic-cell inversion. So...
G: I think you can do it. You just need to be careful when you do your projection from the global to the local, and I think, Joe, maybe with your new interpolation they can possibly do it. It will be a bit slower to generate them, but it might work, yeah. It might work; it's just a bit messy, because your nested meshes are going to have some, you know... it's not a one-to-one interpolation, because the cells are...
A: Well, right, yeah, the volume-average operator should work for it. I haven't tested it against that, but it should work in general. Though, I mean, there will probably be some small artifacts and stuff from the overlapping cells; it's not as clean as the one-to-one ratio you get from octree nesting. As far as timing, it'll work, and it'll still be volume averaging.
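For reference, a minimal sketch of that volume-average interpolation between non-nested meshes, using discretize's volume_average utility (assuming a recent discretize; the mesh sizes are purely illustrative):

```python
import numpy as np
import discretize
from discretize.utils import volume_average

# A coarse global mesh and a finer local mesh whose cell boundaries
# do not line up one-to-one with the global mesh.
global_mesh = discretize.TensorMesh([32, 32, 32], origin="CCC")
local_mesh = discretize.TensorMesh(
    [[(0.7, 24)], [(0.7, 24)], [(0.7, 24)]], origin="CCC"
)

model_global = np.random.rand(global_mesh.n_cells)

# Apply the volume-average interpolation directly...
model_local = volume_average(global_mesh, local_mesh, model_global)

# ...or build the sparse operator once and reuse it for many models.
P = volume_average(global_mesh, local_mesh)
assert np.allclose(model_local, P @ model_global)
```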
E: I guess that's about all I'm working on. And yeah, Dom, I'll be getting on to the sub-tiling, or sorry, the VTEM-style survey sub-tiling, yeah. We're moving offices and I just got my cluster back up and going, so I'm good to go.
G: Not to worry, Ben is working on it right now using the new implementation, so we'll be testing it on our site first, but I'll try to get it to you by the end of the week.
B: I've got a few things. We're going to do a big release of the proprietary software stuff that our group does, so it's not really related to any of your things. But some stuff that might be interesting: we've been playing around with different sensitivity weightings for DCIP octree, and we've also been investigating our time-domain EM octree code. By identifying issues, let's say imperfections, we're really trying to zero in on what's the best way to, I guess, define sources, and these kinds of little details. We're not worried about being able to solve the system a bunch of different ways; we just want to find the best all-purpose approach for simulating. So some interesting things have come out of it that maybe will end up getting adopted into SimPEG.
B: We've been working on this MDRU project with mag and grav, Thibaut and myself, and that's been pretty fun: playing around with trying to remove a background or regional contribution and then inverting locally. We still have some questions, but the mag seems to be giving good results. The gravity is very sparse and kind of difficult to work with, but we're moving along on that. And then, when I can find time, more docs sprints.
A: Anything exciting? Not too much relating to SimPEG, yeah: working on refactoring Dom's geoapps package, and I did finally touch some SimPEG code recently, but just on our local child-simulation branch, which maybe one day will get pushed; keeping it local for now. Yeah, cool.
H: Last week I did some minor revisions for the depth weighting, according to your and Dom's comments. I think it's good to be merged into the main branch. What do you think?
H: Okay, okay, I see. So, add the test files in the tests folder.
G: Yeah, well, we worked with Xiao, right, and yourself, and I tried to get this general distance-weighting function looking pretty good. Otherwise, I started looking into regularizing the different, like, functions of the model in terms of amplitude. Right now it's a bit more complicated than I thought, just because of the way cell weights are applied: we're doing the mapping of weights, and then we kind of want to have two layers. It's not just applying the weights on everything.
G: Anyway, it's a little bit more complicated than I thought, so I'm tinkering around, trying to see how we can maybe add another layer or rework whatever we have. That's what I'm up to.
A: It happens everywhere except when you have the update-sensitivity-weights thing: the sensitivity weights already include the volume term, so you kind of have to use a special regularization function when you're using that, because you can't use the Tikhonov regularization stuff; the volume gets multiplied in essentially twice there.
G: It's a bit worse than that, unfortunately, yeah, because the Simple, sorry, the smallness term, basically always multiplies by the mesh volumes, whether your cell weights are volume-normalized or not.
G: So we need to change that as a priority, because it's different behavior basically depending on whether you use Simple or Tikhonov; it's a big enough issue that we should uniformize everything, they should all behave the same. And then, basically, in the directive, denormalize, or you know, divide by the volumes beforehand, so that when it goes into the regularization it multiplies them back in.
G: Yeah, and make it clear that the weights should be volume-normalized, right: if you put anything in there, it should be volume-normalized. Maybe, John, that's something you should look into too, right, like which regularization you're using in your code, basically, because that might affect you big time.
E: I see, so in the regularization you always multiply by the volume, so it's the user's job to divide their weights by the volume every time, exactly.
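A minimal sketch of that convention, assuming the SimPEG API of that era (regularization.Tikhonov with a cell_weights property; names may differ in later releases): divide any custom weights by the cell volumes before handing them to the regularization, since the smallness term multiplies the volumes back in.

```python
import numpy as np
import discretize
from SimPEG import regularization

mesh = discretize.TensorMesh([16, 16, 16])

# Illustrative user-defined weights, e.g. from a sensitivity analysis.
raw_weights = np.random.rand(mesh.n_cells) + 0.1

# Volume-normalize so the volumes are not applied twice.
reg = regularization.Tikhonov(mesh)
reg.cell_weights = raw_weights / mesh.cell_volumes
```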
I: Yeah, I don't have anything in particular. I'm actually trying to invert remote sensing data using SimPEG. We wrote a small flow-modeling code, and it's basically 1D; it's very similar to an airborne inversion. You take the remote sensing data, which is a time series, and you've got zillions of points, so you can set it up as a 1D problem, and the goal is actually recovering the head variations in the aquifer system. So far it's pretty promising.
I: So yeah, it'll be interesting to test that code. And one thing we're going to try that's a little bit new, with emg3d, is anisotropy: we're going to invert for a resistivity that's at least horizontally isotropic but vertically anisotropic.
A: Okay, and then me: I've been working on one of my own small projects, simulating electric fields in the skull, which is kind of interesting. And then I've also been going through the em1d stuff as well, trying to look through it and make simplifications where I can, see what's happening, and working towards getting that time-domain one working much quicker.
A
I
just
kind
of
my
only
handle
that
I
don't
have
on
it
is
I'm
not
sure
how
much
memory
it
takes
up
already
in
regards
to
okay,
if
I
want
to
store
a
few
extra
things
that
are
going
to
be
that
impactful
on
the
memory
footprint.
I: I think it's pretty minor, Joe; the memory consumption is pretty minor, and actually increasing the memory consumption is kind of fine, in the sense that if you can actually improve the time... yeah, usually even my laptop is fine to invert the large-scale 1D data, so nah.
I: Like, I've got eight gigs and it's not using eight gigs, so it's pretty small.
A: Okay, then I'm going to keep chugging away at it. I've been making simplifications to things, like, you know, just getting it to loop through the source-receivers and calculate everything.
I: Are you thinking about, let's say you've got a thousand soundings, handling those thousand-sounding cases? Is that what you're thinking, or are you still thinking at the level of a single sounding?
A: I'm still at the level of a single sounding, but also what you're talking about, like ten thousand soundings, things like that. Right now I just have it going through and looking at the unique source-receiver offsets and modeling just those for each sounding. So hopefully we can do that. But at the level of, you know, the multiple, the stitched 1D stuff, you can at least take advantage of calculating all those coefficients ahead of time, instead of repeatedly calling...

C: ...them, right.
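A small sketch of that precomputation idea: group the soundings by unique source-receiver offset, so the expensive per-offset coefficients are computed once per unique geometry rather than once per sounding. Everything here (the kernel, the offsets) is illustrative, not SimPEG API.

```python
import numpy as np

rng = np.random.default_rng(0)
# Offsets for 10,000 soundings; in practice most systems reuse a
# handful of source-receiver geometries.
offsets = rng.choice([7.9, 13.25, 30.0], size=10_000)

unique_offsets, inverse = np.unique(offsets, return_inverse=True)

def expensive_coefficients(offset):
    # Stand-in for the per-offset coefficients (e.g. Hankel filter
    # evaluations) that would otherwise be recomputed per sounding.
    return np.exp(-offset * np.linspace(0.0, 1.0, 64))

table = np.stack([expensive_coefficients(r) for r in unique_offsets])

# Each sounding now just indexes into the precomputed table.
per_sounding = table[inverse]
print(per_sounding.shape)  # (10000, 64)
```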
D: If you, or anyone, ever has a huge airborne inversion job or project or something, I think it might very well be worth it to try and design a filter for airborne stuff, because I don't think any of the filters we use was designed with airborne in mind. We could probably get away with a much shorter filter, specific for that setup, which would then weigh in heavily on the runtime.
A: I think they actually solved a simple least-squares problem, because you've got a couple of unknowns, right: you don't know what exponent you need to use, and you also don't know the constant factor that needs to go into that depth weighting. So I think, in the beginning, they got the sensitivity kernel and then solved a simple problem to figure out what those parameters should be.
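A hedged sketch of that fitting idea: match a depth weighting of the form w(z) = c (z + z0)^(-beta/2) to the decay of a sensitivity kernel with a small least-squares solve in log space. The kernel below is synthetic and purely illustrative.

```python
import numpy as np

z = np.linspace(1.0, 200.0, 100)      # cell depths
kernel = 3.0 / (z + 10.0) ** 1.5      # stand-in for the sensitivity decay
z0 = 10.0                             # assumed reference depth

# log(kernel) = log(c) - (beta / 2) * log(z + z0): linear in the unknowns.
A = np.column_stack([np.ones_like(z), -0.5 * np.log(z + z0)])
log_c, beta = np.linalg.lstsq(A, np.log(kernel), rcond=None)[0]
print(beta)  # ~3.0 for this synthetic kernel
```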
I: Oh, that's cool. And related to that: I thought that the projected Gauss-Newton that we're using is kind of fine, but still, I compared the Raglan inversion result with the UBC code, and it was pretty good, but especially where you have... I think it's actually happening when your updates are negative and the projected Gauss-Newton puts them back to a positive value; I think it generates weird, very sharp boundaries. So I'm not sure. I think we spent quite a bit of time modifying that projected Gauss-Newton, but it still seems like it's not as good as the UBC code, which uses a slightly different method; I think they're using the log barrier.

So I thought maybe that's something worthwhile to think about implementing, especially for that positivity constraint. I think in general the projected Gauss-Newton is kind of fine, but if the projection leaves a lot of values sitting on the bounds, then that could still be a problem. I don't know, that was just a thought, but we're essentially implementing the old style of thing, so I thought it'd probably be worthwhile to think about that.
I: And then, if the values are on the bound, we use the first order for those values on the bounds, so we follow whatever the methodology is. So I don't know whether we can make a big improvement there.
I: The other option was... I think the UBC code uses this log barrier method, I guess, no?
A: Yeah, maybe; then again that may not be worthwhile. Well, what I know from the log barrier is just that it slows down: the late iterations just don't update very much, and it's hard when there's a bunch of stuff near the boundary; you get small updates for stuff. It just slows down when there's a lot of stuff at the bound.
A: Yeah, so I took a look through it, and I went through the tensor mesh code at least. I think what should probably happen is that when you're plotting those face values and things, we should slice before we average to the cell centers, because right now everything gets averaged to cell centers and then sliced along the axis, which is what's taking up a lot of memory.
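A rough sketch of the memory argument, assuming recent discretize property names; this is not the actual plotting internals. Averaging first touches the whole 3D mesh through a large sparse operator, while slicing first only ever materializes one plane:

```python
import numpy as np
import discretize

mesh = discretize.TensorMesh([128, 128, 128])
f = np.random.rand(mesh.n_faces)  # a face vector, e.g. fluxes from a simulation

# Average-then-slice (current behavior): builds a dim * n_cells vector,
# plus a large sparse averaging operator, before any slicing happens.
cc_x = (mesh.average_face_to_cell_vector @ f)[: mesh.n_cells]
plane_avg_first = cc_x.reshape(mesh.shape_cells, order="F")[:, :, 64]

# Slice-then-average: pull the x-face values on one plane and average
# the two bounding faces of each cell in 2D. Same plane, far smaller
# intermediates.
fx = f[: mesh.n_faces_x].reshape(mesh.shape_faces_x, order="F")
plane_slice_first = 0.5 * (fx[:-1, :, 64] + fx[1:, :, 64])

print(np.allclose(plane_avg_first, plane_slice_first))  # True
```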
D: I think I created a pcolormesh manually and then plotted with discretize, and you can see there are tiny differences, because one is interpolated to the cell centers, yeah.
I: That just reminds me that, a couple of times, I could compute the electric field, but when I tried to slice it to plot with the slicer or plot_slice, it would give a memory error.
G: Sounds good! I'm still curious, though, about the iterative solver that you talked about, Joe, right. Why is John getting such a slow solve compared to you? Because you're solving for three million cells, I mean.
E: Yeah, it doesn't take multiple right-hand sides, so I have to loop over the right-hand sides, and that right there hurts quite badly, of course. But that's the thing, though: when you look at the Pardiso one, they're actually looping over the sources as well. So I thought, oh, maybe I can get away with it, but it's...
E: I'll push it to this branch that I made for that neo guy; I'll just add it there and you can take a look. Yeah, Dieter?
D: The thing with multigrid is: if it works fine, it works fine, and if it doesn't, it doesn't. So, for stretching, the thing that is implemented in emg3d is line relaxation and semicoarsening, because otherwise, with anisotropy or stretched cells, you would very quickly become very slow.
D: There are many things like that in emg3d: if you make the air very resistive or the frequency very low, you will very soon go to never-ending land, because convergence becomes very slow. So you cannot, for instance, use it for DC at the moment; we would have to implement additional things. I think that's the case with a lot of iterative solvers: you have to tailor them to your problem, whereas direct solvers are much more robust and flexible.
A: And is that using multigrid as the solver, or as a preconditioner to...
D: I can use it as a preconditioner for any, or many, of the SciPy Krylov subspace solvers, but often I use it on its own, yeah.
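A generic sketch of that pattern, wrapping one multigrid cycle as a preconditioner for a SciPy Krylov solver; PyAMG stands in here for emg3d's multigrid, whose own API differs:

```python
import numpy as np
import pyamg
import scipy.sparse.linalg as spla

A = pyamg.gallery.poisson((64, 64), format="csr")  # SPD model problem
b = np.random.rand(A.shape[0])

ml = pyamg.smoothed_aggregation_solver(A)
M = ml.aspreconditioner(cycle="V")  # LinearOperator applying one V-cycle

x, info = spla.cg(A, b, M=M)
print(info)  # 0 means converged
```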
I: Yeah, surprisingly, if I read the literature, people still use the Jacobi, which is surprising, but it's probably working.
I: Yeah, I think so. The potential, John, is that actually turning emg3d into an MT code may not be a huge hurdle, I think, because as long as we use maybe the secondary-field formulation, there's nothing to change; we just need to change the right-hand side in emg3d. So I'm not going to do that, but I'm just going to test the existing code on the CSAMT example, which is much simpler.
I: But I think what's probably worthwhile to build is a receiver class that can handle the impedance, because it seems like, with the typical system, what they basically provide is the apparent resistivity and the phase from either Zxy or Zyx. So yeah, I think that's probably my goal, and as long as we can implement that, I think from there actually updating it into an actual MT code...
I: ...would be a big kind of work, I guess. So there's a chance, and I think Dieter's code works at relatively high frequency, let's say down to 0.1 Hz, I guess, for sure, but I think that's probably mostly the case you would like to invert for a lot of mining, unless you go really, really deep, like mantle. So I think, in a practical sense, for mineral exploration that could be quite useful. But anyway, that's just the thought we have at the moment, yeah.
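A small sketch of the conversion such a receiver class would need, using the standard MT relations for apparent resistivity and phase from one complex impedance element; the names are illustrative, not SimPEG API.

```python
import numpy as np

MU_0 = 4e-7 * np.pi  # vacuum permeability

def apparent_resistivity_and_phase(Z, frequency):
    """Convert a complex impedance element (Zxy or Zyx) to rho_a, phase (deg)."""
    omega = 2.0 * np.pi * frequency
    rho_a = np.abs(Z) ** 2 / (omega * MU_0)
    phase = np.degrees(np.arctan2(Z.imag, Z.real))
    return rho_a, phase

# Check: a 100 ohm-m half-space at 1 Hz has Z = sqrt(i * omega * mu_0 * rho),
# so we should recover rho_a ~ 100 and a 45-degree phase.
rho, f = 100.0, 1.0
Z = np.sqrt(2j * np.pi * f * MU_0 * rho)
print(apparent_resistivity_and_phase(Z, f))  # (~100.0, ~45.0)
```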
E: I could be into that, for sure. That's cool; we're actually doing some CSAMT ourselves now. I've got some good data sets that we might be able to use: we've got all the wire paths taken out, like we've collected all the details, so that we can simulate it in a way, eventually. I was going to get at that.

Oh great, so you're actually going...