From YouTube: SimPEG Meeting February 3rd
Description
Weekly Meeting from February 3rd, 2021
A
Anybody? I don't really quite know of anything that we need to announce here. I think next week we'll try to do an after-meeting hacking thing. I'm trying to get back into doing it every other week, so it'll be next week: next Wednesday will be after-meeting coding.
A
Yeah, this past Friday we did. We talked about a few little things, helped each other out, and just kind of had a nice social outlet.
A
John, I see you've added some stuff in there. Would you like to go first with the quick reports, please?
C
Sure, yeah. I'm just wrapping up the MT simulation right now. There are just a couple of tests left, and it looks like they've got something to do with the docs. I might need a little bit of help finding what the error is pointing me at; I just can't really figure out what it is exactly in the docs, but I'll touch base later. And then I'm just running some large-scale DC tiled inversions.
C
I had to wiggle it this morning to start it, but it's running, so let's see what the timing is and how it works. Then I've also taken on getting the MVI examples and the MVI tiled stuff going. I got dropped a huge airborne data set that needs to be done, so I'll work on that.
C
Or
was
we're
writing
it
to
disk
right
now,
so
it's
it's!
It's
a
little
slower
than
trying
to
store
it
ultra
ram
right
now,
yeah,
the
computer
that
I
got
is
only
got
a
terabyte
and
this
one's
probably
going
to
be
a
little
bit
more.
So
I'm
just
writing
it
to
disk
right
now
and
it's
it's
doing
its
job.
It
hasn't
finished
yet,
but
yeah
it's
doing
a
job.
D
Yeah, I would be curious how big that is, like how big the thing you're storing on disk is. I'm curious about the numbers. So if you can, let us know some quick numbers: the number of data, the file size, and the matrix size. It would be kind of interesting to know.
C
Yeah, for sure. Once I get some hard numbers here after this run I'll let you know, because I don't know how big this is. You can do some kind of guesstimation, but so far when I've run them they've always come out bigger than what I expected, so we'll see.
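For what it's worth, the usual back-of-the-envelope guesstimate for a stored dense J is number of data times number of model cells times 8 bytes; the sizes below are hypothetical, not the actual survey:

```python
n_data, n_cells = 10_000, 1_000_000      # hypothetical survey and mesh sizes
j_size_gb = n_data * n_cells * 8 / 1e9   # 8 bytes per float64 entry
print(f"dense J would be roughly {j_size_gb:.0f} GB")  # ~80 GB in this example
```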
B
Yeah, man. So this weekend I continued testing the tiling for the FDEM.
B
I posted the slides there, which I'm just appending to as I go. Basically I did a nine by nine, three-frequency, airborne-style system over a block, and compared the traditional way, storing J, and then some frequency tiling, and the speed-up is quite amazing. I don't know if you guys have seen the numbers, but we went from 86 minutes for five iterations down to 4.5 minutes just by storing J, and then roughly another 25% decrease.
B
If we start tiling here, obviously tiling on a single machine has its limitations, because eventually Pardiso is fully booked, but yeah: basically a factor of almost 15 speed-up here, so it'll be really interesting. Now I'm trying it on the natural source, because it's exactly the same Dask implementation. I actually had to do nothing; it's using the same calls. So I'm just trying it now on the natural source.
A
Anyway,
so
yeah
it's
good
to
really
get
some
of
the
anticipation,
so
we
can
take
advantage
of
it.
D
Again, here it might actually be worthwhile to count the number of solves. I kind of had a similar experience with the DC, because storing J was actually faster than not storing J, which is somewhat surprising. If you don't store J, then you get lots of CG iterations and lots of solves for Jvec and Jtvec. So it might be worthwhile to count, for a single inversion of, let's say, five iterations, how many solves we need to get to that solution. That would kind of tell us why it's actually much faster, because otherwise it seems somewhat unintuitive: in general, storing J is expensive and actually requires more solves, but what we're seeing is that storing J is more efficient. So yeah.
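A minimal sketch of the bookkeeping being suggested (the wrapper below is hypothetical, not SimPEG's API; only the counting pattern matters): wrap the simulation's Jvec/Jtvec calls with counters, then compare runs with and without a stored J.

```python
import functools

def count_calls(method):
    """Wrap a method so each call increments a per-name counter."""
    @functools.wraps(method)
    def wrapped(self, *args, **kwargs):
        self.call_counts[method.__name__] = (
            self.call_counts.get(method.__name__, 0) + 1
        )
        return method(self, *args, **kwargs)
    return wrapped

class CountingSimulation:
    # Hypothetical stand-in for a simulation; Jvec/Jtvec are placeholders.
    def __init__(self):
        self.call_counts = {}

    @count_calls
    def Jvec(self, m, v):
        return v  # placeholder forward sensitivity product

    @count_calls
    def Jtvec(self, m, v):
        return v  # placeholder adjoint sensitivity product

sim = CountingSimulation()
for _ in range(20):          # e.g. 20 CG iterations in one Gauss-Newton step
    sim.Jvec(None, 1.0)
    sim.Jtvec(None, 1.0)
print(sim.call_counts)       # {'Jvec': 20, 'Jtvec': 20}
```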
B
So you basically want to find the crossover point, right? How many CG iterations before storing J eventually gets more efficient. Yeah, I get it, because here I'm doing, you know, 20 CGs, right, so yeah.
B
But you see, in order to be able to get the result that I'm getting right now, where we're sort of getting about the right depth for the block, you know, where the inversion is right, we need to bump up our CGs. If we under-solve it, the result looks crazy. So it's important to solve it well. Right, I'm just going to finish saying that.
D
Anyway,
yeah
I
mean
like
as
long
as
we
do
a
apple
double
comparison.
I
think
it's
fine.
B
Yeah, that's what I started to encounter with the natural source, right? Because we need to cycle through the data on each receiver, and they have multiple ones. I think it's going to be more efficient if we start stacking them rather than calling Pardiso while looping through all the data, basically.
F
Right now natural source is still pretty freaking slow. So are you telling me that the number...
E
Or
why
not
have
it
stop
after
some
kind.
B
Yeah
tolerance
is
fine,
but
usually
we
never
reach
a
tolerance.
We
quit
we
quit
before.
We
reach
it
right
because
it's
too
expensive.
A
Yeah,
it's
like
to
reach
some
of
these
top
like
some
of
the
common
tolerances.
It
takes
hundreds
sometimes
of
iterations,
even
if
you
have
a
good
like
if
you
have
like,
like
the
default
cg
solver
and
like
sci-fi
like
it
can
like,
depending
on
the
system
right,
it's
hundreds
of
thousands
of
iterations,
depending
on
the
condition
things
like
that.
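For illustration, this is the kind of behavior being described, using SciPy's CG on a toy, poorly conditioned system (the matrix here is made up):

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import cg

n = 500
# toy SPD system with a wide eigenvalue spread (condition number ~1e4)
A = diags(np.linspace(1e-4, 1.0, n))
b = np.ones(n)

iterations = {"count": 0}
def cb(xk):
    iterations["count"] += 1

# note: older SciPy spells the tolerance argument `tol` instead of `rtol`
x, info = cg(A, b, rtol=1e-8, maxiter=10_000, callback=cb)
print(info, iterations["count"])   # info == 0 means the tolerance was reached
```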
B
And
that's
an
open
question
to
the
group:
do
you
like
what
is
the
right
tolerance
right?
It's
kind
of
a
magic
number
that
we
pull
from
thin
air,
but
I
don't
know
I
don't
have
that
answer.
I
don't
know
if
anyone
knows
here
but
yeah,
but
devon
yeah,
it's
the
same
as
usual
right
it's
either.
We
cut
it
off
after
a
certain
point
or
we
have
a
magic
number
as
a
as
a
residual.
D
I don't know. I think what Eldad actually said initially, in a lot of his courses, is that we don't really need to solve it that well. That was kind of the idea of having only a few solves, because we're doing it multiple times. But yeah, in practice I wasn't sure what's exactly the better way; I don't know.
A
And
it's
one
of
those
it's
it's
kind
of
like
okay!
Well,
the
gradient
yeah!
You
get
a
little
bit
better
convergence
criteria
or
convergence
in
general.
If
you
do,
you
know
an
accurate
solve,
but
gradient's
already
a
downhill
direction
anyway,
you're
just
kind
of
moving
it
when
you
dcg
a
little
bit
so.
B
Someone
can
someone
can
try
it.
Someone
can
test
it.
I
think
conceptually
we
want
to
do
it
well
right,
because
otherwise
we
don't
know
what
we're
minimizing
we're
already
doing.
You
know
gas
newton
steps,
so
we're
kind
of
we're
kind
of
guessing
that
we're
minimizing
anything.
If
we're
not
if
we're
not
really
doing
the
cg
as.
G
Some of these are really compact, and this one is a little bit smooth; this one is way more smooth, and these models are really compact. I performed these inversions a total of 160 times, and then, based on the borehole data, I decided which models are acceptable and which are rejected. Finally, I calculated the standard deviation to analyze the uncertainty of these areas.
H
That's
cool
man,
can
you,
can
you
explain,
I'm
sorry
I
missed
it.
Can
you
explain
what
the
suite
of
models
were,
that
you
were
looking
at
what
you
were
varying.
H
Well,
I
was
just
wondering
how
what
what
is
being
buried
to
get
these
different
models?
Oh.
G
So far, here, this black dot is a well location, and we have the borehole data in these areas. After performing the inversions I obtained a large sequence of recovered models, and then I used the borehole data to help me make the decision about which models are acceptable and which should be rejected, because a recovered model that cannot satisfy the borehole data is rejected.
H
You
generate
a
whole
bunch
of
models
and
you've
got
one
calibration
point
and
you
look
to
see
at
that.
One
calibration
point
which
of
the
models
is
like
the
closest
to
to
that
has
done
the
best
job
at
predicting
that
and
then
you
use
that
for
the
as
a
basis
for
looking
at
the
whole
set.
Is
that
correct?
Yes,
that's
correct.
D
Is
that
are
you
like
determining
accept
or
not
using
that
one
point
is
that
like
100
150
sets,
so
if
that
match
with
the
well,
you
accept,
if
that
doesn't
you
don't
accept?
Is
that
what
you're
doing
yes,
I
see
and
that
slide
four
is
basically
like
some
sort
of
variance,
even
within
your
model.
Yes,.
G
Okay,
okay,
yeah.
This
part
work
is
primarily
to
do
with
the
dom's
comments:
how
to
determine
the
optimal
range
of
these
rs
values.
We'll
likely
have
a
triple
data
in
this
area,
and
then
we
can
decide
which
model
actually
or
not
right
and
then
how
many
like
you
had
to
remove
by
one.
At
one
point,
you
mean
the
how
many
variants
I
perform.
G
So for these well locations we have the report, and we know the lithology of this part is metagabbro. Then we calculate the mean value of the metagabbro from the recovered model, and then we compare this mean value of the recovered model with the threshold.
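As a rough sketch of the acceptance test being described (all names and numbers below are hypothetical, not the actual study): average the recovered values over the cells belonging to the known unit at the well, then threshold.

```python
import numpy as np

rng = np.random.default_rng(0)
models = rng.random((160, 5000))        # 160 recovered models (toy values)

in_unit = np.zeros(5000, dtype=bool)    # cells in the well's known unit
in_unit[:50] = True                     # hypothetical cell indices
threshold = 0.6                         # assumed cutoff from the well log

unit_means = models[:, in_unit].mean(axis=1)
accepted = models[unit_means >= threshold]
print(len(accepted), "of", len(models), "models accepted")
```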
D
The uncertainty, that you have to determine in a certain unit. I don't know, just in general, thinking about your clients or audience, maybe they may care more about that, I guess.
D
If that's the uncertainty, I would expect much higher uncertainty at depth, but that's not really captured in here. So, I don't know, if I'm a reviewer of your paper, that would be my first question: okay, why am I seeing very small uncertainty at depth? Because I know there's large uncertainty there. Anyway, just a few comments.
B
Well,
sorry,
it
is,
though,
there
right,
because,
because
the
top
of
the
anomalies
are
are
much
like,
the
the
white
band
is
much
thinner
on
on
top
of
the
animators
at
the
bottom
of
it.
B
Right
you,
let's
see
the
cross
section
in
the
middle
there,
like
the
the
white
and
the
white
and
red
spot,
is
like
basically
all
of
the
bottom
of
the
of
the
body
right.
It's
not
it's
not.
On
top,
the
top
of
the
body
is
always
fairly
well
constrained.
D
By
the
data
yeah,
so
that's
somewhat
confusing
right
like
so.
You
have
a
large
standard
deviation
where
you
got
high
confidence.
I
guess
because
you're
pretty
sure
about
that
blog
is
there,
but
what
the
standard
deviation
says.
Oh
it's
pretty
uncertain
at
that
part.
D
I
know:
what's
why
why
that's
happening
again
but,
like
I
said
it's
somewhat
like
a
miss
it.
Like
a
misunderstanding,
it's
because
like
when
I'm
looking
at
that
standard
deviation
map
what
I'm
looking
for
the
like
a
red
value
is
a
highly
uncertain
area,
but
what's
actually
what
you're
showing
is
actually
high,
confident
area?
So
it's
it's
it's
almost
confusing,
so
I
would
like
a
kind
of
either
normalize
and
do
something
to
show
where,
where,
where
are
the
uncertain
parts,
that's
red.
B
Okay
and
that's
because
the
density
values
are
changing
so
much,
maybe
you
need
to
before
you
do
your
standard
deviation.
You
need
to
normalize
the
values
as
success,
because
in
the
middle
of
the
body,
you're
pretty
confident
the
body
is
there.
You
should
have
a
very
low
standard
deviation
in
the
middle
of
it
right
or
the
top.
So.
D
Divide by the mean. Sorry we're dragging here, but what I mean is: you've got lots of models, you take the mean, and you can take that as somewhat representative of what's happening in the model space, and then you've got a variance. If you divide that variance by the mean, it gives the relative changes in your model, so that will show where the uncertain parts are more relevantly in this case.
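In code, the normalization being suggested is the coefficient of variation across the model ensemble (toy numbers, not the actual inversion results):

```python
import numpy as np

rng = np.random.default_rng(42)
models = rng.lognormal(size=(160, 5000))   # 160 recovered models, 5000 cells

mean = models.mean(axis=0)
std = models.std(axis=0)
cv = std / mean   # coefficient of variation: relative uncertainty per cell
# plotting `cv` instead of `std` highlights cells that are uncertain
# relative to their magnitude, rather than cells that simply have large values
```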
I
Sure, one more announcement on things: I copied in a message from Sarah Garrett. She got in touch; she's helping organize a small EAGE workshop, and they're going to do a session on sort of open-science, interactive initiatives for education in science and geophysics. There's a link there, and she was wondering if one of us, or multiple of us, would be interested in giving a bit of an overview of SimPEG. I think it'll be pretty informal; we might have a couple of slides, or it might just be kind of ad hoc. So if there are folks, I mean, I'm happy to potentially be a part of that, but I'd be very happy to see other people jump in, if this is a community you'd be keen to connect with and if you want to take a leadership role on that. That would be awesome. I'm happy to be support staff.
I
So
if
somebody
wants
to
to
take
lead
on
that,
yeah
I'd
appreciate
hearing
hearing
your
thoughts.
I
I
I
don't
know
yet
if
I'm
gonna
be
able
to
be
there,
but
I
would
encourage
folks
to
to
go
and
give
us
a
bit
of
a
an
update
afterwards,
because
I
think
it'll
be
a
really
interesting
discussion.
Do
you
know
much
about
this
forum
or
what's
happening?
I
hadn't
heard
about
it.
So
what
it
is
is
it's
basically
okay,
so
for
timing.
It's
on
the.
I
I think it's at, sorry, 10 a.m. Central, so it'll be a little early on the west coast. But basically, it's kind of an offshoot of the Transform conference idea, except this one is a little more scoped, and there will be multiple of these throughout the year, I think. So I think it's really meant to be: Sean is going to give a bit of a presentation, then he'll pose a few questions, and it's really going to be sort of a discussion.
I
Yeah
thanks
tom,
so
there's
a
link
that
has
a
bit
more
of
a
description
on
that,
but
it
should
be
sort
of
like
a
mix
of
presentation
and
discussions.
So
I
think
it
should
be
a
pretty
good
session.
Oh
okay,.
B
I
talked
to
sean
a
little
bit
and
he
has
the
he's:
creating
a
directive
basically
to
re-weight
re-weight
the
errors
on
the
fly
all
right,
so
that
will
be
it'll,
be
quite
interesting
to
you
to
see
oh
cool
yeah.
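As a rough illustration of the idea (a hypothetical sketch, not Sean's actual implementation; the class name and the re-weighting rule are made up), a SimPEG directive can update the data weights at the end of each iteration:

```python
import numpy as np
from SimPEG import directives
from SimPEG.utils import sdiag

class ReweightResiduals(directives.InversionDirective):
    """Hypothetical sketch: loosen uncertainties on poorly fit data.

    Assumes a single L2 data misfit whose weights come from the data
    object's standard deviations.
    """

    def endIter(self):
        # residual from the most recent model evaluation
        residual = self.dmisfit.data.dobs - self.invProb.dpred
        # made-up rule: never trust a datum more than half its residual
        eps = np.maximum(
            self.dmisfit.data.standard_deviation, 0.5 * np.abs(residual)
        )
        self.dmisfit.W = sdiag(1.0 / eps)
```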
I
Yeah, so I guess on the first point: I'll just put the question out on the spot. Does anyone want to take the lead on the EAGE?
I
For sure. Okay, well, I'll leave that open, and if anyone wants to follow up, feel free to ping me on Slack, and we can get back to Sarah in the next few days.
E
So the methods, the 1D stuff, are kind of partitioned into frequency-domain and time-domain EM, and now we're sort of ready to plan the next step. It seemed like a good idea to bring it in, because the tests have been updated and the tutorials have been completed and added to the docs. So we can kind of come up with a global next step for that, and I'm just waiting for review on the discretize tutorials.
E
So
I
got
some
feedback
from
doug
and
lindsay
a
week
or
two
ago
I
made
some
some
improvements
on
how
we
we
do
the
tutorials,
expanded
them
and-
and
now
I'm
just
looking
for
some
feedback
on
on
that.
So
yeah,
maybe.
A
Yeah,
sorry,
I
remember
I
was
I
I've
looked
to
him
since
you've
updated
him
again.
I
think
it's
definitely
on
the
right
track.
As
far
as
the
to
discretize
information,
a
few
things
is
that
I
I
some
of
the
pages
definitely
seem
very
repetitive
as
well,
even
though
we've
cut
them
down
some
yeah,
at
least
just
a
few
of
them.
E
Yeah
and
there's
there's
a
couple
of-
I
guess,
there's
a
little
bit
of
functionality
that
I'm
hoping
disputize
will
have
before
I
complete
them,
and
one
of
them
is
just,
I
know
it's
basic
and
we
don't
really
use
it
for
em,
but
the
nodal
inner
product
matrix.
E
So
I
I
may
even
just
do
that
for
the
the
tensor
mesh
just
so
that
I
can
put
in
a
complete
sort
of
an
inner
product
example
and
and
call
that
but
yeah
there's
an
and
I'm
looking
forward
to
when
those
the
boundary
conditions
on
differential
operators
is
is
finished.
But
I
know
that's
not
trivial.
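For the tensor-mesh case, a minimal sketch of a diagonal (mass-lumped) nodal inner-product matrix, as an assumption of what such functionality might look like (it was not part of discretize's API at the time): distribute each cell's volume onto its nodes using the node-to-cell averaging operator.

```python
import numpy as np
import discretize
from discretize.utils import sdiag

mesh = discretize.TensorMesh([8, 8, 8])

# aveN2CC averages nodal values to cell centers; its transpose spreads
# each cell's volume equally onto that cell's nodes, giving a diagonal
# approximation of the nodal inner-product matrix.
Mn = sdiag(mesh.aveN2CC.T @ mesh.cell_volumes)

u = np.random.rand(mesh.n_nodes)
print(u @ Mn @ u)  # approximates the volume integral of u**2
```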
E
So
there's
there
is
some
stuff
that
I'd
like
to
to
add
to
it
once
my
wish
list
of
of
discretized
functionality
is,
is
in
place.
A
Good,
it
came
back
to
me
what
I
was
going
to
say
on
the
tutorials.
What
is
everyone
else's
opinions
as
far
as
whether
like,
if
you're
looking
at
tutorials,
if
it's
more
helpful,
to
see
like
in
documentation,
if
it's
more
helpful,
to
see
code
like
bit
by
bit
or
just
big
chunks
of
code
and
then
comments
and
discussions
outside
of
them
bit
by
bit?
For
me,
I
agree.
A
Okay, so then let's see if there's a way that we can break up the bits of code in your tutorials a little bit more.
E
Oh
yeah,
that
would
be.
That
would
be
pretty
easy
to
do.
I
I
definitely
want
to
make
sure
that
there's
a
very
strong
heading
that
says:
okay,
we're
going
to
do
this
separate
thing
now,
because
the
thought
process
was
to
try
to
keep
each
example
very
separate
and
as
long
as
that's
that's
obvious,
there
there's
definitely
a
way
to
have
yeah.
You
know
the
two
three
lines
of
code
and
then
a
few
words.
E
So
instead
of
I
guess
basically
what
you're
asking
is,
instead
of
having
the
kind
of
some
some
information
instead
of
having
it
as
comments
to
to
take
it
out
of
the
code
block
and
then
just
have
it
as
words
and
then
when
we're
actually
running
the
the
lines
of
code.
That's
that's
in
in
sort
of
the
code
box
and
that's
easily
changeable.
A
Sounds
like
a
good
idea,
a
good
plan,
I'm
gonna
I'll,
go
through
and
provide
a
few
more
detailed,
point-by-point
comments
as
well.
Soon.
A
There are definitely a few things on there. When looking through the code myself, I think they're all things that can be done. Things like, okay, well, first of all, the simulation classes should be renamed to go along with the convention that we have within SimPEG already; I mean, that's an easy change, an easy fix. Then, looking at how the receivers and sources are structured.
A
I
think
we
can
get
away
with
just
using
the
ones
that
are
there
without
I
mean
maybe
possibly
adding
a
little
bit
more
functionality
to
the
classes
that
already
exist,
but
I
think
it
should
be
pretty
easy
to
make
use
of
the
existing
sources
and
receivers,
at
least
for
the
frequency
domain.
One
that'll
be
really
easy
and
the
time
domain
will
likely
require
a
little
bit
more
thought
as
far
as
unifying
them.
D
I
agree,
we
don't
have
we
don't.
I
agree.
D
I can have a look at the SIP example, because it's more convoluted, so I wasn't sure I could really break it apart as a mapping. In general I thought I could do that, but when I was implementing it, it wasn't distinct enough to be separated. So currently I have an internal function that does a kind of mapping-like operation, but yeah, I'm pretty sure we could do that; I just wasn't sure implementation-wise.
A
Okay, I'm not sure if there's a good way to take advantage of them within... I mean, they're certainly not supported in the current frequency-domain codes, right? All the conductivities are assumed to be constant throughout every frequency that you give a simulation, right?
D
Yeah, I thought about this a little bit. For instance, let's say you're using Cole-Cole: having separate properties for eta, tau, and c, just thinking of those as separate properties. That's how it's implemented now, and I think it's a bit better than generating a mapping.
E
It kind of seems like one of those things where you want to add a detail and then it balloons into a more global, SimPEG-type project, and I really want to avoid that.
E
Yeah
I
mean
I
also
thought
that
you
know
the
cold
coal
model
in
some
ways
is.
It
seems
like
it's
a
very
widely
used
model
for
for
the
ip
effect,
but
I
I
don't
think
it's
necessarily
the
only
model
that
you
could
use
to
describe
that
effect.
That
would
explain
the
data
over
that
range
of
frequencies.
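For context, the Cole-Cole model being discussed describes a complex, frequency-dependent conductivity. One common convention (the Pelton-style form; as noted, one of several parameterizations in use) looks like this:

```python
import numpy as np

def cole_cole_conductivity(frequency, sigma_inf, eta, tau, c):
    """Complex conductivity under a Pelton-style Cole-Cole model.

    sigma_inf : conductivity at infinite frequency (S/m)
    eta       : chargeability, 0 <= eta < 1
    tau       : time constant (s)
    c         : frequency exponent, 0 < c <= 1
    """
    iwtc = (1j * 2.0 * np.pi * frequency * tau) ** c
    return sigma_inf * (1.0 - eta / (1.0 + (1.0 - eta) * iwtc))

# the IP effect appears as a small, frequency-dependent imaginary part
freqs = np.logspace(-2, 4, 4)
print(cole_cole_conductivity(freqs, sigma_inf=1e-2, eta=0.1, tau=1.0, c=0.5))
```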
D
Yeah, I think the first step is maybe just expanding the current assumption from real conductivity to complex. That's probably the first step. Then, if you want to parameterize, we can have another mapping that could parameterize it. But now you need to connect the different frequencies, so it has even...
E
More
complexity,
I
guess
soggy
with
your
research.
I
guess
it's
been
a
while,
but
you're
you're
kind
of
recovering
a
an
independent
like
frequent
you're,
you're
you're,
going
to
recover
a
a
complex
conductivity
for
each
frequency
and
then
in
your
regularization.
H
If I can make a comment here: that trajectory of, excuse me, getting into the complex domain for conductivity and then having options for how we want to treat things, because there really are quite a few ways that people tend to think about how to parametrically represent that complex conductivity. Cole-Cole parameters are one, but there are also a lot of others in use, so having that flexibility would be good.
A
It
almost
seems
like
we
need
a
special
property.
Okay,
we
have
these
like
the
way
that
we
define
these
the
things
right
now
right,
it's
like.
Oh,
this
is
a
density
model.
This
is
a
this
is
a
you
know:
scalar
they're,
all
scalar
properties
right
now,
so
it's
like.
We
need
an
extension
of
that
class
to
be
able
to
have.
E
You could always put in that real conductivity as a property, either as your infinite-frequency or your DC conductivity, and then you could have a second parameter which is basically the difference from that, basically the IP effect.
B
Does that make sense? Yeah. And one thing that we should talk about too, that I started doing and that we haven't talked about, is that right now only a simulation has a mapping, but in the case of tiling the misfit should have a map as well. It could have a data map and it could have a model map, right? So basically, after the simulation is done creating the fields or whatever, doing its thing, the misfit knows how to transform this into something a little bit more high-level. That's something we need to talk about at some point: having mappings on objects other than the simulation, basically.
D
Don't
explain
a
little
bit
more?
Why,
like
a
like
a
misfit
knows
the
simulation,
which
knows
the
mapping,
and
why
do
we
need
like
a
a
separate
mapping
for
misfit
function
like
it's
a
sort
of
like
propaganda
simulation?
I
guess.
B
Well,
in
the
case
of
tiling
right
we're
doing
a
projection
from
a
local
mesh
to
a
global
mesh,
but
the
simulation
cannot
know
about
that
projection,
because
otherwise,
when
you're
completing
your
sensitivities,
you're
basically
computing
sensitivities
for
the
global.
The
global
match
right
because
it's
doing
the
it's
doing
inside
the
the
j
call
it's
doing
the
derivative
of
the
mapping.
So
basically
you
would
be
projecting
right
away
to
the
to
the
global
mesh.
B
There's no way around this, because if the projection is inside the mapping of the simulation, then everything is global: global map, global mesh. I don't know if that makes any sense to people, but that is how it works, because it has the mapping in it, right?
A
Each simulation has its mapping, yeah.
B
Yeah, but if the misfit has a mapping to do that reverse projection, then when the global misfit is asking for a dpred, you can just apply the map and pass the model to the simulation. So basically, the misfit is sending a small vector to the simulation, and the simulation only ever sees, you know, its own local mesh.
D
Conceptually, no. I sort of understand why you're doing that, because conceptually our local simulation needs to know the mapping, so it can map the global model to your local mesh. As long as your local simulation knows that, there's no need to pass another mapping to a misfit function, because we're passing the simulation to the data misfit function. But I guess your problem is in parallelization; it's more because that's kind of heavy, so I think it's more like a, no...
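A toy sketch of the trade-off being discussed (not SimPEG's API; all names and sizes here are made up): if the misfit owns the global-to-local projection, the chain rule keeps the simulation's sensitivities purely local, and only small local vectors cross the boundary.

```python
import numpy as np

rng = np.random.default_rng(0)
n_global, n_local, n_data = 12, 5, 3

P = np.zeros((n_local, n_global))            # projection: global -> local
P[np.arange(n_local), rng.choice(n_global, n_local, replace=False)] = 1.0

J_local = rng.standard_normal((n_data, n_local))   # local sensitivities only
dobs = rng.standard_normal(n_data)

def local_dpred(m_local):
    return J_local @ m_local                 # stand-in forward simulation

def misfit_and_gradient(m_global):
    m_local = P @ m_global                   # misfit applies the mapping...
    r = local_dpred(m_local) - dobs
    # ...and maps the local gradient back to the global model space
    return 0.5 * r @ r, P.T @ (J_local.T @ r)

phi, g = misfit_and_gradient(rng.standard_normal(n_global))
print(phi, g.shape)                          # scalar misfit, global gradient
```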
A
Yeah, one other thing I'd like to talk about is some boundary condition stuff. I'm working on getting the averaging operators that we would need. Essentially, everything has a corresponding boundary integral: normally when you're doing the weak form, we're talking about volume integrals over the mesh, and things are defined at different locations.
A
So now we also have to think about boundary integrals on those same kinds of operations, and some of them are not as easily defined as the others. Some of the boundary integrals are harder to form with Kronecker products, just because of the way the structures work: if you think about edges, some of the edges are shared by multiple faces, multiple different types of boundary faces, and it can require a little bit of thought sometimes. I've gotten some of those implemented nicely.
A
I
have
a
few
there's
a
few
that
we
form
boundary
integrals
defined
for
edge
curls
and
face
divergence
right
now,
those
kind
of
boundary
intervals
they
should
be.
They
should
work
on
any
other
structured
meshes
generally.
How
I
have
them
set
up,
so
I
have
them
in
terms
of
just
kind
of
you
know,
face
I've
basically
broken
broken
them
down,
sometimes,
operators
that
we
have
for
the
edge
like
the
inner
product,
matrices
that
we
see
from
the
weak
forms.
A
I
it
should
be
easy
to
extend
it
to
the
tree
mesh
as
well.
The
other
question
that
I
have
is
that
so
when
I,
when
we're
coming,
I'm
coming
up
with
some
of
these,
like
default,
matrices
that
we
can
assume
so
I'm
trying
to
put
in
the
tools
for
people
to
implement
boundary
conditions
on
their
own
as
well,
but
I'd
like
to
have
a
few
of
them
in
here
that
are
like
well
use
this.
A
I
kind
of
had
a
question
on
naming
conventions
for
them,
so
there's
in
general,
I
was
kind
of
people
in
terms
of
robin
because
for
most
of
them
the
robin
can
support
both.
You
know
robin
mormont
and
directly
conditions
like
the
same
implementation
can
support
all
three
of
them.
So
I
just
kind
of
leave
it
like
that.
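For reference, the Robin condition being referred to is the general form that contains the other two as special cases:

```latex
\alpha\, u + \beta\, \frac{\partial u}{\partial n} = \gamma \quad \text{on } \partial\Omega,
\qquad \beta = 0 \text{ gives Dirichlet}, \quad \alpha = 0 \text{ gives Neumann.}
```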
A
So if we're applying boundary conditions to that, should it be named after the face divergence, or should it be named after the cell gradient, which is what we're approximating? And the same kind of thing applies for, okay, when we do the nodal gradient, we're actually doing a weak form of the edge divergence, even though that's not really defined. So should it be named after an edge divergence, even though it's not an operator we have, or should it be named after the nodal gradient?
D
Yeah, it's a subtle point, but isn't it that there's no gradient in that weak form? If the variable is defined at cell centers, there's no gradient; you're approximating it, yeah.
I
We're applying the boundary conditions too, so if we called it something like face-divergence-boundary-condition-Robin, or maybe not saying the boundary condition, I don't know, something like that, kind of, yeah.
D
Tempted
to
go
gradient
because,
literally
like
a
face
divers,
you
don't
need
any
boundary
condition
like
you,
that's
perfect.
It's
well
defined,
so
the
problem
is
a
transpose,
and
so
I
think
I
would
I
don't
know
like
here,
focusing
on
the
physic
like
a
physical.
What
we're
actually
physically
doing
seems
a
bit
better
because
gradient
is
like
not
well
defined,
so
you
need
some
known
parameters
to
set
that
up
so
and
that's
basically
what
we're
doing
so.
I
guess
I
would
tempt
it
to
like
use
some
sort
of
grad
there
rather
than
her.
A
Good
answer,
because,
if
we
so
if
we
kept
it,
the
convention
of
naming
after
what
it
physically
represents
a
photo
would
be
okay,
there'd,
be
a
cell
gradient,
one
like
a
cell
gradient,
weak
form,
robin
whatever
there'd,
be
a
edge
divergence,
weak
form
and
there'd,
be
a
face.
Curl
weak
form,
so
these
are.
I
That makes sense. I'll reverse my first statement and instinct on that, because I do like that. Basically, if you were writing that down, if you're actually starting from the derivation, which is probably where you're starting from if you are interacting with this hunk of code, then that actually seems quite appropriate.
D
Cool. Oh, and Joe, another thing: in a flow problem, because there are a lot of unused cells, you can actually define the boundary condition inside of the cells. Suppose you've got a river; then the head is fixed, so people use either a Neumann condition or Dirichlet in that domain.
D
So
this
is
something
like
if
you're
thinking
about
kind
of
generalizing
discretize
for
general
use.
We
do
need
like
kind
of
that
types
of
stuff,
and
I
think
it's
not
very
hard.
It
kind
of
did
it
at
some
point,
but
I
wasn't
sure
how
to
really
generalize
in
a
way
that
we
can
basically
use
the
same
operator
that
we
have
now.
A
Yeah,
I'm
kind
of
at
this
point
I'm
thinking
of
just
generalizing
it
to
the
meshes
themselves
and
assuming
that
the
mesh
is
defined
along
the
boundary.
What
I'm
imagining
for
that
kind
of
problem
is
that
you
would
design
a
mesh
like
you,
maybe
see
design
an
unstructured
mesh
that
would
follow
that
or
design
a
curved
linear
mesh.
That
would
you
know
where
the
boundaries
of
the
mesh
align
with
the
boundaries
of
your
domain.
A
I
mean
that's
kind
of
what
we're
artificially
doing
when
we're
opposing
the
boundary
condition
on
surfaces
in
like
in
dc,
when
we
said
the
frequently
remaining
enter
em
when
we're
setting
the
connectivity
really
low
up
there,
we're
creating
like
an
artificial
boundary
condition
there.
J
So here's just kind of our model. The black lines are, I guess, just fault traces, and then these iso-surfaces that we're seeing right now are some of the positive density anomalies that are there. So it's kind of interesting that some of them are following the edges of these faults quite nicely, and then, if we look at the main negative anomaly, it's kind of this one.
J
That's there, and that could be related to melt of some sort; that's kind of the heat source for all that hydrothermal activity. So some of the next steps will be trying to get some good geological constraints on the size and the depth of that basin right there, and trying to see, just from modeling, if that can explain most of the low that we're seeing in the data. So, just kind of playing around with some of that stuff. Kind of fun.
B
Looks
like
you're
getting
some
nice
dips
in
there.
We
should.
We
should
re-implement
the
directional
direction
of
the
rib
dubs
to
to
reinforce
it.
B
No, I mean in your model, things are already dipping south, sorry, north. They seem to be dipping north-northeast, right? It could be, yeah. So we should try to, you know, do the gradients in the inversion just to reinforce those dips.
J
I don't know if you guys can see the cursor, but it's just south of Clear Lake itself, and it's an area called The Geysers, which is kind of one of the largest geothermal fields, I don't know if it's in the country or in the world, but they have a series of like 13 or 14 big geothermal power plants there that have been active for 20 or 30 years at least, maybe a little longer.
A
I'm
assuming
they're
are
they
just
so
they're,
not
like
they're,
just
taking
advantage
of
what
they're
naturally
occurring
geothermal
waters
there
to
produce
power.
J
So I guess our big research question is whether there's just kind of an intrusion in there that has mostly solidified and is still giving off all this heat, or whether there's actually liquid, like some partial melt, down there, because that has a big impact on, I guess, the longevity of the resource and the hazards that could be associated with it.
J
Seeing that in the future, yeah. No, it's a cool project, and there have been lots of different seismic studies done, but most of those are, well, just big-earthquake tomography, teleseismic-type studies, so they're not super high resolution for looking at a fairly small area like this; they're more for large-scale crustal structure in this area.
J
That's
a
good
question.
I
mean
if
it's
solidified.
I
think
I
don't
think
we're
gonna
see
a
big
contrast
in
resistivity.
J
Some
good
interpretation
from
that,
but
I
guess
if
nothing
else,
it'll
definitely
help
us
if
we
can
combine
that
with
our
gravity.
It'll
give
us
a
better
idea
of
is
our
low
density
stuff
all
at
the
surface,
or
is
there
maybe
something
more
at
depth
to.
D
Oh, a quick update on my end. I worked on that airborne project and it has finished; the data will be open soon, in a week. And they actually asked, do you want to share the code and upload it to their open data portal? So I said, okay, I'm kind of interested, and then we can have a small document that it can point to. So yeah.
D
I
was
hoping
to
kind
of
have
1d
code
sort
of
ready
and
then
point
like
that:
1d
code
and
a
bit
of
documentation
to
their
open
data
portal.
So
that
could
be
an
interesting
sort
of
place
that
we
can
kind
of
advertise
simp
so
yeah
anyway,
that
just
an
update
and
anyone
interest
in
that
data.
Now
you
can
use
it
so.
D
One thing I was hoping is to be connected with one of the Water Board people, since they were working on that part, having a small app that they can use to show kind of what SimPEG could do for the groundwater problem in California. That's one thing. The other thing I was curious about is the license of the data, because I'm not sure who can actually use that data; it's a pretty complicated format, and sometimes it's not even a CSV.
D
So
I
was
hoping
if
you
can,
even
like
some
small
portion
of
that
data
into
simple
format
like
a
csv
and
then
hold
in
our
google
cloud,
or
so
we
can
easily
like
download
or
even
we
can
just
use
their
data
portal
download
and
put
it
in
our
page.
I
wasn't
exactly
sure,
that's
possible,
so
I
was
going
to
ask
them
if
that's
possible
yeah,
I'm
happy
to
do
that.
That'll
be
great.