From YouTube: SimPEG Meeting July 21st
Description
Weekly SimPEG meeting from July 21st, 2021
A
Welcome back to the meeting. Let's move on with the agenda and some quick updates as well.
A
As I've been going through the cross-gradient work, it keeps coming back to something we discussed a while back, and I'd like to get people thinking about it: updating how we're doing stopping criteria in the inversion, how we're handling that, and how we're making it accessible for people to implement their own versions in a more extensible way. Right now they're handled in the directives, but it might be more flexible to handle them separately, on their own. I'd like to see if people have some thoughts on that.
A
Maybe we can sketch out some ideas and come back to that later. One of the other things you brought up, Dieter, is the error and how it's assigned; we can chat a little bit about that.
A
I think that'd probably be a good conversation to have, maybe first after we're done going through the updates. The other item I'll bring up right now: let's do the meetup and barbecue grill for the Vancouver people. Sunday's good, so Sunday the 25th. For lack of a better time, let's say we meet around three, we can start grilling, and have dinner ready around five-ish.
D
We usually play at Spanish Banks West. Oh, okay, so that's not it; we play until about one.
A
I don't know what people prefer; I'm still getting into it and finding out what works best. I think Jericho would probably be a nice location, at least a go-to place. I can always bring some things over in my vehicle, then come back and bike back over if there's rough parking or something.
A
So, three o'clock at Jericho Beach, maybe. I don't know, does the east side work better, or does the west side work better?
E
We're usually on the west side as well. What is that, over by Locarno? Actually even closer; the yellow area is usually where we are.
E
I don't know, I think Ashley just has a sentimental thing about that place. I don't know what it is, but yeah.
B
Well, yeah, there's a big public parking spot right next to the sailing club, the Jericho parking lot.
D
Yeah, so I've pretty much finished the discretize stuff; a review is needed, and I think there are maybe some finishing touches and then some global-scale formatting of how we really want it to look and have people interact with it. But I would say 99% of the content and the docstrings are actually finished now. I tried to get in as many images and quick examples as I could.
D
So that's been filled in; it's pretty comprehensive, so I'm quite happy. In the meantime, I've just started on the SimPEG utils, which seemed like a pretty good place to start. I know we wanted to actually go through the SimPEG utilities, rename functions appropriately, and define their input arguments appropriately.
D
So I've got a branch now. I branched off of main in SimPEG, and I've just started the process of filling in docstrings, so I might as well do that.
D
I think I'll want a bit more of a plan for some of the other classes, seeing as people are probably actively working on areas of SimPEG, and I don't really want to be moving stuff around or working on those files if they're actively being changed and people are making new functions and that kind of thing. When we get to that point, I'll get some advice from the group, but that's more or less what I'm working on right now.
A
Okay, that'll be great. I should have time to review it this week, at least on the syntax end. I think documenting the utils functions is a great place to start; the classes are going to be a lot more difficult, because we have to deal with the properties stuff as well.
A
There's a really nice package for that: it takes the suggested type hints that come with the newer Python versions and actually enforces them automatically. Lindsay, do you remember the name of the package? Are you talking about pydantic? Yes, that one.
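The kind of runtime type enforcement mentioned here (pydantic is one package that does this) can be sketched with the standard library alone; the decorator and class below are purely illustrative, not SimPEG or pydantic code:

```python
from dataclasses import dataclass
from typing import get_type_hints


def enforce_types(cls):
    """Minimal stand-in for pydantic-style validation:
    check annotated attribute types after __init__ runs."""
    original_init = cls.__init__

    def __init__(self, *args, **kwargs):
        original_init(self, *args, **kwargs)
        for name, expected in get_type_hints(cls).items():
            value = getattr(self, name)
            if not isinstance(value, expected):
                raise TypeError(
                    f"{name} must be {expected.__name__}, "
                    f"got {type(value).__name__}"
                )

    cls.__init__ = __init__
    return cls


@enforce_types
@dataclass
class InversionOptions:  # hypothetical class for illustration
    max_iterations: int
    tolerance: float


opts = InversionOptions(max_iterations=10, tolerance=1e-6)
print(opts.max_iterations)  # 10
```

Passing `max_iterations="ten"` would raise a `TypeError` instead of silently storing the wrong type, which is the behavior being discussed.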
E
Okay, about that: I've been at my wits' end with these two errors, but I think I finally found them. I just don't quite understand them. The errors seem to be related to the data that's being pulled from some repository.
E
So the one that's failing, yeah, I think it was the resolve one. There's a line in there where it sets a path from resolve, and it looks for a value for some item, and it says the data object doesn't have that value.
E
So it's not pulling something correctly from some repo. And the size-utils one that keeps failing too is actually just a link check: it's not putting together the path properly, so it's not downloading the data, and it seems to be failing on the tests. Then I went through a bunch of the other PRs to see if this was happening to everybody, and it seems pretty common across all the PRs that those two are failing.
E
So there might just be a path check. I'm getting into that right now, just to see what path it's trying to build, and I was going to check with you guys whether that's where it should be getting that data. I'm thinking that's all that's failing: it just tries to pull it, it doesn't load, and then it doesn't have these keys in it. Maybe one thing...
F
Oh sorry, I didn't mean to interrupt. One thing you could do, John, and Seogi, jump in if you agree or disagree here: this is the Bookpurnong example, right? I think you could just remove those examples from the docs for now and start a new pull request with them, and we can fix them later. We did those in 2017, when we didn't have the EM-1D code, and as we bring in the EM-1D code...
F
...that's the code that should be used for those examples, so we might just replace them. In some ways it's nice to keep them, because it's what we showed in the paper, but we can archive them somewhere else. So I wouldn't bother spending a bunch of time debugging that, when it's not your fault that the paths broke.
E
Okay, cool, that's a good relief, because I'm pretty sure all the EM-1D stuff is working perfectly and it's in there fine; it's just these little things that are blocking the pull requests. But yeah, I'll just see about removing it or what it does. The other one, testing the links, is in the docs for the Sphinx gallery stuff, so that one might be a little bit more important. I'll pull that one together and I'll just post about it.
E
I'll check that out, and I should be back on the parallelization stuff again in a couple of weeks, trying to get my cluster set back up. But we're not up to fire code in the new office and no contractors want to take it, so we can't have anybody upgrade our server area.
E
I'm a little bit mad, but I've got to wait a little bit longer for the parallelization stuff on the node.
A
As for my stuff, I've made decent headway in the cross-gradient review process and just tested a few things. Quick update: from what I saw, it was working, so long as you had the correct, full implementation of the second-order term.
A
Yes, it was the correct version of it. However, the Hessian is often not symmetric positive definite when we're solving with the cross-gradient term using conjugate gradient, right?
A
In the inversion, I added an option in there, an approximate-Hessian flag defaulting to true, so that if someone did have a way of using the full Hessian, say solving with BiCGStab or something to get the step direction, they could use the full Hessian. But I was just having better luck with the symmetric approximation.
H
Yes, so I will push a PR, maybe in the next few weeks, with improvements for PGI: both quality-of-life improvements, like some plotting functions for the GMM instead of having a full script in the tutorial, and a couple of numerical stabilizations for when you want to use the full GMM rather than the least-squares approximation. I have come up with a couple of numerical stabilizations for that, and also better defaults.
H
Sometimes you don't specify all of the ingredients. For example, I had Mike at the USGS getting his first models with gravity and PGI, and as a beta user he showed me a couple of places where we could use better defaults. I had just never noticed, because I was always specifying the specific arguments, but you don't need to. So I will push that soon. And then, oh yeah, maybe just two conversations, just to put them on the radar for now.
H
I know that we want to talk about the standard deviation with Dieter first, but just to mention them.
H
I had a bit of a misadventure lately, something very stupid in my own coding. I think it's a good story, because I think I can consider myself an advanced user, and it was still something very stupid; maybe we need something more bulletproof for it. The story is that... and Thom figured it out for me.
H
So thanks again, Thom, for that. For indexing active cells, I was passing a vector of zeros and ones, and I didn't realize that the zeros and ones were integers instead of booleans. So I was always referencing cell 0 and cell 1 instead of True and False, and it took forever to figure out what was happening.
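The pitfall described here can be reproduced in a couple of lines of NumPy (the array values are made up for illustration):

```python
import numpy as np

model = np.array([10.0, 20.0, 30.0, 40.0])

# Intended mask: cells 1 and 3 are active.
active_int = np.array([0, 1, 0, 1])    # integers 0/1
active_bool = active_int.astype(bool)  # booleans

# Integer "fancy" indexing repeatedly picks cells 0 and 1...
print(model[active_int])   # [10. 20. 10. 20.]
# ...while boolean masking selects the intended cells.
print(model[active_bool])  # [20. 40.]
```

Both calls run without error, which is why the bug is so hard to spot: the integer version silently returns the wrong cells instead of raising.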
H
So yeah, I was just not passing the right zeros and ones. Looking at the utils, we always output boolean arrays, but right now the active cells could be either integers or booleans. So I will maybe take the side of just accepting boolean arrays for now and putting a check on that. That's the one thing I could probably come up with a PR and a test for.
H
But that's only if nobody is really adamant about wanting to use integer arrays for indexing; that's the thing.
H
Yeah, no, I can see what the index is. It was just code that I wrote quite late on Friday night; I passed something and I put integers instead of booleans, and it took forever to figure out what was going on. I understand you can use both; I just wanted to tell that funny story and see... yeah, I usually always use booleans.
B
I'm okay with just enforcing bool. To be honest, it's cheaper anyway to store an array of bools than integers.
F
Well, if it's a really long array and you only have a couple of values, you probably should be using some other sort of mapping, or your mesh design is weird. One of the things I encountered that was challenging with using integers, and this is why I think booleans are better, is that you don't actually know what the size of the mesh is if you use integers.
F
So you can actually end up making a mess with integers, but with booleans it has to be the same length as the number of cells, so I think you're a lot safer with booleans. You can always go to integers, but you can't necessarily go back, and that would be one reason I would stick with booleans as what we enforce.
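The one-way conversion mentioned here can be shown concretely; the mesh size and active cells below are arbitrary:

```python
import numpy as np

n_cells = 6
mask = np.zeros(n_cells, dtype=bool)
mask[[1, 4]] = True

# Booleans -> integers is always possible:
indices = np.flatnonzero(mask)
print(indices)  # [1 4]

# ...but the integer indices alone don't carry the mesh size:
# to rebuild the mask you must supply n_cells separately.
rebuilt = np.zeros(n_cells, dtype=bool)
rebuilt[indices] = True
print(np.array_equal(rebuilt, mask))  # True
```

With a boolean array, `len(mask)` is always the number of cells, so a length mismatch is caught immediately; with `[1, 4]` there is no way to tell whether the mesh has 5, 6, or 600 cells.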
H
Yeah, so that's kind of the funny story. There are also a couple of conversations, just to get the ideas out there. We did discuss them a bit with Lindsay, on the GIF channel rather than the SimPEG one. One is multipliers in the objective function; I also talked about that with Joe yesterday.
H
Right now we have two types of multipliers, betas and alphas, when all we actually need is one multiplier for each term of the objective function. We were discussing that with Joe yesterday because of the cross-gradient. The cross-gradient itself is fine, but the way we approach the joint problem, the definition of the objective function, differs between PGI and the cross-gradient: PGI has one regularization term that includes everything, with one beta, while in the cross-gradient the regularization is defined in two parts, with one beta for the first physical property and a second beta for the second physical property, plus the coupling term. And so right now there is a lot of...
H
We were seeing with Joe that we have a lot of directives that we will have to keep in both, but specifying that some are for cross-gradient and some are for PGI, for joint inversion. But the incompatibility is not really in what they do.
H
It's the way it's formulated, and as we move forward with more joint inversion schemes, we might have to think about a more general formulation of the objective function: just one objective-function term each, one multiplier each, and then you select which ones you want to couple, depending on your strategy. In fact, right now the two multipliers interact, with beta multiplying all the alphas, etc.
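A minimal sketch of the "one multiplier per term" idea discussed here; the class, names, and toy terms are hypothetical, not SimPEG's actual objective-function API:

```python
class WeightedObjective:
    """Combine objective-function terms, each with its own multiplier,
    instead of a separate beta/alpha hierarchy."""

    def __init__(self):
        self.terms = []  # list of (multiplier, callable) pairs

    def add(self, multiplier, term):
        self.terms.append((multiplier, term))

    def __call__(self, m):
        # Evaluate sum_i lambda_i * phi_i(m).
        return sum(w * f(m) for w, f in self.terms)


# Toy scalar terms standing in for data misfit and regularizations:
phi = WeightedObjective()
phi.add(1.0, lambda m: (m - 3.0) ** 2)  # data misfit
phi.add(0.5, lambda m: m ** 2)          # smallness on property 1
phi.add(0.1, lambda m: abs(m))          # coupling-like term

print(phi(2.0))  # 1.0*1.0 + 0.5*4.0 + 0.1*2.0 = 3.2
```

Selecting which terms to couple, or cooling individual multipliers, then becomes a matter of editing one flat list rather than juggling betas that multiply alphas.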
D
Yeah, I looked through the utils that you made for PGI, but I decided not to play around with anything yet, so I'd take any suggestions you have, and also just an explanation of what some of the stuff does, because I'm not really familiar with it.

H
We can definitely do that. Okay, yeah, sure, let's ping each other.
A
Okay, any other quick reports before we go on?
I
Talking about that, yes. I don't know, shall I share the link again? Maybe hold on, I can click share. Can you see this? Yeah. So, Thibaut answered, I think, on Slack, and probably others have seen it. It's just this question that came up in discussions about my code at the company, about why it was one way or the other. I think I had it once with Seogi about a year ago, and I just thought I'd ask what the reason was for...
I
Let me silence that. ...for having just the sum, and not the square root of the sum of the squares, as it is in uncertainty propagation. I guess it would be a very easy fix, nothing big. I was more wondering: is there a reason? If there is, I would be interested in what it is, and if not, whether you would like to change it. I plotted it, and the difference is really not big; I would not expect big changes in an inversion.
I
It's really just in the area where the responses approach the noise level, so it would have a bit of a harder kink in the weighting. Well, the weighting would be one over that.
H
So the reason, as Lindsay brought up, is that it's actually a statistical formulation of additive random variables for the noise. What we consider here is that we have a noise floor, and then we have a noise relative to the amplitude. If you consider those two noises independent, each with its own standard deviation, and you add them together, then the standard deviation...
H
...of that additive noise is the square-root estimate. That's just how the statistics work out, because what's additive is the variance, and the variance is the square of the standard deviation. So it's just statistics, and now that it's been pointed out, I kick myself for not saying it earlier.
H
This is the correct statistical formulation, with the square, and I believe the reason we use the simple addition of standard deviations might just be because that's how the UBC code did it, and it lingered. So I imagine the reason is more historical than based on any mathematics or strong argument.
D
It seems like for a lot of our problems, and correct me if I'm wrong, we sort of choose our standard deviations to be either dominated by a floor or dominated by a percentage, and I don't know how often the floor and the percentage have roughly equal weight over the assigned uncertainties of almost all the data.
H
Yeah. For example, for EM or DC, some people say: I'm going to set my noise level to a certain percentage of the data, or throw out anything below some noise floor. But the problem is for the data whose values are close to your noise floor; that's where the kink is happening, not at the very high values.
D
Yeah, I guess one school of thought is that you have an independent floor noise source and an independent percentage source, in which case, yes, you do the square-root combination; the other is saying the standard deviation is the floor plus the percentage.
J
Can I make some comments here? This is kind of where... I mean, we recognized years ago that what we were doing was not exactly correct, in the sense that we were just taking the standard deviation of a floor and of the relative amplitude and adding them together. But we never found a case where that kind of thing seemed to make any particular difference, so it was simple and we just left it like that.
J
This is actually interesting, and it also opens up the potential for looking at how we actually treat errors when we have complex data: sometimes your measurements are real and imaginary, or your measurements might be amplitude and phase.
J
Which is more fundamental, how do you go back and forth, and how do you really keep track and make sure that you're not throwing a curveball into something? So in your case, Dieter, what was the motivation for looking at this? Was there some flag that came up and said, wait a minute, this is not quite right?
I
Right, it's just that the company I've worked with has this joint inversion framework where all the connection between the different methodologies, the different data, is done through the standard deviations, the weighting. That's why they are very careful that these things are done correctly, to assure that the weighting of different parameters and different physical methods is correct. They are very focused on that, and they told me to change it. So I changed it in emg3d, but that means it's now different from SimPEG.
That's, I think, why it came up. I talked with Seogi about it when we tried to incorporate emg3d and SimPEG, because he said: I compute the misfit differently. Well, there are two things: one is the standard deviation, and the other, which was my next step, is whether we have complex data or not and how we compute the misfit from it. But yeah, that's how it came up.
H
Yeah, no, I agree with you. At this point I would be all for changing to the mathematically correct noise estimation, because I don't think we have any good reason other than historical for keeping it the way it is, and we see that it doesn't make much difference except for the intermediate values. I don't think it's good that we're overestimating errors on intermediate values that can still bring a lot of information, especially in that region.
D
Yeah, I guess I generally compute it outside of that misfit. Instead of using keyword arguments to say this is my floor and this is my percentage, I tend to create a vector with my uncertainties outside of that and then just assign them directly, in which case it wouldn't change at all.
H
Yeah, but it also depends on the way you build your vector. It's good to at least know that, statistically, when you add independent random variables, the correct standard deviation estimate is the quadratic one; after that, there are different cases, for sure.
H
But I will say that when we create the noise within the utility in SimPEG, we should have at least a statistically correct estimation of the noise when we put in a noise floor and a relative error. And it comes down to the conversation we had before with Joe, when we changed the meaning of standard deviation within SimPEG, because what we called standard deviation was not actually the standard deviation.
K
I'm not sure I agree with that. Okay, maybe there was a little bit of error in how we defined it, but the definition of standard deviation was the sum of the floor plus the percentage, and I think that's actually fine.
H
When I said that, I was more referring to how, previously in SimPEG, we had that std argument, and it was actually the percentage of noise.
H
But if you use a keyword argument, I would believe we need to do a good job at handling this keyword argument.
G
Yeah, I've observed an interesting phenomenon related to these problems: the data residual maps are not always random; sometimes we can still observe some coherent features. Even when I use synthetic models and exactly the standard deviation of the noise, I can still observe some coherent features, patterns, on the residual maps. So what I was thinking is, maybe we need some more advanced method to estimate the noise.
G
Maybe we can iteratively update the data covariance matrix in the inversions until we obtain completely random residual maps. That's what I was thinking.
B
Yeah, it's kind of related to what Sean is doing, right? Re-weighting his uncertainties. Yes.
H
If we deal with potential fields and so on, we get one outcome out of that inversion, but there are still many models that would fit your data equally well with the same target. You could change your target, which is what you would get if you were using some stochastic method to obtain a bunch of realizations, as for non-linear problems.
K
So there seem to be two levels. One is, even just with real-valued data, what Dieter was showing: either we square, then sum, then take the square root, or, as we usually do, we just add whatever positive floor plus some sort of percentage. So there are two options: how do we actually even test which one should be our default? I think we need a good default, and we're not actually sure which one is the better default.
H
Because, as Sean was writing too, more options is sometimes less. If you put in something like "do you want the noise estimation UBC-style", which is statistically incorrect, it's another flag, another default parameter that you have to change, and we don't really have a good one.
D
I almost want to add it as a utility function and just say: give it a vector of data and it'll compute the uncertainties, but then leave the class itself requiring you to manually give it the uncertainties.
K
Right, I'm actually wondering: have you actually seen literature that uses this as the uncertainty? Because, as far as I remember, it's mostly the other one. Maybe I'm very biased toward the UBC style, but I've read many inversion papers and none of them actually used this kind of definition.
I
I looked more at the uncertainty-propagation literature, and that's where everything shows you should add it like this. Okay.
I
I think I have much more of an opinion on having the misfit separate the real and imaginary parts than on having a different standard deviation, but I think that's probably an entirely different topic for discussion. In my opinion it's not physical to not treat them as one value to get the misfit, but that's probably almost a philosophical question.
K
Is there any noticeable difference in doing so? If not, then in practice it doesn't really matter, so keeping whatever mathematically correct form might be the better way to go, I guess.
I
My personal thought on that: I would probably prefer the more statistically relevant one, and, as you just talked about, most of the time it probably won't make a difference, especially if you're using one or the other. If you're just using relative noise as your assignment, or just a floor, it doesn't matter; it's exactly the same. It's only when you've got both that it makes a slight difference.
B
Because you're splitting the real and imaginary parts, you're separating them, right, but it was a full complex value.
K
On your comment about real and imaginary: I think I'd agree with you if the real world were a bit idealistic and you reliably got both the real and imaginary components, but I think it really depends. Some of the frequency-domain systems actually measure in-phase and out-of-phase, not the time series, which was sort of a shocking point to me.
K
They do actually measure that, so there's a potential that the error structure depends on the in-phase or quadrature phase, and sometimes, for some of the ground systems, they get much larger error in the in-phase compared to quadrature. Sometimes we just need to throw away the in-phase for many reasons, so I think that's why we're splitting that up and considering them as separate entities.
I
Well, I think it would also be interesting to implement the option, even in a branch, and then we could do some testing to see what the actual effect is. For measurements that are measured as amplitude and phase, where you then derive real and imaginary from amplitude and phase, you definitely should treat them together, because they are clearly linked.
K
Right, and I'm happy to actually introduce treating that as complex data, because numerically it's actually cheaper: rather than solving twice, we just need to solve once. With separate real and imaginary parts we need to do the back-substitution with the factored system twice, whereas if you just consider it as complex data you only need to solve once. That matters when the problem gets bigger.
H
The way it works in SimPEG right now, and also the way the noise is treated when we write the UBC file, assumes that you have unbiased noise. We still only know the standard deviation of our noise, everything added together, but it assumes zero mean.
H
What you would do in SimPEG, if you believe you have biased noise, is remove the mean of your noise from the data, because that's what the bias is: it shifts your data. So you would invert your data minus the mean of your noise, with the standard deviation of the unbiased leftover. That's what you would do.
H
Yeah, just to mention it: when Sarah was here, she was doing agricultural applications of geophysics, especially with DC, and she was also doing monitoring, so running the same survey over and over. I don't remember the context right now, but we had discussed that she had that kind of situation.
H
She had an initial inversion and was seeing some points with significant standard deviation, and she mentioned the idea of propagating the misfit between the modeling and the observation from the original inversion to the inversions at different times. So I think that's why we discussed things like: okay, you could not fit that data point; that's the closest you could get, so maybe that difference, that misfit, just adds to the response from your initial inversion.
J
Well, following along on that, if you come back to MT: sometimes you have what used to be called static shift. If you've got small conductors very near the electrodes or something, it basically increases the electric field and changes your impedance, and that causes the curves to go a bit wonky. So then, in the inversion framework, you can actually put in a variable that tries to account for this static shift.
J
So it's not something that you can add as noise into the data, and it's not something that you can subtract directly. But since each station potentially has its own static shift associated with it, you can try to solve for these things as well as fit the larger-scale structure; that's kind of what the 3D does. So in that sense, there is a bias to all of the data; it's just that you have no idea what it is, so you solve for it.
I
Yeah, I guess the idea is just that perfectly Gaussian zero-mean noise is a very idealistic thing that is probably not what's happening in reality. That's more the point of interest: has it been investigated? Has anyone run inversions loosening these assumptions and adding more complex noise? That's the question, in other words.
B
Can we reserve it for our next meeting, then?
A
Yeah. I'll share a little quick outline of what I was thinking, at least what I did off the top of my head, on Slack. It's pretty simple: it's essentially just overloading bitwise operators to represent the stopping conditions, so we can arbitrarily combine them and put them together. You could say: I want this condition AND this condition, OR that condition. That's about it; I'll share a little quick thing afterwards. Thanks for coming; see some of you on Sunday.
J
Yeah, I believe that's correct: if you're double-vaccinated, you can come into Canada as of the ninth without having to do quarantine.