From YouTube: SimPEG Meeting January 20th
Description
Weekly Meeting for January 20th, 2021
A
Just a few quick things before we get going. I was thinking of hosting another — I'd like to host another Friday afternoon session, where we can either code or, if we don't feel like coding, we can do some games or something in the afternoon, just play around and socialize a little bit — next Friday afternoon, if everyone is available for that. You're welcome to join; probably similar times to what we've been doing in the past. Maybe start with a little bit of coding and then move on to something else afterwards.
C
Yeah, just taking all the feedback on the tutorials and theory stuff from discretize and chipping away at that. I made a lot of progress this week, but there's still work to be done before the next review. And then I put some work towards finalizing our PR for getting the EM1D stuff into SimPEG — updating tutorials, updating tests, and partitioning it out so that it fits with our SimPEG structure.

There's still some work to be done on that before it's ready for review, but progress is being made. I don't think it'll take too long, but there's still stuff to do.
C
Yeah, I'm thinking about that. There's some interesting stuff with waveforms and systems, and it's kind of a legacy from when it was its own repository.
D
Yeah, still plugging away at that MT pull request. I found some frequency-domain stuff — I think it's in the E-B formulation — where the analytic tests are failing, so I've been digging around trying to figure out why those are breaking right now. And then I've been working on the tiling a little bit; I kind of put the MT down because that was a little more appealing. I was just testing it with the NUMA-node kind of theory.
D
We had — yeah, you can kind of see that you can squeeze out — yeah, you can scale up by adding more tiles. But if you want to really get some more gains, you just keep adding a NUMA node to your work, as a worker, and you can start to get orders of magnitude faster that way. The way that I've done that, though, I can squeeze out a little bit more.
D
As you see, if you go up to 30 in the plot there, you can get quite — you can get more out of just whatever NUMA nodes you have. But I can't seem to do that with Dask. Dask seems to just, I guess, optimize to — yeah, you can get about an order of magnitude out of the way that we do Dask right now, but I can't squeeze it out, because I can't mask — you can't really tell, or you can't set, the affinity for the CPUs in the scheduler, or I guess in the workers.
D
So you can't quite squeeze it out enough, but yeah, I'm working at it. I think there's a way that you can still ask; I'm just going to dig around and find the right config.
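The affinity idea described above can be sketched roughly like this — a minimal, hypothetical example (the NUMA-to-CPU mapping is machine-specific, and a function like this would have to be run from a worker's own startup hook, since Dask itself doesn't expose affinity, which is exactly the limitation being described):

```python
import os

def pin_to_cpus(cpu_ids):
    """Restrict the current process (e.g. called from a Dask worker's
    startup hook) to the given CPU ids, so all of its threads stay on
    one NUMA node. Linux-only (uses os.sched_setaffinity)."""
    os.sched_setaffinity(0, set(cpu_ids))  # 0 = the current process
    return os.sched_getaffinity(0)

# e.g., if the first NUMA node exposes CPUs 0-9 (machine-specific):
# pin_to_cpus(range(10))
```

Whether this actually recovers the per-NUMA-node scaling seen without Dask would still need to be measured.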
D
So the NUMA node is pretty much the memory controller for a CPU. A CPU can have, say, 10 cores, but it might only have one — it might have two NUMA nodes, it might be broken down into two, but most CPUs just have one NUMA node that controls all the memory. All the processors go talk to it, and then that's what talks to your RAM.
D
So you can become bandwidth-limited within the CPU itself, because it's not actually the case that each core has its own memory — it's all being accessed through that NUMA node. Okay, yeah — and actually, Dieter, I was going to ask that really quickly. Playing around with Pardiso, just testing it by itself on a machine that's not Intel, I found, yeah, the performance is terrible.
C
It's not —

D
— that good. So were you able to compile Pardiso with BLAS? I thought you were talking about that one time. Yeah — Dieter, sorry, yeah.
E
Me? No, no. I just ran into the issues when I tried to run similarly big models with SimPEG and with my code, and for big models it just blows up. But I never tried to compile it, no. That's when I just — I wondered once if anyone had tried the new Pardiso version, like with an academic license, but no, I myself never did.
D
It's all the, you know — that's like SSE3, the vectorization instruction sets, those higher-order or more fundamental routines like that, that are just disabled. They're —
D
— just disabled, yeah. Maybe I'll look into it a little bit there, but yeah, Intel chips are usually nicer.
F
Oh yeah, I didn't do much in terms of contributing to SimPEG lately. The only very minor thing: with Dieter, we updated the badges in the README — Dieter added the conda-forge version and I replaced the Travis badge with the Azure Pipelines one. So the README badges are up to date.
G
Yes, I don't have too much to report at this moment. I'm still working on the tutorial and the example for the cross-gradient. Also, one of my colleagues, Kenny, defended his master's thesis this morning, and I was taking charge of his inversion code, so I think maybe his work can also be contributed to simpeg-research. I think that's all from me.
A
Oh, I have been working with Lindsey and Dieter again on the MT1D with boundary conditions in the EM. It's going along really well. It works — the boundary-condition things that I was testing with discretize, those simple little routines, seem to be working well for that problem.
A
I've dropped a link in the notes to the cell in the notebook that I have a question about for you guys. So if you look at this system, the issue that I'm having is that the system doesn't necessarily make a symmetric matrix for factorization in the 1D MT.
A
We can get it to be symmetric, and there are two kinds of options that we can mess with here. The first is that, if you look at it, we can no longer just pull — where, if you're looking at where the A-equals line is on the right, something gets multiplied by the inverse of, like, the mu-inverse matrix, but it doesn't get correspondingly multiplied on the left. So it doesn't make a symmetric matrix there.
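As a toy illustration of the two options (these are stand-in matrices, not the actual SimPEG MT1D operators): suppose assembly produces A = B Minv with B symmetric and Minv a diagonal mu-inverse mass matrix, so A itself is unsymmetric. Option 1 left-multiplies the whole system by Minv; option 2 changes variables and multiplies after the solve (the "odd solve operation" mentioned):

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

n = 6
rng = np.random.default_rng(0)
B = sp.diags([1.0, 4.0, 1.0], [-1, 0, 1], shape=(n, n)).tocsc()  # symmetric
Minv = sp.diags(rng.random(n) + 1.0).tocsc()  # diagonal "mu-inverse" mass matrix
A = (B @ Minv).tocsc()                        # unsymmetric as assembled
q = rng.random(n)

u_ref = spsolve(A, q)

# Option 1: left-multiply the system by Minv; Minv B Minv is symmetric.
u1 = spsolve((Minv @ B @ Minv).tocsc(), Minv @ q)

# Option 2: substitute w = Minv u, solve the symmetric B w = q, then
# multiply by M = Minv^{-1} AFTER the solve.
w = spsolve(B, q)
u2 = spsolve(Minv, w)
```

Both recover the same solution; the trade-off discussed below is about conditioning and what the derivative looks like.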
H
What's — it seems quite new to me, is that, like, a —
H
Actually, that's nice, cool. I didn't know that.
A
Yeah, so we can mult— anyway, our options are like this. Like I said, I'm not sure how that will handle stability of the matrix. The other option is to pull this matrix out on the right — the right-hand side — but that kind of leads to a weird, odd solve operation.
A
So I'm just kind of curious what people — we end up doing a similar sort of thing when we multiply by V as well, in a different — so this is the TM mode; in the TE mode we have to multiply by V on the left to get a symmetric matrix in the formulation, and I'm just trying to avoid having all these double mu terms in the end. Also, if we do it the other way, the derivative's weird — but then again, the derivative is odd here as well.
H
Yeah, I don't really see the problem there, and obviously, if you're making the matrix symmetric, that would make the condition number better than an unsymmetric matrix, I guess — that's my intuition. So I think that's what we typically did: just making the A symmetric, rather than multiplying by something after you solve. Yeah.
A
Okay, just looking for input on it. That was it, and I've linked that notebook cell into the notes here if you guys want to look at it as well.
A
Thank you for that. The 2D MT should be following pretty shortly here. And then I also had a question: I'm not very well versed on the differences between the MT conventions and syntax versus what it's expecting, like in all of the outside stuff that's getting put in. I'm fairly confident that this is internally consistent, meaning that it gets the accurate, like, Ey and Bx fields, and that the signs are correct within our right-handed coordinate system.
C
So I think we just need to have a good function for reading in and writing out stuff for particular conventions. I already did all the, you know, making plots and figuring out which convention you're in — because that's something you have to do every time you look at MT — so you just need to kind of make a version of that for SimPEG, but it shouldn't be hard.
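One shape such a reader/writer helper could take — a hypothetical function, not an existing SimPEG API. The one rule it relies on is standard: impedances under the e^{+iωt} and e^{-iωt} time conventions differ by complex conjugation, so a reader for a particular convention only needs to conjugate on the way in:

```python
import numpy as np

def to_internal_convention(Z, source_convention, internal="+iwt"):
    """Convert MT impedance(s) Z between Fourier time conventions.

    Under e^{+i omega t} vs e^{-i omega t}, impedances differ by complex
    conjugation, so normalizing incoming data is a single conjugate.
    """
    Z = np.asarray(Z, dtype=complex)
    return Z if source_convention == internal else np.conj(Z)
```

A writer for a given file format would apply the same conjugation in reverse on the way out.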
C
Gudni would be a good person to ask about that too, because he went through all of that initially when getting everything set up with SimPEG.
C
Yeah, I agree.
C
Do you have any intention of putting in the nodal inner-product matrix? I'm just going through some of these basic tutorials for discretize, and I mean, we don't need it to, you know, maybe add physical properties and stuff to it, but just to be able to call a nodal inner-product matrix.
A
Yeah, okay — if we had a nodal formulation, like for that 1D MT system, we would be doing an inner product on nodes for a few things here.
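A minimal sketch of what a nodal inner-product matrix could look like in 1D — an illustration of the idea, not the discretize implementation:

```python
import numpy as np
import scipy.sparse as sp

def nodal_inner_product_1d(h):
    """Diagonal nodal inner-product matrix on a 1D tensor mesh with cell
    widths h: each node is weighted by half the width of each adjacent
    cell, so that u^T M v approximates the integral of u*v."""
    h = np.asarray(h, dtype=float)
    vols = np.zeros(h.size + 1)
    vols[:-1] += 0.5 * h   # left half-cell contribution
    vols[1:] += 0.5 * h    # right half-cell contribution
    return sp.diags(vols)
```

For constant u = v = 1 the inner product reduces to the total mesh length, which is a quick sanity check on the weights.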
A
So that's about all I had. Lindsey, did you have — you seem like you have some things to say.
I
One quick update: I connected with Jared Peacock earlier this week — he's at USGS, for folks who don't know him, and he does a lot in MT. They're really curious to just kind of keep a pulse on what we're up to, and so maybe once we're kind of happy with the state of the pull request and have a few examples and things like that, it'd be great to reach out. They've offered time on the USGS HPC center to try things out.
I
So if there are examples that you have that you'd like to try out — particularly if it's MT, because that is directly in line with his interests, but if not, if there are other aspects of SimPEG, it's something that we could probably have a conversation about. And then the one example that he mentioned that's particularly of interest is actually the 1D, and having a bit more flexibility there, to use that to perhaps build up a starting model.
I
A lot of other 1D codes solve the inverse problem really well, which isn't necessarily what you want for a starting model, and so there could be some interesting applications in kind of thinking through, like, the —
I
— if we do sort of a spatially constrained 1D MT inversion with early stopping, what kinds of things would you want to do, perhaps using the 1D to build up a starting model for a 3D? So I just wanted to flag that, if that is of interest to anyone.
H
Oh yeah, sorry — have you thought about, let's say — their HPC would likely have multiple nodes, so you have physically multiple nodes, and what I wasn't sure about: you store a zarr file at some point, and now you not only have a single computer, you've got multiple computers, so you would likely distribute that zarr file into multiple files.
D
Well, if we just pass back the vectors, the zarr file can sit on the node — it'll just write it there, and then whenever we need that vector it'll be passed back. However, I haven't tested with that quite yet, because I'm more on the side of, like, we have all the RAM we need, so I've just been throwing RAM at the problem. I haven't really been storing it in most of my tests; I've just been putting it in memory. Yeah, okay, but yeah.
D
— that node, for that piece of the J from that zarr file sitting on there, I believe. I don't — Dom, you kind of came raining in at the end, but I think that's the plan for storing the sensitivity with zarr: if you send it to the worker, which is a node, it would just write it there, and then we would just ask for the information back, right?
K
Yeah, that's our hope, right, because we're basically going to give them a local — let's say I'm going to tell the worker: here's the simulation, go compute J. And then once the worker has it, then technically the store should be local, right? That's the thing we need to check, to make sure that it is local. Otherwise we might need to really specify, like, this is the location that you want it —
K
— to write to. But we'll see — hopefully the worker just grabs it and writes it locally, and then just returns dpred at the end, or the J, back there, right? So nothing else is sent back.
K
Yeah, so, you know, trying bigger problems — bigger DC problems — I'm starting to encounter the issue that storing the meshes of each tile is starting to, you know, weigh a lot on the actual process, and we don't really want to send the tree meshes across the network, right, from the central node to the workers.
K
So I started looking into getting rid of the mesh on the simulations. Basically the simulations will only be storing the operators that they need to do their thing, and then just drop the mesh right away. So for right now, for this, it's pretty simple.
K
We only need to store the projections, and then a gradient and a divergence operator, and then we can just flush out the mesh, and that really reduces the memory consumption. And, as I said, we won't have to send the tree mesh across. We'll have to see how easy it is for the other simulations, but I don't see why not.
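A minimal sketch of that idea — a hypothetical class, not the SimPEG implementation: build the operators once from the (tree) mesh, keep only those on the simulation, and let the mesh object itself be garbage-collected so it never travels to the workers:

```python
import numpy as np
import scipy.sparse as sp

class SlimDCSimulation:
    """Sketch: a DC-like simulation that keeps only its discrete operators."""

    def __init__(self, operators):
        self.G = operators["gradient"]    # e.g. nodes -> cells difference
        self.D = operators["divergence"]  # e.g. cells -> nodes (here -G^T)
        self.P = operators["projection"]  # receiver projection
        # note: the mesh itself is deliberately NOT stored

    def getA(self, sigma):
        """Assemble the D M(sigma) G system matrix from stored operators."""
        return self.D @ sp.diags(sigma) @ self.G
```

Only a handful of small sparse matrices then need to be serialized per tile, instead of a whole tree mesh.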
K
Matrices, yeah — so for the inner product, all it really needs is to hold the projection. Let me double-check again; I can't remember off the top of my head, but it's possible to just store the operation if you just store the operator. Let's see — well, we can go over it later, Joe, but it wasn't a big issue. Okay, yeah — where's the — I can't remember where it's stored; I'll have to dig.
K
Yeah, I know, I know — so when it calls the inner product, it's just a mass matrix and the gradient operators, so we can basically do it without the mesh. We just need to store the essentials. Basically, that's what I'm suggesting, because really, at the end of the day, we don't need the entire mesh — we just need a few operators. Basically, that's the idea.
H
Right, so my point was, like — oh, I see, creating —
A
Did you — I thought at one point you were working on building the meshes locally on each node.
K
We could technically build a mesh, store it on a drive, and then, you know, fetch it back, but I don't really see the point, because at the end of the day, if we're going to form a mesh and then store operators on the mesh, it's just more expensive. Why don't we just keep only the operators and not keep the rest of it?
K
Okay, and anyway — so now, you know, we ran some huge problems, tested it against DCIP3D, and we went from, like, an hour and a half for iterations down to, like, seven minutes. You know, it's wicked fast, and this is not even with tiling — this is just, like, you know, just storing J and lining up all the operations.
K
So that's really promising — quite excited. John and I will be testing it on a data set for my client next week, I guess, and yeah, it's quite promising. The future is bright; pretty excited about this. What —
K
Per iteration, yeah — an hour and a half, yeah, with the old code. But at the same time, you know, it's just redundant — we're just removing the redundant operations. There are, like, too many matrix-vector products, and then factorizations or back-solves that are done over and over and over again: every CG, you're doing it multiple times. And just the fact that we can convert all this to, like, a single matrix-vector product — that's the gain. It's just killer. Yeah.
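The redundancy argument above can be illustrated with a toy (stand-in matrices): factor the system once and reuse the factorization for all right-hand sides, instead of paying for solves over and over inside every CG pass:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import splu

n, k = 50, 8
A = sp.diags([1.0, 4.0, 1.0], [-1, 0, 1], shape=(n, n)).tocsc()
Q = np.random.default_rng(0).random((n, k))  # k right-hand sides

lu = splu(A)       # one factorization...
X = lu.solve(Q)    # ...back-solved for all k columns at once
```

Stored sensitivities take this one step further: the repeated back-solves collapse into a single matrix-vector product per CG iteration.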
H
Yeah, it's interesting because, like, when we were learning from Eldad, the idea was never form J, and now we're going back to forming J. And I agree with them, because that's actually expensive — like, not forming J is not cheap.
K
But I mean, there was a good reason, right, that Eldad didn't want to form J: before, J needed to be full and in RAM, right? The whole thing needed to be stored in RAM. But now the new development we're using is that SSD can act as RAM, and we can just, you know, parse it out chunk by chunk, and so that's the game.
H
But let's say — it's just a hypothetical situation — let's say you're doing a seismic inversion, so you've got, like — if you want to really form J, you need a huge hard disk, let's say a couple of terabytes, and maybe even more. Do you think that that —
K
That's a huge save-up, and then the other layer is, like, tiling — let's be smart about where sensitivities live. So yeah, of course, eventually we'll hit a bottleneck, you know, but we're making progress.
H
I don't know — do we have a killer preconditioner? Yeah, because actually it's a good point, because once you start getting over a couple million cells, like, the current preconditioner that we're using, our regularization function, it's actually quite —
K
We cut it pretty short, usually, because it has an expensive part, right? So it was like: oh, we're going to do five CG iterations, that's enough, let's quit. And that's dangerous, because we don't know what we're minimizing if we do that. Whereas now we can just solve CG all the way to precision — you know, 1e-4, whatever — solve it, and now we're there. Now we have a real Gauss–Newton step, right? We're not cutting it short. So that's pretty good for us too.
H
Yeah, I'm actually curious about, like, testing a DC data set, because now you've got lots of data — that's the disadvantage of forming J when you've got lots of data. So I'm not sure which one actually wins, because in that case, because you've got zillions of data, that's the number of solves that you need to do to form J anyway.
H
Seems like a good example to beat — like, if you can beat that example, I don't know, that's probably the way to go.
K
That's an interesting part, right, Seogi, is that when we're forming J, when we're storing it, it's the same amount of work as doing basically a single Jvec operation, plus the storing part, because we're all doing the same operations — we're looping over the receivers, over the sources — and now the only extra is that we're storing it. But that's one Jvec.
A
So when you get back that receiver, like that J — or when you get back that receiver projection, you're doing it on the entire projection matrix, right, the array of the projection matrix. So you're doing the solve on a matrix, on a bunch of right-hand sides at once, whereas — so you repeat that solve operation for each right-hand side, essentially. It happens inside of Pardiso, but it does.
H
That's — so, like, the number of solves for forming J, in our case, is the number of data. So if you've got millions of data, then you need to solve millions of times.
H
No, well, I think — then I think there is a problem, I guess, because that's what it's supposed to be. So solving, like — actually, if you want to form J, think about it: J is an n-by-m matrix, and then it depends on which way you solve. So you solve either on the model side or you solve on the data side.
K
Well, let's look at it, and let me know if we can streamline it, but —
K
It only does it once — I thought I only saw it once, but maybe Seogi is right. I'm not —
K
I don't understand how that works, because our sensitivities, at the end, have the dimensions of n-data by n-cells.
H
That's, like — that's what I meant. So when we're solving Jtvec, we multiply by the vector, but when we're forming J, we multiply by the matrix. So, I mean, if you're considering that as one solve, then you're right, but it's literally multiple solves. Let's say you've got 10 points in your receiver — then you're actually doing a matrix-vector product 10 times. I see, I see, yeah, yeah. But I —
K
You're correct — I've got you, right, yeah, that's what you mean. And also it's happening there inside Pardiso, so there are more operations happening in the solver — it's still a single call, but internally there's more happening. I totally get it, sure, yeah.
H
Actually, speaking of that MT stuff, I've got a very similar problem — I could actually quickly share my screen.
H
Let's see if I can — so actually it's literally the same problem in airborne EM, and sort of what I'm doing: I do a 1D stitched inversion in airborne EM, so you've got these profiles, like this, and then the definition of the model is on the vertical profiles. So I think it's the same as MT, I guess. So if you have multiple stations and you run the inversion, you get vertical profiles in 1D. That's the starting point, but if you think about the volumetric footprint of that resistivity, it's much, much larger.
H
I guess so — I think it's not just like — I don't know how you define that. Let's say you put it in a 20–100 meter block, but it'll be much larger, and I'm not sure — maybe at deeper depths it may increase, even in the MT case, I'm not sure. We probably need to look at the sensitivity function. But when you're going back to the 3D, you need to think about — you can think about it —
H
— as a volumetric averaging. So suppose you start from the 3D grid, and then you develop an averaging function, and then define that on the vertical profiles. Okay, so that's your forward problem, and once you have that forward problem, you may want to go back, like this — so, see, that's your, like, result of the MT 1D inversion, and then you can build a 3D model you could use as a starting model. So yeah, it's basically the same problem.
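The averaging map just described can be sketched with a toy (assumed geometry — a uniform lateral average standing in for the real volumetric footprint): each station's vertical profile value at a given depth is an average of the 3D (here 2D-section) model over neighbouring columns at that depth:

```python
import numpy as np
import scipy.sparse as sp

def profile_averaging_matrix(nx, nz, halfwidth=1):
    """Toy forward map: a 2D section with nx columns and nz depths,
    flattened column-by-column; each depth level of profile i averages
    columns i-halfwidth..i+halfwidth, mimicking the wider footprint of
    a 1D sounding."""
    rows, cols, vals = [], [], []
    for i in range(nx):                       # one vertical profile per station
        lo, hi = max(0, i - halfwidth), min(nx, i + halfwidth + 1)
        for k in range(nz):
            for j in range(lo, hi):
                rows.append(i * nz + k)
                cols.append(j * nz + k)
                vals.append(1.0 / (hi - lo))  # uniform lateral average
    return sp.csr_matrix((vals, (rows, cols)), shape=(nx * nz, nx * nz))
```

Going "back like this" — from inverted profiles to a 3D starting model — then amounts to a (pseudo-)inverse or interpolation of the same averaging operator.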
H
And other than that, I've been working quite a bit with geostatistics, and people doing geostatistics, and they have a very similar inverse problem to what we're doing, but their regularization is a variogram. And so there are a couple of applications of, like — it is basically switching our regularization matrix with their covariance matrix. So it seems like an interesting avenue, and adding their statistical regularization function into SimPEG might be an interesting application to add. Anyway, that's what I'm doing.
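A toy sketch of that swap (toy 1D positions; the Gaussian-variogram covariance form is standard, but using its inverse in place of the usual SimPEG regularization matrix is the idea being floated, not an existing feature):

```python
import numpy as np

def gaussian_covariance(x, sill=1.0, corr_range=10.0):
    """Covariance matrix implied by a Gaussian variogram over 1D positions
    x: C(d) = sill * exp(-3 d^2 / a^2), with a the practical range."""
    d = np.abs(x[:, None] - x[None, :])
    return sill * np.exp(-3.0 * (d / corr_range) ** 2)

x = np.linspace(0.0, 50.0, 20)
C = gaussian_covariance(x)
# Use W = C^{-1} in phi_m(m) = (m - m_ref)^T W (m - m_ref), in place of
# the usual smallness/smoothness regularization matrix.
W = np.linalg.inv(C + 1e-8 * np.eye(x.size))  # small jitter for invertibility
```

C is dense, unlike the usual sparse difference operators, which is part of what would make this interesting to benchmark.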
K
What do you mean — in terms of run time and the result? The result sounds good; is the runtime good? What do you think?
H
So that's where I had a bit of — run time's not too bad, like less than 10 minutes, but once you form that averaging matrix — forming the averaging matrix is a little bit expensive, because you need to search over it once. But that's a sparse matrix, so we can store it.
H
Actually, the major cost was forming the regularization function at every iteration, because, like, it's not necessarily very cheap, and that was actually the major cost. Anyway, so I was thinking about what we can store when we're moving to a bigger problem. But maybe at that stage that may not be the major cost compared to solving the physics — maybe that's the major cost. I wasn't —
A
Exciting. Well, I believe that's a wrap on our hour meeting — unless anyone has anything else, we'll touch base again next week. Reach out if you have questions as well. The other thing is, I was kind of entertaining the idea of hosting, like, almost like office hours for SimPEG-related stuff — a time when, like, okay, I would be available, if some— or someone else wants to be available or something, just to have people pop in and ask questions on random items.
H
Who's it for — like, who is it for? Is it for our group, or, like, more people on the outside?
A
Yeah, so just floating the idea with that. I guess I'll see you guys next week. And a reminder — as I brought up at the beginning of the meeting, for those who weren't here — I'll be hosting a coding-session party thing next week, next Friday afternoon, so we can start with some coding, maybe play some games later or something, and socialize a bit.