From YouTube: SimPEG meeting April 15
A
I'm not sure we got a high-quality result, but I was working with Joe for most of the past three to four days to finalize an SEG abstract. We were basically summarizing our efforts to develop a DC code to support the project. We already had the 2D and 3D, so we developed a 1D code, and, thanks prisae, I actually used your Hankel transform as well.
A
Okay, yeah, I was actually thinking about using your code, but there was a bit of an accuracy issue if I set the zero frequency. At the large offsets it was fine, but at the very near offsets there was some problem I could see. That was the reason I had to revamp this propagation matrix stuff. So that was one new development, and Joe worked really hard to improve the efficiency of our 2D and 3D code.
C
Essentially, I think I talked about this a little bit before, but it just goes through the survey to find all of the unique pole-pole combinations. Every dipole-dipole, Wenner, Schlumberger, or pole-dipole array can be reduced down to linear combinations of pole-pole electrodes, and furthermore those pole-pole electrodes can be arbitrarily switched thanks to reciprocity. So it goes through and finds all the unique pole sources, which reduces the number of matrix solutions that need to be done for the forward.
C
It reduces the Jvec and Jtvec operations to just the number of unique source electrodes instead of the number of sources. For the examples that we had before, the dipole-dipole, Wenner, and Schlumberger type surveys, there could be 1500 data points, and each of those data points had a different dipole source. But since there are only seventy-two electrodes, you can calculate the entire survey using 71 sources.
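The reduction just described can be sketched in a few lines. This is a hypothetical illustration, not the actual SimPEG implementation: `unique_source_electrodes` and the toy electrode layout are made up for the example.

```python
import numpy as np

def unique_source_electrodes(src_a, src_b):
    """Collect the unique source-electrode locations in a survey.

    src_a, src_b: (n_data, dim) arrays with the two current-electrode
    locations of each datum. Returns the unique electrodes plus, for
    each datum, the indices of its A and B electrodes into that array.
    """
    electrodes = np.vstack([src_a, src_b])
    uniq, inv = np.unique(electrodes, axis=0, return_inverse=True)
    inv = inv.ravel()
    n = src_a.shape[0]
    return uniq, inv[:n], inv[n:]

# toy dipole-dipole layout: electrodes along a line, 5 dipole sources
xs = np.arange(8.0)
a = np.c_[xs[:5]]        # A electrode of each dipole source
b = np.c_[xs[1:6]]       # B electrode, one spacing ahead
uniq, ia, ib = unique_source_electrodes(a, b)

# 5 dipole sources reduce to 6 unique pole electrodes; by superposition
# the dipole potential is the difference of two pole potentials:
#   phi_AB = phi_A - phi_B
print(len(a), len(uniq))   # 5 dipole sources, 6 unique poles
```

Solving once per unique pole and recombining the pole fields is exactly the superposition-plus-reciprocity argument above.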
A
That's a huge improvement. If you take a kind of naive approach, you have thousands of sources even for a relatively small 2D DC survey, so you need to store the fields of a thousand sources, which could be pretty expensive. We had a memory problem, which was insane.
A
We transform that into the real domain, so Dieter developed the Hankel transform that I was using. We were just talking about the number of filters: more filters increases your computation cost, so if you can use the minimum number of filters, that will increase your speed.
D
Yeah, you can look into that. In this publication they mention that for huge convolutions, and you are talking about huge convolutions, it might actually be faster to first do a loop with a quick filter design for the problem at hand, and then use as short a filter as possible.
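The cost argument about filter length can be made concrete with the generic shape of a digital linear (Hankel-type) filter evaluation. The coefficients below are placeholders only, not a usable filter; published filter bases and weights (such as those from the filter-design work being discussed) would be substituted in practice.

```python
import numpy as np

def digital_filter_eval(kernel, offsets, base, weights):
    """Evaluate a Hankel-type integral with a digital linear filter:
        f(r) ~ (1/r) * sum_i kernel(base_i / r) * weights_i
    The work is n_offsets * n_filter kernel evaluations, so a shorter
    filter reduces compute time linearly.
    """
    r = offsets[:, None]               # (n_off, 1)
    k = base[None, :] / r              # (n_off, n_filt) wavenumbers
    return (kernel(k) * weights[None, :]).sum(axis=1) / offsets

# placeholder abscissae/weights just to show the shapes involved --
# swap in published filter coefficients for real use
base = np.logspace(-3, 3, 51)
weights = np.ones(51) / 51
f = digital_filter_eval(lambda k: np.exp(-k), np.array([10.0, 100.0]),
                        base, weights)
print(f.shape)   # one value per offset
```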
A
When I'm done, if anyone has a final input, that'll be great. We ended up bumping up the noise floor; I think the undulating feature was because of the noisy data at the larger offsets, so by bumping up the floor we could remove those undulating features. Now all the inversions are quite consistent, and in 3D I think we can actually better recognize those dry zones within the aquifer.
F
It usually finishes in about eight to ten seconds, so that's pretty quick. We've got some apparent resistivities; you can model apparent resistivity or you can model impedances. We've just got a more resistive block in a more conductive half-space.
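For reference, the two data types mentioned here, impedance and apparent resistivity, are related by the standard MT conversion; a minimal sketch:

```python
import numpy as np

mu0 = 4e-7 * np.pi  # free-space magnetic permeability

def apparent_resistivity(Z, freq):
    """rho_a = |Z|^2 / (omega * mu0): for a half-space this returns
    the true resistivity."""
    return np.abs(Z) ** 2 / (2 * np.pi * freq * mu0)

def phase_deg(Z):
    return np.degrees(np.angle(Z))

# sanity check on a 100 ohm-m half-space at 1 Hz, where the analytic
# impedance is Z = sqrt(i * omega * mu0 * rho)
rho, freq = 100.0, 1.0
Z = np.sqrt(1j * 2 * np.pi * freq * mu0 * rho)
print(apparent_resistivity(Z, freq), phase_deg(Z))  # ~100.0 and ~45.0
```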
F
I was able to effectively invert impedance data and the apparent resistivities, and I did an analytic computation of the sensitivities. I did the math, and it actually runs faster, so I'm glad that I put in all that work and it led to good things. If somebody wanted to review it at some point, I think it's pretty much ready to be merged; Joe gave me some feedback, which was very useful. When I have time, I'm thinking about how I can look at pseudo-3D.
G
That's the MT result for phase and impedance, and I was also doing some tricks of normalizing the response. If you were starting too low, for example, putting in very low conductivity and things like that, the sensitivity can blow up and you get massive errors due to rounding issues, so I had a lot of tricks so that it worked in pretty much any situation.
I
I started going back to the DC part of the simulation, and I was running some tests for Nick on his data set. Basically, I did a mini survey with what Joe implemented, which we don't currently have hooked up, and even then there are issues when we have a very large survey with many, many sources.
I
Our bottleneck right now is the right-hand side and the fields, where we still have to build those giant dense matrices. So what I was thinking yesterday, and I might try to implement it this week, is whether it would be possible to just call fields for specific sources. Instead of doing the whole thing all at once, you could block it.
A
Yeah, because from what I saw, even with that data set, which is pretty 3D, it's still pretty local. You have a limited number of electrodes that you can use for a single shot, so they basically move the electrode array from here to here. It's still quite localized, so I thought if you basically just do a tiling you could save a lot, because you could reduce your simulation size by maybe a factor of five.
I
You could just loop at that point, right? Because right now the memory cost is a dense matrix of n cells by n sources. If you have thousands of sources and you're running a hundred thousand to two million cells, those are two massive matrices that you need to store in memory, when really what we want to do is compute data at the end of the inversion. So we could for-loop through a few blocks of it without having to store the whole thing at once.
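The blocked for-loop idea can be sketched as below; `solve` stands in for the factored system solve, and the names are illustrative rather than SimPEG's API.

```python
import numpy as np

def fields_in_blocks(solve, rhs_columns, block_size=50):
    """Yield fields for blocks of sources instead of all at once.

    solve: maps a dense (n_cells, m) RHS block to the fields of those
    m sources (e.g. back-substitution with a stored factorization).
    Peak memory is n_cells * block_size, not n_cells * n_sources.
    """
    for start in range(0, len(rhs_columns), block_size):
        block = np.column_stack(rhs_columns[start:start + block_size])
        yield start, solve(block)

# toy run: identity "solve", 10 sources on a 1000-cell mesh
rhs = [np.random.rand(1000) for _ in range(10)]
for start, f in fields_in_blocks(lambda b: b, rhs, block_size=4):
    print(start, f.shape)   # at most 4 source fields held at a time
```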
B
Cool idea. I think that might even provide us a bit more flexibility with how we parallelize things down the line, being able to take chunks of the sources. Even if we're not doing the tiling, we'd still be able to parallelize that way, I think.
I
The right-hand side, at least for the DC code, is extremely cheap to form, because it's a bunch of zeros with one or two cells that have non-zeros in them. So it should be a sparse matrix at first, and converting it to other formats is insanely easy, so we should just form it on demand, basically.
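A minimal sketch of forming one DC right-hand side on demand as a sparse vector, assuming the electrode positions have already been mapped to cell (or node) indices; the function name is illustrative:

```python
import scipy.sparse as sp

def dc_rhs(n_cells, a_index, b_index=None, current=1.0):
    """One DC source term: +I at the A electrode, -I at the B electrode
    (pole source if b_index is None). Storing only the non-zeros keeps
    the per-source cost O(1) instead of a dense length-n_cells column.
    """
    rows, vals = [a_index], [current]
    if b_index is not None:
        rows.append(b_index)
        vals.append(-current)
    return sp.csr_matrix((vals, (rows, [0] * len(rows))),
                         shape=(n_cells, 1))

q = dc_rhs(100_000, a_index=10, b_index=42)
print(q.nnz)   # 2 stored entries instead of 100000
```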
I
That's on the fly for the inversion. If we're going to store sensitivities, we only need the fields at the beginning, for forming J, and at the end of the inversion, to compute the predicted data, so we don't really need to hold them in memory the whole time. It will basically be a trade-off between how fast you want to compute versus how much memory you want to use.
C
If you're going to be doing something with the fields object anyway, and you're storing the fields object somewhere, you're already going to be using that memory anyway. That could be an option for some of those solvers that we could take advantage of.
B
Well, I think it depends on whether you want to store it. What we're storing is this: the sources can each keep track of their own source vector. At least that's how this is working in the EM code, so that each source keeps track of its own vector, which is then used to construct the right-hand side. So for the most part, at least in the EM codes...
B
We're not storing the right-hand side, but each source is storing its vector. We could easily have it not store that and instead compute it on the fly each time.
G
For the last two days I was pinging Sarah to get some visualization stuff going with the project, and I need to create an issue; I just haven't done it yet. We succeeded, and I see that she just posted a picture on the channel, so it looks pretty cool, but it was not as straightforward as it could have been. There were issues at two levels.
G
But what I'm saying is, when you take that fields object, there's the new indexing syntax. When you're asking for Phi, it's fine, but you're supposed to be able to ask for E or J or charges, and it fails. That's what I'm saying; I think there is an issue there, unless I was really doing something wrong.
C
It is there, but what you have to do, for the 2.5D code, is call simulation dot fields, and then there's another operation you can call, fields_to_space, on that simulation object, which applies the transform and stores it back.
G
Right, because even when you call the fields object with Phi, I realized it actually gives you a vector that is mesh size by the number of spatial Fourier components, 16 in that case. It was giving you matrices the size of the number of cells by the number of wavenumbers. Yep.
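The step back from those per-wavenumber potentials to the spatial domain is an inverse cosine transform over the wavenumbers. A hedged sketch with simple trapezoid weights (the real code's wavenumbers and quadrature will differ):

```python
import numpy as np

def fields_to_space(phi_ky, ky, weights, y=0.0):
    """Collapse 2.5D per-wavenumber potentials, shape (n_cells, n_ky),
    to the spatial domain with the inverse cosine transform
        phi(y) = (2/pi) * integral_0^inf phi(ky) * cos(ky*y) dky,
    approximated by quadrature over the discrete wavenumbers."""
    return (2.0 / np.pi) * phi_ky @ (weights * np.cos(ky * y))

ky = np.linspace(0.0, 5.0, 16)
w = np.full(16, ky[1] - ky[0])
w[[0, -1]] *= 0.5                      # trapezoid-rule weights
phi_ky = np.ones((100, 16))            # toy (n_cells, n_wavenumbers)
phi = fields_to_space(phi_ky, ky, w)
print(phi.shape)                       # one value per cell
```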
G
We got something, because she just wanted a visualization of the currents on a mesh, to show what the conductivity looks like and what the currents look like. It was just for visualization, but I'm just saying that with the tree mesh and the 2.5D code it was not as easy as: oh cool, get the E field and then plot_slice the vector field. You had to do a lot of things by hand. That's all I'm saying.
H
Yeah, I haven't gotten around to testing it, but I want to see what happens if we calculate it on the fly, store it as a dask delayed array, and then compute just the field you want at that time. I'm just working on that right now. Then for the DC code I went back and applied some of Joe's suggestions: forming those big sparse matrices ahead of time instead of doing the transform from a numpy array every time.
H
Then I put all those other operators, the gradients and curls, into one big delayed function, because they've got to be called in sequence anyway. That's about it; mostly for the MT I'm trying to see what the difference is if we just calculate fields on the fly.
B
Super cool. Well, I'm looking forward to chatting through that in more detail tomorrow when we do the MT meeting. I'm excited to see what you've been doing.
A
Another note: for the MT, I think we probably need to consider using an iterative solver, because that actually makes much more sense in terms of parallelization. If you can figure out a relatively good iterative solver that can solve Maxwell's equations, I think that will speed things up a lot, and you can solve much larger problems too.
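A minimal sketch of the direct-versus-iterative swap with SciPy, on a toy symmetric system standing in for the much harder, complex-valued Maxwell system (which would need a specialized preconditioner):

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import LinearOperator, cg, spilu

n = 500
# toy SPD system standing in for the discretized PDE
A = sp.diags([-1.0, 2.2, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
b = np.ones(n)

# incomplete-LU factorization used as a preconditioner: each iteration
# is a matrix-vector product plus a cheap approximate solve, which
# distributes across sources more naturally than one big factorization
ilu = spilu(A)
M = LinearOperator((n, n), matvec=ilu.solve)
x, info = cg(A, b, M=M)
print(info)   # 0 means the iteration converged
```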
D
There's a paper with custEM and PETGEM, I don't know if you have heard of those. PETGEM is from the Barcelona Supercomputing Center, custEM is from the Leibniz Institute for Applied Geophysics in Germany, and then there's SimPEG and my code, all codes that can do CSEM modelling. They kind of want to show the similarities and differences of these codes and do some 3D comparisons, mainly because, well...
D
You always compare against analytical solutions and 1D solutions, but if you have a 3D code you would also like to have some comparisons with 3D models. So that's what they try to do. It was a bit on and off for a while, and now I've really tried to get it over the edge, so I hope by next week I can send it to Raphael, the guy from custEM, and then we go from there.
D
It's the Marlim R3D model; I can send the publications. It's a realistic 3D resistivity model offshore Brazil, built by some Brazilians. They took seismic velocities and well logs and all the information they had, they actually followed some stuff from my PhD, which was cool, and created a 3D resistivity model, which they published open-source in the first publication. Then they used a Schlumberger EM code to compute 3D CSEM results and published those open-source as well.
D
That was one thing. The other thing is that with my code I've tried to integrate it more into the workflow of the consortium in which I'm involved at the moment; it's currently being integrated into joint inversions of gravity and magnetic data, that sort of thing. So I'm looking into the gradients at the moment, which actually leads to a question: how does SimPEG do gradients in 3D?
B
We keep that all analytic. For the method Jvec, for example, we've basically gone through and done the derivation of what all the matrix-vector products are: that's J times a vector and J transpose times a vector.
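The matrix-free pattern being described, implementing J v and J^T w as chains of solves and element-wise products without ever assembling J, can be sketched on a toy system where A(m) = diag(m), so the "solve" is just a division; everything here is illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
m = rng.uniform(1.0, 2.0, 6)        # model
q = rng.standard_normal(6)          # source term
P = rng.standard_normal((3, 6))     # projection to data locations

def fields(m):
    return q / m                    # u = A(m)^{-1} q with A = diag(m)

def Jvec(m, v):
    u = fields(m)
    return P @ (-(u / m) * v)       # d(P u)/dm applied to v

def Jtvec(m, w):
    u = fields(m)
    return -(u / m) * (P.T @ w)     # adjoint of the same chain

# finite-difference check of Jvec against d(m) = P fields(m)
v = rng.standard_normal(6)
h = 1e-6
fd = (P @ fields(m + h * v) - P @ fields(m)) / h
print(np.allclose(Jvec(m, v), fd, atol=1e-4))   # True
```

The adjoint pair satisfies w · Jvec(v) = v · Jtvec(w), which is the usual way these derivations are tested.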
B
Perfect, okay. Well, in terms of my updates, I've been working on SEG abstracts, in addition to the Myanmar ones. One is for machine learning, on classifying UXO from TEM data, and that one's basically done. I think I dropped the link in last time; I can drop it in the notes again, but if anyone reads through it and has any feedback or ideas, this is one that I would like to eventually turn into a paper.
D
About this: you might want to ping the master's student who joined SimPEG a couple of weeks ago. He's here with us, working with Fugro, and he's looking into this topic, not the machine learning but just the coding. Maybe he will be interested to read it and give some feedback.
B
Yeah, and then tomorrow we'll do a chat about MT and natural sources. I think there might be a couple of people from Berkeley here, just dropping in to say hello, who are involved on the Jupyter side of things, just to get a sense of what we're working on. We can use that as a chance to start sketching out some development plans and figure out how to divide and conquer to make some progress there.
D
We just pick one inline and one broadside line, and then you don't need the whole model, because otherwise you limit your survey massively. I don't think you would do a CSEM survey of that size, at least not in the beginning. But the likes of EMGS, they have quite massive models, indeed in the dozens of millions of cells I would think, but then also mostly with iterative solvers, not direct ones.