From YouTube: SimPEG meeting March 11
Description
No description was provided for this meeting.
F
On my local computer I can't reproduce it at all, and then that kind of got me looking at Azure Pipelines today, and it doesn't happen there either. So I've been trying to locally test some Azure Pipelines stuff to get that working and see how it plays out. I was messing around with that a little bit beforehand and it doesn't do it there either. So I don't know what's up with the Travis build and why it's stalling on something. Okay.
F
Outside of that, Seogi, we got some work done: we got some tree mesh generation functions into the DC IO code that you and I were working on. That's part of that speed-up, and then I implemented that little "miniaturize" trick, which kind of reduces the survey size to cut the memory requirement considerably for the 2D DC code. So that's what I've been working on, what I've been messing with.
F
Essentially, what happens is that, with those kinds of survey acquisition parameters, there is a source dipole for every receiver as it moves along: the source dipole is changed and the receiver moves along with it. So basically it reduces all of that down to the number of unique source electrodes, and then it can reconstruct the data from the runs done on the reduced-size survey. So it replaces a bunch of dipoles with poles at each unique source electrode location.
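A minimal numpy sketch of the idea just described, under assumptions of my own (toy electrode indices, and random numbers standing in for the PDE solves); this is not the actual SimPEG implementation, just the bookkeeping: solve once per unique current electrode, then rebuild every dipole-dipole datum by superposition.

import numpy as np

# Toy survey: each row is (A, B, M, N) electrode indices for one datum;
# A, B form the source dipole and M, N the receiver dipole.
abmn = np.array([[0, 1, 2, 3],
                 [1, 2, 3, 4],
                 [0, 2, 3, 4]])

# The "mini survey": one pole source per unique source electrode.
unique_src = np.unique(abmn[:, :2])
src_index = {k: i for i, k in enumerate(unique_src)}

# phi[i, :] stands in for the potential everywhere due to a unit pole at
# unique_src[i]; in practice each row is one PDE solve, here it is random.
rng = np.random.default_rng(0)
phi = rng.standard_normal((unique_src.size, abmn.max() + 1))

# Superposition: V_MN^AB = (phi_A[M] - phi_A[N]) - (phi_B[M] - phi_B[N])
def datum(a, b, m, n):
    pa, pb = phi[src_index[a]], phi[src_index[b]]
    return (pa[m] - pa[n]) - (pb[m] - pb[n])

d_pred = np.array([datum(*row) for row in abmn])
print(d_pred)  # one reconstructed value per original dipole-dipole datum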
A
Is there any check to make sure that the boundary conditions are still being satisfied? The reason I'm asking is that, years and years ago when we first started doing this, if you actually modelled, like, a dipole system, then you could get by with a bit of a relaxed boundary condition. If you modelled a pole, then you needed to go out farther, and even if you subtract the results from two poles, you might lose accuracy. So, so...
F
From what I'm seeing, there's been a little bit of accuracy lost. It's still been within the tolerances of numerical precision for the surveys that I've been testing it against, and there are some tests built in there, added to the test suite, to ensure that it's still handling it okay. Okay, and right now it's just an option you can enable; it doesn't do it by default.
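As a rough sketch of what enabling that option could look like (the keyword name miniaturize, the class and argument names, and the tiny survey below are my assumptions for illustration, not taken from the meeting):

import numpy as np
import discretize
from SimPEG import maps
from SimPEG.electromagnetics.static import resistivity as dc

# Small 2D mesh with its top at z = 0
mesh = discretize.TensorMesh([np.ones(64) * 5.0, np.ones(32) * 5.0], x0="CN")

# A tiny dipole-dipole line on the surface
x = np.arange(-60.0, 61.0, 20.0)
sources = []
for i in range(len(x) - 3):
    rx = dc.receivers.Dipole(np.c_[x[i + 2], 0.0], np.c_[x[i + 3], 0.0])
    sources.append(dc.sources.Dipole([rx], np.r_[x[i], 0.0], np.r_[x[i + 1], 0.0]))
survey = dc.Survey(sources)

sim = dc.Simulation2DNodal(
    mesh,
    survey=survey,
    sigmaMap=maps.IdentityMap(mesh),
    miniaturize=True,  # assumed name of the off-by-default survey-reduction option
)
d = sim.dpred(np.full(mesh.nC, 1e-2))  # predicted voltages for a 0.01 S/m halfspace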
D
We have the mixed boundary condition for the cell-centered option, so if you actually look at the app, there's an if statement: if the survey type is a pole source, we force it to use the cell-centered problem. So on the app side that shouldn't be a problem, but yeah, you're right: if you're directly using pole-pole, you need to set the right boundary condition, yeah.
F
And that's something I've noticed is a little bit different: it was the Dirichlet zero boundary condition that had the largest of the kind of small errors accumulating, as opposed to the zero Neumann boundary condition. But for the tests I ran it on, it was sufficiently able to reproduce the original surveys.
A
So when you do the mixed boundary condition, are you just kind of setting that boundary condition as if you had a pole right in the center of the volume of interest, and then you specify what that ratio of Dirichlet to Neumann is on the outside? Or are you doing something that's more dynamic?
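For reference (my gloss, not something stated in the meeting), the mixed or Robin boundary condition commonly used in DC resistivity codes, following Dey and Morrison (1979), ties the Dirichlet and Neumann parts together through the asymptotic 1/r decay away from the current source:

$$\frac{\partial \phi}{\partial n} + \frac{\cos\theta}{r}\,\phi = 0 \quad \text{on } \partial\Omega, \qquad \cos\theta = \frac{\mathbf{r}\cdot\hat{\mathbf{n}}}{\lVert\mathbf{r}\rVert},$$

where $\mathbf{r}$ runs from the source electrode to the boundary point and $\hat{\mathbf{n}}$ is the outward normal, so the weighting depends on where the source sits rather than being a single fixed ratio.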
D
Okay, I think, particularly, yeah, I was helping Joanna to boost up that DC code. Joe said it's very simple, but it's actually complicated, and the speed-up is huge. So yeah, I really want to emphasize that, because previously, when they tried to really run the code, they were actually having, like, serious memory problems.
E
So that they can get the results of the test, Doug: if you go to Slack, to the shared conversation between you, myself, Joe and Seogi, you can actually see the comparison for the original tensor inversion, how long it took and how much RAM it took, and you can see the same thing for me running it last night.
D
Yes, that DC, the 2D DC data: their usual spacing is about 500 to 600 meters, and then some of the n-spacings are quite large, but they were basically putting all the data in and trying to fit it. Usually those large n-spacing data are, I think... it doesn't look noisy, but if you...
C
You pick a resistivity that you're assuming is a reasonable background value (you can just look at your apparent resistivities to guess a number) and then convert back from apparent resistivity to voltages. That will give you kind of a floor value for all your n-spacings, because now you're driven by what the resistivity should be, rather than just arbitrarily picking a floor value.
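A short sketch of that floor calculation under my own assumptions (an idealized collinear surface dipole-dipole geometry, a made-up background resistivity and injection current); the point is only that the floor falls naturally with n-spacing because the geometric factor grows.

import numpy as np

# Geometric factor for electrodes on a halfspace surface (collinear x-positions):
#   rho_a = K * V / I  with  K = 2*pi / (1/AM - 1/AN - 1/BM + 1/BN)
def geometric_factor(a, b, m, n):
    return 2 * np.pi / (1 / abs(m - a) - 1 / abs(n - a)
                        - 1 / abs(m - b) + 1 / abs(n - b))

rho_background = 100.0  # ohm-m, the assumed reasonable background value
current = 1.0           # A, assumed injection current

# A few dipole-dipole data at increasing n-spacing (A, B, M, N x-positions in meters)
abmn = np.array([[0.0, 10.0, 20.0, 30.0],
                 [0.0, 10.0, 30.0, 40.0],
                 [0.0, 10.0, 60.0, 70.0]])

K = np.array([geometric_factor(*row) for row in abmn])
v_floor = rho_background * current / np.abs(K)  # expected background voltage per datum
print(v_floor)  # larger n-spacing -> smaller expected voltage -> smaller floor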
D
So I kind of still prefer the voltage, but my point was a little bit different: it's the small voltages that they are measuring. If you have a large n-spacing, your voltage will be small, and it looks smooth on your pseudosection, but it could actually be wrong; it could just be noisy. So I don't know. You need to take a careful look at those large n-spacing data, or basically remove those large n-spacings.
D
Could be, could be. I don't know whether that actually makes it hard to fit the data; that's my guess, but I'm not sure. I mean, if you're having a hard time fitting the data, or if your inversion tries to put in a lot of structure, for me that's the signature that there's some inconsistency in your data, probably some...
G
Not too much. I've been mostly stuck on MT data processing here, but while I'm waiting for some of that to process I've got a pretty good solution, or an interim solution, for the MT: I've cut my time in half, which is kind of nice. And then I've been playing around with that whole idea with the DCIP, assigning different floors at different n-spacings, and there's actually really good stuff happening there. Definitely getting a lot better fits playing with that.
G
Thanks, Augie, for opening that up to me; that has really helped a lot. And then I'm just playing between inverting for chargeability and just inverting for voltages. I have a data set where it's having a hard time fitting the chargeabilities but no problem fitting the voltages. But yeah, other than that... oh.
B
Did you push that? Yeah.
E
Nothing too much. I mean, just doing some testing on the new stuff for those notebooks, hoping to get the tutorials-related stuff merged in as soon as possible, just so that new functionality doesn't get added which breaks the tutorials, which means I've got to go fix them again; it's just kind of an ongoing thing I have to manage.
B
Devon, I can send you this link if you haven't seen them before. These will be slightly out of date, but we did put quite an effort into the examples themselves. In the simpeg GitHub organization there's one called EM notebooks, and these were actually tutorials that we put together for the... so I just dropped the link into the notes, and I can send that to you on Slack afterwards, but yeah.
B
If you want to take any of those, that would be great. And I believe that Dieter made a pull request for an example that talks about the time stepping in the time domain. So if anyone's willing to take a quick look at that (Devon, I don't know if you'd be interested in taking a quick look), that would be great. It's directly against master, which is just fine; we can upstream it. Because he's familiar with the code on master, we didn't want to impose that on him.
E
Instead, make the tutorial sort of a workflow that somebody would actually be doing in practice. If you want an inversion workflow, you generally start with topography and a data file, and if we're doing a tutorial on inversion, you should probably just start with topography and a data file. It could take some extra effort to parse it out, but it would be nice to know that there's a problem of a reasonable size that I could solve, yeah.
A
Yeah, that's actually coming along. We had a large Zoom meeting a couple of days ago where, for the first time, everybody kind of got together and was communicating, so I think, especially with the new code, that could really help speed things along. And I think it would also benefit overall, a little bit, from the point of view of how things are structured and named, so that we can go ahead and reproduce the examples.
A
If we can accomplish that, then I think we're in really good shape. So right now, for those of you who haven't really been involved: we've got about five different surveys that we're going to try to make into case histories, and in some cases they're actually going to grow, and we're going to see, you know, whether there's some validation there or not.
A
Yeah, we tried a couple of other times to acquire the IP data and weren't successful, for a couple of reasons. One was because Max was having trouble getting the instrument reprogrammed to actually get IP data, and then, once they got past that, there was a question of time: it was taking quite a bit of extra time to try to get it. So I don't think we actually have any IP data from one of their field test sites.
D
Yeah, the reason why I asked: when I was reading the case history, the matrix is basically a conductor, and the conductor is water; like, a conductor of 30 to 50, or 30 to 200, ohm-meters is basically water. But it's non-unique, because it could be clay as well. So we need some... I don't know. If you have some prior knowledge that this region doesn't have that much clay, then that's probably fine, but yeah.
F
So essentially, I'm reading in one of the data files from the surveys, and then I'm adding a few more other types of receivers to it. Right now it's all essentially the dipole-dipole type of acquisition, dipole transmitters and dipole receivers, and I just added a few other things to double-check that it was working for other combinations of poles and dipoles. Here I'm just generating a mesh from that; this one has 60,000 cells.
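As a rough illustration of that kind of survey-driven mesh generation (this is not the IO utility mentioned earlier; cell sizes, refinement levels, and electrode positions are made up), one way to do it with discretize is to refine a tree mesh around the electrode locations:

import numpy as np
import discretize
from discretize.utils import refine_tree_xyz

dh = 5.0       # base cell width (m)
n_base = 128   # base cells per axis; must be a power of two for a tree mesh
h = [np.ones(n_base) * dh, np.ones(n_base) * dh]
mesh = discretize.TreeMesh(h, x0="CN")  # centered in x, top of the mesh at z = 0

xs = np.arange(-200.0, 201.0, 25.0)                 # electrode x-positions
electrodes = np.c_[xs, np.zeros(len(xs))]           # all at the surface, z = 0
mesh = refine_tree_xyz(mesh, electrodes, octree_levels=[4, 4],
                       method="radial", finalize=True)
print(mesh.nC)  # number of cells after refining around the electrodes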
F
So the option that I added is this "miniaturize" flag over here, off by default, but it recognizes when the number of sources is greater than the number of unique source electrodes. Doing this calculation on the 60,000-cell mesh takes about five seconds. And if I look at my memory use (you guys can't see it; it takes a little bit longer), that time it went up to about one gigabyte, like 1.25. And then if I try to compute it without miniaturizing it...
F
It's essentially going to blow up my computer. It's going to start moving into swap storage: the memory usage goes up well over 20 gigabytes on this, and it just slows down and slows down and slows down. I still haven't let this run to completion. I'm just saying, my computer has 16 gigabytes, and it ends up using a lot of swap memory, which really slows down the computation. So it's almost...
F
Yeah, and this one's still going, and this is only with 4,000, so that's just kind of the idea. This is how much the speed-up is in just predicting the data, so things like the line searches are going to take significantly less time. There's about a factor of 20 there, which is about what I would expect: your fifteen hundred solves come down to about 72 solves. The implicit J-times-vector operations have the same sort of speed-up, J-transpose as well, and the forward.
F
If
there's
not
really
any
way,
we
can
get
around
it,
we
can
speed
up
on
the
constructing
the
sensitivity.
Matrix
is
a
little
bit
it's
not
as
drastic
and
that's
just
because
of
the
fact
that,
no
matter
what
we're
gonna
have
to
do,
at
least
the
number
of
data
solves
on
the
system
to
build
the
sensitivity
matrix
well
before
each
one
of
those
solves.
We
were
calling
the
solver
1500
times
and
now
I'm,
calling
it
72
times
and
able
to
take
advantage
of
parallelizing
over
the
right
hand.
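A small sketch of why that helps, under my own assumptions (a random stand-in system matrix and right-hand sides, and scipy's SuperLU rather than whatever solver is actually used here): factor the system once and back-substitute for a whole block of right-hand sides, one per unique pole source, instead of once per original dipole source.

import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import splu

n = 2000
A = sp.rand(n, n, density=5e-3, format="csc", random_state=0)
A = (A + A.T + 10.0 * sp.eye(n)).tocsc()  # stand-in for the discretized DC operator

rhs_dipoles = np.random.rand(n, 1500)     # one right-hand side per original source
rhs_poles = np.random.rand(n, 72)         # one per unique pole electrode

lu = splu(A)                 # factor once, reuse for every right-hand side
fields = lu.solve(rhs_poles) # back-substitution over all 72 columns at once
print(fields.shape)          # (2000, 72); the 1500-column block is never needed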
D
You're going to have a bunch of dipole combinations, but when you're computing the fields you basically have a hundred fields, yep, yep. And what I was actually curious about: previously, when you're projecting to your data space, you just need one field, right, because for a single source you get one field and you generate one datum. Now you have multiple fields, at least two fields, to generate... yeah, how do you, like, get a datum?
F
I have kind of the logic behind it here. So essentially, I find the unique source electrode locations and then I go through each of them, and what I keep track of is that each survey, or each new source, will have a certain set of receivers. So I look and find all the receivers that used that single pole source, and then they have a number of data, and then I can just combine... I just do the data operation like I...
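Roughly, the bookkeeping being described might look like this (a toy numpy sketch with made-up electrode indices, not the SimPEG code): group the original data by unique source electrode, so each unique pole gets one solve plus a list of the data it contributes to and with what sign.

import numpy as np

abmn = np.array([[0, 1, 2, 3],
                 [1, 2, 3, 4],
                 [0, 2, 3, 4]])            # (A, B, M, N) per original datum

src_electrodes = abmn[:, :2].ravel()        # A then B for every datum
unique_src, inverse = np.unique(src_electrodes, return_inverse=True)
signs = np.tile([1.0, -1.0], len(abmn))     # +phi_A - phi_B for each datum
data_index = np.repeat(np.arange(len(abmn)), 2)

for k, electrode in enumerate(unique_src):
    members = inverse == k
    print(f"pole at electrode {electrode}: contributes to data {data_index[members]} "
          f"with signs {signs[members]}")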
F
There's nothing you can do about that; no matter what, as long as you have things that are reusing the same pole, it'll take advantage of that. But I mean, the most is a pole-pole survey where you monitor, you know, the current at every electrode: where you inject at every electrode and you monitor the voltage at every electrode. That's the most data you can collect from a survey, right, so everything else is just different combinations of that. Right, right.
F
That's it. I mean, I feel like that's another level that could be added on top of this; it's just another level of logic, and this level resulted in the least amount of changes to the actual code. After doing this logic ahead of time, there was very little I had to do to modify the prediction and the sensitivity matrices, yeah.
F
So it basically monitors which data indices it needs to add and subtract; it checks that when building the survey. So the fields: nothing had to change there, right, that's the same. The prediction operation: basically I do a little check here (okay, if I have a mini survey I'll use that, otherwise I'll use the full survey), then this was the same, and then it essentially does the operation that projects the data from the mini survey to the full survey. And that's basically it: I just have to use that same operation to do the Jvec, so again this is all the same and then afterwards it does that, so really small modifications. That was the goal of messing around with it like that; everything, I guess, is all very similar. And then the transpose operation is done in Jtvec, with this little trick to return it on the full matrix.
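As a sketch of the projection idea in that last part (illustrative only; made-up sizes and index pairs, not the actual SimPEG code): the mini-survey data are expanded back to the full survey with a sparse +1/-1 matrix, and its transpose is the only extra piece Jtvec needs.

import numpy as np
import scipy.sparse as sp

# d_mini[i] is the M-N voltage for the i-th (pole source, receiver) pair.
n_mini, n_full = 5, 3
rows, cols, vals = [], [], []
# (full datum index, mini index for pole A, mini index for pole B)
pairs = [(0, 0, 3), (1, 1, 4), (2, 2, 4)]
for i_full, i_a, i_b in pairs:
    rows += [i_full, i_full]
    cols += [i_a, i_b]
    vals += [1.0, -1.0]
P = sp.csr_matrix((vals, (rows, cols)), shape=(n_full, n_mini))

d_mini = np.arange(1.0, 6.0)  # stand-in pole-source data
d_full = P @ d_mini           # dpred on the reduced survey, expanded back out
v = np.ones(n_full)
jt_contrib = P.T @ v          # the "transpose trick" used inside Jtvec
print(d_full, jt_contrib)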
B
But I believe she got the best results using the one where, basically, we stick in a one and then have ten zeros, then a one, then ten zeros, so you basically rotate the indices where you have entries. And in that case, yeah, with the handful of examples that I played around with, I don't remember offhand how many I had to use...