From YouTube: SimPEG Meeting February 24th

Description
Weekly SimPEG Meeting from February 24th, 2021. Includes discussions on next steps for merging the EM1D subpackage into SimPEG, and some interpolation work by Seogi Kang.
A
So, welcome to the meeting today. This week Devin or Dieter has to leave early; I'm not sure how quickly he has to leave, but we can get into some quick reports soon.
A
I'd just like to say that it's been another two weeks, so we can have a Friday social hour slash coding session if people are up for it. Stop by, say hi, and hack away at some things. I'll be going through some more pull requests and such, maybe having a drink if you're up for it. I've been doing them around four o'clock, so I'll try to keep that again.
B
Yeah, I don't have to leave too early, 8-ish I think. I don't actually have a lot to say. I was in the field yesterday, and I think it was the first time for me in years now, so that was kind of nice, although I was helping with seismic data, so not my domain. That didn't measure EM; on Monday we measured with a DUALEM. Has anyone worked with a DUALEM?
B
Some of you have been at the talk with Sean Walker two weeks ago or so. There's another one this Friday, with Matteo Ravasi talking about PyLops and CuPy to do stuff on GPUs; I think it's 4 p.m. UTC, so that should work for you if you're interested. And then Steve just told me that a week from Friday he will do one with Wesley Banfield and himself, and maybe myself.
B
I had a rendezvous about these recent JavaScript interactive figures that they implemented with Curvenote, some chatting about how Wesley achieves things and what Steve has in mind to do with Curvenote. I guess I don't know more. So I just added into the meeting notes these two figures I quickly tried, which can now be linked to any Curvenote notebook, and I don't think they have figured out yet how you can have several sliders or dropdowns with an easy callback.
B
The first one does actually compute it while you shift the slider, and in the second one I just pre-computed it for the 100-resistivity set I have, and then it goes in as a JavaScript callback into the figure, so the result is stored in the figure. I guess if you would do that for a lot of different combinations, the figure would become very heavy, but so far it's not too bad.
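The pre-computed-callback idea B describes can be sketched as follows: evaluate the response once per slider value, serialize the results, and embed them in the figure so the JavaScript callback only does a lookup. The decay formula and function name here are invented placeholders, not the actual figure code.

```python
import json
import numpy as np

def precompute_curves(resistivities, times):
    """Pre-compute one curve per resistivity so an interactive figure
    only needs a dictionary lookup, not a live forward computation.

    The exponential "response" below is a stand-in for whatever
    forward model the real figure evaluated."""
    curves = {}
    for rho in resistivities:
        curves[f"{rho:g}"] = (rho * np.exp(-times / rho)).tolist()
    return curves

times = np.linspace(0.1, 1.0, 5)
curves = precompute_curves([1.0, 10.0, 100.0], times)
payload = json.dumps(curves)  # this string is what gets embedded for the JS callback
```

The trade-off B mentions is visible here: the payload grows linearly with the number of pre-computed combinations, which is why the figure gets heavy.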
B
So yeah, I think it's very early, but I'm sure Steve will polish these things a bit and hopefully make some tutorials. I think it could be very powerful, particularly for students, because they don't even have to start up their own Jupyter or Binder or anything. It doesn't require anything.
D
Does it change the state for everybody there, or is it just when you move the slider? You guys are not seeing that I'm moving the slider, right?
C
Sure, so I talked with Thibaut yesterday and we have a bit of a game plan to maybe clean up or better organize the static utils. Some things, like plotting pseudo-sections or reading and writing data to UBC-GIF format, were sort of not done with, say, final usage in mind, so we have a couple of redundant functions.
C
...stuff is finished before I finalize that work, and then I think we should try to come up with a plan for the next step of the EM1D merge. I think we're all kind of working on some different stuff right now, but I'd like to come back to it and really iron out the step-by-step process that we need to complete to get that into SimPEG.
A
Okay, let's do that today after we get through the quick reports, if that works. All right, yeah, we can plan out the next steps then, after all the quick reports.
A
Okay, well, I don't think we have anything else on the agenda for today, so if we've got some time we might as well start. I can go next, then. If you noticed on the YouTube channel, I've gotten all the previous meetings up on YouTube; they're all there if you went back and looked through them. It should be a little bit more streamlined getting the next ones up.
A
I've
been
going.
Also,
I've
been
going
through
a
bunch
of
the
pull
requests
on
my
end
again,
just
double
checking
them
trying
to
get
them
in
the
last
stages
for
getting
in
people
now
that
you're
here
do
you
know
if
there's
anything
other
changes
that
you
see
that
should
be
made
on
your
pgi
branch.
E
No, it's been quite stable since, like, early December, and I'm not touching it anymore, so that's the stage it should be at. It would be good to have a meeting for all the people involved in joint inversion and see how to advance things.
A
So we've already got the updated sensitivity weighting, and we updated the beta estimators in there, right? That's already been merged in.
E
There are also a lot of things about just the naming convention and the organization, all that type of thing, that need to be thought out in advance, so maybe, as I say, we should do a meeting all together at some point.
A
Other than that, I've gone through some PRs.
A
The C++ refine functions for the tree meshes that I've been working on: I've been implementing them and putting them back into the mesh, like the refine_tree_xyz thing that Dom had worked on before. The function that I have changed the results slightly, because previously the way Dom had it checking was that you just took a cell...
A
If you had a box, it just checked whether the center of the cell was in the box before refining it. Now I have it check, if you give it a box, whether the cell overlaps that box at any point; if so, it will refine it. That makes it a little bit easier to refine that way, but it changes the output slightly, especially at the edges.
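The behavior change A describes can be illustrated with two small axis-aligned tests. These helper functions are illustrative only; the real logic lives in discretize's C++ refine functions.

```python
import numpy as np

def center_in_box(cell_min, cell_max, box_min, box_max):
    """Old behavior: refine only if the cell's center lies inside the box."""
    center = 0.5 * (np.asarray(cell_min) + np.asarray(cell_max))
    return bool(np.all(center >= box_min) and np.all(center <= box_max))

def cell_overlaps_box(cell_min, cell_max, box_min, box_max):
    """New behavior: refine if the cell intersects the box at all
    (standard axis-aligned bounding-box overlap test)."""
    return bool(
        np.all(np.asarray(cell_max) >= np.asarray(box_min))
        and np.all(np.asarray(cell_min) <= np.asarray(box_max))
    )

# A cell hanging over the box edge: its center is outside the box, but it
# overlaps it, so only the new test marks it for refinement. This is why
# the output differs at the edges of the refinement region.
cell_lo, cell_hi = [0.9, 0.9], [1.3, 1.3]
box_lo, box_hi = [0.0, 0.0], [1.0, 1.0]
old = center_in_box(cell_lo, cell_hi, box_lo, box_hi)       # False
new = cell_overlaps_box(cell_lo, cell_hi, box_lo, box_hi)   # True
```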
F
Just a quick one: I'll be giving a talk, and I think Joe was on the abstract; I actually forget who all was on it. I know Dom saw it, and a few others. I'll be giving a talk on Tuesday at the SIAM session on open-source software, so just to flag that. I'll send around some slides, hopefully by early Monday at the latest, so that folks can take a look and provide some feedback. Hopefully it'll be a good session.
F
The session that I'm in is from one till three p.m. Mountain, so I guess for you that's two till four. Okay, thank you.
A
Okay, then, I think we can go on to talking about the EM1D plan: getting a plan of action to bring it in, and the next steps.
C
Yeah, I took a look at that. That's good.
C
For me, the first step is for the organization of the output data vector to be the same for the 1D and the 3D, and that's going to require some base changes to the simulation class.
C
But once that's done, all of the other stuff that you're proposing, like using the receiver class structure from 3D and then just adapting that for the 1D, I think all of that becomes very easy afterwards. But I would say that's the first step: to get the data structure coming out to be the same, the ordering to be the same.
A
You know what I mean: look through all the sources and receivers, figure out what the actual offsets and frequencies you need are, then call the function with the proper ordering, and then go back and put them on the sources and receivers. I think that'll probably be the easiest way to do it, and also the most efficient way.
C
Yeah, I'm sort of just looking at what's getting fed in and out of these functions. It kind of seems like, in the end, you're feeding in almost a list of offsets and frequencies, so one line is a particular offset and frequency. I think you just need to feed it the right thing, and it gives you back an array where each row is a unique offset and frequency.
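The gather-compute-scatter pattern A and C are describing can be sketched like this: collect the (offset, frequency) pair for every receiver, evaluate the kernel once per unique pair, then scatter the results back into the survey's data ordering. The flattened survey and the kernel below are made up for illustration; they are not the EM1D API.

```python
import numpy as np

# Each survey entry is a (source, receiver) pair flattened to its
# (offset, frequency); the duplicate pair should be computed only once.
survey_pairs = [
    (10.0, 900.0),
    (10.0, 7200.0),
    (10.0, 900.0),
]

pairs = np.array(survey_pairs)
unique_pairs, inverse = np.unique(pairs, axis=0, return_inverse=True)

def kernel(offset, frequency):
    # stand-in for one 1D forward evaluation at a single offset/frequency
    return offset * frequency

# evaluate once per unique row, then map back to the original ordering
unique_vals = np.array([kernel(o, f) for o, f in unique_pairs])
data = unique_vals[np.ravel(inverse)]
```

This keeps the efficiency of computing each unique row once while still returning the data vector in the source/receiver ordering the simulation expects.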
G
If you've got multiple offsets, the current code is designed to optimize that process. But the 1D is much, much faster than the 3D code anyway, so at this point we can probably forget about the efficiency; let's just follow the 3D code, and maybe later we can revisit that. If the goal is actually getting the same structure, let's forget about that part and just follow whatever the 3D structure is.
G
It's fine; even doing two simulations for real and imaginary is kind of fine, but that will sacrifice some computation cost.
F
That's probably something we should even think about for the 3D code as well, because there is a bit of inefficiency right now: we have the real and imaginary receivers, and we're forming projection matrices for both of those when we don't need independent ones. I'm not suggesting we solve that with this, but there's a similar thought process we need to put in place in the 3D code.
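The redundancy F points out can be shown in a few lines: instead of building one projection matrix for the "real" receiver and an identical one for the "imaginary" receiver, project the complex field once and split afterwards. The matrix and field vector here are made-up stand-ins, not SimPEG objects.

```python
import numpy as np

rng = np.random.default_rng(0)
fields = rng.standard_normal(5) + 1j * rng.standard_normal(5)  # field on mesh edges
P = np.eye(2, 5)  # one interpolation matrix from edges to two receiver locations

projected = P @ fields   # single complex projection
d_real = projected.real  # data the "real" receiver would return
d_imag = projected.imag  # data the "imaginary" receiver would return

# Equivalent to forming and applying two separate projection matrices,
# at half the matrix construction and storage:
assert np.allclose(d_real, P @ fields.real)
assert np.allclose(d_imag, P @ fields.imag)
```

Because projection is linear with a real-valued matrix, taking real and imaginary parts commutes with applying `P`, which is what makes the shared matrix safe.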
C
Yeah, because it sounds like what you have right now is this thing where there are projection matrices that are a property of the receiver class, whereas what I've been used to with the UBC-GIF codes is that they just make a global projection matrix from the solution of the fields on edges or nodes, or wherever it is, that projects straight to the data.
C
There's some dangerous stuff with that, and I ran into it before when trying to reuse certain receivers for certain situations. One situation I ran into was the fact that the projection has a dependence on the frequency: if you solve for the electric field on edges and then you want to get the magnetic field, there's an omega in that, and it wasn't carried through properly.
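The omega dependence C is referring to comes from Faraday's law; with an $e^{i\omega t}$ time convention (sign conventions may differ from SimPEG's internals, this is only to make the frequency factor explicit):

```latex
\nabla \times \vec{E} = -\,i\omega\mu\,\vec{H}
\quad\Longrightarrow\quad
\vec{H} = -\frac{1}{i\omega\mu}\,\nabla \times \vec{E}
```

So a projection from the electric field on edges to magnetic-field data necessarily carries a $1/\omega$ factor, and a projection matrix cached on a receiver is only valid for the frequency it was built for.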
D
Yeah, and I can tell you that for tiling it makes things very complicated, because the receivers are basically defined as being a field thing instead of being a spatial thing, and that means the locations can be replicated a bunch of times.
C
Well, I do like Seogi's suggestion not to be obsessed with maximum efficiency.
C
I really like the idea of getting something that's working properly, getting it merged into SimPEG, and being able to provide this new tool that we really wanted, and then definitely going back and trying to improve what's happening under the hood, because that's not going to change how the user interfaces with the code. We definitely would want to improve it later, but I like the idea of just trying to get something that works properly in as soon as possible. Yeah.
A
...where the known-systems thing is, or whether it's something that's worthwhile to keep as part of SimPEG or to move somewhere else. If you guys could give a quick overview of what is in that file and what it's doing, what its purpose is, that would help me a lot to understand, because I just see it as something in there that's not being used anywhere.
C
No, it doesn't fit right now. Back when the 1D stuff was in its own repository, everything was part of a survey class that was basically a way to build surveys with known geometries.
C
So if you had, say, a VTEM system from 2016, you'd have the loop radius, you'd have the waveform, you'd have a sampling frequency; you'd have everything that you needed for how that system works, and you could just say: make me a survey where these are the sounding locations. But when transferring it over to what we have now, those pieces of information don't really sit in the same place. I didn't really want to delete it.
C
That's
I
thought
about
doing
that
and
I
didn't
I
didn't
get
to
it,
but
yeah.
I
took
all
the
known
waveforms
and
put
those
in
that
that
waveforms
dot,
pi
and
then
yeah.
I
thought
this
known
systems.
Well,
I'm
just
making
a
a
source
which
is
going
to
be
associated
with
the
receiver
and
yeah.
The
source
item
could
probably
carry
everything
I
just
didn't
get
around
to
doing
it.
A
Because
it
seems
like
it
could
be
useful
to
me
like
we
extended
like
the
current
em
sources
to
have
like.
Oh,
this
is
a
v10
source,
or
this
is
a
you
know.
This
is
like
common
sources
like
that,
so
it
seems
like
it
could
be
useful
and
applicable
to
you
know
to
make
use
of
it
and
the
time
release
as
well.
C
Oh, definitely, I think it would be useful; people are probably going to want to model with those systems, and it would be easy to make a tutorial for it. I didn't want to get rid of it, but it could be put in sort of a hidden place until we're ready to deal with it.
C
One consideration is that, because we really want the merged 1D and 3D stuff to live in the same place, if we add these survey or source classes for 1D, we also have to ensure that they work for the 3D.
D
My gut feeling is that they should almost live in a different repo, and that we should just be pulling basically JSON files for the different things, and we know how to reconstruct the sources. I don't see it in the scope of SimPEG itself, but that's my opinion. It's very useful, we should have it, but I don't see it inside SimPEG.
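D's suggestion amounts to keeping known-system definitions as data rather than code. A minimal sketch of what such a loader could look like, assuming a JSON schema invented here (the field names and example system are not from any real repository):

```python
import json

# Hypothetical system definition, the kind of file a separate repo could host.
system_json = """
{
  "name": "example-helicopter-system-2016",
  "loop_radius": 13.0,
  "waveform": {"type": "ramp_off", "ramp_start": 0.0, "ramp_end": 1e-3},
  "time_channels": [1e-4, 3e-4, 1e-3]
}
"""

def load_system(text):
    """Parse and validate a system definition. A real loader would then
    build the corresponding source/waveform objects from these fields."""
    system = json.loads(text)
    required = {"name", "loop_radius", "waveform", "time_channels"}
    missing = required - system.keys()
    if missing:
        raise ValueError(f"system definition missing fields: {sorted(missing)}")
    return system

system = load_system(system_json)
```

Keeping the definitions as JSON means new systems can be added or corrected without touching SimPEG releases, which is the versioning concern behind the separate-repo idea.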
D
Yeah, and from experience, pre-recorded waveforms change every single survey, so there's not really much point in pre-recording them, but we should have utils to be able to re-form them or change them easily. I think that would be really nice.
C
Yeah, we do have that. There are a couple of specific years that we have waveforms for, but I did put in the ability to alter or make whatever kind of VTEM-style waveform or ramp-off-style waveform, so you can play around with that, and it's pretty well documented in tutorials. I guess one useful thing that we should, we can...
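A parameterized ramp-off like the ones C mentions can be sketched in a few lines. The shape used here (constant on-time, linear ramp to zero) is a common simplification for illustration, not SimPEG's exact waveform implementation.

```python
import numpy as np

def ramp_off_waveform(t, ramp_start, ramp_end):
    """Current amplitude: 1 before ramp_start, linear ramp to 0 at ramp_end,
    0 afterwards. Parameterizing the ramp times is what lets the same
    function describe each survey's slightly different waveform."""
    t = np.asarray(t, dtype=float)
    ramp = (ramp_end - t) / (ramp_end - ramp_start)
    return np.clip(ramp, 0.0, 1.0)

t = np.array([-1e-3, 0.0, 0.5e-3, 1e-3, 2e-3])
amps = ramp_off_waveform(t, ramp_start=0.0, ramp_end=1e-3)
# on-time -> 1.0, mid-ramp -> 0.5, off-time -> 0.0
```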
G
I've got a couple of ideas to make it robust and, oh my god, my internet is very slow; maybe I'm frozen. I guess we can still...
G
Yeah, oh, perfect. My screen actually froze; you guys were not moving.
G
Yes, so one thing that I have done for my local 3D code is just computing the step response. You do a fine time discretization that is specially designed for airborne EM specifically, and that actually worked well for most of the airborne cases. Then, once you get the step response, as we do in the 1D code, we just convolve it with whatever waveform is provided. That actually makes the inverse problem quite expensive, but in terms of forward modeling...
G
Then
you
don't
have
to
worry
about
that
time,
discretization
that
much
so
that
could
be
useful
and
like
then
users
don't
really
need
to
define
what
are
the
time
steps
that
then
like
then
you
can
just
provide
the
time
channels,
then
yeah.
So
yeah,
that's
one.
One
way,
I'm
not
sure,
that's
a
great
way
to
go,
but
if
you
want
to.
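The step-response approach G describes can be sketched numerically: compute the step-off response once on a fine time grid, then obtain the response to any transmitter waveform by convolving the impulse response (the time derivative of the step response) with that waveform. The exponential step response below is a placeholder for a real 1D or 3D computation, and the sign convention is illustrative.

```python
import numpy as np

dt = 1e-5
t = np.arange(0.0, 5e-3, dt)
step = np.exp(-t / 1e-3)          # placeholder step-off response
impulse = -np.gradient(step, dt)  # impulse response = -d(step-off)/dt

# a simple ramp-off transmitter waveform sampled on the same grid
waveform = np.clip((1e-3 - t) / 1e-3, 0.0, 1.0)

# discrete time convolution, truncated to the modeling window and
# scaled by dt to approximate the continuous integral
response = np.convolve(impulse, waveform)[: t.size] * dt
```

This is why the user only needs to supply time channels: the fine grid is internal, and changing the waveform changes only the convolution, not the time stepping.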
A
That would make it much simpler.
G
That could be something that we could consider.
C
If you were going to go and express the sensitivities for that situation, could you express them as the sensitivities for a step-off, and then you'd have some kind of matrix, like a digital filter, that's doing the convolution? You're saying that's expensive, or is that possible?
G
Like
we
don't
need
a
digital
filter,
we
just
need
a
time
convolution,
which
is
like
a
still
linear
operator,
but
the
the
hard
part
like
now.
It's
actually
you
need
to
do
that,
convolution
for
cell
by
cell,
by
sensitivity
by
sensitivity,
values,
so
it's
it
could
be,
I'm
not
sure
how
expensive
it
is.
My
expectation
it
it'll
be
more
expensive
than
how
we
are
handling
now
the
general
accuracy
would
be
better
and
then
you
don't
have
to
be
worried
about
the
time
steps
like
designing
the
timestamp
too
much
so
yeah.
G
It
could
be
an
alternative
yeah.
G
No, I'm not saying the frequency domain; you still work in the time domain, but you just solve the step-off problem, so you don't have to define the waveform at the first step. You just compute the step response, then convolve it with the given waveform, and that will simplify quite a few things, because the hard part of designing the time steps is both in the waveform and in the off-time where you've got time channels.
G
So
by
doing
so,
then
you
could
actually
handle
that
as
a
similar,
like
a
very
similar
to
1d
problem
that
you
don't
really
need
to
worry
about
how
much
time
step
you
put
it
in.
G
Yeah
correct
so
yeah
and
doing
that
every
time
it's
it's
actually
time
consuming.
So
if
either
we
need
an
automated
designer
or
something
like
that
to
make.
G
I can talk to Joe later. People actually didn't choose that way in general; even if you look at the UBC work, they ended up using the time convolution. My guess is that there could be some sort of bias happening, like aliasing: if you don't have enough sampling frequency, then in time you can have artifacts, and also when you're transforming the waveform into frequency...
G
There
could
be
some
biases,
so
I
I
don't
know
why
and
actually
the
the
way
that
convolution
works
in
time,
we're
not
doing
a
typical
convolution,
but
the
we're
like
we're
like
doing
more
kind
of
efficient
way
than
than
like
than
general
convolution.
So
it's
actually
not
that
expensive.
I
I
could
yeah
if
you're
interested,
I
could
be
talking
to
you
a
little
bit
more
okay.
G
Actually, I kind of have one thing I should probably share. Can I share my screen?
G
Can you guys see my screen? It's coming... there it is, yep. So the problem that I have been working on was estimating the depth to bedrock. If you have an AEM resistivity section, you can interpret where the basement is, but you get very sparse points. These are the AEM lines here, for instance, and the triangles are where we've got the well data, which kind of penetrates the basement.
G
This is the depth to bedrock, the elevation of the bedrock surface.
J
So you've taken the AEM data, you've inverted it, and then you're looking for some contact for a resistive basement? Correct, correct. Okay.
G
I wrote a numerical code to pick that up, and I pick it up numerically, so I get some points from the AEM sounding locations, those are the circles, and the triangles are the contact between basement and sediments from the well data.
G
The points from the AEM should have some uncertainty; we need to somehow estimate that. An interesting thing about the point data from AEM is that it's not really a point, because to be seen at that depth from AEM means you need some sort of large volume, so you need to consider it as volumetric data. You have a much larger sampling volume; you can use the smoke ring to say, okay, at 100 meters depth it's about a couple-hundred-meter sampling volume.
G
Then
the
well
data
is
much
more
accurate,
but
it's
a
point
data,
so
you
can,
but
still
I
can
put
some
sort
of
error
estimate
of
okay
like
I
got
plus
minus
five
meter,
for
instance.
So
you
got
point
data,
but
that
much
more
is
like
a
large
volume
of
sampling
data
from
a
n.
Then
you
can
set
that
out
as
an
inverse
problem.
G
Then you can put upper and lower bounds on it, and you can just use whatever inverse problem machinery we're already using, the exact same inverse problem, and solve it. What I've actually done is use the Lp-norm inversion here, so you can see some of the sharp boundaries here; this is where there's a big slope happening in the AEM, and you can match that 2D map satisfying both the well data and the AEM data.
G
So that's the stuff that I'm usually working on: combining the ground-truth data with the geophysical data and using the SimPEG code to set it up as an inverse problem. I thought it might be an interesting problem. If you look at the code, it's basically the same code that we're using for Lp-norm inversion; what you need to do is just set up a simple problem class that does a certain averaging.
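The "problem class that does a certain averaging" can be sketched as a tiny linear forward operator: each AEM pick samples the interface over a large footprint, so its predicted datum is an average of the surface model over nearby cells, while a well pick samples a single cell. The geometry, footprint width, and weights here are invented for illustration.

```python
import numpy as np

n_cells = 10
# model: interface depth in each of 10 surface cells along a line
surface = np.linspace(100.0, 50.0, n_cells)

G = np.zeros((2, n_cells))
G[0, 3] = 1.0           # well datum: a point sample of cell 3
G[1, 2:7] = 1.0 / 5.0   # AEM datum: average over a 5-cell footprint

predicted = G @ surface  # linear forward: d = G m
```

Because the forward is linear, the same Lp-norm inversion machinery applies unchanged; only the rows of `G` encode the different sampling volumes of wells versus AEM.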
D
Awesome, man. Do you have a seismic line by any chance around there, to be able to check the basement?
G
No,
my
check
was
the
well
data,
and
that
was
actually
quite
matching.
Well,
so
my
previously
what
I've
done.
I
just
estimate
the
surface
with
just
the
geophysics
and.
G
With
the
well
data
and
see
how
that
matches,
so
that
was
actually
pretty
good,
but
there
were
some
points
that
were
not
matching
so
okay,
then,
how
do
I
adjust?
That
surface
is
okay,
I'm
going
just
assign
some
uncertainty.
Actually
how
else
I
did.
The
uncertainty
was
doing
a
smooth
inversion
for
the
basement
and
then
a
very
sparse
inversion
like
just
setting
the
p
x
and
p
set
to
zero.
G
So
that
will
give
you
a
very
sharp
interface,
then
difference
between
them
as
like
an
uncertainty
so
because,
like
that
kind
of
pushing
the
boundary
so
that
if
you
have
a
like
a
deeper
depth,
you
have
a
large
uncertainty,
because
that
will
show
a
pretty
large
difference
between
them,
and
this
is
actually
something
that
we
could
consider
like.
It's
basically
some
same
idea
of
gem
pi
right.
G
If you've got sparse contact points, what they do is use some sort of kriging estimate to interpolate the surface. We can actually do that, but if you don't want to use GemPy, you can actually use the same inverse problem: in every iteration you update that surface. Suppose you've got the well data; somehow, in the inverse process, you can update the interface from the geophysics as well.
G
Then you can put that small inverse problem within the inversion and just update the surface until we fit the data, or something like that. I think that's what you were interested in at some point, so this could also be a way you could do it, because using a geostatistical approach is always a little bit challenging: you need to estimate all these parameters, correlation lengths, variograms, and estimating those parameters is not simple at all. So even just using what we usually do, without too many assumptions, is kind of nice anyway.
G
That's something that I'm not 100% sure about. On the northern part, where I've got lots of well data, I have pretty high confidence; in the southern part we don't have that much well data, but as far as I know there's supposed to be a fault, so that very sharp surface is supposed to be a fault, I think.
G
And yes, I've seen a couple of geological sections that they provided at some point, and they expect to have some sort of fault. I'm not sure whether that's completely coming from imagination or whether they have some sort of ground-truth data.
G
...prove it, but on the AEM side it was pretty clear, because the transition was very sharp. If I look at the AEM section, I can actually see clearly: oh, on one side there's no basement, on the other side there is basement, and there's a transition which is very sharp. I don't know whether that...
G
Transition
is
supposed
to
be
very
sharp,
like
a
sloppy
transition
or
not,
but,
like
I
know,
there's
some
sort
of
discontinuity
in
between
one
and
other
side.
D
It
does
definitely
look
like
a
like
a
faulting
faulting
block,
I'm
pretty
sure
that
they
would
have
seen
at
the
surface
right.
If
you
look
at
george
cole
map,
they
will
have
identify
if
there
was
a.
If
there
was
a
fault,
it's
probably
going
to
be
offset
it
from
your
basement
right
because
it
has
an
angle
and
you're
you're.
Looking
at
it
from
the
top,
but
yeah
that's
cool
man
looks
it
looks
real.
D
Okay,
so
that
I
had,
I
was
thinking
about
you
the
other
day
yeah.
We
had
that
code
right.
We
were
using
a
a
tange
function
to
move
a
surface.
It
was
like
basically
like
a
heavy
side
function.
Yeah.
Does
that
work
in
3d?
Do
you
remember
it
was
it?
Could
it
work
with
a
3d,
a
3d
mesh
like
a
3d
surface,
yeah
yeah,
so
we
could
morph
a
blob
if
we
wanted
to
if
we
had
like
a
starting
sphere
or
something
we
could
like
morph
it.
Oh.
G
Oh,
I
see
I
see
like
no
that
that
that
that
one
was
for
like
a
2d
surface
yeah
plane,
but
well
it's
just
a
matter
of
like
whether
you
have
a
function
like
if
you
have
one
like
a
polygon
or
polyhedra,
then
there's.
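The tanh parameterization D and G recall can be sketched directly: a smooth approximation of a Heaviside step lets an inversion move an interface, because the model is differentiable in the interface depth. The two-layer setup, function name, and parameter values here are illustrative only.

```python
import numpy as np

def tanh_interface(z, z0, width, prop_above, prop_below):
    """Smoothly blend two physical property values across depth z0.
    As width -> 0 this approaches a Heaviside step; a finite width
    keeps the model differentiable in z0, which is what allows the
    inversion to move the surface."""
    step = 0.5 * (1.0 + np.tanh((z - z0) / width))  # ~0 above z0, ~1 below
    return prop_above + (prop_below - prop_above) * step

z = np.linspace(0.0, 200.0, 5)
model = tanh_interface(z, z0=100.0, width=5.0, prop_above=1.0, prop_below=100.0)
```

Extending this from a 2D surface to a 3D shape is, as G says, a matter of replacing the scalar depth test with a signed distance to the shape's boundary.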
A
Properties,
I
believe
it
would
really
be
helpful
if
people
could
take
a
look
at
that
see
if
it
works
as
just
kind
of
drop
in
replacement
for
what
you've
been
working
on
like
in
your
projects
to
see,
if
I
think
it
should
work
as
a
drop-in
replacement,
but
just
check
see
if
you
run
into
some
bugs
it'd,
be
really
helpful
and
then
the
next
step
that
I
would
need
forward
with
that
is,
we
can
add
more.
A
Other
than
that,
I
don't
think,
there's
anything
else,
so
I
will
see
most
of
you
guys
or
some
of
you
guys
on
friday
for
some
social
hours
and
whatever
you
want
to
do.
If
not
I'll
see
you
all
next
week,
bye
everyone.