From YouTube: SimPEG Meeting September 19, 2019
F: There are maybe two or three major tests that keep failing. One is with the examples, I know for sure, but the other ones are just some of the tests, and I thought it was maybe dask related. But I have dask in the yml file now and more are passing. There are still some failing, but I'll dig a little bit deeper and take a look.
F: Good point, hey, I didn't even think of that. Yeah, I'll do that next. But other than that, I'm also working on dask for the MT code now as well.
A: Excellent, but you know, one thing at a time.
F: Yes, I've also been tasked to try to do some time-domain EM inversions now too, so that's on the list.
A: Yeah, good.
C: You dask your sensitivities and then purge the Ainv, so we skip storing it ourselves when that's the priority.
F: So nice, yeah, okay, so purge after. Cool, yeah, I will check that out.
H: So that's when you're calling the fields, probably. I think Mike might have changed it to store it, so rather than regenerating when you're computing the sensitivity, I guess it's storing into a list. I'm not sure if that's pushed to master; it's merged to master, and I think Mike did it, but that's for the MT. If your problem is getting larger, you cannot really store it like that, because if the problem is really big, and even...
C: Which we're currently doing using the sensitivities. If we store the sensitivities, then every time we do J times a vector, or JT times a vector, we're just calling the sensitivities from disk. So what I'm saying is, why don't we just form J on disk while we're computing the fields? So instead of opting to store self.Ainv, we purge it: we calculate the fields, calculate the sensitivities, and then purge self.Ainv for that frequency. You know what I mean?
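The form-J-then-purge pattern described here can be sketched roughly as follows. All names (factorize, rhs_for, sensitivity_block) are illustrative stand-ins, not SimPEG's actual API; the point is only the ordering of the steps.

```python
import os, pickle

def form_J_per_frequency(frequencies, factorize, rhs_for, sensitivity_block, out_dir):
    """For each frequency: factorize A, compute the fields, form that
    frequency's block of J, write it to disk, then purge the factorization."""
    paths = []
    for i, freq in enumerate(frequencies):
        Ainv = factorize(freq)            # expensive factorization
        fields = Ainv(rhs_for(freq))      # fields for this frequency
        J_block = sensitivity_block(freq, fields)
        path = os.path.join(out_dir, "J_%d.pkl" % i)
        with open(path, "wb") as f:
            pickle.dump(J_block, f)       # the J block lives on disk, not in RAM
        paths.append(path)
        del Ainv, fields                  # purge: free memory before the next frequency
    return paths
```

The J blocks can then be streamed back from disk for J times a vector and JT times a vector products, so only one factorization is alive at a time.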
A: Yeah, that's a good question. I think my suggestion, the trajectory on this, would be that we set them up with the current master version of SimPEG, because that's what you're designing and developing the notebooks in, and then it's their choice as to whether or not they want to update down the road. And so if there is a new simulation class that comes in, when that comes in, we will put out instructions as to how to update and what we'll need to update in the scripts.
A: I mean, updating the software will just be a "conda update simpeg". Now that SimPEG is on conda, that's gonna greatly simplify a lot of those things: they'll do a "conda update simpeg", and then there will be some things that need to be updated in the scripts, and that's something that we can provide and maintain.
A: We can update the GitHub repositories that we want to keep pace, and then point them to say: hey, please go grab the new update, and that should all then be fine. If they've gone off and done their own development and created their own scripts or something like that, they'll need to update those. But that's one of the things that we need to be very conscious of when we do that release: making sure that it is extremely clear for folks how they go and update.
A: Okay, anyone else have things you want to chat about before we give the floor over to Seogi?
A: The simplest thing for us to do is basically to follow the instructions for setting up Miniconda, so downloading and setting up Miniconda and then conda installing SimPEG, because at this point in time that should work on any reasonably modern machine. And based on the conversations that we had with them, it sounds like they're running at least sort of Windows 7 era stuff, which it sounds like they all are, so that workflow should work.
A: One of the things that we should do is download Miniconda onto a USB stick, just so that you can copy stuff over from there if it is slow to download. But if they have decent internet connections, which it sounds like they do, then those are the steps that we can work through. That's something that we should test out together in the very near future.
A: Downloading the stuff: one of the things that we can do is actually pre-download everything onto a USB stick. It's simpler if they can just access the internet, and so that is trajectory one in the trajectory queue: we download Miniconda and populate it, per the install instructions, with SimPEG, basically.
A: I think with solution one, I mean, if they have a new fiber-optic cable, they will probably be able to use the internet and it won't be a problem. We were able to video chat with them, and in order to video chat there has to be decent internet bandwidth, so I think that it shouldn't be too much of a concern. But it's good to think through these things, for sure, yeah.
A: It's been tested well, for sure, on Chrome, and Chrome and Firefox are basically always going to be working. Internet Explorer in theory should work, but it's a bit of a pain in the neck sometimes. So I guess genuinely encouraging folks to use Chrome or Firefox is probably the best solution there.
H: Yeah, okay, let me see if I can share my screen.
H: Yeah, so I haven't really prepared a presentation; I'm just gonna show you what I was working on. So what I'm hoping to do: I've got a pretty large region that an AEM survey covers, say 30 kilometers by 30 kilometers. So if you generate a mesh, you're gonna have a pretty large mesh, and the typical cell size that I'm using is about 10 by 10 by 5 meters, or 10 by 10 by even smaller.
H: So you end up with a lot of cells. I'm not even thinking about the inverse problem; I'm just thinking about the forward problem. So what I'm hoping to do: I have a fairly large global mesh, which I can make even coarser than my forward-modeling mesh, at least in the horizontal directions. Then, when I'm forward modeling, I want to use a fine mesh, but very localized, close to the sounding. So I'm hoping to handle a couple thousand or ten thousand soundings in a reasonable amount of time.
H: So I want to parallelize each sounding computation to compute the time-domain response. So I do require functionality to generate the large mesh, then interpolate that onto a local mesh, then do the forward modeling in a parallel way, then bring it back to the main process. That's basically what I'm doing. I did it with dask for the 1D code, but there are a couple more steps that I need to add for the 3D.
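A rough sketch of that per-sounding workflow. All names (local_model, forward_1d, the dict layout) are hypothetical stand-ins for the actual interpolation and simulation, and a thread pool is used for brevity; CPU-bound solves would normally use a process pool or dask workers instead.

```python
# Sketch: extract a local model around each sounding from the global model,
# forward model every sounding in parallel, and gather responses in the main
# process. All helper names and data layouts here are illustrative.
from concurrent.futures import ThreadPoolExecutor

def local_model(global_model, sounding):
    # Stand-in for interpolating the global mesh onto the local mesh:
    # here we just gather the global cells that cover this sounding.
    return [global_model[i] for i in sounding["cell_indices"]]

def forward_1d(model):
    # Stand-in for the 1D time-domain EM simulation of one sounding.
    return sum(model) / len(model)

def simulate_all(global_model, soundings, n_workers=4):
    # Each sounding is an independent task, so the problem parallelizes
    # naturally over soundings.
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        return list(pool.map(
            lambda s: forward_1d(local_model(global_model, s)), soundings))
```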
A: Are folks familiar with screen? No? So, just to give a quick idea: basically it lets you open up a process on a remote machine, and that process will stay open if you lose the connection. And that's why Seogi likes it: if he had just opened it and done an SSH tunnel from a login node, any time he lost that connection it would kill the process, whereas if you're running inside a screen, it's actually persistent.
H: That's a perfect description. So sometimes I'm running an example that runs a couple of days, and I don't want to lose it. By doing so, you can keep that and you can still use the Jupyter environment, the Jupyter notebook. So yeah, here's my AEM code; I'm gonna start with the multiprocessing.
H: So I tried both multiprocessing and dask, and I can show you what the pros and cons are. The important part is that this is SimPEG's SkyTEM code: Stanford is exclusively using SkyTEM. I'm not really handling other systems, I'm really focused on SkyTEM, but anyway, we can easily expand that to other systems.
H: So that's the mesh that I'm dealing with, and I'm supposed to actually handle a much bigger mesh than this. I'm simply setting up a half-space as the background, and that's the survey. It's pretty simple, similar to what we usually do: topography, source locations, receiver locations. For the source locations, I've got 200 sources, for instance, in this case. And this is all very specific to the SkyTEM system.
H: Anyway, that's how it works, and I have a problem class called GlobalSkyTEM. There's not too much to it; it's just sort of a carrier so that you can parallelize your problems, and you basically pass a mesh, a conductivity, and the parallelization option. Okay, let me know if you guys have any questions. The first step is writing everything to disk. I had a problem, especially when I have a lot of workers or a lot of jobs.
H: Dask has a bit of a problem, and even with multiprocessing, passing large arrays takes a while. So rather than actually passing them, I was thinking, okay, I can write everything to disk, because writing to disk seems pretty fast. If I do that, that actually runs. Can you guys see here? So I'm using 15 workers to write everything down to the disk; that's actually done, and I can show you.
H: So those are the files that I wrote. They're all pickled files, and the number identifies each source. I write out all the parameters that are required to run the simulation at each sounding; that's what I'm writing out: mesh, projection matrices, and everything. And the final step is basically run. This is also parallelized, and when I'm running it, it takes about a minute.
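The write-then-load pattern described here, one pickle per source so workers read their inputs from disk instead of receiving large arrays over inter-process communication, can be sketched like this. The file layout and names are illustrative, not the actual GlobalSkyTEM format.

```python
# Sketch: one pickled input file per source; a worker loads its own file.
import os, pickle

def write_inputs(out_dir, soundings):
    """Write one pickled input file per source; the file index is the source."""
    for i, inputs in enumerate(soundings):
        with open(os.path.join(out_dir, "inputs_%d.pkl" % i), "wb") as f:
            pickle.dump(inputs, f)

def run_from_disk(out_dir, i, simulate):
    """A worker loads its own inputs from disk rather than receiving large
    arrays from the scheduler, which is the slow part Seogi describes."""
    with open(os.path.join(out_dir, "inputs_%d.pkl" % i), "rb") as f:
        inputs = pickle.load(f)
    return simulate(inputs)
```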
H: That's basically what I'm writing out. I have a list of inputs that I need to write, so I think it's probably better if you can serialize it and then just write it back. But I think I'm actually pickling the mesh as well; sometimes the mesh is sort of the heavy one, so I don't want to regenerate it. I just want to basically store it and load it up whenever I need it. So I think later, if you can actually just pickle the problem, that'll be the easiest way.
H: Because your problem could be quite heavy; let's say you have a factorization, fields, and stuff. But anyway, that's how I'm doing it. So I can actually talk a little bit more about multiprocessing and dask. It's actually pretty similar; there are not that many differences. What I had to do is what they call bagging: if you've got a lot of jobs, rather than setting up all the jobs individually, you can just put them in a bag.
H: Then, as long as I'm increasing the number of jobs, dask might be sort of comparable with multiprocessing. And also, if you have multiple clusters, we cannot use multiprocessing, so using dask is actually much better. I haven't figured out yet how to use multiple nodes on our Stanford clusters, but that's something that I want to try next. So let's say I have a ten-node cluster I could use, rather than fifteen workers.
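The bagging idea, grouping many small jobs into a few batches so per-task scheduler overhead stays small, is what dask.bag does with dask.bag.from_sequence(jobs, npartitions=...).map(fn). The same batching concept can be sketched with the standard library alone:

```python
# Batching ("bagging") sketch: instead of submitting every sounding as its own
# task, group jobs into a few batches and map a worker over the batches.
from concurrent.futures import ThreadPoolExecutor

def chunk(jobs, n_batches):
    """Split jobs into n_batches roughly equal batches, preserving order."""
    k, r = divmod(len(jobs), n_batches)
    out, start = [], 0
    for i in range(n_batches):
        end = start + k + (1 if i < r else 0)
        out.append(jobs[start:end])
        start = end
    return out

def run_bagged(jobs, fn, n_batches=4):
    """Run fn over all jobs, one batch per worker, and flatten the results."""
    with ThreadPoolExecutor(max_workers=n_batches) as pool:
        batches = pool.map(lambda b: [fn(j) for j in b], chunk(jobs, n_batches))
    return [res for batch in batches for res in batch]
```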
H: That's a good point; I can show you how I manage that. So there's another important item to deal with: Pardiso. Pardiso uses OpenMP to actually factorize and solve, so by default it uses all of your threads. So if you parallelize on top of that without any setup, it's not going to work out well; you're not going to see the linear decrease of the time. So you need to set the specific number of threads that you're going to use.
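One common way to do this is to pin the OpenMP/MKL thread count through environment variables before the solver library is initialized and before workers are spawned. The variable names below are the standard OpenMP/MKL ones; whether they take effect depends on the solver build, so this is a sketch, not a guaranteed recipe.

```python
# Pin the solver's thread count so each parallel worker's Pardiso
# factorization uses one thread instead of oversubscribing the machine.
import os

def pin_solver_threads(n=1):
    # These must be set before the threaded solver library is first loaded.
    os.environ["OMP_NUM_THREADS"] = str(n)
    os.environ["MKL_NUM_THREADS"] = str(n)
    return n
```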
C: Did you compare your fields if you run it multiple times? If you compare the fields that you're getting out, the actual data, can you repeat them? Because that's the thing that Lindsey and I worked on, right: even if we were limiting the number of threads that MKL was using, the computations were different each time because of the double layer of parallelism, Pardiso on top of dask. So can you compare, can you just look at your data?
A: It might be that what we were doing, too, is that we basically sent a job to something that used parallelism. Whereas what I think Seogi's done, and I think there's a subtle difference, is that he's actually imported and set the number of threads and everything inside of what probably is the dask delayed function. Is that exactly correct, Seogi? Like, when you're running this run_simulation for SkyTEM, is that actually wrapped with dask delayed, correct?
A: So I think that it's probably okay. I think the use case that we were trying, where you're basically trying to send this all to a dask delayed, I think that gets funky. But as long as you've basically allocated your little space where you're gonna do the computation, and then you happen to load Pardiso into there, I think you're okay. But it's worth checking, and it's worth being sure on, because there is some strange behavior: if Pardiso runs on different threads, you get different results, right?
H: Like, you need to check whether the CPU usage goes up to more than a hundred percent; a hundred percent means you're basically using one thread. Usually, if you don't set any number of threads, it may go up to 1200 or 1500 percent, which means it uses the maximum number of threads. But now I'm mostly seeing about a hundred percent, so it's basically using one thread. That's how I was checking it. But this was for a very small problem; each simulation for each sounding is pretty small.
H: It takes about a second or a couple of seconds, but you may have a much larger problem for the local mesh, so then you may need to think about how many threads to assign; it needs some tests. Other than that, it was fairly straightforward to parallelize. If you write everything to disk, I thought that actually makes it quite simple to parallelize, and then we could do similar things for a lot of other codes, like the DC codes. So, John...
H: What John and Dom did is parallelizing the computation of the sensitivity function, but you can go another level, where we can actually break apart a big mesh into small meshes. Let's say you've got five lines; then we can break that apart into five different problems, and there you store the sensitivity. I think the current implementation may not speed things up that much, because previously we didn't really generate a sensitivity matrix, and now you're actually generating the sensitivity matrix, which takes a while. But if you actually break that apart into several problems on smaller meshes, it'll speed up quite a bit, and the same goes for MT.
C: If we do it at the misfit level, then we don't need to change any of the simulations or any of the problem classes, right? They're just stacked at the higher level, at the data misfit. So they all do their forward calculations when the combo misfit is evaluated. I think it's less invasive if we do it this way, right?
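The misfit-level idea can be sketched as a combo misfit that evaluates its child misfits in parallel and sums them. This is an illustrative stand-in, not SimPEG's actual ComboObjectiveFunction; the point is that each child runs its own forward simulation untouched.

```python
# Sketch: parallelize at the misfit level, leaving simulations unchanged.
from concurrent.futures import ThreadPoolExecutor

class ParallelComboMisfit:
    def __init__(self, misfits, n_workers=4):
        self.misfits = misfits          # callables: model -> scalar misfit
        self.n_workers = n_workers

    def __call__(self, m):
        # Each child misfit runs its own forward simulation independently,
        # so the simulation and problem classes need no changes.
        with ThreadPoolExecutor(max_workers=self.n_workers) as pool:
            return sum(pool.map(lambda phi: phi(m), self.misfits))
```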
H: Yeah, I have no objection to that. I just didn't know how things were working. I looked at how you did it; it was very specific to the linear problem, where you've got a big matrix, and it wasn't quite working for my problem, where I have lots of other things going on. So yes, I mean, we could definitely do it at the misfit level, I guess.
A: It seems like, and correct me if this is not quite what you're thinking, but it seems like in some of these it might be worth having both styles. Because it feels like what you're really after here is the forward simulation, so the misfit is perhaps a step beyond, correct?
C: I just find it weird that we have a problem that has meshes, that has tons of meshes. To me it kind of counters the building-block logic of SimPEG, because now we're not modularizing anymore; we're kind of eliminating a bunch of functions to make it parallel, instead of parallelizing the blocks. I don't know.
A: I mean, there might be some steps that we can take so that we don't have to dump out a mesh for every forward simulation, but I think what Seogi's use case is showing is that there's value in just thinking only about the forward simulation, because he's not actually running an inversion here; this problem naturally breaks up into a parallel forward simulation.
A: We should be able to leverage it at the misfit level, but I think we should try and look at both of these problems through the same lens, and try and see: are there generic pieces that we can be working on that can either give us a parallel, streamlined forward simulation, as in the airborne case, or that can be picked up and dropped into the misfit formulation? Because I do completely agree.
C: It's just semantics. The misfit function, you can ask it for a gradient, and it will run all the forward simulations and spit out a gradient, right? I mean, maybe because the name is "combo misfit" it may not be the right name, but it is a composition of multiple problems with their own meshes, and this is exactly what Seogi is doing right now, but inside, you know, one problem, yeah.
H: That's potentially what I can think of, because the reason why I didn't want to touch the data misfit is that how I need to set it up will be very problem dependent, or, like, we could stretch it, but I wasn't sure how to do it. So what I could think of now: I can generate a data misfit function for a specific problem and then just plug it in; I can inherit from the typical data misfit and generate something specific to that misfit.
H: What I meant was, like, what to store or not store, and how to wrap it. For that portion, there could be a quite general part that we can generalize, but at the end you're gonna get a lot of specific points for a certain problem when you're parallelizing it. So I wasn't sure whether the data misfit class is actually a good place to put all those specifications.
A: I mean, that's perfect, because now, having this implementation, and then having what Dom has done with respect to the data misfit, with both of these things we can start to look at similarities, see what pieces are redundant, and where those key differences are, with just how both have been thought through. But it's still in the research stage, yeah.
H: I haven't done that; I'll try that later, actually. I haven't tried the simulation branch yet, so this might give a good example to try. The use case here is actually mostly simulation. A lot of researchers at Stanford basically don't really need the sensitivity function or inversion; they just need to run a lot of simulations, like hundreds of thousands or thousands of simulations, to generate prior models.
H: The next step, for the inversion, was actually using the cylindrical code to generate the sensitivity function, assuming 1D, so I can get a 3D sensitivity function and then map that onto my global mesh and compute the Jvec and JTvec there. In such a way, I can actually develop a pretty fast 3D AEM inversion code, I guess. Anyway, well, that's the plan.
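The map-local-sensitivities-to-the-global-mesh plan can be sketched as below. Each sounding carries a local sensitivity matrix J_local and a projection P from global model cells to its local cells; global Jvec and JTvec are assembled by mapping through P. Dense Python lists are used for illustration; real code would use sparse matrices, and all names here are hypothetical.

```python
# Sketch: assemble global J v and J^T v from per-sounding local sensitivities.
# Each sounding is (J_local, P), where P lists the global cell index of each
# local cell (a stand-in for the local-to-global projection matrix).

def jvec(soundings, v):
    """Stack per-sounding J_local @ (P @ v) into one global data vector."""
    out = []
    for J_local, P in soundings:
        v_local = [v[i] for i in P]                       # P @ v
        out.extend(sum(Jrow[j] * v_local[j] for j in range(len(v_local)))
                   for Jrow in J_local)                   # J_local @ v_local
    return out

def jtvec(soundings, w, n_global):
    """Accumulate P^T @ (J_local^T @ w_block) over soundings."""
    out = [0.0] * n_global
    k = 0
    for J_local, P in soundings:
        n_data = len(J_local)
        w_block = w[k:k + n_data]
        k += n_data
        for j, gi in enumerate(P):                        # scatter to global
            out[gi] += sum(J_local[i][j] * w_block[i] for i in range(n_data))
    return out
```

A quick adjoint check, dot(jvec(v), w) equal to dot(v, jtvec(w)), is a useful sanity test for this kind of assembly.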
H: So we're going to do a SkyTEM survey in October, and one of the motivations that I had was that I want to do a simulation first, before the survey, which never happens. I think maybe that happens sometimes, I guess, but I've never seen it. So what we did: we actually got a lot of lithology wells, so by using some geostatistics I generated a 3D model like this, and then all the points are where the soundings are located.
H: So it's about 30 kilometers by 30 kilometers, and then this is the data. So I ran it; it takes about 15 minutes, and I've got 4,000 to 5,000 sources. This is the data that I generated from the different, like, high moment and low moment, and I'm going to run the inversion to get some idea of whether we can see something or not. So that was sort of my motivation to develop this.
H: No problem, yeah. And I was working on the Myanmar project and generated the DC simulation and inversion app, so yeah, if you guys have a moment, take a look and give me feedback; that'll be great. It's almost like GIFtools: I was trying to replicate what GIFtools does, in SimPEG, using Jupyter apps, or Jupyter widgets. So yeah, I think it seems to work on Dom's and Rowan's machines, so if you guys can take a look.
E: I just played around with them for about an hour before the big meeting, and yeah, I think you really kind of captured it, so two and a half thumbs up for you on that. We'll continue to do that. Maybe after this, let's see, if you can just sort of stay on, and we'll just kind of make sure that, since things run on my computer, we manage to get it onto Rowan's machine. So we should.
H: Sure, I can definitely do that. I'm hoping to actually have a setup where I'm running, like, a couple of thousand, well, a thousand or so simulations, so yeah, I will shoot for that, and shoot for being able to demonstrate an example.