From YouTube: SimPEG Meeting, February 17th
Description
Weekly SimPEG meeting from February 17th, 2021.
Shows a preview of the new boundary condition implementation coming to discretize!
A
So, a quick thing: last week I said I was going to show you how these boundary condition implementations I've been dealing with work. We can do that after the reports and any other agenda-type items people want to add, or at least discuss later.
B
Yeah, I might add a couple more: just trying to move forward the EM-1D stuff and the discretize documentation. I'll add those if we have time, okay.
D
Yeah, in recent weeks I finished the tutorial on the cross-gradient joint inversion. I created the synthetic density and susceptibility models and finished the tutorial part, but I ran into a problem. Could anybody help me upload the synthetic models, the observed data, and the topography? I noticed there is a Google storage bucket, or some place to store this data. In that case, we can just write some code that downloads the data to the local machine, like the other examples do.
A
That's definitely the best process. If you want to send it to me, I can put it up there and get it piped in for you.
D
Yeah, this is a screenshot of the log files. This column estimates the regularization parameter for the gravity data, and this column is for the magnetic data. I noticed the eigenvalue estimation method works well for the first data set; it starts from just several thousand. But for the magnetic data it doesn't seem to work well, because the initial beta is really large, similar to the previous method. And it's similar for the data fit here: for the gravity it works well, but for the magnetics it again starts very large.
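For context, the eigenvalue estimation being discussed can be sketched in a few lines: pick the initial trade-off parameter beta as the ratio of the dominant eigenvalue of the data term to that of the regularization term, each found with a few power iterations. This is a minimal standalone sketch in the spirit of SimPEG's `BetaEstimate_ByEig` directive, not its actual implementation; the function name and arguments here are made up for illustration.

```python
import numpy as np

def estimate_beta_by_eig(J, W, beta0_ratio=1.0, n_iter=20, seed=0):
    """Estimate an initial beta as the ratio of the largest eigenvalue of
    J.T @ J (data term) to the largest eigenvalue of W.T @ W
    (regularization term), each found by power iteration."""
    rng = np.random.default_rng(seed)

    def largest_eig(A):
        x = rng.standard_normal(A.shape[1])
        x /= np.linalg.norm(x)
        for _ in range(n_iter):
            y = A.T @ (A @ x)        # apply A^T A without forming it
            x = y / np.linalg.norm(y)
        return x @ (A.T @ (A @ x))    # Rayleigh quotient at the converged x

    return beta0_ratio * largest_eig(J) / largest_eig(W)

# Toy example: a "sensitivity" with much larger scale than the regularizer,
# so the estimated beta balances the two terms at around 1e4
J = np.diag([100.0, 10.0, 1.0])
W = np.eye(3)
beta = estimate_beta_by_eig(J, W)
```

A scheme like this is sensitive to the scaling of J, which is one reason gravity and magnetic data can give such different starting betas.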
D
I just thought you were using the default values, the same way they were used before. I think so.
E
And while we're at it, I'm just looking at the code right now: we don't have the option to revert to the older version of the beta estimator, so it would be good if we had that switch.
D
Oh sorry, Don, can you just repeat what you just said? What I said is: I think this first column looks like it's produced by the new beta estimation method, and this column really looks like the scaling of the old method, because with the previous method the initial beta value always starts at a really large value, I noticed.
F
Yeah, I just sort of noticed that it is a lot more than the default.
C
Yes, we resubmitted our comparison paper, which we also uploaded on Curvenote. So if you want to check out the figures, they're in the Swung team: there are all these comparison figures of the fields and of the meshes. Other than that, I sort of struggled to find a nice time-domain example that has very accurate responses in 3D.
C
You can find examples, but then either they struggle at early time or at late time, or they are overall not super precise when I'm trying to achieve, say, one percent error. So it seems to be a difficult topic. I've never done a lot of 3D time-domain modeling, so I'm just lacking experience, but it seems to be a difficult thing.
C
Yeah, because that's normally my goal in the frequency domain, to get down to one percent, and now that I'm doing the time domain I also try to get below one percent. It works for layered models, because I can check those against empymod easily, but I struggle to find a 3D time-domain code that can easily handle that.
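The percent-error comparison being discussed can be sketched as below: interpolate a reference decay onto the observation times in log-log space (TEM decays span decades in both time and amplitude) and report point-wise error in percent. The helper name and the power-law toy data are made up for illustration.

```python
import numpy as np

def relative_error_percent(t, response, t_ref, reference):
    """Interpolate a positive, smoothly decaying reference response onto
    the observation times in log-log space, then return the point-wise
    relative error in percent."""
    log_ref = np.interp(np.log10(t), np.log10(t_ref), np.log10(reference))
    ref_interp = 10.0 ** log_ref
    return 100.0 * np.abs(response - ref_interp) / ref_interp

# Toy check: a power-law decay versus a copy perturbed by 0.5 %
t_ref = np.logspace(-5, -2, 61)
reference = t_ref ** -2.5
t = np.logspace(-4.5, -2.5, 21)
response = 1.005 * t ** -2.5
err = relative_error_percent(t, response, t_ref, reference)
```

Interpolating in log-log space matters here: a power-law decay is exactly linear there, so the interpolation itself doesn't contaminate a one-percent comparison.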
B
I've done a validation of the time-domain code against a layered earth, so a similar test to what you've done, and I've also done it with a conductive and permeable sphere.
B
I can send that off to you; it's written up in a nice notebook, but I don't think I'm getting down to one percent error.
G
Three? I'd have to go check. That would be fine.
B
Sure, I'll send you those notebooks.
H
One thing to think about with the time domain: it's mostly secondary field after the turn-off, so achieving the same one percent is different. Even if you use the exact same frequency-domain code, apply the transform, and calculate the error in the time domain, it's probably not one percent, because, like...
C
That shouldn't be the case, because you cannot easily separate the secondary and the primary. Well, from the modeling point of view you can, but from the data point of view, no.
H
Right. What I usually see, if I compute the relative response, is that the time domain has a much larger anomalous response compared to the frequency domain, where you've got both, I mean primary and secondary.
C
Yeah, it's an interesting question, because that's the paper I have about Fourier transforms, and they push you to compare models that are as complicated as possible. But what we saw with our 3D comparison paper is that if you go to complicated models, you're going to see the differences between different modelers much more than you will see the error from the Fourier transform. So it's kind of a difficult task, and then you can ask me: where is the bigger error? I don't know; it's hard to prove anything. Anyway.
C
I know that Raphael Rochlitz, in custEM, the finite element code, has now also implemented time-domain modeling in 3D the way SimPEG has it: not via a transform, but actually Maxwell's equations in the time domain. As far as I know, he has arbitrary waveforms included, so step-on, step-off, and anything in between. I might give that a try, but it's quite involved to build a finite element mesh; it's easier to build a tensor mesh or an octree mesh. Anyway, yeah, thanks Devin.
B
Yeah, about the spheres: I just sent you one for a layered earth. I can send you one for the sphere, but the sphere is an analytic solution that makes certain assumptions.
B
It's in a vacuum, and there are assumptions about how smooth the primary field is as it reaches the sphere, etc. So I don't think you're going to get errors of one percent, just because of some assumptions that are made in the analytic solution. But check out the one for the layered earth, and if you're happy with it and it's doing well, then I can always send you the other one.
A
Good, so then we can move on to Lindsey next.
E
She said her environment...
F
...was noisy. I'm actually back; it just got quiet, sorry. Not a major update. I just wanted to share the link: Dieter put the notebooks from the paper up on Curvenote. And also to flag: if anyone wants to put anything from the simpeg-research repository up on the SimPEG group, feel free to. If you run into any problems, you can bug me, or Rowan, or Steve.
A
Okay, I can go next then. About a month ago or so, we were contacted by an organizer, whose name is slipping my mind right now, and asked to give a talk at the ParCFD conference in Nice, France. I probably pronounced that wrong; Ortebo would be mad at me for saying it like that.
A
They are doing a session mostly on inversion methods, focusing on seismic inversion, but they also wanted to have input from outside seismics as well, so we were invited to submit an abstract for that. We would focus on HPC inversions, with synthetics, and what we're doing to tackle those problems. So we submitted an abstract to them discussing what we've been doing with Dask.
I
Oh, it's more of an information thing as opposed to specific questions. After having gone to Myanmar and developed all of the open-source resources, especially for DC and especially for water environments, there actually has been some interest.
I
There are other groups that are interested in what we're doing. And there's a group in Colombia that reached out to us a couple of months ago. They've got an area in Colombia where they've got data, and they'd actually like to use SimPEG and all the open-source resources, and make use of the case history format that we've started, to process their data.
I
We'd process a trial data set from them. Lindsey, Seogi, and I think Devin, you were in on this too at one point; that is still proceeding. They're interested in developing some kind of interaction, and I think, from our point of view, it would be nice to help them as much as possible without getting too overwhelmed by getting into their problems.
I
We'd provide material so that they can get up and running; we'd have as few interactions on the technical details as possible and as many interactions on the scientific questions as possible, have a place where they can store data, and just provide the resources to them. So that's kind of the background, and now they're at the point where they're asking if they could get started.
I
So I just wanted to throw that out there and see if anybody in particular is interested. And yeah, that's about it. Just as commentary: Devin, this might be something that you're particularly connected with. I'm not sure whether you were in on all those calls that we had before; I kind of lost track since we hadn't talked.
B
I think I was, for the most part, and I've been following it, maybe not as intensely recently because of the other stuff that I've been tasked to do. But yeah, I'd be happy to be involved. I'd just like to know explicitly what their goals and needs are in terms of help. Right now it's not very well constrained, but if I could get an explicit list of those things, then I would know how to help a bit better.
I
Yes, that might be worth a conversation, to interact with them. I think at this point, what we had asked them to do was to isolate at least one data set and try to start putting things into a case history, where they first delineate and articulate what the problem is that they have, and then have a data set that we could process with them. Basically it'd be very similar, I guess, to a little bit of what we've done before.
I
It would be like what went on in Myanmar. So this might be a nice additional example of the use of that case history document, hooking up the SimPEG software for the processing, and just reaching out to another country and a different problem.
B
Yeah, so in a way we have the case history template that we provided, the DRD, which could essentially serve as the template that we would provide to them. The only difference is that they'd be using the full SimPEG package, as opposed to these little apps, but I think the approach in a lot of ways is pretty similar.
E
Yeah, so last week I went back to the natural source inversions, the compute_J and tiling part of it, and found a pretty neat way to avoid looping through the data: I just do a loop over transmitters, then a loop over receivers, and do everything as a matrix operation. That's quite nice because it reduces things a lot.
E
It reduces the number of product calls, and the speed-up is quite nice. Now I'm brainstorming ideas on how to generalize the tiling approach, because I think we want to have a single function to call to set up the tiling, and the hard part is to figure out how many tiles is optimal and how to split up. So I'm brainstorming ideas on how to do this.
H
Are you thinking about a very generic function here, or some sort of problem-specific tiling? Because I think in the end it's very problem-specific, although when you're writing a function, you want to make it a kind of generic function that can serve lots of problems.
E
So what's the goal here? What I have in mind is that we basically already know where the big segmentations need to happen. If it's a frequency system, or a time system, which will be the same, then it's going to split first on frequencies.
E
Then we would estimate what the tile sizes are and what the global size is, and after this it would split on sources. If the sources have locations, you start doing some blocking of a number of tiles of locations. If they don't have locations, like natural sources or potential fields, then you go down another level: you start tiling on the receivers, and you keep blocking the receivers. The idea is that we would basically be plotting a curve of the number of tiles.
E
And what are the tile sizes, in terms of memory size and the total size? So the user will be able to pick: okay, I want to tile only on frequencies, I get enough savings; or I want to go further down the tiling path, from frequencies to transmitters (sources) and then down to receivers. Does that make sense? Yeah?
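The hierarchy described here (split on frequencies first, then on sources, then on receivers, until each tile fits a budget) could be prototyped roughly as below. The granularity rules and the row-count budget are illustrative assumptions, not SimPEG's actual tiling scheme.

```python
def plan_tiles(n_freq, n_src, n_rx, max_rows):
    """Sketch of a hierarchical tiling strategy for a sensitivity matrix:
    split first on frequencies, then on sources, then on receivers, so
    that no tile holds more than `max_rows` rows of J. Returns a list of
    (freq_index, source_slice, receiver_slice) tiles."""
    tiles = []
    for f in range(n_freq):
        if n_src * n_rx <= max_rows:
            # the whole frequency fits in one tile
            tiles.append((f, slice(0, n_src), slice(0, n_rx)))
            continue
        if n_rx <= max_rows:
            # block whole sources together
            srcs_per_tile = max(1, max_rows // n_rx)
            for s0 in range(0, n_src, srcs_per_tile):
                s1 = min(s0 + srcs_per_tile, n_src)
                tiles.append((f, slice(s0, s1), slice(0, n_rx)))
            continue
        # a single source is already too big: block receivers too
        for s in range(n_src):
            for r0 in range(0, n_rx, max_rows):
                r1 = min(r0 + max_rows, n_rx)
                tiles.append((f, slice(s, s + 1), slice(r0, r1)))
    return tiles

# With a generous budget, each frequency becomes one tile
tiles = plan_tiles(2, 4, 10, max_rows=100)
```

Plotting the number of tiles against the per-tile memory, as suggested above, is then just a matter of sweeping `max_rows`.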
E
Because we basically want to split where it gets the most bang for the buck, and this is kind of the natural order, yeah.
E
I'll probably have a couple of ideas later. I'll prototype something, maybe just in a notebook, because we don't really need anything beyond creating simulations and some meshes to be able to test this. So I'll create a notebook, and then I can send it around.
E
Great, yeah, because natural source is where we're going to have the most gains, when we can send it out to different nodes.
A
The boundary condition implementations that we deal with come into play when we are approximating the inner products: specifically, the inner product of the gradient of a scalar times a vector, of a scalar times the divergence of a vector, or of the curl of a vector dotted with a vector. These are the three things that we have operations defined for. On discretize meshes there are nodal gradients, face divergences, and edge curls; there is not, for example, a face curl.
A
So if a thing lives on a face, we generally can't take its curl. That's why we use the weak form: to get to something that we can do, like the edge curl that we have. I do realize that we have a cell gradient, but we don't have the necessary parts to use that kind of cell-gradient-based formulation for finite volume.
A
It works out a little bit differently, but generally we make use of these transformations a lot in these inner products. When we say that things have the "natural" boundary conditions, that literally just means we're ignoring this part of it. You can see my cursor, correct? Yeah. It literally means we're ignoring this part of it.
A
So "natural" means it's the easy boundary condition. It's the easy one because we say that one of these things in here is usually zero: either v·n is zero, or w×n is zero.
A
These terms are just zero, we don't worry about them, and then we end up approximating just these two inner products.
A
To make it concrete, say the scalar function lives at cell centers and the vector function lives on faces. That's what I do here: I set up a mesh on the same domain, zero to one in each direction, with a certain number of cells, and I'm evaluating u at the cell centers and v at the faces.
A
So we need the face divergence operator, and the matrix that represents the integral part. The face divergence applied to v outputs something that lives on cell centers, and u lives on cell centers, so that's an easy thing: we just assume they're both constant on each cell and multiply by the cell volumes to approximate the integral, using the natural boundary condition here, that u is equal to zero on the boundary.
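The weak-form identity being used here, that the integral of u ∇·v equals minus the integral of v·∇u plus the boundary term of u v·n̂, holds exactly in the discrete setting, and can be checked in 1D with plain NumPy (this is a sketch, not the discretize operators themselves):

```python
import numpy as np

# 1D mesh on [0, 1]: n cells of width h, with n + 1 faces (the nodes).
# u lives at cell centers, v lives at faces.
n = 8
h = 1.0 / n
u = np.random.default_rng(0).standard_normal(n)
v = np.random.default_rng(1).standard_normal(n + 1)

# Volume-weighted divergence term: sum_i V_i * u_i * (Dv)_i,
# with (Dv)_i = (v[i+1] - v[i]) / h and cell volume V_i = h.
div_term = u @ np.diff(v)

# Weak-form "gradient" term: sum over interior faces of v_j * (u_j - u_{j-1})
grad_term = v[1:-1] @ np.diff(u)

# Boundary term: u * v * n̂ at the two boundary faces (n̂ = -1 left, +1 right),
# approximating u at the boundary by the adjacent cell-center value
boundary = u[-1] * v[-1] - u[0] * v[0]

# Discrete integration by parts: div_term + grad_term == boundary, exactly
residual = div_term + grad_term - boundary
```

With the "natural" condition u = 0 on the boundary, the `boundary` term drops, which is exactly the part the transcript says gets ignored.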
A
So I've now implemented these boundary integrals for discretize. I'm still not quite sold on how I'm naming these, because they're kind of defaults, but right now I have a boundary integral with a scalar value that lives on the faces.
B
So when you're saying, Joe, sorry, that M_bf: it is basically a boundary inner product matrix?
A
Yeah, it's a boundary integral matrix instead of an inner product matrix, but it's essentially doing a similar thing. It's taking care of the relative orientations of your v and n-hat, because v is on the faces, as well as the areas that come in there. So it's taking care of all of that, and you've just got your test function in this kind of form when we end up using it to approximate the integral.
A
Now let's switch it around. Say u is a scalar function, but it's defined on nodes, and our v is defined at edges. Then we're essentially taking the edge divergence, which is not a thing that we have practically. It just ends up switching these two operations around.
A
For this, in general, we need to define each component of v on these boundary nodes, because that's where the boundary is, on these nodes. That's what I mean when I say we have to define all three components of v at the boundary to be able to do it.
A
v·n on an unstructured mesh could be different on the two sides of an edge, so for something that lives at the edge, you would have to have multiple components to be able to approximate it on both sides. Just think of a corner in 2D, or in 3D an edge of a 3D mesh: the normals are different on the two sides of those edges.
A
That's why we can't necessarily say we need to know only one component at every edge; at most we need to know two components at every edge, but it's hard to say beforehand which two components they are, or which directions those two components point, and sometimes you only need one component. If both of the normals on each side of a boundary edge are the same, then we only need one component on that boundary edge.
A
Again, I approximate that integral, and the answer is much closer to the accurate one. I've done similar things for the curl product. Here we'd want the curl of something that lives on a face, times something that lives on an edge, but we don't have that face curl operator.
A
There's averaging in those cases too. For the inner product matrices, they all average to cell centers to take the integrals. Here, we need to average to the boundary faces; we need to average properties to the boundary faces. So I'm averaging those boundary nodes to the boundary faces here and building up the matrix.
A
This matrix, this sp_diags here, is what's handling the operation of taking the three components. I'm just using a shorthand here to block all the x components, all the y components, and all the z components into one matrix: the x component of the face normal dotted with the x component of the vector, and things like that.
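The blocking trick described here, picking out n·v for every boundary item by stacking diagonal matrices of the normal components, might look like this in SciPy. The shapes and the component-blocked layout of v are illustrative assumptions, not discretize's actual internals.

```python
import numpy as np
import scipy.sparse as sp

# n_b boundary faces with outward normals as an (n_b, 3) array, and a
# vector quantity stored component-blocked as [vx..., vy..., vz...]
rng = np.random.default_rng(0)
n_b = 5
normals = rng.standard_normal((n_b, 3))
v = rng.standard_normal(3 * n_b)

# [diag(nx) | diag(ny) | diag(nz)]: one sparse matrix whose product with v
# gives n·v for every boundary face at once
P = sp.hstack([sp.diags(normals[:, i]) for i in range(3)]).tocsr()
v_dot_n = P @ v

# the same thing written out long-hand, for comparison
expected = (normals * v.reshape(3, n_b).T).sum(axis=1)
```

The same pattern extends to n×v by stacking three such block rows with the appropriate signs.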
A
So
that's
handling
the
block.
Okay.
So
then
what
do?
What?
What?
If
we
want?
Let's
say
the
so
we
had
the
vector
function
defined
at
nodes.
Well,
what
if
we
wanted
to
say
that
it's
at
the
faces?
Instead,
this
is
still
you
can
still
do
that.
It
just
changes
essentially
where
this
average
operator
goes.
A
Instead of applying the average transpose to these functions, it happens after you block the matrices together: instead of before the blocking, it happens after the blocking. It just changes, so it still works; these general functions still work if we have things defined at other spaces. It just generally makes sense to have them defined at the boundary nodes here. It's easier for the boundary items to live at the same place where the function you're multiplying by lives.
H
Joe, here, if you take the gradient of a nodal function, it actually lives on edges, right? So we're imposing here, I guess, the Neumann condition: you're imposing some certain value of the flux at the boundary. That's the goal, I guess; that's how it is used. Yes? Right, then...
A
Right. What it comes down to is that we're approximating this boundary integral, so we need u, and everything, to eventually live on the faces afterwards. It's usually easier if v is also at the same place that u is; the catch would be if we were implementing higher-order integrals, but it doesn't have to be that way.
A
It's just usually easier, especially when we start dealing with Robin-type boundary conditions, or something like that. Then we would say that, in this case, v is a function of u, right? It makes sense for it to live at the same place on the boundary.
A
The point is that I've got all the piece parts there. If you have a boundary condition, this is what you use to approximate your integral in the discrete form; it's part of the discrete form that you're doing, and then you just carry it through your discretization.
B
If I understand what you have right now: you said, okay, I have u and I have v, and knowing u and v, I can now implement this properly and approximate the actual inner product, and it doesn't have to assume zero on the boundaries.
A
Right. I'm just using these inner products as a way to demonstrate what they're doing. It's not that this means we can't do boundaries; I'm using these functions to show you that the boundary conditions work.
A
We don't end up doing this in practice, and I can show you what happens in the 1D case eventually. Before I do that, I just wanted to show you: if we have crazy-shaped boundaries, what does it need? Well, we just need these three operations. We need to project nodes to the boundary, we need to be able to project things, or select the items at the boundaries, and then define the outward normals, because of how these functions are defined. This is generalized.
A
We know the scalar function, plus the gradient of the scalar function dotted with the outward normal, is equal to something: a Robin condition, α u + β ∂u/∂n = γ. If we rearrange this, we want to make an approximation: we approximate the derivative at the boundary as a function of a ghost point and the value at the cell that it's on the boundary of. So we approximate that outward derivative and then solve for the value at that boundary face.
A
This is essentially the solution: the value at the boundary face. Here I'm generating the functions to do that; it's really simple.
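The ghost-point solve described above can be sketched in a few lines. Assuming a Robin condition α u + β ∂u/∂n = γ and a first-order one-sided derivative to the nearest cell center, a hypothetical helper (not the discretize API) would be:

```python
def robin_boundary_value(alpha, beta, gamma, u_cell, half_width):
    """Solve alpha*u_b + beta*du/dn = gamma for the value u_b on a boundary
    face, approximating the outward normal derivative with the adjacent
    cell-center value: du/dn ~= (u_b - u_cell) / half_width.
    beta = 0 recovers a pure Dirichlet condition, alpha = 0 pure Neumann."""
    return (gamma + beta * u_cell / half_width) / (alpha + beta / half_width)

# Dirichlet limit: alpha=1, beta=0 gives u_b = gamma directly
dirichlet = robin_boundary_value(1.0, 0.0, 3.0, 10.0, 0.5)

# Neumann limit: alpha=0 gives u_b = u_cell + (gamma/beta) * half_width
neumann = robin_boundary_value(0.0, 2.0, 4.0, 1.0, 0.5)
```

This is why, as noted later in the discussion, the same machinery stays solvable when alpha goes to zero: the denominator never vanishes as long as one of alpha or beta is nonzero.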
A
So I'm discretizing this thing here; discretized, it looks like negative v-transpose D-transpose V, plus v-transpose times this. These were those A's and b's that I got back from the Robin part, and they were built off of those inner product operations, those boundary-face vectors or boundary-face integrals.
A
This is one of the derivations here. Okay, so this is what we end up solving, the full discrete system; we use the two discrete systems of equations. We've got this part, which is essentially the face divergence, and the weak form of a cell gradient operation with the necessary boundary conditions implemented in there. It ends up looking like this: I've combined everything on the right-hand side into the B matrix and the b_c vector here; h is the magnetic field.
B
Yeah, it sounds like you've created the surface integral matrix, which is a tool that's going to be universally used, and now the next part is to create those pieces you had, the a·x plus b, basically the pieces that are specific to the boundary conditions you're imposing, and that's coding that needs to get created. Am I correct?
A
As for what's left: it's hard to apply Robin boundary conditions to the nodal gradient operation, because that's a little bit backwards, where we're doing the nodal gradient on something else. If we say that we know the value at the boundary of an edge vector in order to do the nodal-gradient weak-form thing, then it's like saying we know a Neumann condition, a flux condition, on the thing we're solving for.
A
So it's a little bit backwards. And then for the edge curl, I've been trying to think about it in terms of the impedance conditions that are seen. Let me see if I can do this.
H
Yeah right, like you impose an impedance boundary condition as a Robin condition.
A
Yeah, sorry, I'm trying to find it.
A
I might be able to just hold it up. These are the kinds of boundary conditions I end up seeing, as functions of, say, an F vector when we're taking those curls. F×n is something that happens a lot, and then the impedance condition ends up looking like this: we've got all those normal crosses in there.
A
I've been trying to look around for other examples or other forms of impedance conditions in electromagnetics, but this is the only type of thing I end up seeing. I'm not sure if anyone else has seen anything besides those.
H
I thought the boundary is always the faces; it's an area, right? So what is going out?
A
Yeah, it is: we are always approximating those integrals by doing things on faces. But it's the same way that you think of doing inner products, where we average things to the cell centers to do the inner products; even if something lives on the edge, we average it to a cell center and then move on. It's the same thing here: even if something lives on the edge, we average it to the face and then do the integral.
H
Right, so from the user perspective, you've done all the hard work, but what we define is the value at the face. Even if we, say, define a mixed boundary condition, what we're going to define is: okay, I know the flux value, or the ratio between the flux and the fields. We just define that value at the face, from a user perspective, and then we've got an integral operation that can handle that.
H
Fluxes. And what was the challenge in the nodal case, to define the mixed Robin boundary condition?
A
We're often taking the divergence, like ∇·j, and we also say that j is equal to sigma times the gradient of something. All we want to end up imposing is boundary conditions on phi, but we're approximating that integral on j, so we can only use it to do stuff with j.
A
We can't say we know j at the boundary if what we know is phi at the boundary. If we know phi at the boundary, then we need to approximate j at the boundary; that's what we need to do to be able to use these kinds of formulations. We need to approximate j at the boundary some way.
A
In that specific case, Robin is actually easier. The only weird one is a Dirichlet condition on phi. Actually, if you have pure Dirichlet conditions on the scalar function at the nodes, I don't think the nodal gradient is the best way to implement it.
A
If you know the value of the scalar at the boundary, it will still imply that the divergence of j equals zero at the boundary, so you're not just doing a Dirichlet condition on the boundary, you're doing Dirichlet and Neumann on the boundary. So it's not quite what you want it to do.
A
So it's almost like, okay, maybe it should be a warning: if you know the value of phi at the boundary, that's the one where we have to rearrange the system of equations properly, to reduce the number of unknowns, and go on from there.
H
Yeah, I was thinking about that. But can we still impose, as a boundary condition, that j is equal to sigma grad phi? It could, right, if you want? Although, yeah, I know it's not well defined, and there's no need, but if you want that at the boundary, that's where I had a bit of a problem: then you get this discrepancy issue.
H
You can impose a boundary condition on the one side and also on the other side, and if it's not needed, you don't have to define two boundary conditions. Anyway, I wasn't sure: I know we just need to define it at one place, but there is the possibility that you can define it at both places, and I wasn't sure which is right.
A
Yeah, the point is that you need to approximate the value of the thing that you're weakly removing; that's the thing whose value you need to approximate at the boundary.
A
That's the direction I went with this boundary condition stuff: it's about approximating the boundary integral and being able to get the value at the boundary. That's something I can implement, and like I said, I've already done it for the Robin condition, so it makes sense to have a few default things like that, but I wanted to make sure that all the tools were there to implement things generally.
H
And Joe, have you thought about, let's say: on one portion of my boundary I want to define Dirichlet, on another portion I want to define Neumann, and on some portion I want to put Robin. Say that's my goal: how would you generalize that kind of process?
A
It still comes down to approximating the value of the thing at the boundary; you're just approximating it differently in different areas of the boundary. Right now that Robin function, the cell-gradient Robin weak form, can handle that, because of the way it's implemented: the solution for u at those ghost points is solvable for each of those conditions. So if alpha is equal to zero, it still has a solution.
H
One thing that I liked about Rowan's previous implementation of the cell-gradient BC is that you can actually pass a list for each face; it assumes a tensor mesh, usually, and then it actually creates the M_bf, in your case. That would be kind of nice.
A
Yes, we could do something like that. For me, though, it made sense not to have those things be properties of the mesh, like "this mesh has a Neumann condition", because you could end up using the mesh multiple times, with multiple different types of conditions, at the same time. The boundary things should be things that we ask the mesh for and give to a problem.
B
You'd have that function and you'd say: okay, here are the alphas, betas, and gammas for all my faces; I need to figure out what the values are on the faces. Then it kicks out the matrix and the vector that you would use to approximate the value on, say, the faces, where the boundary integral is being solved.
H
That's great. Joe, I'm actually really looking forward to updating our DC code with your latest utility code and functions.
B
Yeah, one thing I think you've accomplished, and it can't be overstated, is that you seem to have an implementation that matches very well with the theory you're presenting. There's no hand-wavy stuff where you're skipping steps. It sounds like you're going to be able to directly show some type of equation or expression, and then it's implemented in the exact same way, and it's going to be a lot less confusing to learn how to do this.