From YouTube: SimPEG Meeting Sept 10, 2019
Unfortunately, NERSC is actually down today, so I can't really demo much. I mean, we can try to put a few things up, but if folks are okay with maybe postponing some of that conversation until next week, when we can get a demo up and running. There's other Dask-related stuff, so we can definitely chat about that.
Alright, so do folks maybe want to start with just quick updates?
Well, yeah, I can go first. The simulation pull request: I just have to go through it, and I think I just need to add tests, get those running, and clear that up. And just on the IP side, I was wondering, if I go and run a preconditioner, does it kind of run the IP the same as the DC when it calls for the JtJ, or the sensitivity matrix?
Because right now it doesn't have a JtJ; it just uses the Jtvec and whatnot. Does it really have to calculate it already before that? Because the DC already has one.
I'm not sure. You have some sort of specific implementation of Jvec and Jtvec using Dask, so you probably need to take a look at how the IP code works, but I think in theory it's supposed to work, I guess. I'm not sure; the previous implementation was sort of interconnected with what you've done. So that part I'm not sure about.
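The preconditioner idea being discussed, building a Jacobi preconditioner from the diagonal of JtJ without ever forming JtJ explicitly, can be sketched with NumPy. All names here are illustrative, not SimPEG's actual API:

```python
import numpy as np

def jtj_diagonal(J):
    """Diagonal of J.T @ J, computed per-column without forming J.T @ J."""
    return np.einsum("ij,ij->j", J, J)  # sum of squared entries per column

def jacobi_preconditioner(J, threshold=1e-12):
    """Return a function applying M^{-1} v, with M = diag(J.T @ J)."""
    d = jtj_diagonal(J)
    d = np.where(d < threshold, 1.0, d)  # guard against all-zero columns
    return lambda v: v / d

rng = np.random.default_rng(0)
J = rng.standard_normal((50, 10))  # stand-in for a sensitivity matrix
Minv = jacobi_preconditioner(J)
v = np.ones(10)
# matches the explicit (but more expensive) computation
assert np.allclose(Minv(v), v / np.diag(J.T @ J))
```

The point of the `einsum` form is that only one column of storage is needed beyond J itself, which matters once the sensitivity matrix no longer fits in memory.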
Yeah, I think that's the only thing I have to really confirm on it, but it's running. You can't really run it on a laptop; it still takes about 15 gigs or so, or I guess it's about the same without the tree mesh. And I think I started an issue on that, or something like that, about what we can do about getting the OcTree into it.
Yeah, that's awesome, John, that'll be extremely useful, I think, also because one of the things that would be nice to do is refactor the MT code to be much more heavily integrated with the frequency domain. That'll take a bit of thought with respect to sources and things like that, but having a couple of slightly more complex examples that we can run, to make sure that we haven't broken anything in the refactors, will be extremely valuable.
If it's, I mean, if it's tight enough that it can go onto master, let's do that, and people who are using SimPEG right now can pull it upstream, no problem. Yeah. The simulation is a bit of a Frankenstein, so if we can sort of keep that somewhat scoped, I think.
So yeah, yeah, I was thinking let's just go to master and then have a simple example on an OcTree mesh, because now it's all done, right, so we can do that. I have a question for you, Joe, regarding the interpolation. You started working on this volume averaging, right? I have an implementation, which is probably not the cleanest or the fastest. How far are you from having a mergeable version of it?
I'm really close. I've got it working for tensor meshes, and I know exactly how I need to do it for the tree mesh. The idea is that we take the tree mesh and go down to the underlying tensor mesh that the tree mesh has, and volume average from that, at the finest discretization essentially; and then, since I have a tensor-mesh-to-tensor-mesh code, I can just use that to link the two.
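The tensor-to-tensor volume averaging being described can be sketched in 1D: a conservative operator that weights each source-cell value by its overlap with each target cell. This is a toy version for illustration, not the discretize implementation:

```python
import numpy as np

def volume_average_1d(src_edges, dst_edges, values):
    """Volume-average cell values from one 1D mesh onto another.

    src_edges, dst_edges: monotone arrays of cell boundaries.
    values: one value per source cell.
    """
    out = np.zeros(len(dst_edges) - 1)
    for j in range(len(dst_edges) - 1):
        lo, hi = dst_edges[j], dst_edges[j + 1]
        for i in range(len(src_edges) - 1):
            # overlap length of source cell i with target cell j
            overlap = min(hi, src_edges[i + 1]) - max(lo, src_edges[i])
            if overlap > 0:
                out[j] += values[i] * overlap
        out[j] /= hi - lo  # normalize by the target cell volume
    return out

# averaging a fine mesh onto a coarse one conserves the integral
fine = np.linspace(0.0, 1.0, 11)     # 10 cells of width 0.1
coarse = np.array([0.0, 0.5, 1.0])   # 2 cells of width 0.5
v = np.arange(10, dtype=float)
avg = volume_average_1d(fine, coarse, v)
assert np.allclose(avg, [2.0, 7.0])
assert np.isclose((avg * np.diff(coarse)).sum(), (v * np.diff(fine)).sum())
```

The conservation property in the last assertion is the reason to prefer volume averaging over plain interpolation when moving models between meshes.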
So, I have one more thing. I have a PR on SimPEG, and this is me catching up on mistakes I've made in the past. Lindsey, if you could bring this in as soon as possible, because it's really fixing issues we have with the Simple and Sparse regularizations, and it has to do with the length scales. After finding a better way to do it, this will really improve the inversion, so if we can bring this in, that'll be great.
Yeah, I'm working on an airborne EM problem, parallelizing it, so I've actually got a draft code working, and I was currently comparing multiprocessing and, yes, Dask. Dask is okay. The important part for me was actually writing the inputs out to files on disk, so I was looking at what John and Tom did there.
Those things looked pretty good; I really liked how that was implemented. That implementation is really nice when you've got a really big sensitivity matrix, because that's the bottleneck: when you've got a really big problem, a big sensitivity matrix, you want to generate it but not store it, so you want to just load it up on the fly. But the airborne problem is different, because each local sensitivity is pretty small.
So yeah, it's a little bit different. I don't need Dask array to store it; I just pickled the whole thing, the meshes and the other inputs that I need, and basically, whenever I'm parallelizing it, I'm just writing things out and then loading them up whenever I need to compute. I think in that way Dask was actually fine; it's almost comparable with multiprocessing, because I think a lot of crazy things happen when the pure input is large. So that's something that I found, I guess.
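The write-inputs-to-disk pattern described here can be sketched with the standard library: pickle each sounding's inputs once, then have each worker load only what it needs. The file layout and the `run_sounding` function are hypothetical stand-ins for the actual forward simulation:

```python
import os
import pickle
import tempfile

def write_inputs(inputs, directory):
    """Pickle each sounding's inputs to its own file; return the paths."""
    paths = []
    for i, item in enumerate(inputs):
        path = os.path.join(directory, f"sounding_{i}.pkl")
        with open(path, "wb") as f:
            pickle.dump(item, f)
        paths.append(path)
    return paths

def run_sounding(path):
    """Worker: load one sounding's inputs from disk and 'simulate' it."""
    with open(path, "rb") as f:
        item = pickle.load(f)
    return sum(item["data"])  # stand-in for the actual forward simulation

with tempfile.TemporaryDirectory() as d:
    inputs = [{"data": [1, 2]}, {"data": [3, 4]}]
    paths = write_inputs(inputs, d)
    # this map is what gets parallelized (multiprocessing or Dask)
    results = [run_sounding(p) for p in paths]
    assert results == [3, 7]
```

Because each worker reads its own small file, the large global inputs never have to be serialized and shipped to every process, which is the "crazy things happen when the input is large" problem mentioned above.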
So at first it was pretty bad for Dask, but once you write things out, it seems fine, so I'm actually looking forward to it. I mean, you could choose either one: if you're working on a single node, multiprocessing is actually better, but if you're working on multiple nodes in a cluster, with lots of computation, you may want to use Dask. Something like that. I'm not sure.
So the tensor mesh that I'm working on is like 200 by 40 by 800, and I've got a thousand, about like 100 stations, so I can write things to disk and I can run it; it takes about three minutes. I used 16 cores, like a 16-CPU node. So yeah, I think that was sort of my goal: okay, if I can finish all the soundings' simulations in a couple of minutes, that's actually good, and then, if you had more resources, that can go down much faster.
With more CPUs it'll be much better, but anyway, that's what I just wanted to share, where I was. Yeah, I think if you're parallelizing the DC, you may need to do some other things: once you make a local mesh, it's not better; even if you do it line by line, you'll see your local mesh is not going to be that much smaller, nor the sensitivity portion, so for the other things I think you don't even have to use Dask, I guess. Yeah, I don't know, I'm not sure.
So my main storage is actually the meshes. I've got a thousand, like 1,300, meshes, so that could be quite a bit if you load them all up in memory, and I want to handle like a hundred thousand meshes. So that's something: I don't want to load everything into memory. That's the main thing. But I also have a map.
Correct, correct. I guess if you're wrapping an interpolator you don't have to, but if you're storing a mapping, then it always stores the global mesh, which is not good. That's what I was trying to prevent. It's okay just doing it naively, but with the structure that we have, you're going to get a really big memory footprint because of the mapping.
That could maybe be written out into a single file, and so being able to go back and forth between sort of the naive usage and then your smarter use case, where you only want one mesh file, would be nice. And that should be fairly straightforward for us to do, actually at the SimPEG level, so that you don't have to implement extra machinery.
Correct, yeah. I haven't tried it yet, but ideally, if you can pickle the problem, that'll be sort of the easiest way, because the problem includes everything that we need. So pickle the problem, store it, and then just bring things up whenever you need them. I think that's probably the easiest way to parallelize: you just need to write everything out and then load it up as you go, and then if you want to use Dask, I think that seems like sort of the best way.
Devin's been helping to do this kind of comparison between RES2DINV(?) and DCIP2D in 2D, on synthetics. We're using the GIF synthetic model, although we could use others as well, presumably, as well as, I think, a hydrogeologic data set from [unclear]. So yeah, we've been starting that, and we've got some results so far. Yeah.
Not quite, but we're kind of getting there, and we've got, yeah, some of the basic ideas of what we think we'd like to see; it kind of parallels pretty much what we had previously. We can fine-tune that a little bit more, but I think the first pass through is just to check the numbers and make sure we've got the same results.
We have a couple of libraries, and I don't think the name of that one has changed, but the way it works is that gpgLabs basically copies a subset of geosci-labs, and then, same thing, em_examples copies a subset of geosci-labs. So geosci-labs is like the app warehouse, basically, and then these repositories are scoped for different audiences: gpgLabs is meant to be basically what's needed for 350, and em_examples is only EM; there shouldn't be anything like mag or...
And if you do two underscores, it's the output from two previous, from the second-to-last, so you can go back two calculations. That's actually really handy: I mean, I've accidentally set up and run an expensive computation, forgotten to assign a variable to the output, and not realized that you actually still have it in your namespace. So I learned that giving a tutorial yesterday, and I dropped the link in there for Geohackweek; there's some interesting stuff in there.
So if you're actually looking at, if you need to be doing, stuff that involves mapping or shapefiles or things like that, there's a ton of useful tutorials that have been created by the instructors there, so it's worth just flipping through that GitHub organization if you're in need of resources.
Okay, I think that's all I have for updates, so I'll maybe flip over to dask-jobqueue; that's actually, I think, the specific thing that I'll talk a little bit about. I was hoping to do a demo, but NERSC is undergoing upgrades, so I can't actually access what I was doing, but I can show you the notebook and we can chat through it briefly. It's a bit all over the place, because it's very much a research notebook.
Okay, cool, thank you. So, this notebook: I've been going through and building up some stuff for some casing simulations, so that's the context. I created a little wrapper library that makes it easy for me to make meshes that are sensible for the casing; that's what this casing-sim is.
So all of this stuff up here is just setting up my simulation: I've got my mesh, all those standard things, plotting the mesh and the model. Where things start to get a little more interesting on the Dask side of things is right around here. In this case, when you want to run Dask, when you want to spin up remote workers, you want to be using dask-jobqueue. If you're just using Dask, it's going to spin up workers on the machine that you're currently operating on.
If you use dask-jobqueue, you can basically tell it: this is how I want to submit a job. And then you're also going to need to set up a client. In this case, NERSC uses the SLURM job queue, so here I set up a SLURMCluster. Craig, what kind of job-queue submission do you work with?
So if you look, these are the dask-jobqueue docs, jobqueue.dask.org. You can actually go in and see how to configure dask-jobqueue; I think they have example deployments. Here they show you how to set up a PBS cluster, so, Seogi, if that's what you want to be using, or what you need to be using. I'm using a SLURM cluster, yeah.
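A minimal dask-jobqueue setup of the kind being walked through might look like the following. The queue name, memory, walltime, and worker counts are site-specific placeholders, not NERSC's actual settings, and this only runs on a machine with a SLURM scheduler:

```python
from dask.distributed import Client
from dask_jobqueue import SLURMCluster

# Each "job" here is one SLURM allocation that hosts Dask workers.
cluster = SLURMCluster(
    queue="regular",         # placeholder queue name
    cores=16,                # cores per SLURM job
    memory="120GB",          # requested memory per job
    walltime="01:00:00",
    processes=4,             # Dask worker processes per job
)
print(cluster.job_script())  # inspect the generated SLURM batch script

cluster.scale(jobs=3)        # submit three allocations to the scheduler
client = Client(cluster)     # attach a client (this exposes the dashboard)
```

`cluster.job_script()` prints exactly the kind of batch script discussed below, which is handy to screenshot when asking the cluster's support staff for help.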
So this is a good resource page on how to basically set up that cluster, and then what you can do from there: here I actually just print out the job script, so this is what the SLURM job script looks like. If I was running into problems with this job submission, I would basically screenshot this and send it to somebody at NERSC and say, hey,
this is what I'm trying, something's going wrong, because this has all the information that they need. And you can see specific things here: I asked for 120 gigs of memory, and it gives me a hundred twelve, because there's some overhead. But that's what the submission looks like. Then the next thing you need to do is create a client, so here, since I'm doing it with the cluster that I've just created, we ask for that to be set up with the cluster.
Otherwise I would set that up locally, and then that gives you your dashboard. I don't know if people have seen this; this link is not going to show us anything constructive right now, but this actually shows the Dask extension, and you can go in and see. There are some really cool demos, and hopefully next week we can show one where you can actually see the memory usage, the tasks that are being submitted and processed, and things like that.
So here I have, and then I think I'm just passing it actually as a NumPy array, which, if that gets large enough, will be very inefficient; but for now that was not enough of a bottleneck that I needed to think it through any more. So then, with my simulation parameters, I actually go in. Oh no, I was passing them, sorry, as JSON, so here I...
So that's all that that function will do. Then, before you perform a computation, you have to scale your cluster: you have to say, go grab me some nodes. In this case, when I do that on NERSC, it's actually going to stick me in the job queue, basically. So that's scaled, and then here we can go in and actually run, or set up the script to run. Here I've wrapped my simulation with dask delayed, and then, when I actually go and run, I do dask.compute.
So here, in this case, I originally was using the decorator, but then, just for testing some things out, I switched to putting it inside a function handle. You don't want to do both of those things at once, but if you needed to, for example, if I was doing a for loop inside of here and I wanted to parallelize that as well, you can actually do that with Dask: you can have nested things that are parallelized, and then it'll compute the task graph and figure out how to actually do that.
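The decorator-versus-wrapper distinction and the nested parallelism mentioned here can be sketched with a generic Dask example; the `simulate` function is a stand-in, not the casing simulation itself:

```python
from dask import delayed

@delayed
def simulate(src):
    # stand-in for one source's forward simulation
    return src * 2

def simulate_fn(src):
    return src * 2

# Either decorate the function (as above) or wrap it at the call site,
# but not both at once, or you get a doubly-delayed object.
tasks = [simulate(s) for s in range(4)]                  # decorator form
tasks_alt = [delayed(simulate_fn)(s) for s in range(4)]  # wrapper form

# Delayed calls can be nested; Dask builds one task graph and
# parallelizes through all levels when .compute() is called.
total = delayed(sum)(tasks)
result = total.compute()
assert result == 12  # 0 + 2 + 4 + 6
```

Nothing runs until `.compute()` is called, which is what lets Dask see the whole nested graph at once instead of one loop level at a time.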
This, the notebook: NERSC has a JupyterLab, or a JupyterHub, that I can log into, so I log into the login node of NERSC, and then I can run the notebook on that Jupyter node. That's sort of limited memory, it's a shared node, but then, when I actually request Dask workers, when I set up my Dask cluster, I can ask to be put in whatever queue.
Thanks, Lindsey, that's cool; that answers a few questions I had. I think I'll have to have a bit closer look at your notebook for the details, but yeah, I remember that thing about not double-wrapping dask delayed being an issue as well. That's in their documentation; they recommend not to do that. Yeah.
Yeah, yeah, it seemed a little complicated to work around. So what I saw was, I think it comes to when you start your cluster: if you request, say, three nodes, it fires up three, but only one of them is actually doing anything, and the other two are just kind of idling, still kind of taking up resources, or still being booked out on the scheduler, but not actually running.
For sure, for sure, that's a good call. Yeah, I'm happy to put that on my to-do list for next week.

Did you have to get some help from NERSC? I was just curious: if you're using whatever Dask provides, like, I suppose, the PBS cluster or the SLURM cluster, did the NERSC folks have to do something for you? Because when I was working on the cluster here, they had to do something on their end to make that work. I think they're not that used to Dask users, and I...
There was minimal documentation, but once I sort of figured out enough of the things and poked around with the functions, it was working for me, so I didn't actually have to ping the NERSC staff and ask them to change anything. And I think I installed Dask myself; I had enough control over my environment to conda install Dask, so yeah, I didn't have to actually ask them for anything to get this running.
There's, I mean, there's a handful of examples out there, but I think that this isn't all that heavily hit on yet, so it's entirely possible you did things right and the setup is just a bit more challenging there. So yeah, if you're still running into issues and you think it's on the Dask end, I think feel free to file an issue, and I'm sure there are people who would jump in and try to help out.
So I was just using a regular tensor mesh, a regular mesh that has the exact same cell size everywhere, but the local mesh is an OcTree. That way it can be quite flexible, so I could even do a 2D-to-3D interpolation, or a 3D-to-3D interpolation. Yeah, those are an okay solution to me, I think, for DC.
It may not always be the case, but there are quite a few analogues that we can make. Other than that, I think I was pretty pleased that Dask is actually working for a very simple parallelization problem, which is supposed to work, I guess; but the writing part was important, and it just took a while to figure out. So, but if you're not really using multiple nodes on a cluster, it's not very exciting, because then...
I think the advantage will be in parallelizing at multiple levels, right, without having to insert multiprocessing everywhere. For instance, if you're storing your projection matrix, then the operation of projecting your model onto your local mesh can be just a delayed function where the projection matrix lives on disk, and so is your forward modeling, and then Dask will parallelize all the way through your operations, rather than having to do multiple levels of multiprocessing. I think that's the benefit, right?
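The stored-projection idea can be sketched with SciPy: save a sparse projection matrix to disk once, then load and apply it inside a function that could be wrapped with dask delayed. The file name, shapes, and selected cells are illustrative:

```python
import os
import tempfile

import numpy as np
import scipy.sparse as sp

# A projection selecting 3 "local mesh" cells out of a 5-cell global model.
rows = [0, 1, 2]
cols = [0, 2, 4]
P = sp.csr_matrix((np.ones(3), (rows, cols)), shape=(3, 5))

with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, "projection.npz")
    sp.save_npz(path, P)  # the matrix lives on disk, not in worker memory

    def project_local(model, path=path):
        # Load the projection only when needed; wrapping this function
        # with dask.delayed would defer it and let Dask schedule it
        # alongside the forward modeling for each local mesh.
        P_local = sp.load_npz(path)
        return P_local @ model

    model = np.array([10.0, 20.0, 30.0, 40.0, 50.0])
    local = project_local(model)
    assert np.allclose(local, [10.0, 30.0, 50.0])
```

Because the function takes only the model and a path, the scheduler never has to serialize the projection matrix itself between processes.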
Yeah, I could, yeah, I could show what I've done, and I could put some thoughts on what could be a good sort of direction that we're going. And Joe, I was just curious about the volume averaging stuff: can we put topography in? Because currently it's like a mesh-to-mesh map, but usually what we need is to move from active cells to active cells. Is that going to be simple for you to implement?
Yeah. Consider the worst case: let's say you have air cells, and then, if you're simply doing a linear interpolation or volume averaging, you're going to interpolate from air cells into the earth cells, which is not great. So I mean, that's sort of the main reason why we want to move from just the active cells to the other active cells. Well...
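The air-cell smearing problem can be sketched in 1D: interpolate using only the active (earth) cells, so air values never bleed into the earth. This is a toy illustration, not SimPEG's maps:

```python
import numpy as np

def interp_active_only(src_x, src_vals, active, dst_x):
    """Interpolate onto dst_x using only the active source cells.

    Inactive (air) cells are excluded, so their values cannot bleed
    into the interpolated earth values.
    """
    src_x = np.asarray(src_x, dtype=float)
    src_vals = np.asarray(src_vals, dtype=float)
    active = np.asarray(active, dtype=bool)
    return np.interp(dst_x, src_x[active], src_vals[active])

# The air cell carries a huge placeholder value; naive interpolation
# over all cells would smear it into its earth neighbors.
src_x = np.array([0.0, 1.0, 2.0, 3.0])
src_vals = np.array([1e8, 100.0, 200.0, 300.0])  # first cell is "air"
active = np.array([False, True, True, True])

dst_x = np.array([1.5, 2.5])
vals = interp_active_only(src_x, src_vals, active, dst_x)
assert np.allclose(vals, [150.0, 250.0])  # no air contamination
```

An active-to-active operator is essentially this masking step baked into the interpolation or averaging matrix itself.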
2
or
3
on
Wednesdays,
ok!
Well,
let's,
let's
give
that
a
try
and
yeah,
we
can
go
from
there
excellent
all
right.
So
you
guys
thanks.