From YouTube: SimPEG Meeting September 29th
Description
Weekly SimPEG meeting from September 29th, 2021
A: Hello again, everyone — glad to see y'all found the links again. Add yourself to the notes if you'd like something to talk about as far as quick reports, and add yourself as attending. As far as the agenda today: I think we can probably finalize the discretize logo, and a quick mention about the October seminar — Lindsey, when is that happening?
A: Cool. And then it was also brought up on Slack about alternating meeting times, to allow people from the eastern side of the globe to join in. So we can talk about that and maybe see what we can come up with.
D: I finally have Kubernetes running with Dask and the parallel tiled code. The one thing I couldn't do yet is get it running with Pardiso — I'm just using another solver right now, because I still have to figure out how to put the MKL library on the image; then we can run Pardiso on there. But yeah, Dom, everything stays on its pod — it just brings back the vector when we do the sensitivities. It actually runs pretty slick. It's pretty cool.
E: So do you have to modify a bunch of stuff, or...? I kind of lost track of where you are.
D: I'm just trying to invert larger projects. The thing is, if we're going to go larger than most of our tests and examples, we are going to have to modify things. Dask seems to complain when the graph gets bigger, or when objects are bigger than even about 200 megabytes, and it does actually seem to slow down when things get a little too big.
D: So yeah, I've been playing around there, but what I've found is that the solution is going to have to be its own kind of little service: the data sits somewhere on the network, you send the job to it, and it pulls the data it needs and fires up the simulation on its own — not passing it around — because even if it's just the operators, as soon as you hit about 200 megabytes it starts to complain and slow down.
D: Yeah, that's kind of the pain, but that's the only way to get it to go faster. It's not more efficient, but passing around that much — Dask is just not into it. I think we even saw the same thing trying some stuff with Spark: they just don't like it when the graph gets too big.
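The pattern described here — handing each worker a lightweight pointer to the data rather than shipping the data through the scheduler — can be sketched in plain Python. This is an illustrative toy, not the actual Dask/Kubernetes setup; the temp file stands in for whatever network storage the pods share.

```python
import json
import tempfile
from concurrent.futures import ThreadPoolExecutor

def run_simulation(data_path):
    # Worker-side: pull the inputs from shared storage itself, so the
    # task graph only ever carries a small path string.
    with open(data_path) as f:
        data = json.load(f)
    return sum(x * x for x in data)  # stand-in for a forward simulation

# Driver-side: write the "large" dataset somewhere every worker can reach
# (a temp file here; shared network storage in practice).
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
    json.dump(list(range(1000)), f)
    path = f.name

with ThreadPoolExecutor(max_workers=2) as pool:
    result = pool.submit(run_simulation, path).result()
```

Only `path` crosses the scheduler boundary, which is what keeps the graph small regardless of how big the operators get.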
E: Uh-huh, yeah. I started juggling with an idea — well, that's kind of a different thing from what you're talking about — but if you're on a local system—
E: It kind of makes sense that, since the parallelization is only going to happen inside Pardiso, we would trigger things in series — like all your thousand solves in series — because it's going to be maxing out the CPUs on the solves anyway, right?
A: Okay, from my end it's related to SimPEG stuff and discretize stuff, but—
A: The gradient pull-request review — I'm pretty much there.
A: I just think I'm going to have to add some more tests for it, to make sure it's hitting all the lines that I want; I'll take care of that myself. And then, on the discretize side, I finalized the nodal-based interpolation for those tetrahedral meshes — that's the easy one — and I'm in the process of finalizing the edge- and face-vector interpolation ones. They're just a tiny bit trickier.
A: The point is we're interpolating vectors — I'm treating it as interpolating vectors to points, because that's what we use. So it's like face-x, face-y, face-z interpolators: it won't be just a general face-to-point operator, it'll be vectors defined on those faces to points, and edges to points, which is what we actually need.
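The per-component idea can be sketched as stacking one small interpolation matrix per Cartesian component, acting on the concatenated face values. The tiny operators and values below are illustrative, not discretize's actual construction.

```python
import numpy as np
from scipy.sparse import block_diag

# One interpolation row per component, each acting on that component's
# face values (3 faces per component, 1 query point in this toy).
Px = np.array([[0.5, 0.5, 0.0]])  # x-faces -> point
Py = np.array([[0.0, 1.0, 0.0]])  # y-faces -> point
Pz = np.array([[0.0, 0.5, 0.5]])  # z-faces -> point

# Stack them block-diagonally so the operator acts on [fx; fy; fz]
# and returns the full vector [vx, vy, vz] at the point.
P = block_diag([Px, Py, Pz]).toarray()

f = np.concatenate([[1.0, 2.0, 3.0],   # x-face values
                    [4.0, 5.0, 6.0],   # y-face values
                    [7.0, 8.0, 9.0]])  # z-face values
vec_at_point = P @ f
```

The block-diagonal stacking is what makes this a "vectors on faces to vectors at points" map rather than a generic scalar face-to-point interpolator.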
A: Yeah — I just found some nice linear basis functions for those edge and face vectors that work well.
B: Yeah, just a couple of quick items. I put the Google Doc link in the notes for the October newsletter, because October is coming — so if folks have any items you'd like to see included, that'd be great. I'd like to get that out early next week, so that there's plenty of time to advertise Sheehan and Jiajia's seminar.
B: The other one I just wanted to resurface quickly — because I think this is a quick item, and we've talked about it but haven't taken any action on it — is that Dieter brought up a while ago the right to be forgotten, with the YouTube videos of our meetings being posted. We sort of talked about it and, I think, had a general consensus that this seems like a good idea. We had talked about sort of a multi-staged approach.
B: Thinking about managing that, I feel like it's just going to get complicated, so my suggestion is: why don't we just make them unlisted after one year? That way, if there are links to them and there's some reason a video seems important somewhere on the internet, it can still be found — it's just not advertised and not super discoverable. So we're not deleting anything, and the management overhead is just that there are two states a video can be in: either public or unlisted.
B: I just wanted to get a thumbs up or thumbs down, if anyone has concerns with that. All right — I will take that as a yes, and we'll go for that. It'll probably be once a year; I'll just throw something in the calendar to do it. So some videos will be out longer than a year, but it's roughly a one-year timeline.
C: Can you hear me? Yes? Yeah — for the quick report: I participated in the Schmucker-Weidelt Colloquium, which was originally called the Elektromagnetische Tiefenforschung colloquium. It's a biennial event of the German EM community that this year happened online for the first time because of the pandemic — usually they go to a place and meet there. I had a discussion about standard deviation, misfit, and random noise — partly what we discussed here — and I discussed related things with people here at TU Delft.
C: So it was quite interesting. There were at most about 40 people in the room when we had the discussion — quite a big crowd — and there were a couple of outcomes. One is that actually a lot of people think about it and are not sure which way would be best. But then you had two camps, and one was like: yeah—
C: If you do synthetic studies, you should really think about it and do it correctly. And then all the practitioners kicked in and said: but it doesn't matter, because we have so little idea even about our noise floor and our error level that all these little things don't matter that much in the end if you have real data. So you had the two sides — the ones that said, if you test code you should do it right, and the other side saying, in the end it doesn't matter.
C: It was kind of interesting. Thomas Günther was the host — the one from pyGIMLi — and he also said they just add a noise floor plus relative error times the data, like SimPEG does, and he thought most do it like this, just because it's the way you often see. And then there was someone from Uppsala — I forgot his name — and he was the only one who stood up and said their code always combines the two in quadrature; their code accepts both, but no one—
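The two conventions under debate can be written down in a few lines. This is a neutral sketch of both, with illustrative names — neither is presented as SimPEG's or pyGIMLi's exact implementation.

```python
import numpy as np

def std_additive(data, noise_floor, rel_err):
    """Floor plus relative error: sigma = floor + rel_err * |d|."""
    return noise_floor + rel_err * np.abs(data)

def std_quadrature(data, noise_floor, rel_err):
    """Combine the contributions in quadrature:
    sigma = sqrt(floor**2 + (rel_err * |d|)**2)."""
    return np.sqrt(noise_floor**2 + (rel_err * np.abs(data)) ** 2)

d = np.array([1.0, 10.0, 100.0])
s_add = std_additive(d, noise_floor=0.5, rel_err=0.05)
s_quad = std_quadrature(d, noise_floor=0.5, rel_err=0.05)
```

For the same floor and relative error, the additive form is always at least as large as the quadrature form, so it weights the data more loosely in a misfit.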
C: I would have been interested if anyone had ever done a test of what the consequence would be in an inversion, but no one did. But it was interesting — so it's not just me bothering people; really, everywhere, people think about it, but then, because of lack of time, nothing is published and you just continue doing what you always did. It seems to be a general thing.
E: Well, I'm not sure, man — I think we all need it, yeah. We need a standard, and we need to show what the differences are, as you said, and prove it one way or the other, right?
C: And actually, in preparation for all this, a fourth topic came up that we've started to discuss often: why do folks use amplitude and phase instead of real and imaginary? Evert's guess is that early on you didn't have enough confidence in your recorded data, so you just used the amplitude, because that was the best thing you had; later you got more confidence, so you added phase, instead of maybe switching to real and imaginary, which would be easier to weight — well, they'd have the same weight, right, whereas with amplitude and phase—
C: —you have this problem of weighting. So I think there are the theoretical and the practical dimensions — that's already two — then there's the historical dimension, and then EM has so many different methods that you also have a methodological dimension, and it becomes very complicated to give a single answer, I think. But it's definitely interesting; I didn't resolve anything, I just—
F: Yeah, it's a hard research question, because there are so many factors coming in. Even if you prove that a certain way is theoretically correct, it's very easy for somebody to say: oh, in practice you actually don't know the noise, or it's a function of the noise — and then it's hard to defend, I guess.
F: A little bit of a Pandora's box to open, but yeah, it's interesting.
C: I tried, for instance, for the standard deviation, to dig a bit into papers — what people actually do — and most often it's not even defined. They say: we have a weighting matrix, it's one over the standard deviation, usually diagonal, and that's it. So you could define your standard deviation however you want; it's not even specified, which is probably also bad for reproducibility. And maybe many don't even know any longer what is in their code — you would have to go and look.
G: So yeah, I've been working on a few things lately. I recently had to do a clean install of my Anaconda and ran into some pretty common problems that I think some people have encountered, especially if you're a Windows user. One of them — and I think Thibaut made an issue for this — was the Simple regularization breaking, and it turns out it happens when all the cells are active. That's why it wasn't breaking on the tutorial example.
G: When I tested it, there was actually topography, so there were inactive cells; but for this example it was no bueno. So this could be a pretty easy patch to bring in — almost a one-liner: if all the cells are active, we just say, okay, here's the active-cells model.
G: Another thing I found: I'm working with a frequency-domain EM simulation.
G: I just want to run it on my laptop — I'm not trying to do any inversion — but I did want to get a nice plot as a function of frequency. So I wanted to do a forward simulation at quite a few frequencies, and as the code works right now, in the standard simulation class, it's going to store all the factorizations of A-inverse.
G: So I wanted to introduce a really basic forward-only keyword argument, like we do with the magnetics problem, where it's not going to store those factorizations. I know there's some really great stuff happening on the tiled-simulations branch, but as of right now this would just be a really simple option to let me do this without really impacting the code whatsoever. And the last thing was about pymatsolver: we're usually getting people to downgrade—
G: —I think pymkl — to an older version, to get Pardiso working for the pymatsolver package. And I found out that on Windows, with a modern installation of everything, it breaks because it can't find a DLL file. So I just went into my Python packages and patched the one line to get it to find the actual DLL file that it wants to use.
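The style of workaround being described usually comes down to registering the directory that holds the MKL DLLs, since Python 3.8+ on Windows no longer searches `PATH` for DLL dependencies. The path layout and function name below are illustrative — this is not the actual pymatsolver patch.

```python
import os
import sys

def add_mkl_dll_dir(env_prefix=sys.prefix):
    """Register the conda environment's DLL directory on Windows.

    Without this, extension modules that depend on MKL DLLs can fail
    with 'DLL load failed' even though the DLLs are installed. On
    non-Windows platforms this is a no-op and just returns the path.
    """
    dll_dir = os.path.join(env_prefix, "Library", "bin")  # conda layout
    if os.name == "nt" and os.path.isdir(dll_dir):
        os.add_dll_directory(dll_dir)
    return dll_dir
```

Calling this once before importing the solver package is typically enough; a proper fix belongs inside the package's own import machinery, which is what the improved implementation mentioned next would address.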
G: I know Joe is working on an improved implementation to connect this to SimPEG, but for right now—
G: —that's sort of what I found as a bit of a solution. It's kind of ugly, but that's the reason it's breaking. So for the first two cases — the forward-only option for frequency-domain EM, and the really simple fix for when all the cells are active in the Simple regularization — I'd like to bring those in as a pull request pretty quickly, because I don't think it's going to impact anyone else's work and it's very lightweight. And then the last—
B: —thing, one question — more of a naming question — with respect to forward-only: do we want to call it store-factors? I could see cases where, depending on the problem setup, maybe you don't want to store all the factors, or we come up with something more clever, but it's not an inversion — just to be a little more explicit about what that flag is doing. Because if it's only triggering changes in the storage of factors, it might be nice to just have—
G: —that be a little more specific, yeah. I considered that — I considered a flag that was store-factorizations, true or false — but then you also need to apply A-inverse when you're doing Jvec and Jtvec, and I think in a practical sense you would store that factorization.
G: You would do Jvec and Jtvec, but the way the code is set up, it'll basically have to make a new factorization, give you Jvec, dump it, then refactor and do Jtvec — so you're—
B: My only comment is just what we call the keyword — I'm not suggesting you change any of the functionality. Because, for example, if you were doing a forward simulation but wanted to do a sensitivity calculation, you actually would want the factors stored, yet it's still forward-only in a sense. So my only question is: do we want to call that flag store-factors or store-factorizations, as opposed to forward-only? Yeah.
G: Okay — I guess I sort of did it to be consistent with what the mag problem is doing, and with the situation where we would use it. Because, yeah, when I think of store-factorizations as true or false, I would think of that as something that applies in both the forward simulation and the inversion.
G: If I set store-factorizations to false in the simulation class, I would assume that applies whether I run a forward simulation or an inversion.
G: Yeah — I think there's a way to improve its efficiency. I mean, as of right now you could just do it, but it's not really optimized, right? It's going to create that factorization, compute Jvec, and then, instead of reusing that same one for Jtvec, it's going to dump it and make a new one when it calls Jtvec. So it's slower than it needs to be, and there would need to be some work to optimize it.
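The flag semantics being debated can be sketched in a toy simulation class: with the flag off, the LU factorization of A is rebuilt for every solve (the dump-and-refactor behavior just described); with it on, the factorization is computed once and reused by Jvec and Jtvec. This is an illustrative sketch, not SimPEG's actual simulation API.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import splu

class SimSketch:
    """Toy simulation illustrating a store_factorizations-style flag."""

    def __init__(self, A, store_factorizations=True):
        self.A = A.tocsc()  # splu requires CSC format
        self.store_factorizations = store_factorizations
        self._lu = None

    def _solver(self):
        if not self.store_factorizations:
            return splu(self.A)      # refactor on every call
        if self._lu is None:
            self._lu = splu(self.A)  # factor once, keep it
        return self._lu

    def Jvec(self, v):
        return self._solver().solve(v)

    def Jtvec(self, v):
        # A is symmetric in this toy, so the same solve stands in
        # for the transpose system.
        return self._solver().solve(v)

A = diags([2.0, 3.0, 4.0]).tocsc()
sim = SimSketch(A, store_factorizations=False)
x = sim.Jvec(np.ones(3))  # refactors A internally
```

Under this sketch, forward-only becomes the special case "never store", which is why a store-factorizations name reads as applying to both forward runs and inversions.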
G: But I mean, if we're inverting pretty standard FDEM systems, they don't have like 21 frequencies, right — it's like three frequencies that you're inverting. So that might not be very many factorizations, and for a small enough problem you could store everything and invert. In my case, I had 11 frequencies that I wanted to forward model, but the original way would make me store 11 factorizations of A-inverse — so, for what I want to do with the code right now—
E: I would say we could have both, right? You could have a forward-only switch that basically turns off storing the factorizations. And I agree with Joe — if someone wants to never store it, then Jvec and Jtvec would have to recompute it every time, and that's just what it is: you don't want to store it, you need to recompute. Yeah.
B: Yeah, and that's totally fair — just thinking forward a bit. But sticking with forward-only for now, especially given what we've established, I think makes sense; just something to keep on the radar.
E: Yeah, try it — I've just implemented this forward-only; put it in the for loop of the field calculation, right, and then—
G: That's what I'm trying to do — but yeah, I'll make a little PR and link it to the issues we have. If it's lightweight and everyone is cool with it, we'll bring it in; and if somebody has an idea that we just have to bring in, then that's what the PR is for. And then I guess the last thing was about these code validations that I've been making for a Jupyter Book. Dom—
G: —you raised a pretty good point when we talked last, about the long-term way we would maintain this. Our SimPEG code is always evolving, which can sometimes break some of these examples, so we want to maintain them—
G: —so they're consistent with the up-to-date code. And some of these notebooks — as we get more of them — are a little heavier than the really basic examples and take longer to run, so we don't really want to trigger that build every time we push something. It would be nice to sketch out a long-term plan for maintaining these that doesn't require a lot of hands-on maintenance — something where, once we release—
G: —an updated version of SimPEG, the kind we'd install from pip or conda, it would trigger those builds and just let us know if any of them are breaking — so we don't have to be constantly rebuilding them, because they take a really long time. But it would be nice to have something.
B: We should version it, and in the README just always have a line that says: this is the version of SimPEG and the version of discretize that this was last checked on. That way, even if we release SimPEG tomorrow, it's fine — it just sets the expectation that it ran on SimPEG 0.14.
A: It's close to usable — I haven't tested it too much, but it basically reproduces the functionality of pymkl at this point, and we can add more stuff onto it later.
A: So I'll put it on conda-forge and get it on PyPI, and then we need to switch pymatsolver to use that instead. Because, Dom and John, I gave it to you both to work with, and you didn't have any issues installing it on different versions at all, right?
D: Yeah, I have it running on my Linux system still; I've been using it, and it's all been working well.
G: All right — actually, yeah, Joe: are we just waiting on the logo for discretize, or can we bring the API stuff in? Is that basically what's happening next?
A: Yeah, right now — so, okay, let's — is there any—
E: Well, I just want to say that I stalled the refactorization, because PGI is such a beast. So I started the pull-request review on Thibaut's PGI refactoring, and I pinged you in a few places, Joe — if you could just chime in. Just try to push that one through first, so there's less for me to patch afterward; that would be great. Okay.
A: So these are some of the logos that we're looking at.
A: So, if everyone is happy with that — I'm perfectly fine with it; it looks like a good discretize logo. I like that one too. I'll put it into vector graphics, and I'll probably go through and make sure the colors match the SimPEG colors, but other than that it'll look pretty close to this and we'll go with it. I'll vectorize it and get it going. So we're going to go with this one.
F: Cool. Joe, where are you on the actual paper about discretize — is that coming along?
A: It's getting there. I'm going to be reaching out to some people to start parallelizing some tasks, but mostly it had just stalled for a bit while getting everything for the class up and going.
A: Once the logo is in, we'll merge in all the API stuff — beautiful — and we'll go back and look at those tutorials again really quickly to make sure they're up to date and relevant with regard to the new API, and then — looks good, okay.
A: So yeah, we're at 0.7 now; there's no functionality change between the two, it's just a lot more documented.
A: So that would be a more afternoon time for specific people.
A: I know most days I am busier in the afternoon, personally — at least, I'm usually free after two o'clock on Wednesday through Friday — so it doesn't necessarily have to be on the same day. I mean, it would be easier on the same day, just like: oh, it's Wednesday, I go to the SimPEG meeting — or we just switch times back and forth. Are there any other big constraints, like: oh, I can't do Wednesday at all, except for this time?
A
If
not,
we'll
probably
put
a
a
a
poll
up
on
the
meetings
channel
in
the
slack
just
looking
for
some
other
time
times,
but
is
aiming
for
wednesday
at
least
a
reasonable.