From YouTube: CESM Workshop: SIMA & CESM Cross Working Group
Description
The 26th Annual CESM Workshop will be a virtual workshop with a modified schedule on its already scheduled dates. Specifically, the virtual workshop will begin with a full-day schedule on 14 June 2021, with presentations on the state of the CESM, by the award recipients, and by three invited speakers in the morning, followed by roughly 15-minute highlight and progress presentations from each of the CESM Working Groups (WGs) in the afternoon.
On 15-17 June 2021, working groups and cross-working groups will have half-day sessions, some with presentations and some that are discussion only.
B
And I'll keep an eye out for keeping people muted, if anything goes off during presentations, and if you need any other support, just let me know.
A
Okay, thank you. And maybe we can just show the first slide. Yeah.
B
And then you're also a co-host, so you should be able to mute and turn people's video off as well, in case anything happens.
A
Oh great. So we will run from your computer, or from Ryan's; I guess from Ryan's.
B
Yeah, I don't think I received a copy of them, but if you want me to run it, send them to me and I can do it. Or you can just run it from your computer.
A
Okay. So if you don't see it in the Google Drive, I can send it again directly to you.
A
Oh well, I think we're at one o'clock, so welcome. Welcome to this CESM cross-working-group session on SIMA, which stands for the System for Integrated Modeling of the Atmosphere.
A
Hopefully we'll get that first slide with the logo soon from Mary. My name is Hanli Liu; I'm from the High Altitude Observatory of NCAR, I'm the geospace co-lead for SIMA, and also the session chair. The agenda page will come up soon, and that will list all the leads. We have our chemistry colleague, Mary Barth, here with me, and also, I think, Bill.
A
Yes, thanks Mary; let's go to the first, or rather the second slide, the agenda page. Yes. And Bill Skamarock is our weather colleague; I think I saw his name there. Our climate colleague is Andrew Gettelman; you might have heard from him about SIMA in previous CESM workshops.
A
Obviously, he found a better place to be today. We also have our software engineering project manager, Jordan Powers. The project started, I think, more than two years ago, and since that time, especially since the SIMA workshop last summer, a lot of great progress has been made, both in software infrastructure and development and also in science development and applications, and in today's session we'll hear a lot of great things.
A
Each talk is 15 minutes, and I will let the speakers know when they have three minutes left. So, without further ado, Mary, go ahead.
D
And so, you know, for climate modeling we use CESM and the CAM model; we use WACCM and WACCM-X to get up into the upper atmosphere. But you can go down in scale, and that's when we might want to use the Model for Prediction Across Scales (MPAS) to look at synoptic and mesoscale weather systems, and likewise the Weather Research and Forecasting (WRF) model, looking more at the regional and smaller scales, and then focusing more on cloud scales and turbulence.
D
We have cloud models and large eddy simulations, and then, of course, we also have process models that are locally centered. Our vision for SIMA is to take this modeling ecosystem and move it to what we see on the right: one modeling infrastructure for this whole range of scales, from local scales to regional scales to global scales.
D
So the SIMA vision: it's an effort to unify NCAR-based community atmospheric modeling across the weather, climate, chemistry, and geospace sciences. Some of you may have attended the SIMA community workshop sponsored by NSF last summer; we spent a couple of days thinking about what we envisioned SIMA to be.
D
We went through that exercise and revised and updated the SIMA vision statement, and some of the things that came from that are listed here. We want a configurable system, not a single model. We want atmospheric models within an Earth system model (ESM). We want a minimal set of interoperable components; that means physics parameterizations, chemistry and other atmosphere parameterizations, as well as dynamical cores.
D
We really want to have a common infrastructure and methods. That means we can share and extend our diagnostics among the different scales, but it also means we're reducing duplication of effort and, even more important, we can take the community forward and address some frontier applications.
D
So let's talk about SIMA and CESM. Just to be clear with this community: SIMA is a way to configure the atmosphere model for CESM, but SIMA also allows for new use cases beyond climate, such as urban or regional chemistry, weather forecasting, or geospace applications.
D
Let's think of this schematically. Shown here, the green border around the outside is CESM; on the right you can see the model components that are in CESM, and the coupler, and on the left we've zoomed in on what SIMA is. So let's zoom into SIMA and think about what it means. I'm going to go through a number of slides showing possible SIMA configurations: SIMA climate, or maybe, you know, CAM7.
D
For polar applications, we might want to zoom in on the Arctic or Antarctic regions, and so you could have the same physics and aerosol parameterizations but use the regional refinement capabilities of the SE dynamical core. That's true with chemistry too, and this already exists: we call it MUSICA version 0, where we're able to do regional refinement over a particular region of the globe and look at regional air quality in the global context.
D
For geospace, we could have a similar configuration but add in the geospace parameterizations for the upper atmosphere and, in some cases, such as space weather, include data assimilation.
D
And then, when you come to weather applications, you can switch dynamical cores to the non-hydrostatic dynamical core that MPAS has, and you can switch to a physics suite that's appropriate for mesoscale processes, like the WRF and MPAS physics schemes, and again you can also include data assimilation. The benefit is that it's now an Earth system model, and we can couple this with the ocean model to look at science topics like tropical convection and tropical cyclones.
D
And we can add chemistry to that, and we can then zoom in and look at urban air quality in the global context, or we can look at the impact of convection on regional to global composition. I'm just reiterating here that chemistry will benefit, in terms of looking at urban and regional air quality with this new system, without having to duplicate efforts among different models.
D
So here's the status of SIMA. We now have a website, sima.ucar.edu.
D
If you go there, you'll find the front page shown here on the right. So that's up, and you can learn more about what's going on at that website. As an update on what we've been doing: last fall, in 2020, we released MUSICA version 0 as a compset of CESM, and so that's now available for the community to use. Last year we also spent a lot of time working on a physics suite from the WRF and MPAS models to become compliant with the Common Community Physics Package (CCPP), and so that physics suite is ready to move forward when it's ready.
D
When we're ready to make the connections, that is. This year we've been working on SIMA version 1, and one of the first things we did was hire a project manager to oversee the software engineering efforts; that project manager is Jordan Powers, who does that with about a quarter of his time. And then we've been designing and developing new SIMA version 1 configurations, so let me describe those a little bit further; you're going to hear more about them in detail in the upcoming talks. We picked a number of science questions and target configurations to go after, and I'll just go through these one by one.
D
So for coupled weather, the goal was to have the MPAS dynamical core with a regionally refined grid, going from 15 kilometers globally down to three kilometers over a region such as the one shown there on the right, and again the benefit would be coupling this to the ocean model so we can look at that tropical weather. Moving on to air pollution, or air quality.
D
Moving onwards to the science question of hydrological and climate extremes: there are sort of two parts to this. One, where we do regional refinement over particular regions, like the US and South America, and those are shown here; you'll hear more about the South America simulations in a few minutes. The second part belongs to a bigger project called EarthWorks, where we'll be looking at a globally uniform four-kilometer grid mesh with CAM and MPAS.
D
To address space weather, Hanli and collaborators are looking at a configuration with 25-kilometer grid spacing over the globe, running from the atmosphere up to the ionosphere with the geospace models. And then there's polar prediction, with regional refinement going down to 25 kilometers over the Arctic, coupled with Greenland, and over the Antarctic, in both uncoupled and coupled scenarios; again, you'll hear more about this in a few minutes.
D
So I'm just going to sum this up, so we can hear more about those particular configurations. Just to remind you, this is what we've been working on this past year, but we've also been starting some planning for next year, for SIMA version 2, and we'll hear more about that after this next set of talks, at the beginning of the discussion period. So, with that, I'll take any clarifying questions; otherwise we can move on.
A
Thank you, Mary. Any... oh, Goku, go ahead.
F
Yeah, I missed the beginning of the presentation, Mary, but at the end you were showing regional refinement versions within SIMA version 1.0, and, for example, for the South American grid you have both the SE dycore and the MPAS dynamical core, and it looks like you have the same thing for the Himalayan dycores. Sorry, the Himalayan regional refinement.
D
Good question. They're actually not duplicates. Let's go here: the MPAS grids are refining even further, to the non-hydrostatic grid spacings. I can go into it here, right, with the chemistry run: this is going from a 15-kilometer outer grid to three-kilometer grid spacing in the middle, so that we can explicitly represent convection in this region.
A
And also, Goku, I should add that SIMA is not, you know, a single model; it's a single framework, so different dycores can be used, as shown. Right?
A
Questions? No? Okay. Thank you, Mary, again. So now let's go to our next talk; it's by Kevin Pham from HAO at NCAR, talking about WACCM-X coupling with GAMERA. Kevin, are you there?
G
Yes. Okay, so I'll be talking a little bit about WACCM-X coupling with GAMERA, and I'll also briefly describe what GAMERA is. I'm Kevin Pham at the High Altitude Observatory.
G
So why do we really want to couple WACCM-X with GAMERA? Part of the reason is that WACCM-X development is ongoing; it's the latest and greatest, and it's significantly better than the old TIE-GCM model that it replaced. In the geospace community we're still using TIE-GCM for a lot of things, and it's about time that we start moving to something better. On the WACCM-X development side, there's the new spectral element dycore.
G
I think Peter will talk about that later. There's also the new regridding scheme, which is very important for moving back and forth between the different grids in geospace: you have the geographic grid, you have the magnetic grid, and so on. There are a lot of different grids, and an efficient regridding scheme is very important for that. As for the coupling between WACCM-X and GAMERA: GAMERA is the Grid Agnostic MHD for Extended Research Applications, and it's a newly developed magnetosphere MHD code.
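The regridding step described here, moving fields between grids such as the geographic and magnetic ones, can be illustrated with simple bilinear interpolation. This is only a minimal sketch with invented grids and values, not the actual WACCM-X/GAMERA regridding scheme:

```python
import numpy as np

def bilinear_regrid(lats, lons, field, tgt_lat, tgt_lon):
    """Bilinearly interpolate field(lat, lon) from a regular source grid
    to a single target point, e.g. a node of another model's grid."""
    # Locate the source-grid cell containing the target point.
    i = int(np.clip(np.searchsorted(lats, tgt_lat) - 1, 0, len(lats) - 2))
    j = int(np.clip(np.searchsorted(lons, tgt_lon) - 1, 0, len(lons) - 2))
    # Fractional position of the target inside that cell.
    wy = (tgt_lat - lats[i]) / (lats[i + 1] - lats[i])
    wx = (tgt_lon - lons[j]) / (lons[j + 1] - lons[j])
    # Weighted average of the four surrounding source values.
    return ((1 - wy) * (1 - wx) * field[i, j]
            + (1 - wy) * wx * field[i, j + 1]
            + wy * (1 - wx) * field[i + 1, j]
            + wy * wx * field[i + 1, j + 1])
```

A production regridding scheme also has to be conservative and handle the pole and the periodic longitude boundary, which this sketch ignores.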
G
It was designed with exascale computing in mind. It replaced what was known as the LFM magnetosphere model, which was built, you know, approximately 40 to 50 years ago, so having a new magnetosphere model is very good. WACCM-X and GAMERA coupling is also part of the Multiscale Atmosphere-Geospace Environment, or MAGE, coupled model, and this MAGE model is part of the Center for Geospace Storms, which is one of the NASA DRIVE Science Centers, with the PI being Slava Merkin over at APL.
G
Currently, the work on WACCM-X and GAMERA coupling is just a single one-way coupling from GAMERA to WACCM-X, and this is being tested right now: GAMERA provides the electric potential and energetic particle precipitation at high latitudes for WACCM-X, and this is specified at a one-minute cadence.
G
We drove WACCM-X one way using both the old FV dycore and the new SE dycore, and this is compared to TIE-GCM; I'll talk about this in a little bit.
G
So why do we want to couple WACCM-X to GAMERA? From the GAMERA side, we want WACCM-X because WACCM-X is significantly better than the TIE-GCM that it replaces. So why does WACCM-X need GAMERA? Normally, if you don't couple to a magnetosphere model, the high-latitude driving for WACCM-X comes from an empirical model such as Heelis or Weimer. In the MAGE coupled model, which is the diagram you see down here.
G
These three models are then coupled to TIE-GCM using file-based exchange coupling; this becomes important in a little bit. The future plan is to replace the TIE-GCM in this coupled system with WACCM-X. And so, if you take a look at this MAGE diagram here, this becomes important: the blue line here is the result from MAGE for the electric potential driving, and the amount of heating into WACCM-X, or into the ionosphere, is in the bottom panel here.
G
In the MAGE coupled model, which is the blue line, you can see significantly more variability in the electric potential.
G
With the empirical model it's smooth, and you're missing a lot of the dynamics in the input. On the right side here, which is just a polar plot of the line plots you see on the left, you can see that MAGE, the top row here, shows significantly more small-scale features compared to the Weimer model, the bottom plot, where it's very diffuse and large-scale. So replacing it with a dynamic magnetosphere model is a significant improvement. Currently in MAGE we're using TIE-GCM.
G
So let me play this movie. The results you see in the top row here, the polar plots, are the neutral density at 400 kilometers. You can see two virtual satellites, CHAMP and GRACE, flying through at about 400 kilometers or so, and every time they fly through, you can see that occasionally there are these waves propagating out from the pole, due to the high-latitude heating from GAMERA, and occasionally these satellites will cross these waves.
G
And so, if you heat in the wrong place, you will generate these waves with the wrong amplitude and velocity, and then the spacecraft will not measure them at the right time. You can see a comparison between the MAGE neutral density and the spacecraft neutral density in the bottom line plot here, and a lot of the variability, the enhancements and spikes that you see, we're capturing; a lot of these are actually the waves that the spacecraft are flying through.
G
If you have to do some sort of filtering for the pole singularity, you start losing a lot of the information that we're inputting, so having a high-resolution, exascale-ready high-latitude driver for WACCM-X SE is very important. This is a spectral analysis comparison between the three different models: you have the TIE-GCM results, which are the dotted line; you have WACCM-X with the finite-volume dycore, which is the dashed line; and then you have the solid line.
G
That's WACCM-X SE. The altitude increases from lower altitude in the bottom row to higher altitude in the top row, and then you have the northern hemisphere at 65 degrees latitude, near the equator, and 65 degrees in the southern hemisphere on the right here.
G
You have wavenumber on the x-axis and wave power on the y-axis, and you can see that when you get to higher wavenumbers, TIE-GCM starts losing a lot of power, especially in the northern and southern hemisphere panels, where you can see a large drop-off. At the equator you can see there's a much larger difference between the three models.
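The spectra being compared here, wave power as a function of zonal wavenumber around a latitude circle, can be computed with a Fourier transform along longitude. A minimal sketch on a synthetic signal (the field and amplitudes are made up for illustration, not model output):

```python
import numpy as np

def zonal_wave_power(values):
    """Wave power per zonal wavenumber for samples around a latitude circle.

    With this normalization, a wave A*cos(k*lon) contributes A**2/4 at
    wavenumber k (for 0 < k < N/2).
    """
    coeffs = np.fft.rfft(values)
    return np.abs(coeffs) ** 2 / len(values) ** 2

# Synthetic latitude-circle signal: a strong wavenumber-3 wave plus a weak
# wavenumber-20 wave, sampled at 1-degree spacing.
lon = np.linspace(0.0, 2.0 * np.pi, 360, endpoint=False)
signal = 2.0 * np.cos(3 * lon) + 0.1 * np.cos(20 * lon)
power = zonal_wave_power(signal)
```

Plotting `power` against wavenumber on a log axis gives the kind of drop-off comparison described in the talk; a model that filters near the pole loses the high-wavenumber tail.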
G
WACCM-X FV has filtering at the high latitudes, and since you have filtering near the pole, that also contributes to losing a lot of the power at the higher wavenumbers, because you're filtering all of them out. With the SE dycore you're able to capture and maintain a lot of the wave power at the higher wavenumbers, and you're able to capture a lot of that globally, in both the higher latitudes and at the equator.
G
So we can do a comparison of the temperature variation: the left panel here is WACCM-X FV, the middle panel is WACCM-X SE, and TIE-GCM is on the right. TIE-GCM is able to capture a lot of the large-scale structure everywhere; it uses ring-average filtering for the pole singularity.
G
However, in WACCM-X FV here you're missing a lot of the large-scale structures at the pole, due to the pole artifact, but you're able to capture some of the smaller-scale structures that you're missing in TIE-GCM; that's due to the underlying dynamical core rather than the filtering. In WACCM-X SE you're able to capture both the large-scale structures that you're getting in TIE-GCM and a lot of the small-scale structures.
G
You see significantly more, and one thing to note is that the driving for all three of these is exactly the same; it's the same GAMERA driving for all three, and you can see that the WACCM-X SE results and distribution are significantly better. So, to put all of this into context, or into numbers: how do we quantify the model performance?
G
You get something like this table here: the RMSE, the root mean square error, is the left set of numbers, and the R-squared is the right set of numbers. We split the event into three parts: you have a quiet part, where not a whole lot is happening; you have the main phase of the solar storm, with the geomagnetic storm; the recovery phase; and then you take the whole event as a single number.
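The two metrics in the table can be reproduced directly from their standard definitions. A minimal sketch (R-squared is taken here as the coefficient of determination; the arrays are placeholders, not the actual CHAMP/GRACE densities):

```python
import numpy as np

def rmse(obs, model):
    """Root mean square error between observed and modeled series."""
    obs, model = np.asarray(obs, float), np.asarray(model, float)
    return float(np.sqrt(np.mean((model - obs) ** 2)))

def r_squared(obs, model):
    """Coefficient of determination: 1 means a perfect match."""
    obs, model = np.asarray(obs, float), np.asarray(model, float)
    ss_res = np.sum((obs - model) ** 2)          # misfit to the model
    ss_tot = np.sum((obs - obs.mean()) ** 2)     # variance of the observations
    return float(1.0 - ss_res / ss_tot)

# In practice the time series would be split into quiet, main-phase, and
# recovery-phase segments and each segment scored separately.
```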
G
With WACCM-X SE, we're actually getting significantly better RMSE for the main phase of the storm. Weimer doesn't do a very good job; you get a big improvement moving to the coupled model, MAGE, and then you get some further improvement going to WACCM-X SE.
G
However, if you go to the GRACE-A spacecraft, WACCM-X SE isn't doing that much better than Weimer for the main phase of the storm, but if you move to the recovery phase, again it's significantly better. This might have to do with the way we're coupling: you have two-way coupling in MAGE, but you don't necessarily have two-way coupling with WACCM-X, and during the main phase of the storm, at the GRACE altitude.
G
Here, that two-way coupling might become a little bit more important, and so once we two-way couple WACCM-X SE, we should see significant improvement. For R-squared, closer to one is better; it captures how well you match the morphology between the spacecraft measurements and the model results. In the quiet period, where all the models are doing pretty well, WACCM-X SE seems not to capture the quiet period that well.
G
This has to do with the fact that WACCM-X SE is capturing too much variability that is not seen in the data. However, in the main phase you can see that going from 0.3 to 0.5 is a significant improvement: going from an empirical magnetosphere model to a coupled, dynamic magnetosphere model, you see significant improvement, and similarly in the recovery phase you see a significant improvement.
G
So when you take everything together, going from using Weimer to WACCM-X SE, you see a very large difference in both the RMSE and the R-squared. Currently in development, and future work: we're developing a higher-resolution WACCM-X, in both the vertical and the horizontal, and we will one-way couple the high-resolution WACCM-X with the high-resolution GAMERA. High-resolution GAMERA can run at half a degree, or even a quarter degree, at high latitudes, and then for WACCM-X.
G
You can go down to six kilometers. We're also currently developing the two-way coupling of WACCM-X with GAMERA, and this will be using the file-based exchange that we're currently doing with TIE-GCM, so it should not be too difficult to couple. So, as a quick showcase:
G
This is WACCM-X with 120 elements for the horizontal resolution, and the vertical level spacing is one-tenth of a scale height; normally we're using a quarter scale height, as opposed to one-tenth. You can see that as you go from the stratosphere up to the F region, at 250 kilometers or so, we're resolving significantly more structure.
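Vertical resolution here is expressed as a fraction of the pressure scale height per level, so going from a quarter to one-tenth of a scale height multiplies the level count by 2.5 over the same pressure range. A back-of-the-envelope sketch (the pressure bounds are illustrative, not the actual WACCM-X model top):

```python
import math

def n_levels(p_bottom, p_top, frac_scale_height):
    """Approximate number of levels spanning p_bottom..p_top when each
    level is frac_scale_height pressure scale heights thick."""
    total_scale_heights = math.log(p_bottom / p_top)  # depth in ln-pressure
    return math.ceil(total_scale_heights / frac_scale_height)
```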
G
So, in summary, one-way driving of WACCM-X SE performs better, though not significantly, than the two-way coupled TIE-GCM, and it's a significant improvement over using empirical models, especially over a stand-alone TIE-GCM; it's able to capture a lot of the small-scale structures that TIE-GCM was not able to. This doesn't account for the lower-atmosphere structures and features that WACCM-X has and TIE-GCM doesn't have, and so that would be something to explore in the future.
G
Once we do two-way coupling, we get more self-consistent and higher-resolution high-latitude driving from WACCM-X, which should further improve the model performance for WACCM-X; the work toward two-way coupling is ongoing. And, like I mentioned, coupling to WACCM-X enables the lower atmosphere to play a significant role in dynamically influencing all of geospace.
A
Thanks, Kevin. We maybe have time for one question.
A
Okay, Steve, you just... okay, I'll just read it: why use file-based exchange?
G
We're using a file-based exchange because TIE-GCM is a very old code, so ripping apart the TIE-GCM MPI code to merge it with GAMERA would be a significant undertaking, and it wouldn't necessarily provide that much of an improvement over file-based exchange. So the fastest way to get the coupling to work is something like a file-based exchange.
G
In the context of geospace, our events are typically a couple of days at most, and with how fast TIE-GCM runs, even with the file-based exchange, the exchange itself is not a limiting factor.
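The file-based exchange described in this answer can be sketched as a write-then-poll loop: each model writes its coupling fields to a file at the exchange cadence and waits for the partner's file to appear. This is purely illustrative; the file names, format, and field names are invented, not the actual TIE-GCM/GAMERA interface:

```python
import json
import os
import time

def publish_fields(path, step, fields):
    """Write coupling fields for exchange step `step`, atomically."""
    tmp = path + ".tmp"
    with open(tmp, "w") as f:
        json.dump({"step": step, "fields": fields}, f)
    os.replace(tmp, path)  # atomic rename: a reader never sees a partial file

def await_fields(path, step, poll=0.01, timeout=10.0):
    """Poll until the partner has published `step`, then return its fields."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if os.path.exists(path):
            with open(path) as f:
                data = json.load(f)
            if data["step"] >= step:
                return data["fields"]
        time.sleep(poll)
    raise TimeoutError(f"partner never published step {step}")
```

As the answer notes, for events of only a couple of days the cost of such an exchange is negligible next to the model time stepping, which is why replacing it with an in-memory MPI coupling would buy little.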
A
Okay, a quick question from Peter: do we have observations to compare against for the TKE spectra?
A
I don't think there's anything very good, because, I mean, at certain altitudes, maybe in the MLT region, we have a bit more wind measurement, either ground-based or from satellite observations; there we can get some information, but higher up it's very sparse. I don't think so, yeah. Okay, let's move on to the next one. Thanks, Kevin, again. Our next speaker is Patrick Callaghan, and he's going to talk about variable-mesh CESM. Patrick?
I
Okay, thank you. Can you see my screen? Yes? Pretty good. Okay, I want to start with a brief overview of the South American grids. "Grids" is plural here because we're actually talking about a family of refinement grids that encompass the South American continent, with resolutions ranging from 100 kilometers down to six kilometers, and for each horizontal grid, the high-resolution region is aligned with an existing WRF grid, shown in the figure on the right.
I
For each of these grids, in order to be consistent with the WRF simulations, strong nudging toward ERA5 outside the refinement region will be applied to impose boundary conditions. Because we're going to impose these boundary conditions, we weren't concerned about how quickly we transitioned from the high-resolution to the low-resolution region, so we did it as quickly as possible, to reduce the overall number of grid points and to facilitate future intercomparison between different resolutions.
I
This work, the development of these grids, is part of the CAM7 supplemental funding, and it overlaps with efforts in SIMA, MUSICA, and the NCAR water systems group focus on South America.
I
So a long version of the title for this talk is: variable-mesh CESM for South America as a test case for streamlining and simplifying the workflow, and for assessing the fidelity of the high-resolution simulations through intercomparison with convection-resolving WRF results.
I
I'll give a brief summary of the current workflow, then talk about some pending improvements to that workflow, and then wrap up with the problems that emerge at higher resolutions and what we're going to try to do to address them.
I
In the current workflow, you start by creating a grid, and then you run a series of scripts and programs to generate mapping, domain, and initial-condition files. Once those are done, you install the grid by basically copying those files into a local repository, and then you go through a process of iteratively adjusting the configuration values until you get the model to run stably, and you end up in the "it works" phase: the model will run at that point.
I
The current process is fairly smooth, and there have been no major problems reported with it. Most of the problems have to do with pushing the grid resolution down to three kilometers, where there are some difficulties we have yet to overcome. Other than that, the only other problem was a user who didn't have access to the NCAR environment and had some trouble getting the files they needed to create their grids.
I
The variable-resolution tools and documentation are available in a GitHub repository. We've created a high-resolution/variable-resolution bulletin board for users that need support and help, and we're working on a high-resolution/variable-resolution website that will be available soon; we're hoping to have full documentation, tables of working configurations, meshes, and data sets available there, along with FAQs and links to additional help resources.
I
Once you get the model running, there are usually additional steps needed to tune the model, and pre-processed data sets that may be needed. Depending on your usage, you have to pre-process emissions data, and if you're going to nudge the model, you have to pre-process the nudging data onto your new grid; the resources for doing this are available as part of the MUSICA toolkit. And then, once you're done, you need to validate your model.
I
Then you go through the process of spinning up the land model and tuning for validation, and Cecile is heading up an effort to have documentation giving guidance on how to do that.
I
Of these, the pre-processing of the nudging data is the least simple and efficient in terms of time and disk resources; generating the mapping and domain files and pre-processing the nudging data are right now the most computationally and storage-intensive steps. So there are a couple of things coming soon that will simplify these steps.
I
The first is in-line regridding, which will simplify the process and eliminate the need to pre-process the nudging data. There will initially be a set of four modules that maintain and manage the processing paths between input and output grids, and the horizontal mapping module will also accommodate user requests for other computational methods on horizontal grids, for example zonal means or spatially filtered values that can be used for scale-selective nudging. Once the horizontal and vertical options have been validated, a temporal mapping module will be added to manage the data time slices, for higher-order methods of handling the way time is interpolated in the nudging; an example would be a local diurnal fit.
I
The current nudging values applied at a given time are obtained by linear interpolation between reanalysis time slices, and a paper in review from Nick Davis showed that there are non-negligible errors introduced when the interval between these time slices is longer than one hour. Lastly, each of the components will have a user template to accommodate installation and testing of new mapping options between model grids, or for new data-set grids that may emerge in the future; so hopefully this will be extensible and flexible for new applications.
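The linear time interpolation of the nudging target can be sketched in a few lines, along with the kind of error just mentioned: for a field varying sinusoidally in time, linear interpolation between slices a time Δt apart has a worst-case error of 1 - cos(πΔt/T) times the amplitude. A minimal sketch with made-up numbers, not the actual nudging code:

```python
import math

def nudge_target(t, t0, f0, t1, f1):
    """Linearly interpolate nudging fields between time slices at t0 and t1."""
    w = (t - t0) / (t1 - t0)
    return [(1.0 - w) * a + w * b for a, b in zip(f0, f1)]

def max_linear_interp_error(period_hours, dt_hours):
    """Worst-case error of linear interpolation, for a unit-amplitude
    sinusoid of the given period sampled every dt_hours."""
    return 1.0 - math.cos(math.pi * dt_hours / period_hours)

# For a 24 h cycle, widening the slice interval from 1 h to 6 h raises the
# worst-case error from under 1% to roughly 29% of the amplitude.
```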
I
Some new nudging features will be added to further simplify the workflow. Using the refinement map from the VR grid, nudging can be used to define the boundary conditions, and you can impose them independent of the interior nudging; selective nudging of the larger-scale variability will be available as well. The ultimate goal for optimizing the workflow is to make it easy to use.
I
Once a grid is created, the user should only have to spend a minimal amount of time getting the model running. These improvements will greatly reduce the work needed to generate the mapping, domain, and initial-condition files; the pre-processing of nudging data will be eliminated, and nudging will only require a namelist modification to switch it on.
I
The other problem is the so-called gray-zone problem: at resolutions below about 10 kilometers, convection becomes partially resolved and the assumptions underlying the convective parameterizations break down. This is a problem because, ultimately, we want a model that's easy to use, but we also want it to give realistic results, and these problems lead us back to why the grid domain was chosen and configured the way it was.
I
Shown here is the website for the South America affinity group, which is part of the NCAR water systems focus on the energy and water cycle of the South American region.
I
It's a collaborative effort across NCAR and the international research community, and one of the main roles for NCAR is the development of a high-resolution, convection-permitting modeling system for South America. The members of this group have self-organized into a number of overarching research areas, shown here on the left.
I
One of the main things is that there are three year-long simulations underway, one for each phase of ENSO, and the results of these simulations will provide a great resource for assessing the CESM results across horizontal resolutions. There's an observational component to this as well: observational data for the region will be used to evaluate the results of both WRF and CESM as the work continues down the road.
I
There will eventually be 20-year-long simulations for this region. To take advantage of these WRF simulations, Richard Neale, Julio Bacmeister, Kristen Rasmussen, and I have a computing allocation to carry out a series of one-year simulations for the refined regions of South America, aligned for comparison with the four-kilometer WRF grid. So there will be four resolutions available, and we'll run them for three ENSO phases and for two vertical resolutions.
I
So all the experiments that we will run consist of, basically, a set of simulations at the four horizontal resolutions, one hundred kilometers down to six kilometers. A set of baseline one-year simulations for each selected phase of ENSO will be run, one with deep convection on and one with it off, and those will be run with the 70-level vertical option. In addition, there will be a series of one-year simulations for selected ENSO phases, with deep convection on and off, run with the 32-level option. We'll try some alternate reference surfaces for damping, and maybe some alternate spectral-damping applications.
I
Okay, thank you. So the alignment of the spectral elements of these grids will allow for a high-order partitioning of the four-kilometer WRF results into resolved and sub-grid values at each resolution, where the resolved values nominally represent the results we expect to obtain from lower-resolution simulations. Ideally, the parameterizations representing the sub-grid variability would reproduce the tendencies that recover the evolution of the resolved scales.
I
And lastly, to address the hydrostatic limitations, additional non-hydrostatic correction terms will be included and evaluated in comparisons with WRF results. Non-hydrostatic terms other than those associated with vertical accelerations relative to the model surfaces, or with movements of the model surfaces themselves, can all be included in the vertical momentum equation without having to modify the current prognostic equations.
I
So the South American variable-resolution simulations, and the contemporaneous high-resolution WRF simulations, will provide a valuable resource for the development and improvement of parameterizations down the road. Initially we tried to have a three-kilometer grid in the set, but it was not able to pass a number of the initial tests, so it had to be excluded. Once those problems get resolved, a three-kilometer option will be added to this family of grids.
A
Okay, if not, I will move on to the next one. Thanks again, Patrick. So our next talk is by Adam Herrington from NCAR.
E
All right, that was an excellent talk from Patrick, and it definitely makes you appreciate all the steps required in generating a new grid. This talk is going to be focused entirely on the "it works" phase, just talking about the science, so it's a nice segue. I'm talking about SIMA polar grids, and this is a picture of one of the polar grids, with a nice little visualization.
E
You can see the grid lines and the precipitation rates. Anyway, I'd also like to acknowledge the people who helped me along the way: Bill Lipscomb, Andrew Gettelman, Marcus Löfverström, Peter Lauritzen (whose name I spelled wrong), Kate Thayer-Calder, and Bill Sacks.
E
So let's get started. I'm just going to talk about some science results from the polar grids. These are the two Arctic grids here; all the polar grids use the spectral-element dycore. For the Arctic grid we refine the top panel of the cubed sphere to a quarter degree, with one degree outside that, and the Arctic-GrIS grid further refines the Greenland Ice Sheet to an eighth of a degree. These two are available to run right now
E
in CESM2.2, with these grid aliases. Then I'm going to talk about coupling this Arctic grid with the POP ocean, which I'm doing right now, so I'll give you a little status update on that and, if I have time, some preliminary results from the newest polar grid, the Antarctica grid.
E
So, in order to evaluate a variable-resolution grid, it's good to have low-resolution reference solutions. And a tangential science interest of mine, one that I think is fascinating, concerns your sort of typical GCM resolutions, these one-degree grids.
E
So this is the finite-volume two-degree, the finite-volume one-degree, CAM-SE-CSLAM one-degree, and the same but with coarser-resolution physics than dynamics. You can see that the Greenland Ice Sheet is something that's only marginally resolved in GCMs, and so I always like to understand: if we're going to be moving out of the lat-lon world into the quasi-uniform, unstructured-grid world, are we losing the higher-resolution representation of Greenland that we had benefited from with these lat-lon grids?
E
But you know, despite that, there may still be some improvements in the representation of the Greenland Ice Sheet. So here is what the average grid spacing over the Greenland Ice Sheet looks like for all six of these grids. This green one here is the CAM-SE NE30, and it has a one-degree average grid spacing over Greenland, 111 kilometers. Here's the finite-volume one-degree, where you benefit from that latitude-longitude grid and it's closer to 60 kilometers.
E
The finite-volume two-degree is not that much more coarse than the NE30pg3, interestingly enough, and pg2 is a little farther out here. The Arctic and Arctic-GrIS grids are at 27.8 kilometers and 14.3 kilometers respectively.
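The average grid spacings quoted here are, roughly, the square root of the mean cell area over the region. A minimal sketch of that bookkeeping, with illustrative cell areas rather than the actual grid files:

```python
import math

def mean_grid_spacing_km(cell_areas_km2):
    """Estimate mean grid spacing as the square root of the mean cell area."""
    return math.sqrt(sum(cell_areas_km2) / len(cell_areas_km2))

# A uniform one-degree grid has cells of roughly 111 km x 111 km near the
# equator, so its mean spacing comes out to about 111 km:
one_degree_cells = [111.0 * 111.0] * 100
print(mean_grid_spacing_km(one_degree_cells))  # -> 111.0
```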
E
So I did some 20-year simulations, starting in 1979, of all six of these grids in CAM6, and what's shown here are the primary source and sink terms of the surface mass balance, integrated over the entire Greenland Ice Sheet.
E
The black line is a time series from RACMO, which does a very good job of reproducing the observations, so for the purposes of this talk that will be the truth. Your integrated precipitation, the purple and the pink lines, which are the Arctic and Arctic-GrIS grids, does very well compared to RACMO. The 20-year averages are over here in the circles.
E
The finite-volume and the variable-resolution grids actually perform very similarly, and skillfully, compared to RACMO, whereas the CAM-SE grids, you see, are down here: they tend to produce too much runoff. So let's look at this a little closer by sectors of the Greenland Ice Sheet; these are the Zwally basins.
E
The ones that I want to focus on are three and four, southeast Greenland, which gets most of the precipitation, and these regions, seven and eight, get quite a bit too. So this is three and four here, and this is the integrated precipitation over that basin: your RACMO, and your purple and pink, the VR grids, are doing quite well again. The higher the resolution, the smaller the integrated precipitation you tend to get.
E
CAM-SE has too much precipitation in these regions, and finite volume is somewhere in between. So the takeaway is that all the one-degree grids overproduce precipitation, that SE is worse than FV, and that the VR grids reduce the bias. This is probably due to more realistic orographic precipitation: once you start producing a more realistic precipitation pattern that dumps out snow right at the coastal margins, instead of letting the moisture propagate further inland, the net effect
E
is that you actually have less; the better you resolve your orographic precipitation, the less precipitation you actually have. This bottom bar plot is snow plus ice melt, so it's not runoff; it's purely how well we are resolving the melt processes. This is interesting because in basins three and four, for example, the finite-volume grids do just as well in this metric as the higher-resolution grids, and CAM-SE sticks out as not doing all that well. A lot of this has to do with the following.
E
We use elevation classes to compute the SMB on the CLM grid, and those elevation classes contain real information about the elevation-area relationships of the actual Greenland Ice Sheet that we can't resolve. The elevation classes do an excellent job of really helping out at coarser resolutions, helping you produce a realistic melt rate, but they have limitations: if your resolution is too coarse, you can't take advantage of them as much.
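The elevation-class idea can be sketched as computing SMB at a few fixed elevations per cell and interpolating to the actual surface elevation. The class elevations and SMB values below are made up for illustration; this is not the CLM/CISM implementation:

```python
def downscale_smb(class_elevs, class_smb, target_elev):
    """Linearly interpolate SMB from fixed elevation classes to a target
    surface elevation, clamping outside the class range."""
    if target_elev <= class_elevs[0]:
        return class_smb[0]
    if target_elev >= class_elevs[-1]:
        return class_smb[-1]
    for z0, z1, s0, s1 in zip(class_elevs, class_elevs[1:],
                              class_smb, class_smb[1:]):
        if z0 <= target_elev <= z1:
            w = (target_elev - z0) / (z1 - z0)
            return s0 + w * (s1 - s0)

# Illustrative: melt (negative SMB) weakens with elevation
elevs = [0.0, 500.0, 1000.0, 2000.0, 3000.0]  # meters
smb = [-4.0, -2.0, -0.5, 0.3, 0.5]            # m water equivalent / yr
print(downscale_smb(elevs, smb, 750.0))  # -> -1.25
```

The point the talk makes is visible even in a sketch like this: the classes carry elevation-area information the coarse grid cannot, but only if the grid is fine enough for its cells to sample the class range.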
E
So, as a test, I tried to project this topography onto the finite-volume grid, and you get this laughable-looking blocky world here that I don't know I'd ever be able to get past peer review, but that's the joy of giving a talk. I ran that out from 1984 to 1998, and that's given in the red bars here.
E
Again, this is integrated precipitation over Greenland and integrated melt, and you can see that the red bar more closely matches the light blue, which is the finite-volume one-degree, than it does the light greens, in all cases. So it indicates that the blockiness of CAM-SE alone is not necessarily responsible. But before I jump the page:
E
you know, both of these, SE and FV, obviously have a terrain-following coordinate, and so this bottom boundary condition may be very sloppy, but because there are multiple grid cells in each one of these steps, the actual terrain-following coordinate can kind of smooth out over that. That's apparent if you just look at the surface-pressure differences between a run with the regular topography and the one with the blocky topography.
E
I think this suggests that the degradation of the orographic precipitation in the CAM-SE grids is not necessarily due to coarser topography, but just to a lack of dynamical resolution to resolve these fine-scale flow features. So in this situation, as much as I hate to say it, the lat-lon grid performs better than CAM-SE for producing precipitation over Greenland.
E
So that has nothing to do with variable-resolution grids, but it's an interesting question anyway. So, going back to variable resolution: here's a nice pretty snapshot of the Arctic-GrIS grid. I got a nice storm
E
dumping out precipitation at the southeast coast, with some vertical exaggeration to show that orographic precipitation. The variable resolution really gives you that realistic orographic precipitation; it's hard to deny that that is certainly one characteristic of these variable-resolution grids.
E
Another way I decided to look at the storms: I used TempestExtremes, and then I used a compositor to composite what the synoptic storms look like poleward of 45 degrees north, which is where the refinement transition begins for the Arctic grid. So here's just a nice composite of what all the storms look like in January, for example, with the nice comma shape. And so, if I do a PDF of all the precipitation rates for all the snapshots that make up this composite:
E
basically, what do the precipitation PDFs of these storms look like for June, July, and August? The red is the Arctic grid, the black is ERA5, and the blue is the lower-resolution solutions; clearly, you benefit by producing more extreme precipitation.
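A precipitation PDF over composited snapshots is just a normalized histogram of the rates. A generic sketch with made-up rates, not the actual TempestExtremes composites:

```python
import math
from collections import Counter

def precip_pdf(rates, bin_width=5.0):
    """Normalized histogram: maps bin lower edge -> probability density."""
    counts = Counter(math.floor(r / bin_width) for r in rates)
    n = len(rates)
    return {b * bin_width: c / (n * bin_width) for b, c in sorted(counts.items())}

# Illustrative: a refined grid resolves a longer tail of extreme rates
coarse = [1, 2, 3, 4, 6, 7, 9, 12]  # mm/day
fine = [1, 2, 3, 4, 6, 9, 18, 34]   # mm/day
print(precip_pdf(coarse))
print(precip_pdf(fine))
```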
E
Another question that I'm interested in is these sort of polar lows, which could be, you know...
E
My understanding is that there's a strong interaction of the surface winds, sea ice, and open-ocean surface fluxes that can contribute to the genesis of these storms, and that's one research question we're interested in for coupling this Arctic grid to the POP ocean. So I'm going to give it a half minute, that's okay, and here's what the status is of me coupling the Arctic grid to the ocean.
E
Well, when I started out of the box, naively, I ran it for way too long: this configuration is way too warm. This is the SIMA PI control. So I had to go to retuning. The VR tuning approach that I took is that you tune a one-degree model, but you use the variable-resolution physics time step.
E
Most of the domain is in the coarse region, and the only difference between variable resolution and uniform one-degree resolution in the coarse regions is the physics time step. So that's primarily the thing that's really altering your climate, and the thing you need to tune for. So I tried to get a reasonable spatial pattern of shortwave cloud forcing and a good RESTOM, and I got there. So how do I initialize these runs?
E
This black line is the long CMIP6 PI control for CESM2, and you'll notice it warms over the control period. That may not be ideal, but that is what occurs. And then these are the JG/BG spin-up legs from Marcus's work, where he actually spun up the entire coupled system, including CISM, to be in equilibrium, so the ice sheet's
E
internal temperature profiles are equilibrated out here at the end. This is where I wanted to branch off, and so I did. Here, with the newly tuned B compset, you can see it's cooling a little bit; that's obviously a lot better than warming, and this is the configuration we're working with. So there are multi-century runs in the pipeline, all the way out; we have a five-member ensemble planned for 1970-2020, and hopefully we'll be running that within the next week.
E
I don't have time to talk about this, but please look at Tri Datta's and Jan Lenaerts's posters in the poster session later today, and also search YouTube for "VisLab Greenland": you'll see a really cool version of the Greenland visualization that I showed snapshots from in this talk. So I'm going to stop there and take questions.
A
Thanks, Adam. Maybe we'll address one of the questions. Julio, maybe you can go ahead and ask your question.
B
Adam, I was just wondering: downscaling to the CISM grid is not going to help produce the right upslope wind that you need to get the...
B
E
A
Okay, there's another question, but Adam, maybe you can address that in the chat box, and we will move on. Thanks, Adam. So our next talk is by Forrest Lacey, NCAR ACOM: regional simulation of chemistry with MUSICA. Great, thanks.
J
Okay, great. So yeah, I was asked to give a talk about, basically, the initial results of chemistry using MUSICA and the SIMA framework, and then also some of the chemical model developments that we have ongoing in NCAR ACOM. This is work that I've done myself, but also with a large portion of the MUSICA team, particularly Becky Schwantes, who is now at
J
NOAA. So, just a quick overview of what MUSICA is, because Mary talked about this at the beginning.
J
MUSICA is an infrastructure for looking at chemistry and aerosols and modeling the atmosphere, something that's easily configurable, so we can choose different packages, or different chemistry and aerosol suites, based on our science question of interest. The vision paper was in BAMS just about a year ago. And how this links to SIMA is that this all fits within the SIMA framework: our modeling of the atmosphere fits within SIMA, and then MUSICA is a larger expansion of that.
J
So we have a global one-degree model that steps down by a factor of eight to give us around 14-kilometer resolution over CONUS. What we can kind of expect from this is that finer-resolution emissions and chemistry are going to be more accurately represented, and pollutants are then simulated at scales relevant to human exposure. So we're able to model things like ambient air quality within a global model, something that's been challenging in the past, and this also includes all of the global feedbacks that we get when using CESM.
J
And as an example of the science questions that we're starting to look at, for our first case we wanted to see how model complexity impacted our air-quality simulations. So for MUSICA version zero we looked at two cases: one using the standard NE30 grid, which is a one-degree cubed sphere, shown on the left, and the second being that one-degree global grid with a factor-of-eight regional refinement over CONUS, shown on the right.
J
In addition to the resolution improvement, we also wanted to look at chemical complexity, because we know that at finer scales our ability to resolve chemistry is going to be different as well. So we took the standard CAM-chem MOZART TS1 mechanism, which is around 150 species, and then we also simulated with the MOZART TS2.1 mechanism, which has improved isoprene and terpene chemistry and also high-NOx and low-NOx pathways for SOAs.
J
And I wanted to show kind of why we're looking at both horizontal scale and chemical complexity. So, on the significance of resolution, one thing that I think we're all familiar with is that wildfires are becoming an important source of ambient air-quality degradation.
J
And this is showing the Rim Fire from 2013, which was our simulation year; at the time it was the biggest fire in California. You can see that in the middle, which is the one-degree model, we're really smoothing out emissions from that source over a much wider range, whereas in the regionally refined case we're seeing the discrete fire points, and then the impact of that. We can also see there are multiple fires going on throughout the western U.S., which may not be as clear in the one-degree model.
J
And the difference between those two shows that smoothing impact. In terms of chemical-mechanism importance, that's being shown here: these are now both the regionally refined cases, with the TS1 and the TS2.1 mechanisms, and it's important here to focus on the difference plot. You're seeing that we're getting differences of up to 30 ppb, especially near populated areas; there's the outflow from Los Angeles, and in Southern California we're getting large differences depending on the time of day and how the emissions are coming out of the city
J
and affect air quality. So these are kind of the takeaway results from the initial simulations with MUSICA version zero, and these are in papers that both Becky and I have in preparation and plan to submit very soon. In general, the results show that changes due to shifts in resolution are at least of the same order of magnitude as changes due to chemical complexity, and this is especially relevant as we move to higher resolution.
J
So here I can kind of explain that a little bit better. Here we're seeing a min and max change of around 40 ppb in surface ozone as we shift from one degree to the regionally refined case, whereas we're getting changes of between 5 and 15 ppb as we shift just between the two chemical mechanisms. So they're not identical, but they're of the same order of magnitude, and this is very clear with PM2.5 as well.
J
So that was from the paper that Becky is preparing, which compares specifically to a number of flight tracks. The one that I have is more health-focused, and I'll discuss those results in a session tomorrow. But I did want to show some of the model biases and how we've started to analyze them.
J
So this is the shift in ozone bias and the shift in PM bias, based on all available EPA AQS observations, within specific regions and seasons. And I think this is where it's really clear: the top is showing the shifts based on resolution improvements, and the bottom is showing shifts based on chemistry improvements, or chemistry complexity.
J
I did want to discuss as well the Model Independent Chemistry Module, MICM. This is basically a database and a solver that allows for easily changing a chemical mechanism. We were not able to use this for the MUSICA version zero work, but I'll get to this in the conclusions: I think this is an important tool to help us answer some of the science questions that we're looking at, in advance of complex global simulations.
J
So here you basically have all of these same input databases that you can access, and then you basically just pick and choose, either by uploading a file, text or CSV, that identifies what species and reactions you want to use; it then takes that, configures it, and puts it into a box model. So these are basically the processes for using MusicBox, which is available.
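The pick-and-choose step can be sketched as filtering a species database against a user-supplied CSV. The schema and names below are hypothetical stand-ins, not the real MusicBox input format:

```python
import csv
import io

# Hypothetical species database (the real one carries rates, reactions, etc.)
database = {
    "O3": {"mw": 48.0},
    "NO": {"mw": 30.0},
    "NO2": {"mw": 46.0},
    "CO": {"mw": 28.0},
}

def build_mechanism(db, csv_text):
    """Select the requested species from the database for a box-model config."""
    selected = [row["species"] for row in csv.DictReader(io.StringIO(csv_text))]
    missing = [s for s in selected if s not in db]
    if missing:
        raise KeyError(f"unknown species: {missing}")
    return {s: db[s] for s in selected}

mech = build_mechanism(database, "species\nO3\nNO\nNO2\n")
print(sorted(mech))  # -> ['NO', 'NO2', 'O3']
```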
J
So if, based on measurements, you have different Henry's-law coefficients, and you want to see what happens if you change those within the chemical database, you can do that here. It then allows you to run a box model, plot the model results, and simply compare two mechanisms, which I think is important: if you're talking about looking at chemical complexity and how that impacts air quality, this will allow you to run a number of different cases within a box model before implementing them into the computationally
J
expensive MUSICA model. And this is kind of a demonstration of what we would expect to get. These results are actually from CAM-chem, but we would expect to have them within the MusicBox box model at some point: basically, you have the different mechanism names and the number of compounds, and then this would give you, within a box model, kind of the impacts on ozone for that mechanism, or PM2.5,
J
or what have you. Other ongoing development, which I think has been touched on a little bit already as well, is the use of different dynamical cores within MUSICA, and this gets to Gokhan's question: why would we want to do that? Well, the reason we want to do that is to answer science questions about the impact of your assumptions about dynamics on ambient air quality. This may give you an uncertainty on ambient air quality based on your choice of dynamical core.
J
We may have higher uncertainty in populated areas using the spectral element versus MPAS, so I think that the ability to have multiple dycores really allows us to get at some questions that are really difficult to answer moving forward. And, in addition to the ongoing development, Patrick Callaghan has done a lot of work on developing a user interface for generating grids.
J
So the kind of conclusions that I want to discuss from this work are that MUSICA and SIMA allow for the simplified evaluation of model complexity and how that impacts the atmospheric state, and this is all within a single framework. I think this has been highlighted a couple of times: it's important that this is a framework; it's not a model.
J
Our initial experiments show that both horizontal scale and chemical complexity are of equal importance if we're looking at nonlinear species such as ozone and PM2.5, and I think it's a really nice segue to the fact that MICM and MusicBox are tools that can aid in exploring some of these questions before implementation into a 3D model. I think it's really powerful that we can test a chemical mechanism, see what results it has in a box model, and then directly have that same tool available.
J
So I think with that I'll end and take any questions, if we have any time.
A
Thanks, Forrest. I forgot to remind you of the time; sorry about that. So, any questions for Forrest?
A
I have a quick, maybe naive, question: is MICM similar to the preprocessor we used before for MOZART?
J
So, with MICM, the type of preprocessor that you use goes back to your model configurator. So here, in your host-model characterization, that's where you define what type of preprocessor you've been using in the model. If it's KPP, this would identify that, and then it basically tells it to output a KPP file, or whatever chemical preprocessor you have. So it's very similar to that, but it's more flexible, because it allows for output of multiple different chemical-mechanism types.
H
A lot of people have been contributing to this work. First of all, I'd like to thank the AMP software engineering group for all the code reviewing they've been doing and all the work we're doing together, and Mariana Vertenstein and Jim Edwards for their CIME and CESM support. And then, you know, in the last year or so things have really started to move on the MPAS integration.
H
So I really would like to thank Miles Curry and Michael Duda for all the hard work they put in, and for all the discussions Bill Skamarock has been providing to spin me up on the details of MPAS. For the spectral-element part of this talk I'd like to thank Mark Taylor and Oksana Guba, as well as Francis Vitt and Hanli Liu, and for the FV3 part I'd like to thank Lucas Harris and colleagues at GFDL. So it's a rather large group of people involved in this.
H
So FV3 was released with CESM2.2, and in the last year or so there have been a lot of updates to the dycore from GFDL; we're pulling those into our modeling system. And there's currently a funded proposal from NOAA to look at FV3 inside of CESM, so that proposal, and our collaboration with GFDL, will help us to kind of finalize a quote-unquote final configuration for the CESM dynamical-core evaluation effort.
H
On the right-hand side we see our reference simulation with 70-layer finite-volume WACCM; on the left is spectral elements, and you see a rather strong degradation of the simulation of the QBO, which, already at 70 layers with FV, is not that great. I had the suspicion for a long time that the vertical remapping was too diffusive, but I didn't have the time to code up the stuff myself.
H
Other changes that are not fully checked in on our quote-unquote trunk, but are very close to being there: for a long time, Patrick Callaghan and I have been observing noise with spectral elements over steep orography, and we later started to work with Mark Taylor on this. He developed some simple modifications to the pressure-gradient force and to the hyperviscosity damping of temperature.
H
It really improved the simulations. I won't go into the details of the numerics here, but will just show you the difference between the CESM2.2 release of spectral elements on the left-hand side and spectral elements with these
H
algorithmic modifications on the right. I'm showing a long average from an AMIP simulation: the vertical velocity at 500 hPa, which is usually indicative of noise issues near steep topography. When we introduce these changes, we get vertical velocities that are much more consistent with the other dynamical cores in SIMA.
H
As alluded to in earlier talks today and yesterday, there's been a lot of work in WACCM-X to adapt the spectral-element dynamical core for high-top modeling, where you have species-dependent thermodynamics. We've also added operators to represent thermal conductivity and molecular viscosity inside the dycore; the dycore takes care of the horizontal part, and physics the vertical part. We've had quite a few stability issues, so there have been changes to the sponge layer for high-top applications.
H
I just wanted to let people know that we tried to implement this in a very general way, both in terms of the composition of dry air and in terms of the moist components of air, such as cloud liquid, ice, and rain. So we coded up a framework where you specify, via the namelist, which dry-air species are active in your model.
H
Here's the example from WACCM-X, and here are the examples for CAM4, 5, and 6. That then tells the thermodynamic subroutines in a common module what the heat capacity should be, the gas constant, the thermal-conductivity coefficients, and so on. So currently the spectral-element dycore and the CAM/WACCM physics package call common routines to ensure thermodynamic consistency between physics and dynamics, and I hope we can make use of this infrastructure for other dynamical cores that wish to add this functionality to SIMA.
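A namelist-driven species list implies mass-weighted mixture properties in those common routines. A minimal sketch of that idea; the per-species constants are standard values, but the composition and the function are illustrative, not the actual CAM module:

```python
# Specific heats and gas constants per species, J/(kg K)
CP = {"N2": 1039.0, "O2": 918.0, "O": 1299.0, "H2O": 1846.0}
R = {"N2": 296.8, "O2": 259.8, "O": 519.6, "H2O": 461.5}

def mixture_property(table, mass_mix_ratios):
    """Mass-weighted mean of a per-species property (e.g. cp or R)."""
    total = sum(mass_mix_ratios.values())
    return sum(table[s] * q for s, q in mass_mix_ratios.items()) / total

# Near the surface, dry air is mostly N2 and O2 (Ar omitted here), so the
# mixture cp comes out close to the familiar ~1005 J/(kg K) for dry air:
print(mixture_property(CP, {"N2": 0.755, "O2": 0.231}))
```

High up, where atomic oxygen becomes a large fraction of the mass, the same weighting gives a markedly different cp and R, which is exactly why the dycore and physics must agree on the composition.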
H
So there's been a lot of interesting research here in figuring out how to do this consistently. The basic constraint that we have to work with is that MPAS uses a z-based, height-based, vertical coordinate, whereas all the other dynamical cores, and CAM physics, use a pressure-based vertical coordinate. So we have to figure out how to pass the discrete MPAS state, which is a modified potential temperature, velocity components, the dry density of air, and dry mixing ratios.
H
So, if we assume hydrostatic balance, that's pretty straightforward, because if you take the hydrostatic equation and integrate it over a layer, you can easily show that the pressure-level thickness can be formulated in terms of MPAS prognostic variables, namely the dry density and the mixing ratios of the species in the air. And then, you know, dz in MPAS stays constant, so that transformation is straightforward.
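That layer-integrated relation can be sketched directly: dp = rho_dry * (1 + sum(q_i)) * g * dz, with only the vapor-like mixing ratios included. The layer values here are made up for illustration:

```python
G = 9.80616  # gravity, m s^-2 (the value CESM uses)

def pressure_thickness(rho_dry, dz, vapor_mix_ratios):
    """Layer pressure thickness from MPAS-style prognostics:
    dp = rho_dry * (1 + sum(q_i)) * g * dz, hydrostatic balance assumed."""
    return rho_dry * (1.0 + sum(vapor_mix_ratios)) * G * dz

# Made-up lowest-layer values: rho_dry = 1.2 kg/m3, dz = 100 m, q_v = 0.01
print(pressure_thickness(1.2, 100.0, [0.01]))  # Pa
```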
H
A little detail here: MPAS, similarly to SE and similar to FV3, has thermodynamically active water species beyond water vapor; it has all the condensates active, whereas CAM physics only takes into account the kinetic, potential, and internal energy of water vapor. So we exclude the condensates in the calculation of the pressure-level thickness.
H
The next level of consistency can be formulated in terms of energy conservation. Basically, when computing the energy, assuming now that MPAS is hydrostatic (so I'm not taking into account the non-hydrostatic effects at this point in the game), we would like the energy computed using MPAS prognostic variables to be the same as that computed using CAM prognostic variables.
H
There's a lot of devil in the detail here, but just to give you an idea of the kind of consistencies we've been deriving: as I told you, the pressure-level thickness is easy to compute. That means that if we sum up pressure-level thicknesses, we can get the pressures at a certain half level, in between levels, by computing a consistent top-level pressure, which is not constant in MPAS.
H
However, we have to be really careful when we compute the mid-level pressure: we can't just take the arithmetic mean of the pressures (sorry, it should have been a plus, not a minus, here), because that's not consistent with MPAS, which defines the mid levels in terms of the arithmetic mean of the heights, not the pressures.
H
Anyway, you can use the equation of state to compute what one over the pressure should be at the mid level, and similar expressions follow. So, long story short:
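The distinction being made here can be illustrated with an idealized case: in an isothermal hydrostatic layer, pressure decays exponentially with height, so the pressure at the arithmetic mean of two heights is the geometric mean of the interface pressures, not their arithmetic mean. This only illustrates why the two definitions differ; the exact CAM/MPAS expressions are more involved:

```python
import math

def p_at_mean_height_isothermal(p_bot, p_top):
    """Pressure at the height midpoint of an isothermal hydrostatic layer:
    p(z) ~ exp(-z/H), so the mean height maps to the geometric mean pressure."""
    return math.sqrt(p_bot * p_top)

p_bot, p_top = 1000.0, 800.0  # hPa, made-up interface pressures
print(p_at_mean_height_isothermal(p_bot, p_top))  # below the arithmetic mean
print(0.5 * (p_bot + p_top))                      # 900.0, for contrast
```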
H
it means that if we use these formulas here, then when we compute the height in the CAM physics state, for which height is diagnostic, we get exactly the same height as is used in MPAS, and that then results in consistent energies when we're passing the state from dynamics to physics. So now to a much harder problem, namely the time evolution of energy in the model. The CAM physics package was designed with energy conservation in mind, and the issue we're running into
H
You don't need to read all the text here; I'm going to tell you. The issue we're running into is that the CAM physics formula for the temporal evolution of energy is different from the one used in MPAS: here we are assuming that the pressure is constant, whereas MPAS has a constant-z formulation, and these two energy formulas are different. So the question is: what do we do?
H
So the first thing we do is go into dynamics and immediately pass the state to physics; I just described how we do that. Then we run all the physics packages incrementally, and in CAM physics the energy budget is closed if we assume that the pressure stays constant during physics updates, which of course is not true when water enters or leaves the atmosphere.
H
So at the very end of physics we have what we call dme_adjust, where we account for the water change in the column. We're not treating that in an energetically consistent way, so we're letting the energy fixer fix it, and then we transform everything back to the dynamics state, run the dycore, and start over again. The part of this loop shown with these arrows is what the energy fixer is fixing.
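A minimal sketch of what a global energy fixer of this kind does (a single-column toy with invented names and numbers, not the actual CAM fixer): compute the column enthalpy before and after physics, and spread the imbalance uniformly over the column as a temperature increment:

```python
import numpy as np

cp = 1004.64   # specific heat of dry air at constant pressure (J kg^-1 K^-1)
g = 9.80616    # gravity (m s^-2)

def fixer_increment(dp, T_before, T_after):
    """Uniform temperature increment that restores the column enthalpy
    sum(cp * (dp/g) * T) to its pre-physics value. dp in Pa, T in K."""
    m = dp / g                                    # layer mass per unit area
    imbalance = np.sum(cp * m * T_before) - np.sum(cp * m * T_after)
    return imbalance / np.sum(cp * m)             # K, applied to every layer

dp = np.array([2000.0, 3000.0, 5000.0])   # toy layer thicknesses (Pa)
T0 = np.array([250.0, 270.0, 290.0])      # temperatures before physics (K)
T1 = np.array([250.5, 270.2, 289.0])      # temperatures after physics (K)
dT = fixer_increment(dp, T0, T1)
T_fixed = T1 + dT
# the column enthalpy of T_fixed now matches that of T0
```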
H
Let me show you the importance of having consistency here. If I compute this moisture adjustment using the CAM physics formula, shown on the left-hand side here, this is a one-year average of the energy tendency due to that adjustment. On the right-hand side I show exactly the same thing using the energy equation consistent with the MPAS dynamical core. They look similar.
H
They should, but they're different in the details and in the average value written up here, and if we subtract the two from each other, you see that locally you can be a couple of watts off due to inconsistent energy formulas. So I think we've reached a first level of consistency, in the sense that the energy fixer is doing the right thing.
H
Obviously, going beyond the hydrostatic scale, we could make this non-hydrostatic, which would mean adding the vertical velocity term, the only term missing in the energy formula currently used. And we're not thermodynamically rigorously including the condensates in the energy on the physics side; that's also an inconsistency shared with the SE and FV3 dycores.
A
All right, so let's move on to the future plans, and I'm going to share my screen.
A
Okay, can you see my screen here?
A
We're better now, yep. Okay, good. All right, so let's back up a little bit. So yes, we have seen all this great development, and I think the plan, what we would like to do, is to build upon this and go forward. So here are a few items at the top of our list. The first thing, you know, given the great achievements we've just seen:
A
We would like to engage the community and let everyone use it. So the first item is to hopefully release the SIMA v1 configuration this fall, and also to have a tutorial and workshop, also in the fall. Another important item is to create a SIMA advisory panel or board to help guide further configuration development, and in this process hopefully bring feedback from the community into the future development of configurations. And of course, the SIMA science and infrastructure development is always central to this effort, projecting this on to next year.
A
Here are a few specific items we would like to work on in terms of coordination, infrastructure, and science goals.
A
For coordination, the software engineering project manager plays a key role in coordinating communication between the co-leads and the software engineers, and also in coordinating among the software engineers from different NCAR labs.
A
So definitely we want to have continued funding for that position. Also, with the upcoming release of the model, we would like to have a SIMA community liaison, so that we can better know what the needs of the community are. And, as mentioned earlier, there is the SIMA advisory board; the liaison should be part of that board. For the infrastructure development, there are a few items already discussed yesterday by Julio in his talk.
A
He mentioned the CCPP, the Common Community Physics Package; we would like to bring that into SIMA. Also common regridding, which is very important: as we have seen in these talks, there is a variety of different grids, such as SE regional refinement, MPAS, and also the geomagnetic grid, so it is very important to continue the work described in Patrick's talk. For the chemistry development, Forrest already mentioned a few items, and there is also the infrastructure for aerosol processing.
A
That will be on our list for next year's development. Then there is improvement of the workflow; again, Patrick in his talk gave a very good example of that. We want to improve the efficiency of pre-processing and post-processing, of analyzing all the results, and of the software tools for doing this. For the science goals, there are the configurations in each of the subject areas; we already have quite a few simulations going, and maybe we will have more.
A
It will be very interesting to analyze all those results to gain new insights, and maybe publish a lot of papers on that. There is also the development of SIMA-related diagnostics and the integration with observations.
A
I think Luisa in her talk yesterday mentioned the work of Melody; I think that's a good example of that, and that will continue with the SIMA development. As for DA, data assimilation: as Mary showed in her talk, it's an important part of the SIMA vision, so it's definitely something we would like to work on. Currently, for data assimilation on the CESM side, both the climate and the geospace side, we're using DART, but we would like to explore
A
maybe, as first steps, also extending that to weather applications. For geospace, we would like to improve the ionosphere coupling; what we have in mind there is a transition to two-way coupling, and also maybe to build on the kind of work described by Peter, working with MPAS to develop a deep-atmosphere dycore.
A
So those are some of the things on our list. I think we have 15 minutes left, so we'll just open the floor for discussion, either on the future plans or on any other issues you want to bring up. These are the few items we have; if you have comments on them, just speak up, or if you have other thoughts, please go ahead.
A
I cannot see anyone's hand right now. Mary, if you see anyone, you can call their names.
C
Thank you. So I wonder about the base configuration you're using for the cubed-sphere grid. Obviously, it's excellent for covering the contiguous U.S. domain, but it has these pesky nodes, and one of them is over the North Atlantic.
H
Sure, yeah. So Adam and I have actually looked quite extensively at this with the spectral element dycore, where in idealized aquaplanet simulations we did see more variability at the element edges than in the interior of the element. So you have this spurious dependency on where you are in the element, and that's why we argued that you should not be calling the physics from these very anisotropically distributed points; you should instead integrate over more equal-area grid cells and pass that to physics, and that's what we're doing with the spectral element CSLAM version.
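The idea of aggregating anisotropically distributed dynamics points onto more uniform physics columns can be sketched as a simple area-weighted average (purely illustrative: the weights, grouping, and function name here are made up and only stand in for the actual SE-to-CSLAM mapping):

```python
import numpy as np

# Toy example: average a field from unevenly weighted quadrature points
# onto coarser, roughly equal-area physics cells before calling physics.
def to_physics_cells(values, weights, cell_of_point, n_cells):
    """Area-weighted mean of point values within each physics cell."""
    num = np.zeros(n_cells)
    den = np.zeros(n_cells)
    np.add.at(num, cell_of_point, weights * values)  # weighted sums per cell
    np.add.at(den, cell_of_point, weights)           # total weight per cell
    return num / den

vals = np.array([1.0, 2.0, 3.0, 4.0])    # field at 4 quadrature points
w    = np.array([0.1, 0.4, 0.4, 0.1])    # uneven quadrature weights
cell = np.array([0, 0, 1, 1])            # assignment to two physics cells
avg = to_physics_cells(vals, w, cell, 2) # -> [1.8, 3.2]
```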
B
Thank you so much. I think it's more of a comment; I'm not sure yet. So thanks so much, I think that was a super interesting question. I also really appreciated some of the details that came out in Peter Lauritzen's talk about how difficult it actually is to take different vetted components and put them together.
D
So I guess I'll start, but the other SIMA leads can also chime in. That's one of the reasons we're pushing to get to a SIMA version 1 set of configurations that can be released to the community: then there's a reason for the community to use these configurations, but also to start giving feedback or contributing their own developments.
D
So that's where I see the community getting involved, and hopefully that's soon, that's in the next few months. Hanli or Bill, do you have any further comments?
A
I agree with what you said, and I also think there is already a lot of synergy. For example, as Kevin discussed in his presentation, there is a lot of synergy between the SIMA effort and community efforts like the MAGE model or the NASA DRIVE center.
A
This does have a high impact on community participation, on the users. I think those kinds of activities will naturally be very helpful for promoting the status of SIMA.
D
And I guess I want to make a further comment on the chemistry side of things, because MUSICA version 0 has become mature. Forrest showed that, after the initial configuration that he and Becky put together, there are more applications going on, and not just within NCAR; it's starting to happen outside of NCAR as well. One of the other tactics we're taking is to try to create a community simulation that a number of people from the community can help analyze for their own research interests.
B
Yeah, this is just to follow up on Hanli's point about proposals, and Judith: with the release of SIMA v1 and the workshop and tutorial, once this gets into the hands of the community and they begin to use it, we certainly encourage them to work it into proposals where they plan to use it, pursue it, and evaluate it.
F
It's actually a little bit related to that, and I just wanted to sort of distinguish
F
where and when community input can be considered. So far, from your response and the responses provided, I get the impression that the idea is actually to provide a framework or a simulation to the community after the fact, right? That's at least my impression. But you would want the community engagement happening way before that; you would want the community to essentially have a say in how that particular simulation is actually created.
F
I noticed that, instead of a scientific steering committee (I don't know whether it was a mistake or not), there is actually a mention of an advisory committee, and the two are quite different. Is there a scientific or steering committee that has the final say, the executive power, rather than whatever the current structure is? Because an advisory committee is just there to advise, not necessarily to tell you what to do.
D
Well, let me answer that a little bit. Okay, so there are a few things going on, right? SIMA is a new project. It's trying, you know, it's trying to get places, but it has been sort of, you know...
D
We've been waiting for discussions with our management to help us along that path, and so we've been careful about how much community engagement to pursue, how much to say it's a done deal versus still moving it forward. But remember, we had that community workshop a year ago, and we have kept in touch with the organizers of that community workshop, trying to get feedback from them every now and then. But I agree that's not enough, and we don't have a formal governance structure; that's why we're trying to push forward a little bit more on starting to create a better governance structure. At the same time, we don't want to be duplicating the efforts of CESM or WRF or other governance structures that are out there.
D
So we kind of have to take things in small steps and march forward as things grow and become more established with SIMA. Again, we can talk about it in more detail later, but I'm happy to discuss. Any more comments from Hanli or Bill on that topic?
B
To move forward with governance really requires NCAR, UCAR, and in part NSF to also be on board with it, and we're kind of waiting on them to give us feedback as to how we're going to move this forward. That's our understanding.
A
And a little bit to add to what Mary said: I agree that the design of, for example, the case experiments, the use cases, should happen early on, and actually that's exactly what has been done, or what we tried to do, during the vision workshop; some of the design of the v1 experiments or use cases is a result of that discussion.
D
So, while people are maybe thinking about questions: if you think of something later, after the fact, please feel free to contact any of the SIMA leads; that's myself, Bill, Hanli, or the missing-in-action Andrew Gettelman. We'll be happy to take that on. As we mentioned, Jordan Powers is the project manager for the SIMA development, and he's a good contact person too. So contact any of us, whoever you feel comfortable talking to.
D
And if you're interested: it's one thing to wait until the SIMA version 1 configurations are ready and released to the community, but if you're really eager to get started and want to get involved, we're happy to make that happen. Again, please contact us, and we can start talking and figuring out the best ways to involve you all in the SIMA model.
A
Okay, I think we are at the top of the hour, three o'clock, so maybe we'll end here.
A
Thank you for your participation, and I look forward to working with you in the future on the SIMA project.