From YouTube: CESM Workshop: Atmosphere Model Working Group
Description
The 26th Annual CESM Workshop will be a virtual workshop with a modified schedule on its already scheduled dates. Specifically, the virtual workshop will begin with a full-day schedule on 14 June 2021, with presentations on the state of the CESM, by the award recipients, and by three invited speakers in the morning, followed by roughly 15-minute highlight and progress presentations from each of the CESM Working Groups (WGs) in the afternoon.
On 15-17 June 2021, working groups and cross-working groups have half-day sessions, some with presentations and some that are discussion only.
A
Well, thanks everybody for participating in this, hopefully the last completely online session, but we'll see. First, just a few things about logistics. I want to share our code of conduct to ensure a respectful, productive dialogue in the session.
A
This is our agenda. I just want to point out that there will be four posters from AMWG at the poster session this afternoon at 3:20. There are four very interesting posters in the session.
A
From my understanding, it's very nicely organized, with lots of chances for interaction. I also want to point out that, in addition to the main session that's here on the agenda, after that's over at around 12 we're going to have three smaller group discussions that you can join in as you wish.
A
I just want to go over the process for asking a question. Everybody is muted by default; to ask a question, either use the chat function or the raise-your-hand feature, and you'll be called on to ask your question after the talk. There will be a discussion period at the end.
A
So if we haven't gotten to your question right away, we can address it there. As final notes: the session will be recorded, the chat will be saved, and you are being streamed live on YouTube. With that I'll stop sharing and we can go to the first talk. Chris, were you going to chair the first session here, Peter?
F
I was going to, yeah. Okay, take it away. Nice job finishing early, Julio. I will just say that you all have 15 minutes for your talk, including questions, so I will give everyone a 12-minute warning, and hopefully you'll be done about then. Our first talk today is from Yang Yang, about fast climate responses to aerosol emission reductions during COVID-19. So take it away.
G
We all know that the abrupt outbreak of COVID-19 has been spreading over the world, which has caused hundreds of thousands of deaths. In order to control the rapid spread of the disease, many countries have taken dramatic measures, which greatly reduced human activity and air pollutant emissions, especially those from motor vehicle traffic. Aerosol has also been recognized as a major factor affecting global and regional climate, as it influences the radiation budget of the Earth directly, by scattering and absorbing solar radiation, and indirectly, by altering cloud properties.
G
To investigate the fast response of the global climate to aerosol emission reductions during the COVID-19 period, we performed simulations using CAM5 for the year 2020. In order to better match the observed large-scale circulation patterns, the wind fields are nudged to the MERRA-2 reanalysis.
G
Since the COVID-19 pandemic was still going on when we did the analysis, three different scenarios were designed to represent possible emission pathways in the year 2020. In the different simulations, we assume different situations of epidemic control, naming them the fast, mid, and slow simulations.
G
This figure is an example of how emissions changed in 2020. East Asia, especially China, took action first to contain the disease spread in January 2020 and started the back-to-work stage in April. For the rest of the world, three emission reduction scenarios are designed, with fast, medium, and slow prevention and control.
G
Here are the results. As we expected, the annual mean aerosol column burden and optical depth were significantly decreased due to the aerosol emission reductions during the COVID-19 period over many regions of the world, including East Asia, South Asia, the Middle East, Europe, and the eastern U.S. There are corresponding changes in radiative forcing, due to both aerosol-radiation interactions and aerosol-cloud interactions, between the emission reduction simulations and the control simulation.
G
Statistically significant surface warming primarily appears over land in the Northern Hemisphere, with a zonal mean temperature increase of 0.04 to 0.07 K between 30 degrees north and 50 degrees north. In the Arctic, due to the decrease in low clouds, near-surface air temperature decreased substantially. We can also see that a longer duration of the global emission reductions produces a warmer climate.
G
An important implication of the fast climate responses to the COVID-19 emission reductions is the impact on temperature in the real world. The year 2019 is reported as the second hottest year in terms of the global annual mean in the past 140 years of the climate record. In the first three months of 2020, the observed global mean surface temperature was even higher than in 2019.
G
Over central and eastern China, the observed temperature is also higher, and this is the place where aerosol emissions were substantially reduced due to the COVID-19 lockdown. The observed temperature increase there can be explained, by 10 to 40 percent, by the aerosol emission reductions during COVID-19.
G
Although the impacts on global near-surface temperature and precipitation from the COVID-19 emission reductions are small, the impacts of the emission reductions on extremes at a finer scale could be significant. In the summer of 2020, record-breaking flooding related to extreme rainfall struck eastern China, resulting in more than 100 deaths and 3 million hectares of crop damage, with an economic loss of about 10 billion US dollars.
G
Many studies reported that natural variability contributed to these events. In addition to the natural variability, human influence may also have had a dramatic impact on the extreme rainfall. So we examined an ensemble of simulations from state-of-the-science models, including E3SM and CESM.
G
We also found that the COVID-19-related reductions in aerosols can explain about one-third of the observed summer extreme precipitation in eastern China. But since the manuscript was just submitted, I'm not going into the details of the study; hopefully we can share a good result later.
G
This is the summary. We provided first-of-its-kind global insight into the impact of COVID-19 on climate. We found that an anomalous surface warming appears over the Northern Hemisphere continents in response to aerosol reductions during COVID-19, and the emission reductions explain the observed 2019-to-2020 temperature increase by 10 to 40 percent, especially in China. A longer duration of the emission reductions produces a warmer climate. This is the citation of the study I just mentioned.
F
All right, thanks, that was extremely interesting and well put together, so that's great. You have a couple of questions in the chat. The first one is from Julio. Do you want to read your own question, Julio?
A
Well, yeah, sorry for the bad typing; I was confused at first. It seemed that you were saying these runs are nudged, correct?
A
It's
so
it's
it's
just
it's
umv
nudging,
so
at
least
the
meteorological,
the
meteorology
and
all
the
experiments
is
exactly
the
same
tonight.
G
Actually, we only nudged the wind fields above 550 millibars, so near the surface the model can respond to the emission reduction, and other variables like precipitation and temperature can also respond to the aerosol emission reduction.
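For listeners unfamiliar with nudging: it is a Newtonian relaxation of the model winds toward a reference analysis above some pressure level, leaving the lower levels free to respond. A minimal sketch of that idea (hypothetical field names and relaxation timescale; not the actual CAM implementation):

```python
import numpy as np

def nudge_winds(u, p, u_ref, dt, tau, p_cutoff=550.0):
    """Relax u toward u_ref only where p < p_cutoff (hPa), i.e. above the cutoff level."""
    u = u.copy()
    mask = p < p_cutoff                      # smaller pressure = higher altitude
    u[mask] += dt / tau * (u_ref[mask] - u[mask])
    return u

p = np.array([200.0, 400.0, 700.0, 900.0])   # hPa levels
u = np.array([30.0, 20.0, 10.0, 5.0])        # model winds, m/s
u_ref = np.array([28.0, 22.0, 12.0, 6.0])    # reanalysis winds, m/s
u_new = nudge_winds(u, p, u_ref, dt=1800.0, tau=6 * 3600.0)
# levels below the cutoff (700 and 900 hPa) are left untouched
```

With a 6-hour relaxation timescale and a 30-minute step, each nudged level moves 1/12 of the way toward the reanalysis per step.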
F
Okay, yeah, great. I have a question, which is that I was struck by the small magnitude of the temperature response that you're seeing, and also, it looked to me from the global maps like the areas where we see changes aren't particularly well correlated with areas where we know there are a lot of anthropogenic aerosol emissions, and so that makes me wonder about statistical significance.
F
So
have
you
done
anything
to
test
that.
F
Okay,
what
kind
of
significance
test
is
that
a
t-test
or.
F
Okay.
Okay,
great!
Thank
you!
Okay.
We
have
another
late-breaking
question
hannah
brodsky.
Do
you
want
to
ask
your
question.
H
Yeah, I'm curious how you determined how much of the warming you measured was caused by climate change versus the emissions reductions due to COVID.
H
I guess, because we would expect some increase in surface temperature due to climate change, I'm curious how you determined that some of the surface temperature increases measured this year were caused by the aerosol reduction, versus what would just be anticipated anyway.
G
Okay, you mean the temperature change caused by the emission reduction in the real world, is that right? From the observational perspective, we just compare the temperature change: we can see an increased temperature in eastern China in the first three months of 2020 compared to 2019.
G
The temperature increase can be due to climate change, or internal variability, or the aerosol reduction. So we did the simulation to see the pure impact from the aerosol reduction, without considering any climate change; this is the aerosol effect. Then we calculate the ratio of the impact from the aerosols to the observed temperature change, and we get that 10 to 40 percent of the signal can be explained by the aerosols.
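The attribution ratio just described is the simulated aerosol-only temperature change divided by the observed change. A toy illustration with invented numbers (the values below are not from the study):

```python
def explained_fraction(dT_aerosol_only, dT_observed):
    """Fraction of the observed warming attributable to the aerosol-only response."""
    return dT_aerosol_only / dT_observed

# Hypothetical example: 0.3 K of aerosol-only warming against 1.0 K observed
frac = explained_fraction(0.3, 1.0)   # 0.3, i.e. 30% of the observed signal
```

The 10 to 40 percent range quoted in the talk comes from evaluating this kind of ratio across regions and scenarios.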
I
All right, well, thanks, everyone; glad to see everyone again. I'd much rather see everyone in Boulder, but I guess this will have to suffice for this year. I'm just going to give a brief update on what we've been working on over the last year as part of one of the Climate Process Teams.
I
This is a project that's jointly funded by the National Science Foundation as well as NOAA. In particular, we're really focused on the representation of momentum fluxes and turbulence within the boundary layer in CLUBB, with an eye towards CAM7.
I
On the first slide, I just couldn't fit everyone on there, so this is the entire team. We have a lot of people, and I'm going to show some really top-level results from a few folks today, but we've got a lot of great people working on this. I also wanted to note that I've highlighted in red some of our early career scientists.
I
We have a wide range of expertise, so hopefully the next generation of climate modelers, I suppose. This is just the one-slide overview, but in reality we have two primary foci for the CPT. One is the implementation of prognostic momentum fluxes within CLUBB: the current default version has an eddy-diffusivity formulation, so it only diagnoses momentum fluxes, using a downgradient approach.
I
The idea is to reuse some of the CLUBB architecture to be able to directly predict momentum fluxes, and then also to define a regime-specific eddy timescale. This is somewhat analogous to having a more flexible turbulent length scale associated with the boundary layer parameterization, something that varies as a function of atmospheric regime; it's not just a hammer that we hit every grid cell with.
I
On the first bullet point, I really want to emphasize that the focus is on credibility. We are looking at this from a very bottom-up approach. Usually, when we're thinking about global climate, there's a lot of top-down work, twiddling some knobs here or there to get, say, the global correlation of sea level pressure to nudge up slightly; but right now we're really looking at the process level. I like to joke that sometimes we may have to take a step backwards: we may introduce something that drags the global climate back a half step, but in the long run I'm hoping that will provide us a more physical path forward from a dynamical perspective.
I
The idea is that if you have this prognostic formulation of the vertical momentum fluxes, horizontal momentum is actually subjected to both the vertical pressure gradient, which we normally think of as the cumulus pressure term, and also the horizontal pressure gradient. Over here in this blue box, you can see what happens when a parcel is lifted into an area of shear.
I
There is a pressure gradient force, even in the horizontal, that acts to reduce the effective diffusivity of that particular parcel. This is one of the interesting things we're looking at, because all of these extra terms that we can now consider with regard to the momentum fluxes allow us to relax that hammer. You may be familiar with the parameter ck10, which is essentially a giant tuning knob in CLUBB.
I
It is the standard knob for how diffusive you want to make the boundary layer, and ck10 currently takes all of these diffusive properties and throws them into one pot. By having this prognostic momentum, we've found that we've been able to directly simulate some of these extra forces and terms, and it requires less of us having to use ck10 as that giant knob.
I
This is some work by my PhD student at Penn State, Kyle Nardi. Essentially he's been using simple models, in particular an f-plane aquaplanet with variable-resolution grids, and what he's found is that you can apply a sensitivity analysis framework: you can look at different physical knobs within CLUBB and rank them based off of some prescribed metric.
I
You can get a measure of how sensitive the climate is to these terms, but also how consistent a change in that term would be from a directional standpoint. I think the really cool thing about this is that you can take these heat maps, with all these different parameters and metrics, and almost guess what the full three-dimensional coupled simulation response will be. He finds that you can use these simple maps to very accurately predict what the response is going to be when you actually run the full CAM, the full CESM.
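The ranking idea described here can be sketched as a one-at-a-time perturbation experiment: perturb each parameter, measure the change in a chosen metric, and sort. A minimal illustration in which a made-up function stands in for the real simulations (the parameter names besides ck10 and the metric are invented for illustration):

```python
def rank_sensitivities(metric_fn, defaults, frac=0.1):
    """Rank parameters by |metric change| under a one-at-a-time fractional perturbation."""
    base = metric_fn(defaults)
    scores = {}
    for name, value in defaults.items():
        perturbed = dict(defaults)
        perturbed[name] = value * (1.0 + frac)
        scores[name] = abs(metric_fn(perturbed) - base)
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Toy "climate metric" standing in for a CAM/CLUBB diagnostic
def toy_metric(p):
    return 3.0 * p["ck10"] + 0.5 * p["c_eddy"] - 0.1 * p["length_scale"]

ranking = rank_sensitivities(toy_metric, {"ck10": 1.0, "c_eddy": 2.0, "length_scale": 4.0})
# in this toy setup ck10 dominates the ranking
```

The real framework adds a second axis (directional consistency of the response), but the ranking step is the same shape.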
I
This is some cool work being done by Chris Kruse, who's a project scientist in AMP. He's been playing around a lot with the nudging capabilities within CAM, in particular nudging in the far field but not nudging over the Great Plains, where we have the low-level jet. We joke that these are our IKEA furniture runs.
I
It's the nudging, but not the low-level jet. What it allows Chris to do is look at what the simulation produces and then compare it to what is measured at a radar wind profiler within the domain.
I
These are two Hovmöller plots over the period of a month; this is just the meridional wind. Up top are the observations; on the bottom is CAM. To first order, the nudging appears to be effective at reproducing the local meteorology, so that we can do this direct comparison with the radar wind profiler. Chris is looking at jet strength, the height of the maximum wind, and the time of the maximum wind, and tying this into turbulence theory.
I
This is another interesting slide that he provided: the average diurnal cycle of the north-south wind for May, June, and July over this Great Plains region. You see a couple of things. Interestingly, if you look at CAM6, you get this nighttime maximum in the jet, but the jet occurs a little too late, it occurs a little too low, and one big thing is that you don't get the diurnal variability, this diagonal span, that we see in observations.
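A mean diurnal cycle like the one described here is just an hour-of-day composite of a long hourly time series. A minimal sketch with synthetic data (not the profiler observations; the jet timing and amplitude below are invented):

```python
import numpy as np

def mean_diurnal_cycle(hourly_series):
    """Average an hourly time series into a 24-value hour-of-day composite."""
    n_days = len(hourly_series) // 24
    return hourly_series[: n_days * 24].reshape(n_days, 24).mean(axis=0)

# Synthetic meridional wind: nocturnal jet peaking around hour 6, plus day-to-day noise
rng = np.random.default_rng(0)
hours = np.arange(30 * 24)
v = 8.0 + 4.0 * np.cos(2 * np.pi * (hours % 24 - 6) / 24) + rng.normal(0, 0.2, hours.size)
cycle = mean_diurnal_cycle(v)
peak_hour = int(np.argmax(cycle))   # expected near hour 6
```

Comparing such composites from the model and the profiler is what exposes the too-late, too-low jet in CAM6.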
I
We've also, and by "we" I mean George Bryan in the MMM lab at NCAR, implemented CLUBB within CM1, so now we've been playing around a lot with running CLUBB within this mesoscale and large-eddy simulation model.
I
It's peeling back another layer of the onion, moving to these large-eddy-simulation-like setups. We get the same response to changes in CLUBB that we see in CAM, so this is good verification of what we're doing on the CAM side of things, but it also provides another computationally efficient, much higher resolution tool to explore exactly what's going on within CLUBB, so that it's not behaving as much of a black box as when we use it.
I
That's especially true when we use it in a global climate simulation. Over here, the black curves are what CM1 is simulating from a large-eddy perspective, and the red curves are what CLUBB is simulating, across a variety of different boundary layers: a convective PBL, a stable PBL, and so forth. We can see some places where it looks like CLUBB is doing a pretty good job relative to the large-eddy simulation, and also places where it looks like there's potentially some room for improvement moving forward.
I
Julio touched on this very briefly yesterday, but one thing we've also noticed, moving to high resolution and particularly with lower diffusivity values in CLUBB, is that if you look at snapshots of the surface wind field (this over here is the default CAM6 configuration), you see these oscillatory features appear within the wind field. This generally seems to occur with high surface roughness when you have a lower model level.
I
This is possibly something we need to think about as we move towards a CAM with more levels in the vertical and more levels in the boundary layer. What we've actually traced this to is a physics-dynamics coupling error: essentially, the coupling frequency between the land surface and how that land surface is passing the surface stress into CAM, which is used as a lower boundary condition for the boundary layer scheme, and how that's being sub-cycled over, perhaps, a 30-minute period.
I
This is just one example of an improvement that we've implemented as part of the CPT, which is doing an update of the surface stress field within the CLUBB loop. It's a really simple, cost-effective way to do it.
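The fix just described, refreshing the lower boundary condition inside the sub-cycle loop instead of holding it fixed for the whole physics step, can be illustrated with a toy relaxation problem (not the CAM/CLUBB code; the drag law and constants are invented for illustration):

```python
def subcycle_wind(u, n_sub, dt_sub, drag_coeff, refresh_stress):
    """Integrate du/dt = -tau_surf with tau_surf = drag_coeff * u.

    With refresh_stress=False the surface stress is computed once from the
    initial wind and held fixed for all substeps (the stale behavior); with
    refresh_stress=True it is recomputed each substep (the CPT-style update).
    """
    tau = drag_coeff * u
    for _ in range(n_sub):
        if refresh_stress:
            tau = drag_coeff * u
        u -= dt_sub * tau
    return u

stale = subcycle_wind(10.0, n_sub=6, dt_sub=300.0, drag_coeff=1e-3, refresh_stress=False)
fresh = subcycle_wind(10.0, n_sub=6, dt_sub=300.0, drag_coeff=1e-3, refresh_stress=True)
# the stale stress over-decelerates the wind, even pushing it negative here,
# which is the kind of overshoot that shows up as oscillatory wind features
```

Refreshing the stress keeps the decay close to the exponential you'd expect, which is the spirit of updating the stress inside the loop.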
I
You get rid of a lot of this noise, but Adam Herrington, Sean Santos, and Thomas Toniazzo have also been chatting a lot about this, and it's something I think we want to think about moving forward: whether we want to reorder the physics, something Adam's looked at, or whether we want to make the physics-dynamics coupling a little tighter, something Sean suggested. That's definitely something to consider. And then the last couple of slides.
I
We have all of these tendencies from the dycore, the physics, and the different components within the physics, and this is now on the development branch, so you can use it. If you're interested in looking at this budget and performing analyses with these different terms, I recommend talking to Chris.
I
He has a lot of scripts that can help do this. Kate has also been working a lot on this idea of expanding our diagnostics, to get a better idea of what these profiles are doing physically within CLUBB. It's built similarly to things like the standard AMWG diagnostics, but it allows us to drill a little deeper and plot some of CLUBB's higher-order moments, doing things like picking different points on the globe.
I
Maybe comparing against the radiosondes, and so on and so forth. This is something we're hoping will have some future integration with the AMWG diagnostics framework.
I
All right, perfect timing, so with that I will leave my summary slide up; it says pretty much what I've just been talking about. The one thing I'll say is, if you have any questions, or you want to play around with any of this code, or maybe look at some of our data, I'm more than happy to share; just feel free to contact me and I'd be happy to chat with you. Thanks.
F
Great, thanks, Colin, that was exceptional. We have a question from Adam. Do you want to ask your question, Adam?
H
It's really more of a comment. I was just saying that we're going to be moving forward with reordering the physics to deal with the wind issue.
H
So that's how we're going to move forward, and one of the advantages over Colin's fix, and no offense, Colin, is that Colin's fix only impacts the surface stress in CAM, and that doesn't get communicated to the other model components.
H
So,
by
reordering
the
physics
we're
ensuring
that
all
the
model
components
are
still
seeing.
The
same
surface
stress.
I
Yeah, I should clarify; I don't really want to get too into the weeds on this, but there's kind of a spectrum of hammers. You could obviously tighten the physics-dynamics coupling, but that's computationally expensive. The fix that we're using for this is kind of like a simple kludge.
I
It's one of these things you can just drop in really easily into the vetted model, and Adam, like you said, the reordering of the physics might be the best option moving forward. I do think that's probably something we should consider, especially given some of the work Peter and Aaron have also done with process ordering. The one thing there is that it does require, obviously, a huge person effort in terms of retuning the model and making sure everything is consistent.
I
That's
a
good
question:
we
have
not
played
with
a
different
vertical
grid,
yet
with
the
nudging
runs
or
or
anything
like
that,
you
know
I,
I
would
be
curious
to
see
what
would
happen.
We
should
get
a
couple
template
files
from
from
you,
and
maybe
chris
could
play
around
with
it.
H
Yeah, thanks, Colin. I'm curious, especially at the Southern Great Plains site, where there are so many observations and I suppose a lot of LES have been run: is there any evidence that the shear and stress vectors aren't aligned, so that getting the momentum flux right has to deal not only with the magnitude but also the direction?
I
Yeah, that's a really good question, and I don't want to put my foot in my mouth or step on someone's work. Gunilla Svensson actually has a PhD student who's looking at pretty much that exact question: looking at the wind turning and the directionality of the momentum flux using radiosondes. Hopefully I'm getting some updated slides from him soon, so maybe we could chat more about that in a couple of months.
F
All right, well, we need to move on, but you seem to have gone a little bit viral, so check the chats; there might be some responses there. Peter Lauritzen, you're up.
J
Thank you. I started to become interested in energy budgets of the atmosphere about three or four years ago, with some work with Dave Williamson, but things really took off when Nicholas Catalan and I organized a workshop at the Banff International Research Station a couple of months before COVID broke out. I'd really like to acknowledge this group of people for many, many discussions. We're working on a manuscript that's currently 87 pages; I hope we can cut that down significantly for publication.
J
For what I'm going to talk about today, I'd really like to acknowledge the many Zoom calls I've had with this group of people, where we tried to answer the question: what does the total energy equation for the atmosphere mean? I won't go through all the details, but I'll simply state that the equation we came up with is the one shown on the slide here. We've made quite a few assumptions to get to this formula: first of all, we're looking at the primitive equations for a hydrostatic, shallow atmosphere, assuming air is an ideal gas.
J
To explain some of the terms: in the integrand on the left-hand side we have the density of dry air; the m's are mixing ratios, where superscript d denotes dry air and superscript H2O denotes the sum of all the mixing ratios for water in the atmosphere. Then there's the kinetic energy, phi-s, the surface geopotential, heat capacity times temperature, and, on the far right, the latent heat terms, which take this form.
J
The latent heat terms take this form when you choose an ice enthalpy reference state; they look different if you use different reference states. The time change of the global integral on the left-hand side has to be balanced by fluxes in and out of the system, and those consist of an enthalpy flux, where F is a net flux of water in and out of the atmosphere; we associate a single temperature with that flux, which I call T-tilde-surf here.
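For readers without the slide: written with an ice enthalpy reference state, a commonly used simplified form of such a column-integrated energy budget looks like the following. This is a sketch consistent with the terms named above (kinetic energy $K$, surface geopotential $\Phi_s$, $c_p T$, and latent heat terms), not the exact slide equation:

```latex
\frac{\mathrm{d}}{\mathrm{d}t}\int \left[ K + \Phi_s + c_p T
    + (L_v + L_f)\,m^{\mathrm{wv}} + L_f\,m^{\mathrm{liq}} \right]
    \frac{\mathrm{d}p}{g}
  \;=\; F_{\mathrm{net}},
```

where $F_{\mathrm{net}}$ collects the boundary fluxes: radiation, sensible heat, and the enthalpy carried by water entering or leaving the atmosphere at the single temperature $\tilde{T}_{\mathrm{surf}}$. With a different reference state (e.g. liquid), the latent-heat coefficients in front of the water species change, which is the reference-state sensitivity mentioned above.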
J
Let's get rid of some terms first. I've done some AMIP simulations to diagnose these different terms. If you look at the kinetic energy terms, both this term here and this term here, associated with water change in the column, you see that the grid mean values are very, very small; they're like two thousandths of a watt per meter squared, which, compared to, for example, energy dissipation in the dycore, or the physical signal of climate change, is a very, very small value.
J
So
I'm
not
going
to
consider
that
term
also
note
that
the
two
terms
actually
do
not
balance,
and
that's
due
to
the
fact
that,
where
the
water
is
removed
in
the
atmosphere
say,
falling
precipitation
may
be
formed
higher
up
in
the
atmosphere
where
the
wind
is
different
from
what
I
have
used
here
as
the
wind
or
the
velocity
of
the
water
as
it
enters
or
leaves
the
atmosphere,
and
here
I've
just
used
the
surface
wind.
J
So
you
see
they
don't
really
balance
and
that's
due
to
us
not
rigorously
taking
into
account
everything
that
happens
with
water
falling
out
of
the
atmosphere
or
entering
the
atmosphere,
similar
story
for.
Oh,
yes,
let's
get
rid
of
that
term.
Similar
story
for
the
phi
s
term.
Now
the
two
terms
actually
do
balance
exactly
because
phi
s
is
is
fixed,
but
again
long
story,
short,
very
small
term.
Let's
not
consider
that
either
next
many
models
make
the
assumption
that
the
latent
heats
associated
with
phase
changes
is
not
a
function
of
temperature.
J
That simplifies these terms: now we just have constants in front of the latent heat terms, and the same on the left-hand side of the equation. There are some further simplifications for CAM. We only include water vapor in the kinetic energy, internal energy, and potential energy. Is that a severe assumption? I would say no; here I've assessed the magnitude of including all forms of water.
J
The second point here is that we assume the total water in the atmosphere is constant during physics updates; I denote that with this notation here, and I'll come back to that in a moment. Also, in CAM we do not take into account the enthalpy flux term that was on the right-hand side of the equation on the previous slide.
J
Each parameterization in CAM physics in theory satisfies this equation. So if there is, for example, momentum diffusion, it will be compensated by, for example, a temperature increase, so that this equation always holds, under the assumption that water remains constant during physics updates. And you can check that that's actually true: here is a zonal mean of the left-hand side and the right-hand side, and if I take the difference between the two, I get noise close to round-off errors in the realm of energy errors.
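The compensation just described, with kinetic energy lost to diffusion paid back as heating so the column energy is unchanged, can be illustrated with a toy column (the numbers are invented; the cp value is the standard dry-air constant):

```python
CP = 1004.64  # J kg^-1 K^-1, dry-air heat capacity at constant pressure

def dissipate_ke_to_heat(T, ke):
    """Remove 10% of the kinetic energy by 'diffusion' and return it as
    frictional heating, so that cp*T + ke is conserved by construction."""
    d_ke = -0.1 * ke
    d_T = -d_ke / CP          # heating that exactly compensates the KE loss
    return T + d_T, ke + d_ke

T0, ke0 = 280.0, 150.0        # K, and J kg^-1 of kinetic energy per unit mass
T1, ke1 = dissipate_ke_to_heat(T0, ke0)
residual = (CP * T1 + ke1) - (CP * T0 + ke0)   # should sit at round-off level
```

The zonal-mean check in the talk is this same bookkeeping applied to every parameterization's left- and right-hand side terms.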
J
And
this
system
here
is
actually
energetically,
consistent
and
after
having
worked
on
this
for
a
while
now,
as
I
agree
as
if
it's
very
very
cleverly
chosen
to
to
not
have
to
deal
with
rather
complex
thermodynamics,
mostly
related
to
water,
entering
and
leaving
the
atmosphere.
J
By eye they're pretty much indistinguishable, but the average is different between the two, as it should be, because when, for example, a falling hydrometeor falls through the atmosphere, it exchanges energy with the atmosphere; it doesn't all go into the surface. Well, let's say we just do that; then the difference, or the imbalance, that's left is shown in this plot up here, and you see we reduce our local imbalances to about a couple of watts per meter squared, compared to the roughly 30 watts per meter squared that we had before.
J
For example, if the "true" temperature is higher than what we're using, then some of that energy should have remained in the atmosphere and not gone to the other component, and vice versa with a temperature bias of the other sign. So to gain a deeper understanding, we probably need to look at LES simulations.
J
I'd also like to make the audience aware that being really rigorous in monitoring energy budgets and energy conservation in the model forces the modeler to have a high level of consistency, both between parameterizations and between the dycore and the parameterizations. So even if you don't believe energy conservation is important for your problem, it can actually help you have a model that follows the laws of physics.
J
Last but not least, we're in the process of integrating the MOM6 ocean model into CESM, and, contrary to the POP model, we actually need to specify enthalpy fluxes for the MOM6 model. In preparation for that, Mariana Vertenstein, Gustavo Marques, and Frank Bryan have implemented infrastructure in the CESM coupler that allows for the exchange of enthalpy fluxes between components. We now have the infrastructure in there to start exploring the atmosphere-ocean enthalpy flux exchange, hopefully pretty soon.
F
All right, nice job, Peter; you're perfectly on time. Let's see, I don't see any questions in the chat right now, so going once, going twice. I think we'll move on. Thanks, Peter.
F
Okay, so next up is Bryce Harrop, talking about, I think, a pretty similar thing.
K
Yeah, thank you. Peter, go ahead and use your pointer. Great; thank you so much for having me. My name is Bryce Harrop, and I'll be talking about conservation of dry air, water, and energy in CAM, and its impact on tropical rainfall. I'd like to acknowledge my co-authors here, including Peter from the previous talk.
K
This work falls under the WACCEM project in the RGMA portfolio. I'm really happy that Peter went just before me, so that I can springboard off of his discussion. I really want to focus a little more on this adjustment energy tendency that he was showing towards the end of his talk, which I've reproduced here, although with a different color scale, just to make it slightly more confusing. I've highlighted the regions of heavy rainfall in this magenta contour, just to orient your eye towards where it's raining, and we see this very distinct picture of the water cycle show up.
K
And so I wanted to explain this in a slightly different fashion; hopefully, between the two explanations, this will stick in everyone's mind. So, to orient us, and maybe remind a few of us who work with CAM: one of the assumptions that gets made during the physics parameterizations is that there's a fixed hydrostatic pressure thickness that gets used to compute the mass in each vertical layer, and we assume that that's the sum of dry air and water vapor.
K
This is something that Peter was showing: the pressure thickness divided by the gravitational acceleration is our mass, in units of mass per area, and that's assumed to be the dry air plus water vapor. And we assume this total mass is held fixed during our physics parameterizations.
K
Obviously this becomes problematic when we start allowing water vapor to change during those same parameterizations, and we break this equality pretty quickly. Now, if we were to try to force the equality, we would end up with spurious sources and sinks of dry air mass, which is something we really don't want. So at the end of all the physics parameterizations, we call this mass adjustment routine that changes the total mass and all the pressure thicknesses and surface pressure such that we retain the equality.
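The mass adjustment described above can be sketched in a few lines. This is a hypothetical single-column illustration of the fixed-pressure-thickness assumption and the end-of-physics rebuild, not CAM's actual code; all numbers are made up.

```python
import numpy as np

g = 9.80616  # gravitational acceleration (m s^-2)

# Hypothetical column: pressure thickness per layer (Pa) and specific
# humidity before and after the physics parameterizations.
dp = np.array([2000.0, 5000.0, 10000.0])       # held fixed during physics
qv_before = np.array([0.0005, 0.002, 0.008])   # kg/kg
qv_after = np.array([0.0005, 0.0015, 0.006])   # physics removed some vapor

# During physics, the total layer mass dp/g is assumed fixed, so changing
# the vapor would implicitly (and spuriously) change the dry-air mass.
m_total = dp / g                      # kg m^-2, unchanged during physics
m_dry = m_total * (1.0 - qv_before)   # dry-air mass, which must be conserved

# End-of-physics mass adjustment: rebuild dp so that it again equals
# dry air plus the new water vapor, leaving dry-air mass untouched.
m_total_new = m_dry / (1.0 - qv_after)
dp_new = m_total_new * g

# The surface-pressure change is the column-integrated thickness change.
dps = np.sum(dp_new - dp)
```

Because vapor was removed in this example, the adjusted thicknesses and surface pressure both decrease.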
K
K
So why does this produce the energy tendency that we see? If we go back: this is now the energy expression that is used within CAM. It's very similar to what Peter was showing, with slightly different notation, and the main thing is that we have this total mass that, again, we are assuming is held fixed during the physics parameterizations.
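For readers following along, the energy expression being discussed has roughly the following shape. This is a sketch of a CAM-style vertically integrated column energy; the exact form and constants used in CAM may differ.

```latex
% Sketch of a CAM-style column total energy per unit area, summed over
% layers k: c_p is the heat capacity, T_k temperature, K_k kinetic energy,
% \Phi_s the surface geopotential, L_v the (fixed) latent heat of
% vaporization, q_{v,k} the water-vapor specific humidity, and
% \Delta p_k / g the total (dry air + vapor) layer mass held fixed
% during the physics parameterizations:
E = \sum_k \left( c_p T_k + K_k + \Phi_s + L_v\, q_{v,k} \right)
    \frac{\Delta p_k}{g}
```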
K
And so then it's really easy to show mathematically that this adjustment energy tendency is the sum of these terms related to that change in water vapor mass, and the change in water vapor then makes it fairly clear why we get the relationship with the water cycle that we see: in regions where we have net water vapor loss from the atmosphere...
K
We have a negative energy tendency, and in regions where we have a net water vapor gain, we have a positive energy tendency. What this really boils down to is the fact that, by including water in the vapor phase in the energy budget, we have allowed water vapor to have internal, potential and kinetic energy, just like the dry air.
K
The problem is that when water vapor changes phase, we're really only retaining the latent energy within the model, so that's a fixed value, whereas the total energy that water vapor may have is allowed to vary with things like the temperature. The issue is that the remaining internal, potential and kinetic energy, which isn't exactly equal to the latent heat value that we've assumed within the model, all gets basically just thrown out, and we leave it up to the global fixer to handle. And applying a global fixer to a pattern that looks like this means you're still going to end up with these very large local imbalances, which violates the divergence theorem locally.
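The distinction between the global fixer and a column-local fix can be shown with a toy calculation. The residual values below are hypothetical, not model output; the point is only that a globally uniform correction closes the global mean while leaving the local imbalances essentially untouched.

```python
import numpy as np

# Hypothetical per-column energy residuals (W m^-2) left over after
# condensation discards the non-latent part of the vapor's energy:
# negative where it rains a lot, positive where evaporation dominates.
residual = np.array([-40.0, -25.0, 5.0, 12.0, 18.0, 20.0])

# Global fixer: add one uniform tendency so the *global* mean closes.
global_fix = -residual.mean()
after_global = residual + global_fix

# Local fixer: return each column's residual to that same column,
# so every column budget closes identically.
after_local = residual - residual
```

After the global fix, the mean residual is zero but individual columns still carry tens of W m^-2 of imbalance; the local fix zeroes every column.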
K
And so, if you were to imagine looking at the energy budget of a column, say in the west Pacific, where it rains a lot, you would find that, within the physics, the energy tendency will not match the sum of the top-of-atmosphere and surface fluxes in CAM. This becomes additionally problematic when we start thinking about longer time scales and doing diagnostic analyses with the model.
K
For example, we usually think that on long time scales the tendency kind of goes towards zero, and then the physics and dynamics tendencies will balance one another out. What we find in CAM is that's not exactly the case. So here we're looking at physics, dynamics and adjustment tendencies in the tropics, and what I'm taking on the x-axis is an increasing precipitation threshold. So I'm selectively choosing regions with heavier and heavier rainfall in the annual mean, and what we find is, sort of...
K
However, the amount of energy divergence that we get out of those same regions actually decreases as you go to heavier and heavier rainfall, and the physics tendency is largely being compensated for by this increasing energy sink from the adjustment tendency. Right, so we're pumping energy into the atmosphere; a lot of that is going into both the dry air and the water vapor, but then, when it condenses, we're losing some of that energy, and then the global energy fixer adjustment that gets added on is not enough to compensate.
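The precipitation-threshold compositing just described can be sketched as follows. The data here are synthetic, with the adjustment tendency built to act as a growing sink where it rains hard, as in the talk; this is not the model diagnostic itself.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000  # hypothetical tropical grid points

# Synthetic annual-mean precipitation (mm/day) and an adjustment energy
# tendency (W m^-2) that sinks energy more strongly where it rains more.
precip = rng.gamma(shape=2.0, scale=3.0, size=n)
adjustment = -2.0 * precip + rng.normal(0.0, 1.0, size=n)

# Composite the adjustment tendency over regions exceeding increasing
# annual-mean precipitation thresholds, as on the talk's x-axis.
thresholds = [0.0, 2.0, 5.0, 10.0]
composites = []
for thr in thresholds:
    mask = precip > thr            # keep only the heavier-raining regions
    composites.append(adjustment[mask].mean())
```

In this toy setup, the composited adjustment sink strengthens monotonically as the threshold rises, mirroring the behavior described for CAM.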
K
So there are a number of different ways that we could go about, quote-unquote, fixing this problem, and they largely revolve around trying to make the atmospheric tendency match the TOA and surface fluxes. This was something Peter was hinting at in some of his slides, and I've put this as a nice, kind of simple thought experiment, where we imagine water vapor is condensed and rained out.
K
We have some total water vapor energy that is the sum of latent energy plus whatever remaining internal, kinetic and potential energy. In the default CAM version, we retain latent energy locally, and then we smear out globally the remaining internal, kinetic and potential energy. The various pathways that one might take towards fixing this, and allowing the divergence theorem to be satisfied locally, would include prescribing that remaining internal, kinetic and potential energy as a surface flux, assuming that it all goes into the rain, and there are varying levels of complexity.
K
K
You can include the condensates in the energy budget and start to take into account the energy exchanges as hydrometeors fall through the atmosphere. You can do that at varying levels of complexity, which I don't have time to go into today.
K
Another alternative, which is considerably simpler, would be to kind of double down on the assumption that condensed water species have no enthalpy. We call this a local energy fixer term: instead of using the global fixer, basically just take all that energy and put it in the local column. Another way to think about this physically would be: if we've assumed that condensate enthalpy is zero, then the latent heat, defined as the difference between vapor and condensate enthalpies, would just be equal to the enthalpy of the vapor, right?
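The zero-condensate-enthalpy bookkeeping can be made concrete with one kilogram of condensing vapor. The numbers below are round illustrative values, not the model's actual constants.

```python
# Toy enthalpy bookkeeping for 1 kg of condensing water vapor under the
# "condensate carries no enthalpy" assumption discussed above.
h_vapor = 2.6e6        # J/kg: total enthalpy carried by the vapor (hypothetical)
h_condensate = 0.0     # J/kg: the simplifying assumption

# Latent heat defined as the vapor-condensate enthalpy difference:
latent_heat = h_vapor - h_condensate

# With h_condensate = 0, condensation deposits the entire vapor enthalpy
# as heating in the local column, so nothing is left over for a global
# fixer to smear out.
heating_local = latent_heat * 1.0              # J released per kg condensed
leftover_for_global_fixer = h_vapor - heating_local
```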
K
So then the latent heat that we get from condensation is just going to be equal to the total water vapor enthalpy that we have to begin with. Now, as a way to gauge the sensitivity of the model to all of these discussions, we actually implemented this option, this...
K
This final local-fixer sensitivity test, into both CAM6 and SP-CAM, to try to figure out which aspects of the simulated climate may or may not actually care about these assumptions that we're making. And what we found is that, for both CAM6 and SP-CAM, despite having fairly different mean annual rainfall values in their base state, their response to the sensitivity experiment is actually very similar between the two of them.
K
So we see large increases in equatorial rainfall, with decreases over some of the tropical land regions and the Indian monsoon area. Where I think this really...
K
Gets interesting is when we look at rainfall variability, measured here as the standard deviation in the daily mean rainfall. Again, the base state between CAM6 and SP-CAM is very different; this is well known. Their response to the sensitivity experiment is again very similar, like it was in the mean, where we see large increases in tropical rainfall...
K
Variability, mostly constrained along this equatorial belt, where we're seeing the increases in total rain. And these increases in rainfall variability are considerable, on the order of 50 to 100 percent, and we wanted to dig in a little bit more to figure out where those were coming from. What we found is that they manifest largely in known equatorial wave modes.
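The variability metric used here, the standard deviation of daily-mean rainfall, is simple to compute; the sketch below uses synthetic time series (the experiment is simply given larger day-to-day swings) and is not the actual model output.

```python
import numpy as np

rng = np.random.default_rng(1)
ndays = 3650  # ten hypothetical years of daily-mean rainfall (mm/day)

# Synthetic control and experiment series at one equatorial grid point.
control = rng.gamma(shape=1.5, scale=4.0, size=ndays)
experiment = rng.gamma(shape=1.5, scale=6.0, size=ndays)

# Variability metric from the talk: std dev of daily-mean rainfall,
# and its percent change from control to experiment.
std_ctrl = control.std()
std_exp = experiment.std()
pct_change = 100.0 * (std_exp - std_ctrl) / std_ctrl
```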
K
So we see large increases in the sort of moist Kelvin wave and MJO regions of the Wheeler-Kiladis phase diagrams here, and one of the interesting things for this particular set of experiments is that the increase in strength in the MJO, going from the control to the experiment in CAM6, is on the same order of intensity change as going from CAM6 to SP-CAM.
K
So it's interesting food for thought that we can get similar changes in some of these equatorial wave modes, that we tend to like having in SP-CAM, just by changing some of the assumptions that we make in the energy budget related to water species in the parameterized version of the model.
K
So I want to save plenty of time for questions, so I'll wrap up here and reiterate some of the main points that are takeaways from this study. What we've shown here is that water vapor has internal, kinetic and potential energy, but when it condenses, only its latent energy is retained by the model, while any remaining energy that it has is thrown out and left for the global fixer to handle.
K
This problem is being exposed during the mass adjustment routine, but it really comes down to the fact that we've taken the mass of water in its vapor phase and included it in the energy budget, but we're not appropriately accounting for all the sources and sinks of energy related to water as it changes phase, in particular, or as it moves across the surface, which Peter was showing.
K
So finally, through the sensitivity test, we were able to show that there are fairly profound impacts on the hydrologic cycle. Some of these other potential avenues, like the fixes where we have a surface flux, are likely to have somewhat different responses, because they impact the climate system through different physical mechanisms, and we're really excited to start getting some of those things implemented, so we can explore how those will end up changing the simulated climate. And with that...
K
Thank you very much for your attention, and I'm happy to take any questions.
F
All right, thanks, Bryce, that was another excellent talk in the series. So I have a question, which is: it's not clear to me why your local enthalpy approach isn't the solution.
K
Because we know that condensates actually do have enthalpy. So by assuming that they have zero, we're making a decision that we know is unrealistic, and there have been observations of surface fluxes from TOGA COARE, way back in the day, where they were showing that in regions of heavy rainfall the surface budget is actually dominated by the enthalpy flux of rain under deep convective events, which are, admittedly, a small portion of the total time-average state of the atmosphere.
K
But as we start moving towards convection-resolving simulations, we expect that those situations might become more important for the simulation.
K
That's going to be valuable to have. Plus, things like MOM6 require enthalpy fluxes from the atmosphere, so if the atmosphere says, like, hey, I've decided I'm going to ignore that, that becomes a problem for the coupled model.
H
F
So then the sensitivity studies that you showed were just looking at the impact of making assumptions about enthalpy, rather than, hey, this is what happens if you do it right.
K
Right, and we did do some offline tests. I actually have extra slides where we are trying to gauge the impact of the temperature tendency related to accounting for hydrometeors falling through the atmosphere, and I won't describe this in a whole lot of detail, just comparing the green line to the black line.
K
This is sort of the cooling induced by trying to warm hydrometeors up as they fall through the atmosphere, relative to the adjustment offset needed for this local fixer. So both of these kind of produce a stabilizing effect, but they're very different channels: the adjustment tends to warm the upper troposphere, whereas the hydrometeors tend to cool the lower troposphere. So it's possible they might have some similarities, but I would imagine, if we were to do what I consider a more realistic fix, the magnitude of the response would be somewhat smaller than our sensitivity test.
F
Okay, thanks. So we have some other questions. Rich?
K
As CAM6, is that right? I think so; it's been a little while since I've played with SAM. Okay, I'm just wondering. Yeah, okay.
H
Adam, I think this plot answers my question, but so the energy fixer is local in the vertical too, so you're basically applying it where there's autoconversion? Yeah, that's what the pattern is. Okay, thanks. I'm curious to hear Peter's comment. All right, Peter.
H
J
H
J
K
Right, but there are, I mean, there are steps that you can take on your journey towards, quote-unquote, doing it right, that might offer improvements along the way that I think are worth it.
F
All right, so let's move on then. So, João.
D
Perfect, thank you. All right, thank you very much, thanks for the opportunity. Again, what I'll do is just summarize some of the most recent developments of this Climate Process Team project, the EDMF CPT. I'll go to the next slide. My connection is not great, so please do let me know if I am talking to a slide that you are not yet able to see, because that often happens.
D
So, what is this EDMF CPT? It is funded by NSF: about eighty percent of the funding comes from NSF, about 20 percent from NOAA. The overall goal is to reduce the key biases that we know exist that are related to PBL clouds and deep convection in both the NCAR and GFDL models.
D
As you'll see, we're doing a lot of things, but the main aspect is really implementing and evaluating unified PBL and convection approaches in these two models.
D
But we are focused on specific aspects of the models, on specific physical processes: mostly the transition from boundary-layer convection to deep convection, from a spatial-transition perspective over the ocean, and from a temporal-transition perspective, the diurnal cycle over land. I'm the lead PI; the other PIs are Julio Bacmeister, Leo Donner at GFDL, Rong Fu at UCLA, Georgios Matheou at the University of Connecticut, who does most of the LES work, and Mikael Witte, who's now at NPS and is responsible for some of the microphysics and turbulence.
D
L
D
Doesn't matter, it's my problem. So, very briefly, in one slide: what is EDMF? EDMF is basically an approach in which we try to unify the representation of turbulent mixing and convective mixing in one single physical package, and also one single algorithm and one single numerics.
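The core of the EDMF idea can be written as a one-line flux decomposition: the total turbulent flux of a scalar is an eddy-diffusivity (local, downgradient) part plus a mass-flux (plume) part. The sketch below is a minimal illustration with made-up profiles and coefficients, not any model's actual scheme.

```python
import numpy as np

# EDMF decomposition of the turbulent flux of a scalar phi:
#   w'phi' = -K dphi/dz + sum_i M_i * (phi_plume_i - phi_mean)
z = np.linspace(0.0, 2000.0, 21)          # height (m)
phi_mean = 300.0 + 3e-3 * z               # mean scalar profile (e.g. theta, K)
K = 50.0 * np.ones_like(z)                # eddy diffusivity (m^2 s^-1), hypothetical

dphidz = np.gradient(phi_mean, z)
ed_flux = -K * dphidz                     # local, downgradient ED part

# Two plumes with prescribed kinematic mass fluxes and excesses (made up):
M = np.array([0.03, 0.01])                               # m s^-1
excess = np.vstack([0.5 * np.exp(-z / 800.0),            # plume 1 excess (K)
                    1.0 * np.exp(-z / 400.0)])           # plume 2 excess (K)
mf_flux = (M[:, None] * excess).sum(axis=0)              # non-local MF part

total_flux = ed_flux + mf_flux
```

The MF term is what lets the scheme carry a counter-gradient, plume-driven flux that pure eddy diffusivity cannot represent.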
D
We often use eddy diffusivity to solve the sort of small-scale diffusive mixing, Gaussian mixing, and then we use a mass-flux model (of course, they're coupled) to represent sort of the larger plume, thermal perspective on the mixing. And recently, over the last 10 years or so, we've used this approach that involves multiple plumes, in which we trigger plumes from the surface based on the PDF of surface-layer thermodynamics; we use some different types of Monte Carlo sampling.
D
You know, they had a lot of entrainment. So what these multiple plumes bring to the table, which is one of the key aspects of the entire parameterization, is the fact that you can have different types of convection coexisting in the same model grid box. So you can have dry convective plumes, you can have shallow convection, you can have deep convection, all working and happening at the same time in the grid box, as in nature, without having to trigger things or move from one parameterization to the other.
D
So let me just highlight a few results of the last six months or so. One of the key aspects (and although this is a brief presentation, so I cannot, what is it, do justice to all of the great work that has been done): large-eddy simulation models, as many of you know, are at the center of everything we do in terms of parameterization, mostly in terms of turbulence and convective mixing and the PBL.
D
In general, the team at the University of Connecticut, led by Georgios Matheou, has been doing amazing work. I'm just showing two highlights, but there are many other things, a couple of papers published. The first one: the project has decided to focus on a series of cases. One of them is a diurnal cycle of convection over land from the AMMA experiment, which is a bit different from the LBA case, which is the typical one we've used.
D
These are just snapshots of cloud water path in domains (I hope you can see my mouse) of 80 kilometers by 80 kilometers, and you can see, over the last four hours of the simulation, the transition from sort of scattered shallow-cumulus, popcorn-style structures to, after those four hours, much larger structures depicting deep convection.
D
One of the interesting, fascinating results, really, that Georgios Matheou's team has been able to start studying is the fact that, as you can see, depicted in a rather nice way, convective organization, and potentially from that, vertical mixing, clouds and precipitation, strongly depends on the LES domain size. So often we do the LES experiments in fairly small domains.
D
Just large domains, but you see the difference. So the first one on the left, the first image (again, I hope you can see my mouse): you can see a simulation of shallow precipitating convection. This is the RICO case, with a domain of 40 by 40 kilometers. In the middle, you have the same simulation at 80 by 80 kilometers, and to the right...
D
You have a simulation of the same experiment, precipitating shallow convection, at 160 by 160 kilometers, and you can see that when the convection in the LES model is not constrained by the domain size, it develops into a very different type of convection compared to situations where it's much more constrained. So, given the fact that over the last 20 or 30 years, because of computer limitations, we have often run LES in fairly small domains...
D
D
So one of the things that was done at GFDL (and this is the team led by Leo Donner): we first did EDMF implementations using the eddy diffusivity scheme there, but we felt that one aspect we needed to move forward with, or at least test, is to implement a full-blown prognostic turbulent kinetic energy closure into the GFDL model, to replace the current diagnostic closure based on Lock and Louis. People that know Lock and Louis will know what I'm talking about.
D
But I'm happy to answer questions if you want to know more about the details. What is shown here: the top two maps show the difference in low cloud cover between the GFDL control, the AM4 version, and the new GFDL version using a turbulent kinetic energy closure, and you already can see there...
D
The differences in regions like off the west coasts of continents, in the stratocumulus regions: the new model is able to actually represent more of these clouds, as the observations indicate. If you look at what we call the GPCI Pacific cross-section (and I'll show in two or three slides where it's actually located, precisely), it's a cross-section that goes from the west coast, probably the coast of California, across Hawaii, into the ITCZ region. And if you do this diagnostic, if you look at the simulations from that perspective, those are the two figures at the bottom.
D
You can see pressure as the vertical coordinate, and longitude and latitude as the horizontal coordinates, and you can see on the left the control run; on the right is the new TKE run. These are cross-sections of liquid water, and again you can see that not only is the cloud cover you could see in the maps above improved and increased in the stratocumulus regions...
D
There is much more liquid water, as observations seem to point us to, and the impact of this is that this new version is able to produce a more realistic PBL and more realistic clouds.
D
So let me now just show a few results from NCAR. At NCAR, what we did is we decided to rethink our approach a little bit, and you'll see why. The idea here is that, since with the NCAR model, the latest version, we can use higher-order closure instead of eddy diffusivity, we've been initially conjecturing how feasible it would be to actually fully couple CLUBB with MF plumes, with this sort of multiple plumes. We decided to go ahead with this after doing some theoretical studies.
D
We've implemented this in the code. It's fully coupled in the sense that it's solved in the same penta-diagonal prognostic solver that we solve for mean fields and turbulent fluxes with CLUBB, and in the results I'm going to show, we have the full plume cloud macrophysics, but we haven't yet coupled it directly to other processes, which is not necessary, but we can get to that.
D
So let me just show you first one very simple, basic result. We wanted to study the impact of this approach, of adding these multiple plumes to CLUBB and solving it together in the same algorithm, in the case of BOMEX, which is sort of our bread-and-butter shallow-convection case that everyone uses: you need to do BOMEX well, otherwise you have to go back to the drawing board. And this figure has, so, this is five lines, but bear with me for a second; both panels relate to cloud.
D
This is the mean cloud water on the left; on the right is the cloud cover, with altitude in meters on the vertical coordinate. The black line is the large-eddy simulation model. The blue line is CLUBB by itself, so it's the current version of the model.
D
The red line is CLUBB plus the mass flux, and then the dotted and the dashed are either just the CLUBB component of the cloud from the CLUBB-plus-MF run, or just the MF component. What you can see in this plot, and in many other profiles that we could show, is an improvement in the simulation when you add these multiple plumes to CLUBB. And one way to try to understand why this is, is to look at the next slide.
D
D
And if you go to the LES and you take all the information, all the points, and you plot a histogram of total water on the left (so vapor plus liquid, minus the mean, which is why it's zero here and negative) against the liquid-water potential temperature, we do the same and we look at this phase diagram, and we look at three different distributions. The one in gray is the LES joint PDF, so it's the histogram of the LES results; the black lines are the CLUBB joint PDF at that very moment; and these are the plumes that survive from the surface up to that level. What you can see clearly here is that the different mass-flux plumes really populate this skewed part of the PDF that, although unlikely, of low probability, is where a lot of the vertical mixing is taking place. And because the multiple plumes, the discrete plumes, can populate that part of the PDF, they can actually produce that sort of more efficient mixing that leads to these better results in the BOMEX simulation.
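The joint-PDF diagnostic just described is essentially a 2D histogram of total-water and liquid-water potential temperature anomalies at one height. The sketch below builds it from synthetic, LES-like fields (a skewed total-water tail stands in for cloudy updrafts); it is purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 50_000  # stand-in for LES grid points at one model level

# Synthetic fields: liquid-water potential temperature (K) and total
# water (g/kg) with a positively skewed tail, mimicking rare updrafts.
theta_l = 298.0 + rng.normal(0.0, 0.3, size=n)
qt = 14.0 - 0.5 + rng.gamma(shape=2.0, scale=0.25, size=n)

# Joint PDF of anomalies from the level mean, as in the phase diagram.
qt_anom = qt - qt.mean()
th_anom = theta_l - theta_l.mean()
H, qt_edges, th_edges = np.histogram2d(qt_anom, th_anom, bins=50, density=True)

# The skewness of qt shows up as the long positive-anomaly tail that the
# discrete plumes are said to populate.
qt_skew = np.mean(qt_anom**3) / np.std(qt_anom)**3
```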
D
We did many more simulations; it's not just BOMEX. This one is the ARM case, which is a diurnal cycle of shallow convection. At the bottom, you can see the LES results for total water, so it's vapor plus liquid, but essentially it's vapor.
D
This is time, so you can see the diurnal cycle, the boundary layer growing, shallow convection; the vertical coordinate is again altitude in meters. The top image is the difference between CLUBB minus LES, and the middle plot is CLUBB plus MF minus the LES. So these are the biases of the different parameterizations, and what is clear is that the CLUBB dry bias, which is almost minus four grams per kilogram, is reduced by about half, to about minus two grams per kilogram...
D
When you include the plumes and create this more efficient mixing. My last slide is a slightly different experiment that we have done, also with the GFDL model. I do have a slide, but I probably won't have time; if there's a question, I'm happy to show it. We have been playing around with this idea of testing how important the mixing is that these plumes create in the sub-cloud layer, or just the dry boundary layer.
D
So we have the full EDMF, in this case CLUBB plus MF, but we don't allow the plumes to condense, which means that they don't get the buoyancy that they would get if they condensed, and as such they often stop fairly early on, above the lifting condensation level, if there is one. And this is just an image, my last image; I'll have a summary slide. This is the cross-section, the Pacific cross-section, at the bottom.
D
You have CLUBB versus CLUBB plus MF, again with pressure on the left and longitude along the bottom, so the transition from stratocumulus close to the coast, to cumulus close to Hawaii, to, well, there would be deep convection, but you won't see it here. And what you actually can see is that CLUBB plus the dry MF, in itself, alone, already produces much more realistic stratocumulus. So what it says is that just that mixing that happens in the sub-cloud layer, the dry convective boundary layer, done by those plumes, is already important to produce more realistic clouds.
D
It's my final slide; I won't talk to it. If there are any questions, please go ahead. Thank you very much for the opportunity.
F
Thanks, João. I became mesmerized by your talk and forgot to give you the three-minute warning, so we're at about time. I don't see any particular questions, but I have one, which is that, since CLUBB is already handling skewed distributions, I'm always a little bit confused about the handoff between CLUBB and the mass-flux approach, in terms of: are there any double-counting issues?
D
E
D
Yeah, yes. So in this, I think the reason why this figure is helpful is actually to address your question, which is a very good one: what are these discrete plumes doing that CLUBB is not necessarily doing? And in this case, you can see that CLUBB does produce some skewness, but it does not seem to be enough, and these plumes go into that part of the phase space of the diagram and produce mixing there that adds to the CLUBB mixing.
D
So I think that's a fair way of depicting it. To be completely clear, we can also make the same plot in which you have the CLUBB joint PDF when you don't have any plumes, and that joint PDF is not too different from this one, although it's a little bit broader, so the variance at least is a little bit larger. But it's still lacking: it does not populate, under quotes, that region of the phase space where we know the sort of most active thermals are.
F
Okay, great. So there are more questions and discussion in the chat, but I think we'd better move on to our last talk, by Minghui. Thank you. Yeah, thanks.
M
M
Okay, hi, I'm Minghui Diao, an associate professor at San Jose State University. So today I'll be talking about ice and mixed-phase cloud characteristics that we observed in the Southern Ocean and the Antarctic, and we evaluate the NCAR CESM2 model. Most of the work today is actually done by my graduate students, Xinhua Yang and Tyler Barron, and my former undergrad student, Jackson E. I also want to thank all the collaborators and the funding support from NSF and DOE.
M
Let's go into the full-screen mode. All right. So if you check out the definition of mixed-phase cloud in the AMS glossary, it is just a mixture of ice and supercooled water. That seems pretty simple, but we actually have observed very complex microphysical and macrophysical properties in the real atmosphere. On the right-hand side is a time series from a flight campaign called the ORCAS campaign, down in the Southern Ocean.
M
We have seen short segments of pure ice phase and short segments of pure supercooled water, and we have seen a mixture of both. So this actually poses challenges both for the simulation of these types of clouds in a global climate model and for the evaluation. In my talk, I will focus on three scientific questions. One is: how frequently do we observe the three cloud phases over the Southern Ocean and Antarctica?
M
M
So the first field campaign I use is called the NSF SOCRATES campaign. It was based out of Hobart, Australia, and took place in January and February 2018. The key instruments I use include the VCSEL hygrometer, which is a water vapor hygrometer located on the top of the GV, as you can see here. It measures at 25 Hz, and for all the analysis today, I only use 1 Hz data. For the cloud probes...
M
We use a series of cloud probes, such as the CDP, 2D-S, Fast-2DC, King probe, RICE icing detector, etc., and the aerosol probe is the UHSAS, and you can see where these probes are located on the wing of the aircraft.
M
One thing I want to point out: it is very critical to have high-quality data for this field campaign. During the 2018 summer, my team and I spent the summer at NCAR, and we provided the laboratory calibration of the VCSEL water vapor hygrometer, and I highly recommend you check out these PI-calibrated water vapor data for the SOCRATES campaign.
M
It definitely made a difference: for example, the relative humidity with respect to liquid used to peak at 96 percent using our older calibration, and now it is closer to 101 percent; similarly, the RH over ice is much better now after the calibration. To quickly introduce the method that we use to identify different cloud phases: we previously published a paper in Journal of Climate in 2019, using the mass and number concentrations from various cloud probes.
M
Now, moving on to the new results that I want to show today: this is the evaluation we have done on three climate models, CAM6, CAM5 and the E3SM EAM version 1. All three models are nudged towards the reanalysis data, and they're all at one-degree resolution. And here on the bottom right is a time series to demonstrate how we actually compare the observations with the models.
M
Now, that is just a demonstration for one time segment, so we can use that type of comparison moving on to statistical analysis. But one very important thing I want to point out is that we try to apply a scale-aware comparison: that is, we average the observations from hundreds of meters to 100 kilometers, which will be closer to the one-degree resolution of these climate models. On the bottom...
M
I show the cloud phases in three different colors: ice in blue, liquid in red, and green for mixed phase. We can see the colder temperatures definitely have higher ice-phase frequency, and among the three climate models, CAM6 shows the most similar result compared with the 100-kilometer observations, which is a significant improvement compared with CAM5, which does not allow any supercooled water below minus 10 degrees Celsius. And the E3SM model seems to underestimate the ice phase below minus 20 C, but overestimate it above that temperature.
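The scale-aware averaging mentioned above amounts to block-averaging the 1 Hz flight data (a few hundred meters per sample at typical aircraft speeds) up to roughly 100 km segments before comparing with a one-degree model. The sketch below uses made-up numbers for the airspeed and the measured field.

```python
import numpy as np

# Block-average 1 Hz in-situ data up to ~100 km segments.
air_speed = 200.0                 # m/s, assumed ground speed (illustrative)
seg_length = 100_000.0            # m, target averaging scale
samples_per_seg = int(seg_length / air_speed)   # one-second samples per block

rng = np.random.default_rng(3)
lwc_1hz = np.clip(rng.normal(0.05, 0.05, size=4000), 0.0, None)  # g m^-3

# Trim to a whole number of segments, then average within each block.
nseg = lwc_1hz.size // samples_per_seg
blocks = lwc_1hz[: nseg * samples_per_seg].reshape(nseg, samples_per_seg)
lwc_100km = blocks.mean(axis=1)
```

The averaged series is much smoother than the raw 1 Hz data, which is exactly why comparing raw flight samples directly against one-degree grid boxes would be misleading.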
M
Now we also want to demonstrate how robust our phase identification is for the in-situ observations, so we use two different sets of identification methods, based on the 2DC or the 2D-S probe, and the top and bottom rows show very consistent results. So we are pretty confident in the robustness of the ice-phase detection. Moving on to the evaluation of ice and liquid water content: right here we have liquid water content versus temperature on the top row.
M
Now, for the ice water content, we actually are surprised to see there is a little bit of underestimation of ice water content below minus 20 C, but above that temperature it is pretty close to the 100-kilometer observations. For the aerosol indirect effect here, this is really a correlation with the total aerosol number concentration above 0.5 micron or 0.1 micron. I only show the 0.5 micron here, because the results for both size ranges are pretty similar. We have seen the Twomey effect in the observations for both ice and liquid.
M
That is, we have higher ice and liquid number concentration and mass in the observation, and we also see the Bergeron process located around minus 18 to minus 25 C, that is, the anticorrelation between ice and liquid water. And then, for all the models, we seem to see a very weak Twomey effect, or sometimes even no effect for ice.
M
The second part of my talk is, very quickly, an evaluation at McMurdo Station over Antarctica for the cloud fraction and cloud phase. This is the DOE AWARE campaign, which took place around 2016, and we use a combination of radar and lidar to detect cloud fraction and cloud phase. On the top is a case study based on the radar and lidar combination. We can find unknown phase, ice in blue, liquid in red, etc., and we can also see the different ice phase from the model.
M
So from this one case study, we already see something interesting: when the model misidentifies this dry, clear-sky layer as ice phase, there is also a correlation with an overestimation of relative humidity in the model. So we moved on to see if this is something statistically seen throughout the entire campaign.
M
On the left column, we have the total cloud fraction over the entire campaign; red is the observation. The model consistently underestimates cloud fraction below three kilometers, but overestimates cloud fraction over McMurdo above three kilometers. In the middle panel, we have the cloud phase evaluation.
M
That is, the observation is in blue for the ice phase, the observed liquid is in the second row, and the mixed phase is shown in the third row. Now we separate by cloud fraction: for low cloud fraction, the model actually overestimates the ice phase, but then for the high cloud fraction...
M
The model significantly underestimates ice cloud fraction, and that seems to be consistent with what we've seen over the Southern Ocean, where there is an underestimation of ice water content. We then further traced the bias of cloud phase and cloud fraction to thermodynamic biases. We found that whenever we have higher biases of cloud phase and fraction, there is also a higher bias in relative humidity, and we decomposed the relative humidity bias into water vapor and temperature biases.
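The decomposition just described can be sketched as a first-order split of the model-minus-observation RH bias into a water-vapour term and a temperature term. Everything below (the function names and the use of the Tetens saturation formula) is my own illustration under those assumptions, not the authors' code:

```python
import math

def e_sat(T_c):
    """Saturation vapour pressure (hPa) over liquid, Tetens formula."""
    return 6.112 * math.exp(17.67 * T_c / (T_c + 243.5))

def rh_bias_terms(q_mod, q_obs, T_mod, T_obs):
    """Split the model-minus-obs RH bias into a water-vapour term and a
    temperature term (first order; mixing ratios in identical units)."""
    rh = lambda q, T: q / e_sat(T)   # proportional to RH
    total = rh(q_mod, T_mod) - rh(q_obs, T_obs)
    dq_term = rh(q_mod, T_obs) - rh(q_obs, T_obs)  # vapour change only
    dT_term = rh(q_obs, T_mod) - rh(q_obs, T_obs)  # temperature change only
    return total, dq_term, dT_term
```

For small biases the two terms sum approximately to the total; the residual is the cross term.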
M
So that is the summary slide of my talk. We found some interesting results and also some valuable lessons. One is that we need to use similar scales to compare observation and model, especially between high-resolution in situ observations and coarse-resolution global climate models.
M
We also found that CAM6 shows the most similar result compared with the 100-kilometer-scale observation in terms of the cloud phase frequency distribution over the Southern Ocean. For the microphysical properties, CAM6 overestimates liquid water content by 0.5 to 2 orders of magnitude below -20 degrees C, but it also underestimates ice water content by a similar range of orders of magnitude, also below -20 C. For the aerosol indirect effect, all the models seem to have very weak or no correlation with aerosol total number concentration, especially for the ice phase. And for the observation over McMurdo...
M
We found that water vapor seems to be one of the key contributors to the RH bias, which is further correlated with the cloud property biases. This work is currently in revision at JGR. Lastly, I want to thank the funding support from NSF and DOE, and also, during my visits to NCAR in 2016 and 2018, from the NCAR ASP faculty fellowship. So thanks for listening, and I can answer any questions now.
F
All right, thanks a lot, that was an excellent talk. So I have one question, which has two parts that I'm not sure about. At least in CAM5, the ice cloud fraction was more or less proportional to relative humidity, and your last punch line here is that the cloud fraction and cloud phase are correlated with biases in relative humidity.
F
Does that mean that relative humidity is a reasonable thing to be using for our ice cloud fraction, and it's just that we have moisture biases, and the models are responding correctly in terms of cloud fraction?
M
That's a very good question, Peter. The way we evaluate ice cloud fraction is actually not using the diagnosed cloud fraction or the ice cloud fraction variable. We actually directly use the mass concentrations of ice water content and liquid water content, and then, when we define the different cloud phases in CAM, we say that if it's less than 0.1 ice mass fraction, that is called liquid phase, and if it's above 0.9 ice mass fraction, that is ice phase.
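As a sketch, the thresholds just described (ice mass fraction below 0.1 is liquid, above 0.9 is ice, in between is mixed) amount to something like the following; the function and argument names are hypothetical, not from CAM itself:

```python
def classify_phase(iwc, lwc, ice_lo=0.1, ice_hi=0.9):
    """Classify cloud phase from ice and liquid water content using
    the ice mass fraction thresholds described in the talk."""
    total = iwc + lwc
    if total <= 0.0:
        return "clear"            # no condensate at all
    f_ice = iwc / total           # ice mass fraction
    if f_ice < ice_lo:
        return "liquid"
    if f_ice > ice_hi:
        return "ice"
    return "mixed"
```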
M
So that is not exactly the same as what we use in the ice cloud fraction variable. But it is interesting that RH seems to be correlated with the mass biases as well.
F
Okay, great. I think we don't have any other questions right now, and it is officially our time for a break. So let's take that break and come back at 20 after, in about 15 minutes. Thanks.
N
Yes, I will. Okay, so it's 20 after; let's get back to our AMWG session. The next speaker is Carrie Black from the University of Edinburgh, and he, oh, she, sorry, will talk about her results.
O
So, anthropogenic aerosols are currently the largest uncertainty in human-driven climate change, and they have the potential to affect the regional climate. Here we want to compare three periods and see how the storm tracks change with different levels of aerosols.
O
So we first pick 1920 to 1940, at the start of the 20th century, when emissions were low. Then we look at 1960 to 1980, which is when aerosol emissions peak. And lastly, we look at present day, which we define as 1986 to 2006; this is the most recent historical data from the model that we use.
O
So our results look at how aerosols affected the storm tracks during our period of increasing aerosols in the 20th century. In these plots, again, the black contours show the climatology and the colours represent the difference between the periods, showing how the storm track changed when aerosols increased. The all-forcing simulations reveal changes in the storm tracks across both basins, and here the comparison of the all-forcing and the fixed-aerosol simulations shows that aerosols cause an equatorward shift of the storm tracks in both the Pacific and the Atlantic basins.
O
Here, the maxima represent the jet stream, and we can see it strengthens equatorward and weakens poleward, which relates to the equatorward movement of the storm tracks during this period that we saw earlier. The opposite effect occurs during periods of decreasing aerosols: as the jet stream weakens, we see the storm track shift north. So here I'm going to look at the outputs from 20 of the ensemble members from the all-forcing simulations; again, the colors show the difference between the two periods.
O
These are the 20 ensemble members used to calculate the ensemble mean discussed previously. If we compare the first plot, the first ensemble member, with the 20th ensemble member down here, we have two almost opposite plots, and so this highlights why we should investigate whether our results are due to external forcing or to internal variability.
O
So here we're looking at the average synoptic variance over a box representing the North Atlantic Ocean, for both the all-forcing and the fixed-aerosol simulations. The probability density curves show the spread of results across the 20 ensemble members, and this shows that the internal variability results in a blurred climate change signal.
O
So when we compare to the observations, which are the dashed lines for each period, we can see that the further back in time we go, the more the observations and the model begin to disagree, and this is due to the lack of observational data in the early 20th century. So the model in the period 1920 to 1940 doesn't agree that well.
O
So these plots reveal the contribution of internal variability for both simulations, all-forcing and fixed aerosols, and we're looking at the decreasing-aerosol period. We used a method from Nathal 2018, and this allows us to produce this very complex pattern, which represents the percentage of internal variability in the model.
O
So, as we see in the North Atlantic Ocean, there is an area here which reaches 60 to 70 percent internal variability, and along the storm track between 30 to 40 percent, whereas if we look at the fixed-aerosol simulations for the same regions, internal variability contributes much less. And similarly, if we look at the Pacific Ocean, regions here reach up to fifty percent, whereas in similar regions in the fixed-aerosol simulations there is much less internal variability contribution.
O
So, overall, we've been able to conclude that aerosols play an important role in driving past changes in storm tracks, with increasing aerosols resulting in an equatorward shift of the storm tracks, and decreasing aerosols doing the opposite and causing a poleward shift of the storm tracks. We've also shown that internal variability of the model influences these changes.
O
So the next step in this research is looking to the future. As we would expect, in the future, with decreasing aerosol emissions, the storm tracks would shift poleward. However, this is not what we've observed, so we need to look into whether this is due to internal variability or whether there are other factors to consider.
N
O
On this, yeah: the dots represent statistical significance at the 90 percent level, using the Student's t-test, on the plots.
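For illustration, the per-gridpoint test behind that stippling looks roughly like the following; the talk does not show its code, so the Welch (unequal-variance) form and the names here are my assumption. The statistic would then be compared against the t distribution at the 90% level.

```python
import math
from statistics import mean, variance

def welch_t(sample_a, sample_b):
    """Welch's t statistic between two periods at one grid point
    (e.g. annual storm-track values for 1960-1980 vs 1920-1940)."""
    na, nb = len(sample_a), len(sample_b)
    va, vb = variance(sample_a), variance(sample_b)  # sample variances
    se = math.sqrt(va / na + vb / nb)                # standard error of the difference
    return (mean(sample_a) - mean(sample_b)) / se
```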
N
O
And in the future plots, there's not that much statistical significance across the storm tracks, whereas looking at the earlier results, if I can get back to them, oops, went too far back, sorry, across the storm tracks there's much more. Okay.
N
A
I was just wondering, Carrie, if you've looked at differences between anthropogenic and natural aerosols. I mean, because we don't have any control over dust and sea salt, but we do over others.
O
So that's what this is showing, the effect of the anthropogenic aerosols. We've not looked at natural.
D
E
Great, can you see my screen? Yes, we can, go ahead. Okay, hi. My name is Jonah Shaw; I'm a graduate student at the University of Colorado, Boulder, working with Professor Jen Kay, and today I'll be talking about clouds in CAM, specifically how they square up against satellite observations and how they've changed from CAM4 and CAM5. This presentation represents work by myself, Brian Medeiros at NCAR, and Professor Kay, and I'm supported by the NASA PREFIRE mission.
E
So I'm going to be talking a lot about satellite simulators in this talk, and I just wanted to stop here and really describe what they do and why they're so important, for anyone who might be unfamiliar. To start off, just note that the geophysical values retrieved from satellites versus those produced by models are very different, and this is for two main reasons.
E
The first is that model resolution is generally much coarser than the size of a satellite footprint; but also, satellites are inferring atmospheric properties via retrieval algorithms, and they're not observing all of the atmosphere. We can think about a lidar beam being attenuated by thick clouds and not observing those below it.
E
This is done by implementing subcolumn sampling and satellite retrieval algorithms within a GCM. COSP, which is a really wonderful satellite simulator produced by the CFMIP group and also here at NCAR, has been implemented in the last three versions of CAM, and it pairs with some great observations of clouds that now have really robust multi-decade observational periods. So this is a really wonderful tool that we can use to evaluate CESM, and CAM specifically, in ways we could not if we're not using a satellite simulator.
E
So if we're just looking at this shortwave cloud forcing, we can generally see that these colors are becoming less intense and more muted from CAM4 to CAM6. And if we look at the global error and the RMS error, we can also see a pretty substantial reduction in the RMS error in the shortwave cloud forcing.
E
So a perfect model is going to have a normalized standard deviation of one, and then angularly we have the correlation with observations, so we also want the model to have a unitary value there, and a perfect model is going to sit at this (1, 1) point. Here we're also visualizing the relative bias through the size of the bubble, and so we'd also want the bubble to be effectively as small as possible.
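The three quantities a Taylor diagram encodes can be computed directly; this is a minimal sketch (names mine, and it ignores the area weighting a real global comparison would apply):

```python
import numpy as np

def taylor_stats(model, obs):
    """Normalized standard deviation, correlation, and relative bias:
    the three quantities shown on the Taylor diagram described here."""
    model = np.asarray(model, dtype=float)
    obs = np.asarray(obs, dtype=float)
    norm_std = model.std() / obs.std()            # radial coordinate
    corr = np.corrcoef(model, obs)[0, 1]          # angular coordinate
    rel_bias = (model.mean() - obs.mean()) / obs.mean()  # bubble size
    return norm_std, corr, rel_bias
```

A perfect model returns (1.0, 1.0, 0.0).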
E
So, just reiterating what we saw in the last slide with the global maps: if we're looking at the shortwave cloud forcing, which is these bubbles marked by one, from CAM4 to CAM5 to CAM6 we see this motion towards the (1, 1) point, so there is a clear improvement of the model. Looking at the longwave cloud forcing, which we had more trouble discerning from those plots...
E
We also see from CAM4 to CAM5 somewhat of an improvement both in the correlation and in the standard deviation, but a much larger relative bias, and CAM6 kind of splits the difference, so it's not clear that it's objectively better than CAM4 here.
E
So what we're looking at here is total cloud amount from three different satellite missions: ISCCP, MISR, and CALIPSO. The first two are passive instruments, so we've excluded values poleward of 60 degrees, because their retrieval algorithms are not going to be quite as reliable there. And as we'd expect from three different satellites over fairly similar, but not completely overlapping, time periods...
E
There are a lot of commonalities, but there also are differences in the spatial patterns of the total cloud amount observed here. What a satellite simulator is going to do is hopefully capture those differences that are due to the intricacies of instrumentation, flight path, and retrieval algorithms of these different satellite missions. And we do see that there are some differences within CAM4 across these different satellite simulators.
E
And so, if we're looking now at the evolution of CAM, we can see from CAM4 to CAM5 and into CAM6 there's this really strong deepening of colors, representing a strong increase of clouds over the Southern Ocean, and we also see an increase of clouds in the northern high latitudes and a slight decrease of clouds in the tropics.
E
Again, looking through a Taylor diagram, we can see that all of these different simulated total cloud amount values are grouped together, which makes sense, basically indicating that the differences between the simulators are larger than the differences between the different versions of CAM.
E
And if we look from CAM4 to CAM5, we can see that there is an improvement in the correlation with observations, but there's also a departure from the one value of the normalized standard deviation; overall, though, I'd say an improvement in total cloud amount in CAM5. From CAM5 to CAM6, however, we see very little change in the correlation, but the standard deviation has really gone to a significantly too-high value.
E
So this again is just indicating that from CAM4 to CAM5 we see somewhat of an improvement in clouds, but from CAM5 to CAM6 the representation of clouds in CAM has degraded relative to these observations.
E
If we want to understand why the standard deviation has shifted in the way that it has, we can look at zonal plots of the total cloud fraction, in this case from CALIPSO, and we can see that from CAM4 to CAM5 there's an increase in cloud amount basically across all latitudes, but it's concentrated at the higher latitudes, leading to that poleward shift and that increase in the normalized standard deviation. The increase in the normalized standard deviation from CAM5 to CAM
E
6, however, we can see is not due to such an overall increase but is just a redistribution, in that we see more clouds at higher latitudes and fewer clouds in the tropics. So we're seeing different reasons for the steps between CAM4 and CAM5, and between CAM5 and CAM6.
E
If we want to know specifically what kinds of clouds are responsible for this change, we can look at the low-level cloud fraction. Again, using the simulator allows us to make a really great comparison between the low-level variable produced by the CALIPSO observations and the one produced in the model, and we can see that this is just an exaggerated version of the trend we've seen in the total cloud fraction, with this large decrease of low clouds in the tropics and this large increase at higher latitudes. It turns out that CAM5 actually does a really good job reproducing the low clouds, and CAM6 has really distorted this representation.
E
If we want to go a little further and move from understanding which clouds are responsible for this change to which parts of the model are responsible, it's really useful to look at this figure from a 2016 paper by Brian Medeiros. Effectively, they did aquaplanet simulations with CAM5 as the reference model, here in blue, and then we're going to be mostly looking at these orange and red lines, which are where they implemented
E
the Morrison-Gettelman version 2 (MG2) microphysics scheme; so they iterated just the microphysics scheme. Here we're not looking at cloud amount but at the shortwave cloud radiative effect, and we can see that when the microphysics scheme is changed, there is a reduction in the shortwave cooling in tropical clouds and an increase in the shortwave cooling in higher-latitude clouds, which would be consistent with a decrease in tropical clouds and an increase in higher-latitude clouds.
E
Of course, I can effectively remake this same plot with our CAM4, CAM5, and CAM6 simulations, plus observations, again from CERES, and see basically the same pattern. So the effect of iterating the Morrison-Gettelman microphysics scheme is basically the effect that we see from CAM5 to CAM6, which isn't a huge surprise, because the introduction of this new microphysics scheme was a pretty substantial change. So we can see that the microphysics scheme is responsible for this poleward shift of clouds.
E
But of course we had the advantage of looking at observations here, and so we can also see that CAM6 performs better than CAM5 when we're looking at the cloud radiative forcing. And this is something to really think about: if we're using cloud radiative forcing as our measuring stick to evaluate and develop these models, we might end up with this kind of result, where we have really good agreement with this observation...
E
But we know that the cloud fields themselves have shifted in a way that maybe we didn't see. And so I'd argue here that tuning to cloud radiative forcing is not enough, especially when we have these really wonderful, very useful simulators and really robust satellite products that we can use in tandem with the cloud radiative forcing. Of course, because microphysics influences cloud phase so strongly...
E
We also looked at cloud phase using the CALIPSO satellite simulator. And just a shout-out to the folks who do the software engineering for CAM and CESM2: we were able to effectively have backwards compatibility and use a new version of COSP that was not developed for CAM4, just plug it in and get it to work. We're going to look at liquid cloud here, because we know that a big motivator for the new microphysics scheme in CAM
E
6 was this deficit of liquid cloud, especially over the Southern Ocean. And what we can see here is that there is an increase in liquid cloud over the Southern Ocean, but overall I'd say the agreement with observations in the liquid cloud amount has really been reduced. So this new microphysics scheme continues to complicate our representation of cloud phase, and I think we can say, both from the perspective of cloud amount and of cloud phase, that CAM5 is performing better than CAM6.
E
I think I might be close on time, so I'll try to get through this slide really quickly. We started looking a little closer at how model differences manifest as a function of the environment, or regime, in which clouds are forming. Here we have global, tropics, and Southern Ocean shortwave cloud forcing, but we've divided them up by circulation regime, so on the x-axis we have mid-tropospheric vertical velocities.
E
We have convecting regimes on the left and subsiding regimes on the right, and while the global picture is somewhat muddled, when we look at the tropics and the Southern Ocean separately, we can see that basically across all circulation regimes in the tropics the shortwave cooling from clouds is reduced, whereas over the Southern Ocean that cooling decreases. These here are just histograms showing the frequency of clouds, so effectively a weighting function that you can use to guide your eye.
E
So all these plots that I've shown can tell us about how a model is performing and even help to identify where in the model changes are coming from. But if this analysis isn't really easy to do, it doesn't have a ton of use for us, so we're thinking about ways to make it more accessible. One is just to normalize running COSP in CESM, and with the new version of COSP, I'm pretty sure the running time increases only very little.
E
Something else we can do with this kind of analysis is decide where to apply models and what kinds of studies to use them in. So if we are doing a study about cloud feedbacks in the second half of the 21st century, it might make more sense to use CAM5 instead of CAM6, because we've shown that it has better agreement with observations.
E
But we also have a lot of great satellite observations that use multi- or hyperspectral measurements. This is something like the AIRS mission, which has more than 2,000 spectral channels and also has more than a two-decade observational record. So how do we interface between models and these kinds of observations? Whereas before we were talking about apples and oranges, with a hyperspectral instrument we have something like a fruit salad: the information that we want to compare with the model is hidden in there, but getting it out is more complicated, and implementing a satellite simulator for a hyperspectral instrument does not really make sense computationally.
E
N
E
Yeah, I know that in CAM5, just running atmosphere-only, there was somewhere between a 50 percent increase and a doubling in cost for running COSP, but with the new COSP, I'm pretty sure, and someone can correct me on this, it's virtually the same as running the model without it. Okay. Oh.
P
They actually do not use assumptions consistent with those used in the model's microphysics, such as the MG2 microphysics. So in E3SM we actually end up changing all those parameters, because what they used is just single-moment, very simple assumptions, not even a gamma distribution.
P
So there's a lot of uncertainty in there. I'm not sure if you played around with that, made any changes, played around with the uncertainties in COSP.
N
B
Can you see my slides okay? Yes, we can, go ahead. Okay, thank you. Hello everyone, I'm Jiang Zhu, a project scientist here in the Paleo and Polar Climate Section at NCAR. My talk today is about informing cloud parameterizations in CESM2 through simulation of the Last Glacial Maximum. First, some background about the Last Glacial Maximum, or the LGM.
B
So, through decades of research, the paleoclimate community has come to know the climate forcing much better, and we know that the major forcings are the greenhouse gases and the ice sheets. Over the years we have accumulated close to a thousand geochemical SST records, which are thought to be high-quality data, and in a recent paper we estimated the LGM global cooling to be 6 degrees C.
B
The LGM is considered to be one of the best time periods for informing climate sensitivity. Here is a figure of model-simulated LGM global cooling and the model ECS in different CESM models. The first thing you'll notice is that we have this very nice correlation between LGM cooling and ECS: the higher the ECS, the greater the LGM global cooling.
B
So, in the case of CESM2, you'll see that the LGM global cooling is 12 degrees C, way outside the proxy-suggested range. That made us conclude that the high ECS of CESM2 is likely unrealistic, which is consistent with other independent assessments. One interesting experiment you can do is that, instead of running CESM2 with CAM6, the default atmosphere physics package, you can run CESM2 with CAM5.
B
That really motivated us to examine the details of the differences between CAM5 and CAM6. Here I basically follow Andrew's approach documented in 2019, on top of the LGM configuration. So one tested scheme runs CESM2 with CAM6 but with the older version of the ice nucleation scheme; this is called HetFrz-off. I also tried a configuration using the MG1 microphysics scheme, called MG2-off, and I also have CLUBB-off, which uses the UW shallow convection and boundary layer schemes.
B
So, basically, this NImax sets the maximum cloud ice number, and it was designed to work with the older version of the ice nucleation scheme with a longer microphysics time step, and it was not modified accordingly when they developed the new schemes. That's why I use this new configuration, called no-NImax, to investigate the impact on the LGM temperature response.
B
So here is a quick summary of the tested configurations. For each configuration, I run a fully coupled LGM simulation with a paired pre-industrial one at the 2-degree resolution. For the LGM, I use the greenhouse gas, ice sheet, and orbital forcings, with pre-industrial aerosol and vegetation, and the LGM simulations were all initialized from the same CESM1.2 LGM simulation. For each configuration I also performed doubled-CO2 experiments with a paired pre-industrial run, to examine the impact on cloud feedback and ECS.
B
Next, let's look at some results. The figure on the left shows the LGM global cooling again; the data estimate is here. The figure on the right shows the shortwave cloud feedback in the LGM fully coupled simulation and in the doubled-CO2 slab ocean simulation based on present-day climate. You see that if you run CESM2 with the CAM6 physics for a hundred years, you can clearly tell that the model is drifting away.
B
The LGM cooling is certainly too large compared with the data, and if you run CESM2 with the CAM5 physics, shown here, you see that we have a significant reduction in the cloud feedback. As a result, the LGM cooling is much reduced; it agrees better with the proxy estimate, the colored lines.
B
You see that, starting from the CAM6 configuration, if you remove the limiter you get a decrease in cloud feedback, and after a hundred years you'll see a significant improvement in the LGM performance. So this is really promising. However, there was a problem with this no-NImax configuration. Panel (a) here shows the in-cloud cloud ice number concentration in the CAM6 configuration, and the bottom one is after you remove the limiter. You see that the cloud ice number grows to higher values: 900 per liter here versus 300 here.
B
So this is certainly too high compared with the default model and with the observations, and we need to improve that. An obvious approach is to sub-step the microphysics, shortening its effective time step. So here I have different simulations: this one means you remove the limiter and do two sub-steps in MG2, and you can see that as the sub-step number increases, the cloud ice number decreases, and after a sub-step number of maybe 8 to 16 the solution starts to converge.
B
The numbers are the global mean values. Starting from the no-NImax configuration, you'll see that if you do sub-stepping you get a significant decrease in the global mean shortwave cloud feedback, and if you increase the sub-step number to eight, you get another jump down in the shortwave cloud feedback, and it stays the same with further increases of the sub-step number.
B
So here I performed an additional configuration: instead of removing NImax, I directly apply sub-stepping in the microphysics scheme, so you only have the eight-fold sub-stepping in MG2. That's the purple line here. You see that we also observe this reduction in the cloud feedback over the subtropics and middle latitudes. This sensitivity experiment tells us that the reduction in cloud feedback when you do MG2 sub-stepping operates largely through processes other than the cloud ice number.
B
This is certainly a huge improvement over the default CAM6 configuration. With this configuration, we can perform a DECK simulation, abrupt 4xCO2, to calculate ECS. Here is a Gregory plot; you'll see that the ECS is close to 4 K, much lower than the standard configuration. I can also calculate ECS using the slab ocean configuration, and you still get this number of four, so it's an even bigger reduction compared with the default configuration.
B
I also quantified the aerosol-cloud interaction, and you'll see that we have a 20 percent decrease in the aerosol-cloud interaction. The figure on the right shows a time series of the historical simulation; the blue is the paleo-calibrated configuration, and the performance is as good as the other CESM2 CAM6 configuration.
B
We think that we have reduced the greenhouse-gas-induced warming and reduced the aerosol-induced global cooling, so you get a cancellation, and in the end you still have an overall realistic historical simulation.
B
H
Q
And I think it'd be really fun to run COSP and assess whether this is actually an improvement against modern-day observations. I mean, it's kind...
H
And yeah, Jiang, this I guess will kind of answer Peter's question: it looks like you said sub-stepping eight times is about a 20 percent burden. Six times, I've measured it, is about 12 percent, and that's easier to sell. Have you seen whether 6 converges to the 0.49 feedback parameter as well?
N
F
Yeah, I guess I'll just echo what the past two questions have said, by saying that I think computational expense is a potential concern here, and I'm also very curious about the more general impact of these tuning changes in terms of Taylor scores and general metrics. Do things look worse elsewhere, outside of just paleo?
B
N
Okay, great, thanks. Thank you. I'll just take a look at the clock. There are a few more comments for Jiang, from Isla Simpson and also Jen Kay; could you please get back to them via the chat? I think we should move on and go to the next talk. Thank you very much.
C
All right, can you see that? Yes? Okay, so thank you. So yeah, I'm going to talk about this project that we've been moving forward slowly over the past year, stepping toward having a new diagnostics package that would replace the AMWG diagnostics we've been using for a long time. This has also been wrapped into an effort to develop new variability diagnostics as well, especially looking at high-frequency variability, and so that's all being packaged up into one thing called the AMWG diagnostics framework.
C
At this point I want to acknowledge the work that Jesse and Cecile have put into this so far, as well as the input that Julie Caron and the rest of the diagnostics task force have had as we've moved forward. So where we're starting from is the venerable AMWG diagnostics package. You can see here a screenshot of the website that you've probably all looked at many times, on the right. The core of that package is a C shell script that's more than 3,000 lines long.
C
It basically calls many, many NCL scripts, and it dates back to about 2001, so we've been using this for about 20 years, with many updates along the way, and it has been very useful for those 20 years.
C
If you look at the code base of that package on GLADE, there are about 50,000 lines of NCL code, and the whole package has about 143 directories and about 400 files. Much of that is actually the HTML that is used: it produces these static web pages that are very specific in their structure, and this can be configured for model-versus-observations or model-versus-model comparisons.
C
So, after 20 years we have a couple of drawbacks with this package. One, and probably the most important at this point, is that NCL is kind of end-of-life. It served us well for a long time, but at this point it's not supported, and we don't want to move forward with diagnostics packages that rely on software that's not supported.
C
The other thing is that it's actually very difficult for a normal user to make much use of the diagnostics in terms of customizing it, making it flexible, changing the HTML, or things like that; there's a lot of infrastructure and baggage associated with the package. And then the other thing, coming from the diagnostics task force,
C
is that there's a lot of desire to have variability diagnostics, especially looking at diurnal cycles and sub-monthly variability. There have been efforts over the years to develop different things, and some of those efforts have contributed to various packages that exist out in the world. Some are just scripts that float around, and some entire variability packages have been developed but fallen into disuse. The major exception to this is Adam Phillips's CVDP, which is loved and used a lot and is a model for these kinds of diagnostics packages.
C
So the vision to replace the AMWG diagnostics is to develop an extensible framework that provides a common, simple interface to diagnostics for model evaluation. The simple cartoon of this is: a user sits down at their computer, enters a command or a few commands, and that invokes the ADF, the AMWG Diagnostics Framework, which takes care of data ingestion and processing, produces standard diagnostics including statistics and plots, and maybe runs custom diagnostics that are provided by the user. The output is data files, text, and plots, maybe with an interface to browse the results.
C
So with this very general concept, we can just build a prototype, and in doing so we know that we are reinventing the wheel. We are aware that there are many packages that do exactly what we've said, some of them even applied to global atmospheric models, some of them even applied to global atmospheric models that are very similar to CAM, or maybe even our CAM.
C
But when we started thinking about rebuilding the diagnostics, we looked around at the landscape that we knew about, and our assessment was that nothing was meeting our specific needs, and so we wanted to start from scratch with a couple of principles in mind. The first among those was to keep it as simple as possible. We would like to have code that scientists can look at, can understand, and can modify.
C
So we don't want to over-engineer this package, but we do want to make it flexible and extensible. We don't want to have a lot of repeated code. We want to make it modern-looking, even while keeping it simple. We want it to be efficient, but we're not going to prioritize performance, so that's not something that we're thinking about at this point.
C
If we get to the point where this package is accepted by the community and is being used, and performance becomes an issue, we'll deal with that at that point. But we do want to prioritize using Python wherever we can, because we think the scientific Python ecosystem is appropriate for building a diagnostics package. All the tools are there, and it feeds back onto these simple, flexible, and extensible principles that we've laid out.
C
So with that in mind, we've developed this prototype, and it has one command. It's called run_diag, and it takes one argument, which is a config file, a YAML file. When you run run_diag, it orchestrates calls to a number of Python scripts, and it basically steps through these steps. The first thing is that it takes history files and processes them into time series files. This relies on ncrcat right now; that's our one non-Python package that we need, and that is for efficiency.
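The history-file-to-time-series step described here can be sketched in a few lines. This is only an illustration, not the package's actual code: the helper name, file names, and directory layout are made up, but the `ncrcat` flags (`-v` for variable subsetting, `-o` for the output file) are standard NCO options.

```python
from pathlib import Path

def ncrcat_command(hist_files, variable, out_dir):
    """Build the ncrcat invocation that concatenates one variable out of a
    list of CAM history files into a single time-series file.
    (Sketch only; in practice the orchestrator would hand this list to
    subprocess.run.)"""
    out_file = Path(out_dir) / f"{variable}.nc"
    return ["ncrcat", "-v", variable] + [str(f) for f in hist_files] + ["-o", str(out_file)]

cmd = ncrcat_command(["run.cam.h0.1979-01.nc", "run.cam.h0.1979-02.nc"], "TS", "timeseries")
```

Keeping the command construction separate from execution makes the one non-Python dependency easy to see and, later, easy to swap out.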
C
Those time series files, or climatology files, can be regridded to enable comparisons between models, or between observations and models, on different latitude-longitude grids. The example right now uses xarray's interp method to do bilinear interpolation, so a really simple method, but good enough for plotting purposes.
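As a minimal sketch of what that interpolation does — not the package's code, and using plain NumPy rather than xarray — here is the one-dimensional analogue: linearly interpolating a zonal-mean field from a source latitude grid onto a target grid.

```python
import numpy as np

def regrid_lat(data, src_lat, dst_lat):
    """Linearly interpolate a zonal-mean field from one latitude grid to
    another -- the 1-D analogue of the bilinear lat-lon regridding that
    xarray's .interp() performs in the package."""
    return np.interp(dst_lat, src_lat, data)

src_lat = np.array([-90.0, 0.0, 90.0])
field = np.array([250.0, 300.0, 250.0])   # e.g. a temperature profile (K)
dst_lat = np.array([-45.0, 45.0])
out = regrid_lat(field, src_lat, dst_lat)  # midpoints -> [275., 275.]
```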
C
The next step is to run whatever analysis scripts are found in a subdirectory of the package. Right now there's one: it produces a table of global statistics, inspired by the AMWG table, that compares two runs. And then, finally, it goes through a list of plotting routines. Right now it's producing maps of annual and seasonal means, and then annual and seasonal mean zonal means. The outputs from these green steps are our files.
C
These are files that you can use later for whatever you need: the analysis scripts can produce files for later use and/or output that can be ingested into an HTML website generator. The plots are dumped into a directory that can also be sucked into a website generator, and so we've also implemented a really simple static site generator that I'll show you in a minute. Examples of the plots that we're producing right now are here: the maps are shown here, with just simulation one, simulation two, and their difference.
C
Zonal means can be either pressure versus latitude for three-dimensional fields, or just line plots for two-dimensional fields.
C
So these plots are really simple, and they're meant to be simple just because this is a prototype and it's not production quality at this point. But we haven't ignored everything; there are some bells and whistles. We're doing proper area averages and area-weighted root-mean-square errors for the maps.
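The "proper area averages" mentioned here can be sketched as a cosine-of-latitude weighting. This is an illustrative stand-in, not the package's implementation; the function name and array shapes are assumptions.

```python
import numpy as np

def area_weighted_stats(model, obs, lat):
    """Cosine-of-latitude weighted global-mean bias and RMSE, the kind of
    'proper area average' computed for the map plots.
    model/obs are (nlat, nlon) arrays; lat is in degrees."""
    w = np.cos(np.deg2rad(lat))[:, None] * np.ones_like(model)
    diff = model - obs
    bias = np.average(diff, weights=w)
    rmse = np.sqrt(np.average(diff**2, weights=w))
    return bias, rmse

lat = np.array([-60.0, 0.0, 60.0])
model = np.full((3, 2), 2.0)
obs = np.zeros((3, 2))
bias, rmse = area_weighted_stats(model, obs, lat)   # uniform bias -> (2.0, 2.0)
```

Without the cosine weighting, polar grid cells would be over-counted relative to their actual area, which is exactly the mistake a naive `mean()` makes on a lat-lon grid.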
C
We have tick marks that show the actual variable level values instead of just showing you a linear coordinate, and then there are things like smart selection of color maps, and also smart selection between making a contour plot and making a line plot, so the user doesn't have to specify which plots need to be made for each variable. The static website generation is really rudimentary and simple, but it's functional, and it produces web pages that look like this.
C
So there's an index page that tells you what cases you're looking at and has links to a plots page and a tables page. If you click on the link to the plots page, it shows you this page on the right, which just has a list of variables; under each variable name is the list of the plots that it was able to find for that variable, and then links to each of the plots. So here, "lat_lon" are the maps and "zonal" are the zonal means.
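A static site generator of the rudimentary kind described here can be very small. This sketch is not the package's generator: the `<VARIABLE>_<kind>.png` file-naming scheme and the function name are assumptions made for illustration.

```python
from pathlib import Path

def build_index(plot_dir, out_file="index.html"):
    """Group plot files named <VARIABLE>_<kind>.png by variable and emit a
    bare-bones HTML page listing links to each plot, mirroring the
    variable -> plot-list layout of the plots page."""
    plots = {}
    for p in sorted(Path(plot_dir).glob("*.png")):
        var = p.stem.split("_")[0]
        plots.setdefault(var, []).append(p.name)
    lines = ["<html><body><h1>Diagnostics</h1>"]
    for var, files in plots.items():
        lines.append(f"<h2>{var}</h2><ul>")
        lines += [f'<li><a href="{f}">{f}</a></li>' for f in files]
        lines.append("</ul>")
    lines.append("</body></html>")
    html = "\n".join(lines)
    Path(plot_dir, out_file).write_text(html)
    return html
```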
C
The installation and structure of the package are really simple. We've got this all in a GitHub repository, which you can clone into your local workspace; cd into the CAM diagnostics directory, and if you run the tree command you see the entire package looks like this. There are basically three levels of hierarchy here, just six directories or so and a few files. And I wanted to point out that I counted the number of lines: run_diag is the command that you run as the user to invoke the rest of the package.
C
cam_diag does most of the orchestration of calling the different routines. Together those are less than 500 lines of code, so compare that to the 3,000-line C shell script that we have from the AMWG diagnostics. And then what this does, basically:
C
cam_diag loops through the subdirectories in the scripts directory and runs every Python file that it finds in there, depending on how it's been specified in the YAML file, and you can see there's also not much code in these guys. So we have a huge reduction in just the amount of code in the package so far, but we're producing — roughly guessing — about 70 percent of what the AMWG diagnostics package does.
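The discover-and-run loop described here can be sketched as follows. This is a hypothetical reimplementation of the idea, not the package's code: the function name, the `enabled` argument, and the use of `runpy` are all assumptions.

```python
from pathlib import Path
import runpy

def run_analysis_scripts(scripts_dir, enabled):
    """Walk the scripts/ subdirectories and execute every Python file whose
    name appears in the 'enabled' list taken from the YAML config.
    runpy.run_path keeps each script self-contained, as if run directly."""
    ran = []
    for script in sorted(Path(scripts_dir).rglob("*.py")):
        if script.stem in enabled:
            runpy.run_path(str(script))
            ran.append(script.stem)
    return ran
```

Driving everything off files found on disk is what makes the framework extensible: dropping a new script into the directory (and naming it in the config) is the whole plugin mechanism.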
C
The YAML file is a simple text file that can be edited by hand but can be read by the computer. It has, essentially, dictionaries in it. So you have a basic-info section, and you tell it if you're comparing with obs or model versus model, and things like that; you tell it where the data is and where to put the data.
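A minimal sketch of that configuration, written here as the Python dictionary the YAML would parse into. The key names (`diag_basic_info`, `compare_obs`, and so on) are illustrative guesses at a schema, not the package's actual keys, and the paths are placeholders.

```python
# Stand-in for the parsed YAML config (illustrative key names and paths).
config = {
    "diag_basic_info": {
        "compare_obs": False,              # False -> model vs model
        "cam_climo_loc": "/path/to/climo",
        "cam_diag_plot_loc": "/path/to/plots",
    },
    "diag_var_list": ["TS", "PSL", "U"],
}

def comparison_mode(cfg):
    """Return which comparison the run will do, as an orchestrator might."""
    return "model-vs-obs" if cfg["diag_basic_info"]["compare_obs"] else "model-vs-model"
```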
C
But all of these steps rely on finding dedicated resources for the development and maintenance of this package, because right now it's all been on a volunteer basis. And speaking of that: tomorrow, in the software engineering session, there's a session with breakouts to discuss diagnostics for CESM, and I think that will be a good venue to continue whatever discussion we're about to have about diagnostics in this session. So I hope to see you there, and I'll take any questions or comments.
Q
Oh, thanks. Hi Brian, it's Nan. I was wondering if the package is suitable — you know, if you wanted to run a spot check on a simulation, whether it's a PI control or something like a single-forcing experiment where you'd really like to know that you're getting the signal that you're applying. Is this sort of a situation where you might be able to run the diagnostics package on an early stage of a simulation, or is this intended to be run at the end of a simulation?
C
So it's not optimized to run on shorter simulations, but we haven't precluded that in any way. So if you wanted to run it after even a few months of a simulation, it'll try to do its best; it will produce output even if you wanted to run it on a three-month simulation just to see what was coming out.
C
So if you wanted to spot-check it along the way, I think that you could do that, and one of the ways you could optimize that, to make it run relatively fast, is to only do a few variables and spit them out along the way. What we haven't done is hook this into being able to be run as part of a CESM run. So it won't stop the model and automatically run these diagnostics at this point, but that's something that we've talked about in various presentations of this package.
Q
H
Thank you, Brian, thanks for this presentation. Do you anticipate that you're going to extend support of this to time series plots? An example that comes to mind is RESTOM, which Nan just mentioned, or a global temperature. Right now I don't know that the AMWG package does that — I think that's handled in separate packages — but would you expect that this newer package would handle that functionality?
C
Yeah, I expect that we can put that in — well, I know that we can put that in trivially, and I think that we might as well. Since we're already producing time series files, it's really easy to add time series plots; it's just a matter of plugging it in. I think that, for a first cut, that would take an hour, and then to make it really nice would take a day. So, yeah.
C
N
Thanks, Brian. There are a few more questions; please look at the chat and get back to Peter and Rich Neale. And please stop sharing your screen, and let me go to Cecile Hannay. She's our last speaker, and then we have a discussion led by Julio. So Cecile will talk about the tuning exercises with CAM6.
L
Okay. This talk is a little bit of a follow-up on the winter working group meeting, where several people requested to have more information about the way that we are tuning CAM, to allow them to tune the model themselves. So we have started to develop a tuning document, and it will be basically the recipe book for tuning CAM. There are a few people in AMWG that have started to contribute to this, and it's still in the early stages, but we are welcoming feedback and also any contribution you would like to make.
L
Yes, there is some documentation here and there, and sometimes a little paragraph in a paper, but there's not really a document describing what we do. Basically, what we have tried to develop is this CAM tuning recipe that will make the CAM tuning process more explicit and more transparent for users, and it will also allow users that are doing some CAM development to do the tuning themselves and not always rely on us. I have people that reach out all the time and ask me: how do I tune this?
L
We have a parameter in CAM called DCS — it's not the full name, but it's how we refer to it — and it's the threshold size diameter to convert cloud ice particles to snow.
L
And basically, when you have cloud ice in a cloud, you can have smaller particles and you can have bigger particles, and in CAM we decide on a diameter that tells us when one of these little cloud ice particles is going to fall out of the cloud.
L
And basically, if you have a smaller DCS, all the bigger cloud ice particles are going to fall out of the cloud, while if you have a larger DCS, all the particles are going to stay in the cloud. On the left side you have less cloud ice and then less longwave cloud forcing, while on the right side you have more cloud ice and then more longwave cloud forcing. This is an example of a tuning knob that we are using, and we try not to have...
L
Okay. The second part I'm going to focus on in the definition is what I call "best agreement with observation," and an example of something that we try to do: we know that the top-of-atmosphere radiative balance should be small. This quantity, called RESTOM in CAM, should be small, but if you don't tune it you can end up with a value of 5 watts per meter squared globally, and you will have a climate that drifts very quickly.
L
And there are a lot of other key variables that we target, like the sea surface temperature, the cloud forcing, a lot of other quantities, and usually we use the Atmosphere Working Group diagnostics package to evaluate these variables against observations. That's what Brian was just talking about. But there are some data sets that are being updated; as Brian mentioned in his talk, the package is old — it's 20 years old.
L
There are some data sets that haven't been updated for a while, and right now we are not trying to modify the AMWG diagnostics package anymore; we are updating data sets in the new atmospheric diagnostics framework.
L
And I'm going to give you a few examples of what we are trying to do. For example, for the SST bias, we compare to the observed SST, and we try to have an absolute bias of less than 0.3 Kelvin and a root-mean-square error smaller than 1.2. That's a quantity that we look at extensively.
L
We are using an older version of CERES-EBAF, but Brian Medeiros just provided new data sets that are not yet included in the diagnostics package; I have included the numbers in the tuning document. And we don't look only at the global value — we look also at local biases. For example, for shortwave cloud forcing we are going to look at the stratocumulus regions or the Southern Ocean to see how the bias is; similarly, for longwave cloud forcing we like to focus on the deep convective regions, not only on the global value of the quantity.
L
There is also the surface stress bias that we compare to Large-Yeager, but for this data set, when we have an issue, we don't really have a tuning parameter to fix the problem that we see. This is still a quantity that we look at, and in the past, especially when we developed the spectral element dycore, there were a lot of issues because there was a small shift in the maximum of the surface stress bias in the Southern Ocean.
L
Okay, then, what are the tuning steps? This is a little bit schematic — it's not as clean a three-step process as I'm going to describe, but it gives a good idea. In the first step we start by doing a bunch of CAM-only simulations: we do five to ten years of F2000climo or AMIP simulation, and we look at the bunch of key variables that I described on one of the previous slides at this stage.
L
If you have a RESTOM that's not 0.1, you adjust. But once we have that, and all the other components have done the same step, at some point we put everything together and we start to do fully coupled pre-industrial simulations. We start with small simulations and we try to get a good RESTOM — yeah, I put 10 to 20 years; in 10 to 20 years I'm going to see how my RESTOM is behaving.
L
If I have, say, a RESTOM of five, I'm going to do something right away. If I have a RESTOM of 0.4, I'm going to run a little bit longer to see where the RESTOM is going. And then, once we have reached that 0.1-to-0.2 threshold, we start to do longer simulations, and we look closely at the RESTOM and the TS drift, and we also look at the key variables, because when we change the RESTOM it can also impact them.
L
Usually we adjust the RESTOM through tuning parameters that affect the radiative balance, and that's also going to impact my cloud forcing. At this stage we also start to look at other variables, like the AMOC or the sea ice thickness and extent, that will be important once we do the third step, when we start to do the fully coupled historical runs. In that case we are going to look at the historical warming, and there will be no tuning at this stage.
L
Oops. And something that I want to mention — it's another kind of point, and I'm not going to go into detail; there is no detail in the tuning document yet — but usually, when we do this step, the fully coupled pre-industrial simulation, when we do the tuning we use a spun-up state to do the initialization, and there will be more detail later in the document.
L
Then we have a few dilemmas while tuning, and one of the first dilemmas is the subjectivity of the tuning targets. When you do tuning, it involves choices and compromises. I have shown you our set of key variables that we look at, and it's what we picked, but we could have decided to look at something else.
L
Because often, when you get something better, you will get something worse at the same time; but overall, as we have seen, the tuning has a limited effect on the model skill. Then also you can decide to tune on pre-industrial or on present day. Usually we do our tuning in pre-industrial, because we are using longer simulations where we have radiative equilibrium, but the problem is that all the satellite data are for present day.
L
So we make the assumption that the satellite data are still valid when we are tuning our pre-industrial runs.
L
Also, there is the question of whether you tune individual components or in coupled mode. You saw that step one is very fast, because you're doing very short runs, but there is no guarantee that the results transfer to coupled mode. So usually you start by doing some rough tuning with the individual components, but then you have to go into coupled mode to do more tuning. And in the end, the tuning exercise itself is very educational, because during the tuning you also learn a lot about the model itself.
L
Then, where does the tuning document live?
N
L
I'm wrapping up — I'm done. This is where it lives: if you go on the atmosphere working group web page, you will have the link to the developer corner, and it will bring you to the tuning document. And I'm just going to put this up: this is what it provides. It's a living document, it's under construction, but you will have the overall process that I talked a little bit about, the key variables to look at, the tuning parameters, and also the data sets that we use to do this tuning. And yeah.
H
Yeah, I don't have to ask it; I just wanted to know what other-component parameters you use, like sea ice albedo, things like that — or is it all in the atmosphere now?
L
We tune a lot in the atmosphere — I know more about how to tune the atmosphere — but often, for example, if the sea ice is too thin or too thick in the B1850, I will go to Dave Bailey and Marika and ask for some guidance to tune the sea ice albedo, the snow on sea ice, just to adjust it to make sure that when we run the 20th century we end up with a sea ice state that's more or less realistic.
L
It's less often recently that we are adjusting the sea ice albedo, but yeah — when it's a small tweak, often it's in the atmosphere.
L
For example, when I was talking about DCS: it's on the order of 500 microns, but I know that this parameter is going to have some impact on some regions, and when I'm looking at my bias in some region, I'm going to use a parameter that I know will have an impact on that specific region, and we try to keep the changes fairly small. Here I have just put three of them; there are more than this, and there will be a document coming soon, with an effort led by Andrew Gettelman, the CAM PPE.
L
N
So we're cutting into our discussion time, and we're happy to entertain this discussion further. Also, please look at additional questions in the chat. I will hand it over to Julio; this now leads into a discussion period, and we have about 20 minutes and then extra time. Okay.
A
I actually just wanted to add to Cecile's answer to Peter's question. I think another way to describe what tuning parameters are: they are parameters that are poorly constrained by observations, but that we know, as developers, have big effects on the schemes — for example, convective entrainment.
N
I also see a lot of open questions: how we tune variable-resolution configurations, and whether machine learning could even help us tune these automatically.
D
F
And maybe one of us can also just share the document that Julio's linking. Do you want to do that, Julio, or do you want me to? I'm okay.
A
So what I'm trying to do here is share my Google — my browser window. It should show "AMP development priorities." Does everybody see that? Yeah.
A
Okay, so this document is open to anybody with the link. I'm a little scared of that, but, you know, as long as people are polite — add your comments on page two. If you start adding your comments here, it would be really great for us to get community input on this, and there's quite a large number of people attending today, so hopefully some of you will contribute to this.
A
This is a document that we put together internally, and, as I've said already, we really would like input from other people. The main thing here is that we have tried to define critical applications for CAM — for CAM7; that's kind of our shorthand for the next model — and you can see them. I don't want to hijack the discussion for too long on this document, but I just want to point out that, you know, one of our —
A
Of course, one of our, you know, kind of star applications is climate modeling — CMIP, if you want to use that as a shorthand — and my comment here is meant to ask: climate modeling, does that really just mean having a model that you can run for thousands of years?
A
I think the answer to that is certainly partially yes. I think our needs include — you know, "wants" and "needs" in this document are sort of subjective, so you can comment on that too.
A
So, you know, look at this document and comment on it. This document has a major contribution from Andrew, who's not here to defend himself, so I'll go ahead and say that I don't think all of these CESM applications are uniquely CESM applications.
A
In particular, I think there's a lot of interest in high-resolution, weather-type applications within — let's call it the default or standard AMWG community — and we have other applications.
A
We also have an extensive suite of simpler models, and I believe these will be much easier to use with containerization. It's a topic we really haven't talked about at all at this meeting, but I think it basically amounts to downloading a virtual operating system that you can run these simple models on, without having to worry about
A
implementing lots of different packages on your own system. So anyway, this is there for your comments. At this point we have about 13 minutes left, and I would just like to open the discussion to people. There were a lot of comments in the chat — there's been a lot of great discussion already — but we really haven't had kind of an extended discussion period.
A
So I will just be quiet and let you all contribute what you think, what you'd like to comment on about what you've heard this morning, or anything else you might have heard yesterday as well.
A
Edit permissions? I — or at least I had intended to. Let me... I think.
H
E
It opened up in Safari, but then, when —
A
Okay, well, I've just changed the permissions to editor; it was view-only before. Is it better now, Adam? Can you make a change?
H
No, it's still — I'm on Chrome and it's still a static document. I don't know if it's just me.
A
You may have to open it in the, whatever, the Google —
H
A
Right, okay, good. So I see a couple of people have already accessed the document, so yeah, people, go to town. Let us know what you think, and we'll keep track.
A
So, are there other comments that people would like to make on other topics that have come up today or yesterday?
A
Yes, I think people can just go ahead and speak. I think I will have to stop sharing to see the —
F
If nobody's going to say anything — and I'd really rather hear from people that don't talk all the time — I will say a couple of things. It seems like one recommendation might be to use the LGM in our tuning process in the future as a possibility. It seems like the enthalpy fix — doing something about enthalpy — is probably going to be important. The surface wind oscillation we should definitely try to fix as well, and it also seems to me, like the Ni max thing with CNT —
F
L
Hi, this is Trude Eidhammer, and I'm working with Andrew Gettelman, and the Ni max stuff is something that he has been working on, and he's gonna —
J
A
Okay. Does anybody have any thoughts on increasing our coupling frequency?
H
If we relax that constraint and work on something like a flux reservoir that you can draw on as you go — you know, in a way that depends on how your model is evolving — then I think all this goes away, and it's a fix for many things, not just the one problem you have now. But it's not trivial to relax that constraint; it might be something to put out there.
A
H
You actually look at how your model is evolving and make a good, you know, physics-based change in how the fluxes would change over a coupling interval.
I
H
Thanks — it's just something to put out there.
A
All right, thanks. It looks like Thomas Toniazzo has his hand up.
H
Yes, thanks, Julio. We keep coming across what seem to me to be convergence issues.
H
Should we pay more attention, perhaps, in the future when we modify or develop parameterizations — including the energy, including vertical levels — to make sure we are not dragging along relatively stupid errors that come from the fact that we are not taking care of the numerics sufficiently? It's something that I've started whining a little bit about.
A
No, I agree with you that we need to pay more attention than we have to issues of sub-stepping and things like that — sort of, I call them guard rails, in our physics — that we've relied on. So, there were other hands raised. I can't see the first name; the last name is Schmittner.
J
Hello, I'm —
H
— a paleoclimate scientist, and I want to emphasize the importance — and you've already —
J
— than the two-degree version, you know, maybe even just for testing purposes. I think that may be something that many people may be interested in.
H
I would just make the point, Andreas, that if you're coupling to the ice sheet, you want to really take advantage of being able to resolve the ablation zones, and if you go to a four-degree grid, it's not clear you could do that.
J
H
Yeah, yeah, I agree with you. Are you aware of the JG/BG spin-ups that Marcus Löfverström has done in order to spin up the coupled system? I'll send you a link to it. That's one option where you can reduce the overhead and still spin up the coupled system. Okay.
L
A
Thanks, Adam. Hui Wan has had her hand up. Yeah?
R
Can you hear me? Can you hear me okay? Great, thank you. So, responding to the coupling question: first of all, I certainly agree with Thomas Toniazzo that we need to pay more attention to numerics, and I also agree with the comment that, well, in CAM and also the E3SM atmosphere model —
R
— we can do a lot better than how the coupling is done right now, if we want to keep the relatively long time-step sizes. My colleagues and I have done some work, and we certainly found — so, talking about those process pairs —
R
— that some of them are rather sensitive to the coupling frequency, but some seem not to be. For example, Julio, you mentioned the dribbling: we realized, for example, that letting the CLUBB and MG sub-cycles know the radiation and dynamics tendencies at higher frequency definitely makes a big difference. But what has been puzzling for us was that the other way doesn't seem so important right now: radiation doesn't seem to need to know what's going on with the other processes. I think that has an important implication.
R
A
R
A
Thanks, thank you. It sounds like Colin has a really sort of related comment. Do you have anything that you wanted to add, Colin?
I
No, I mean, I think it's been said pretty well so far. The one thing I keep coming back to is: obviously, in a perfect world, thinking about coupling the component models — like the land surface, the ocean surface — such that the values that are passed back are more physically consistent is a good thing. But I'm not the expert to say whether it's quite so simple as just turning up the coupling frequency, because I think we want to consider what these other component models are also doing on these time scales, just to make sure that's valid. But I wholeheartedly agree that the more we've looked at this with CLUBB, the more it's become apparent that there are definite avenues — low-hanging fruit — for improvement in this space.
H
But I'll mention that one of the things that really tripped up the BGC working group in the development of CESM2 is that we discovered at one point that CAM was not conserving CO2 tracers, that the overall numerics were not conservative. And then, when a fix was put in, we found out that constancy preservation was also broken. I'm not sure how we could modify the development process to make sure that these, what I would consider basic, numerical properties don't get broken in the future.
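One way to guard against such regressions is to test the two properties directly in an automated suite. As a minimal sketch, on a toy 1-D periodic flux-form upwind advection operator (not the CAM transport code), both tracer-mass conservation and constancy preservation can be asserted in a few lines:

```python
# Toy sketch (1-D periodic flux-form upwind advection, NOT the CAM
# transport code) of testing the two numerical properties mentioned
# above: tracer mass conservation and constancy preservation.

def upwind_step(q, u, dx, dt):
    """One flux-form upwind step on a periodic grid (u > 0 assumed)."""
    n = len(q)
    flux = [u * q[i - 1] for i in range(n)]    # flux through left edge of cell i
    return [q[i] - dt / dx * (flux[(i + 1) % n] - flux[i]) for i in range(n)]

def total_mass(q, dx):
    return sum(q) * dx

n, dx, dt, u = 16, 1.0, 0.4, 1.0               # CFL = u*dt/dx = 0.4, stable
q = [1.0 if 4 <= i < 8 else 0.0 for i in range(n)]   # a tracer blob

m0 = total_mass(q, dx)
for _ in range(50):
    q = upwind_step(q, u, dx, dt)
assert abs(total_mass(q, dx) - m0) < 1e-9      # mass conservation holds

qc = [0.7] * n                                 # spatially constant tracer
qc = upwind_step(qc, u, dx, dt)
assert all(v == 0.7 for v in qc)               # constancy preservation holds
```

Because the flux form telescopes on a periodic grid, conservation holds by construction here; a scheme (or a bug fix) that breaks either property would fail these assertions immediately, which is the kind of guard the development process could add.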
H
I think the conservation issue was that different parts of the code made different assumptions: some parts used the dry tracer mixing ratio, while other parts assumed moist mixing ratios. So it was an incompatibility; I think it was a kind of coupling between the two. That's how I would understand it.
J
Yeah, but this error with the dry and wet, that's more, quote-unquote, a bug in the physics. I've long been wanting to change everything to dry mixing ratios, because it makes the code much less prone to these kinds of errors.
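The point about dry mixing ratios being less error-prone can be shown with a toy budget. This is an illustrative sketch with made-up numbers, not CAM code: when condensation removes vapor, the tracer's dry mixing ratio is invariant, while code that stores a moist mixing ratio and later pairs it with the updated moist air mass silently changes the implied tracer mass.

```python
# Illustrative sketch with made-up numbers (not CAM code): dry vs. moist
# tracer mixing ratios when the water vapor mass of a cell changes.

m_dry    = 100.0     # dry air mass in the cell (arbitrary units)
m_vap    = 2.0       # water vapor mass
m_tracer = 0.5       # tracer (e.g. CO2) mass

q_dry   = m_tracer / m_dry             # dry mixing ratio
q_moist = m_tracer / (m_dry + m_vap)   # moist mixing ratio

# Condensation removes half the vapor; the tracer itself is untouched.
m_vap_new = 0.5 * m_vap

assert m_tracer / m_dry == q_dry                   # dry ratio: invariant
assert m_tracer / (m_dry + m_vap_new) != q_moist   # moist ratio: changed

# A routine that stored q_moist and later multiplies by the *updated*
# moist air mass infers the wrong tracer mass: conservation silently breaks.
m_tracer_wrong = q_moist * (m_dry + m_vap_new)
assert abs(m_tracer_wrong - m_tracer) > 1e-3
```

Using dry mixing ratios throughout removes this failure mode, since the denominator no longer changes whenever water is added or removed.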
A
So, Dave Bailey, did you want to comment?
S
Yeah, sure. Just following up on Colin's comments, I've been chatting with him a bit offline about this, but there's an interesting aspect about the subgrid scale.
S
In all of this, you know, we compute the fluxes of momentum, sensible heat, and latent heat over five different ice categories internally in the sea ice model, and I think the land does this sort of subgrid-scale flux computation as well. But then, at the end of the step, those are just merged into a single flux value for each grid cell, and that's what the atmosphere sees. In fact, if you've got an atmospheric grid cell with land, ocean, and sea ice, then you see the merge of all three of those.
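The merging step described here amounts to an area-weighted average. A minimal sketch under assumed numbers (not the actual CESM coupler code):

```python
# Toy sketch (illustrative numbers, not the CESM coupler) of merging
# per-surface-type fluxes into the single grid-cell value the
# atmosphere sees: an area-weighted average over land, open ocean,
# and the sea-ice thickness categories.

def merge_fluxes(fluxes, fractions):
    """Area-weighted merge; fractions must sum to 1 over the cell."""
    assert abs(sum(fractions) - 1.0) < 1e-12
    return sum(f * a for f, a in zip(fluxes, fractions))

# Sensible heat flux (W/m^2) computed separately for land, ocean,
# and five sea-ice categories (all numbers made up):
fluxes    = [30.0, 12.0, 5.0, 4.0, 3.0, 2.0, 1.0]
fractions = [0.20, 0.30, 0.20, 0.10, 0.10, 0.05, 0.05]

cell_flux = merge_fluxes(fluxes, fractions)
# The atmosphere sees only cell_flux; the subgrid contrast
# (30 W/m^2 over land vs. 1 W/m^2 over thick ice) is lost.
```

If the subgrid distribution matters to the atmosphere on these time scales, as the earlier comments suggest, this merge is exactly where that information is discarded.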
A
Okay, well, thanks everyone for your comments so far. We're already past 12, so I'm going to go ahead.
A
I hope everyone can see my screen, or the agenda that I'm trying to share. We're going to go ahead and adjourn the main session. Thank you to all the presenters and to all the participants for a good, lively discussion.
A
We won't go longer than 45 minutes. We won't go longer than we need to, but we'll stop after about 45 minutes.
A
So you can see the topics: what biases we should be tackling in CAM7; new science frontiers, that is, what we should emphasize in the future; and what exciting research we can do with the existing simple model configurations.
A
I hope we can arrange these breakout sessions efficiently, so I'm going to stop sharing, and Stephanie will be setting up these rooms. We'll probably take a few minutes for all of us to get our ducks in a row. But, as I said, the main room here will remain open for informal socializing until around one. Again, thank you very much to everyone for their participation, and I hope, you know...