From YouTube: CESM Workshop: Earth System Prediction Working Group
Description
The 26th Annual CESM Workshop will be a virtual workshop with a modified schedule on its already scheduled date. Specifically, the virtual workshop will begin with a full-day schedule on 14 June 2021, with presentations on the state of the CESM, by the award recipients, and by three invited speakers in the morning, followed by roughly 15-minute highlight and progress presentations from each of the CESM Working Groups (WGs) in the afternoon.
On 15-17 June 2021, working groups and cross-working groups will hold half-day sessions, some with presentations and some that are discussion only.
A
Great, all right, I guess, go ahead and get started. I'm Steve Yeager, and on behalf of my fellow co-chairs, Yaga Richter and Kathy Pegion, I'd like to welcome you to this Earth System Prediction Working Group session.
A
So we've scheduled seven 20-minute talks, leaving about five minutes each for Q&A. We'll have a break around 10 a.m., come back for a couple more talks, and then an update from the co-chairs, and end, hopefully, before noon. I just wanted to point out that we did have one change in the schedule: Judith Berner is now going to be speaking at 9:15. She switched spots with Dan Amrhein, so he will speak right after the break.
A
So without further ado, I'll stop sharing and we'll jump right in. Nick Davis is going to tell us about sudden stratospheric warming predictions.
C
Looks good. Okay, thanks. Hi everyone, my name is Nick Davis. I'm in the WACCM group in ACOM, and normally I would be giving a talk in the Whole Atmosphere session, but because of the amazing groundwork laid for this Earth system prediction framework by Yaga and her team, Sasha and Jim Edwards, I've been able to come visit you today and talk about what we think is evidence of the at least somewhat limited impacts of the January 2021 sudden stratospheric warming.
C
So the first question is obviously: what is a sudden stratospheric warming? On the left I'm showing you this plot from NCEP for the 2020-2021 winter.
C
These events are driven by wave forcing from the troposphere, and they tend to have some degree of predictability one to two weeks in advance.
C
The plot on the right is an interesting volumetric way of looking at a sudden warming: in red they're showing you the positive winds in the stratosphere, and in blue the negative winds. You can see that the polar vortex begins as this really organized vortex and then breaks down into these irregular shapes before finally recovering.
C
These events are typically followed by a particular pattern of surface impacts. They're often associated with a negative phase of the Northern Annular Mode, indicating an equatorward shift of the jet and a warm polar cap. These are plots from Deng et al., showing on the top the zonal wind anomalies in the dashed and solid contours, with the shading showing the temperature anomalies, in the month after a sudden warming. You can see that the stratosphere and upper troposphere warm quite a bit.
C
That's associated with an equatorward shift of the jet, and at the surface, the boxed regions indicate some of the most robust areas where we expect particular weather regimes, including a lot of cold weather over Eurasia and parts of North America, and a warm Atlantic, in particular Greenland and northeastern Canada.
C
In terms of predictability, it can actually degrade somewhat in Europe. We expect a particular kind of cold in Europe; however, because of the storminess, it can actually be very difficult to get accurate day-to-day weather prediction there, but predictability improves in North America, Asia and the Middle East. So sudden warmings are cases where we think there's a window of opportunity to gain increases in, or at least a better understanding of, predictability in the month or two ahead.
C
So in this study we used the CESM2 Earth system prediction framework, which has been introduced a few times. It's CESM2(WACCM6) with prognostic atmosphere, ocean, sea ice and land components, and the atmosphere and ocean are initialized by nudging, the atmosphere to the NASA GEOS dataset. Essentially, every week, right before the forecasts are kicked off on Monday, the model is nudged to NASA GEOS and then initialized with the random field perturbation method.
C
The ocean is reinitialized every five years to JRA-55 to take care of some drift, and the land is spun up with NOAA NCEP CFS version 2. There is a set of five-member hindcasts available, and right now there are 21-member (or at least there were, over the winter) real-time ensembles initialized every Monday.
C
All you can really do is characterize the variability and the predictability in general. It's probably not good to look at a surface pattern after an event occurred and say, "oh, this was due to the sudden warming," because a lot of other things were going on in the Earth system. It's not even clear that the weather regime that drove the sudden warming didn't also set up the weather after the sudden warming.
C
So essentially, we looked at four separate 21-member ensembles here, with varying degrees of initialization, to try to understand what the robust impacts of the sudden warming are.
C
So the first thing I want to do is show you surface temperatures; that's the basic thing we care about when we think about weather impacts from a sudden warming. We're using NOAA CPC as our observational dataset, so it's Northern Hemisphere land coverage here, and on the right is the standard forecast. These are all climatological anomalies, so everything has had the climatology removed.
C
In terms of the observations, what we see here is some cold over northern Europe and parts of Eurasia, and then a lot of warmth over Canada and the North Atlantic, and the standard forecasts did a pretty decent job of capturing this over the ensuing two weeks after the sudden warming.
C
The standard forecast, where everything is initialized, did predict a cold Eurasia and a warm northern Canada and North Atlantic, which is somewhat typical of the negative NAM. We reproduced about 50% of the background variability when we scrambled the stratosphere, so that we don't actually allow a sudden warming to occur in the forecasts.
C
So essentially, if you wanted to understand surface temperatures in the week or two after the sudden warming, you did not need the sudden warming to occur; it basically played no role at all. It also played no role at all when you scramble the tropospheric initial conditions, where you basically do not initialize the troposphere.
C
You've got cold over North America and some warmth over Eurasia. This is actually quite similar to what you get with the scrambled atmospheric initial conditions, which tells you that the surface boundary conditions are actually forcing some of the anomalies we're seeing in the scrambled-tropospheric-initial-condition forecasts.
C
So moving on, we can push the envelope now to weeks three to four, a medium, subseasonal timescale. The observations indicate a much colder Siberia and some warmth over Europe and Asia, and again the sustained warmth over the North Atlantic, parts of Greenland and northeastern Canada; again a bit like a negative NAM.
C
This is somewhat reproduced by the forecasts, although they had a little too broad an area of cold over Siberia. The r-squared really plummets, but that's partly because we're now getting so much divergence in the ensemble members that the magnitudes and the patterns are just getting washed out. The forecasts with no sudden warming, that is, the scrambled stratospheric initial conditions shown here in panel D, show essentially the same thing as the standard forecasts.
C
So even three to four weeks afterward, about eighty percent of the surface temperature response in the forecast can be understood without having to consider the sudden warming, and certainly the bulk of the pattern appears consistent with the troposphere evolving alone, without the influence of the sudden warming.
C
All right, we've got a very similar correlation with observations, which really indicates some issues with the drift of the ensemble members over time. As before, the forecast with scrambled tropospheric initial conditions bears some resemblance to the forecast with scrambled atmospheric initial conditions.
C
It's actually much closer to the original forecasts. The forecast with scrambled tropospheric initial conditions, where we just let the troposphere evolve in response to the sudden warming, actually shows decent skill; we're still getting the same skill we did with the standard forecast. So maybe 30% of the temperature variability at the surface was governed by the sudden warming in weeks three through four, but I just wanted to emphasize: you really would have gotten essentially the same large-scale pattern if your forecasts did not take the sudden warming into account.
C
So dynamically, it's worth pulling back to understand why we're getting these different responses, and I think this gives you a good perspective on why the stratosphere was important in a way, even if you didn't see it so much in the correlations. This is showing you the polar cap geopotential height anomaly, the average of geopotential height, standardized, from 60 to 90 north, which is a good indicator of some of these large-scale dynamics in the aftermath of the sudden warming.
C
On the top is MERRA-2, and it shows you these high heights in the stratosphere at 10 millibars, indicative of the warm polar cap we expect after a sudden warming. It propagates downward to the lower stratosphere at about 100 millibars and periodically couples with high heights at the surface and in the troposphere for nearly six weeks before finally dissipating. The standard forecasts capture this out to about four weeks, at which point they do not pick up the re-intensification of the high heights at the surface seen in observations.
C
What you can see is that when we run with scrambled tropospheric initial conditions, the stratospheric anomalies really don't descend that far into the lower stratosphere, and they tend to dissipate much more rapidly; by January 18th you're already losing those polar cap height anomalies in the stratosphere, but you're still getting the high polar cap heights at the surface. When we run with scrambled stratospheric initial conditions, we don't have a sudden warming.
C
We've got the complete opposite, very low heights in the stratosphere, but the troposphere actually induces some high heights in the lower stratosphere here. So we think what that means is that you need both the troposphere and the stratosphere evolving simultaneously, with these positive and negative signals, to drive the downward propagation and sustained high polar cap heights.
C
In the aftermath of the sudden warming, then, what caused the extreme cold over the U.S.? If we look at the first four box-plot indicators on the right, showing the temperature anomalies averaged over North America during this cold event, none of the forecasts we've shown so far really suggests that there's going to be any cold over North America; none of them is statistically distinguishable.
C
So instead we ran the model out initialized on February 8th, rather than January 4th before the sudden warming, and the standard forecast actually shows some hint of moderate cold over North America. When we scramble the tropospheric initial conditions and allow the stratosphere to govern the troposphere independently, we actually find that this simulation predicted exceptional warmth over North America.
C
So what we think this means is that there's no evidence to suggest the extreme cold over North America was driven by the sudden warming, and in fact there may even be evidence that, in the absence of the tropospheric influence, it would have acted to warm North America over the same period.
C
I know I'm running short on time, so I'll leave this up and just say that I think forecasts with different aspects of the Earth system left uninitialized can provide near-real-time attribution of subseasonal weather, and that, despite all the signatures of downward coupling, the sudden warming didn't really have a direct impact on surface temperatures. But stratosphere-troposphere coupling did seem to work to sustain the NAM, and I think both are critical to predictability.
C
A
Thanks, Nick, that was great and right on time. So we've got about five minutes for questions; you can raise your hand or post in the chat.
C
I'll keep my slides up, just so I can go to whichever slide I need for questions.
C
Yeah, that's the loaded one, wow. I don't have an easy answer for that, and the smartest people I know who try to understand stratosphere-troposphere coupling in the aftermath of a sudden warming are still working very hard to understand it.
C
There are arguments for how mass is moving around and inducing anomalies; there are arguments for wave coupling, and how the sudden warming may shut off wave activity entering the stratosphere, which then feeds back on the troposphere. Those are just a couple of ideas out there, but I don't think there's any easy answer yet.
D
I was just wondering what your take is on that; there have been a couple of papers out recently, I think Simon Lee and Wright, and they have argued that this warming was important for the extreme cold over Texas, and then other things over Greece. Would you say that that's not correct, and that they've just inferred it without actually showing the causality?
C
Yeah, I mean, I think the problem is this: I'm familiar with the Wright paper, I'm not familiar with Simon's. In Wright they basically examine some reanalyses and some remotely sensed winds, observations and things, and the problem is if you're trying to get at causality.
C
I don't see how, with standard model output or with obs, you have any way of untangling: was it the tropospheric forcing that drove the SSW, and that ultimately also set up that circulation regime in the aftermath of the sudden warming, or was it the sudden warming itself? Statistically, you can do an analysis of a lot of SSWs and say, okay, in the aftermath of these we expect cold air outbreaks over the US; but when it comes to an individual event...
C
That was observed, but I think the differential here, between the forecast initialized right before the event, the standard forecast, and the one where you've let the stratosphere evolve, where you've put in the sudden warming but left the troposphere uninitialized: the fact that those don't get the extreme cold, to me, says either the extreme cold was so dependent on so many things lining up besides the sudden warming that it's going to be hard to say anything either way about it, or the sudden warming didn't cause it.
A
I was interested in that figure you showed, the Hovmöller of atmospheric temperature across your experiments. It showed that the scrambled-tropospheric signal is much weaker, even right at the start, right in the stratosphere. So you're initializing the stratosphere, but already within a day you've lost a lot of the signal in the stratosphere, and I guess my question is, number one:
A
Are you having to drift-correct these initialized predictions, and if so, can you really assume that these scrambled-IC experiments can be de-drifted the same way as all of your other hindcasts? Because you've kind of created this Frankenstein initial condition that's going to have all kinds of transients that aren't common across the hindcasts. So I don't know how you get around that problem.
C
Yeah, so the issue is, just to bring it back, the way we initialize is that you just don't nudge part of the atmosphere, and so you're letting the model adjust; in that case you're letting the troposphere adjust to the stratosphere. Obviously you're not correcting it for anything, but at least at initialization there's no shock to the system. But I understand your point: there's no drift correction here.
C
It is a bit of a Frankenstein experiment, because you're making it so that your initial conditions in the troposphere are really not going to be representative of what was going on at the actual time of initialization.
C
That said, I do have to say: despite the fact that these polar cap anomalies look so different, the temperature anomalies don't necessarily look so different. Whenever you break something down to the zonal mean it can look like there's a lot of difference, but if I go back even to the surface temperatures, in some cases things are looking quite similar. But I understand your point.
C
It can be difficult to interpret this, and I'm not even sure how you would correct for some of these issues, because we're trying something so unconventional; it's a little dicey. We understand we have to be careful with over-interpretation, is what I'm trying to say, but I think if there were a robust effect here from these things, it should probably come out regardless of how hard we beat the drift down.
B
E
So, as the screen's coming up, thanks for the invite. I'm going to talk a little bit more from a hydrologist's perspective about predictability, and about what you could do if you had skillful climate predictions, connecting them to water management, which is an important new theme for NCAR as it tries to do more actionable science. I probably don't need to dwell on the importance of S2S hydroclimate predictions for myriad applications.
E
The general goal of research over the last two decades or so, and particularly of some recent projects that we've had with Reclamation and the Corps of Engineers, is to try to improve the relevance of climate forecasts through watershed-scale post-processing and analyses; to incorporate climate information, through a sort of conditioning process, to generate streamflow forecasts that reflect that information; and then to take it all the way to the final goal of assessing benefits for water management.
E
I'll just talk about two examples of this work that have been going on recently at NCAR, in the Upper Rio Grande basin and the Colorado River basin, but I want to say a few words about predictability first. As hydrologists, or land people focused on the land surface, we are very interested in the interplay between the effect of land initial conditions on the hydrologic outputs, that is, the predictability stored in the watershed, and the meteorological predictability, weather forecasting and climate forecasting, that balances out this initial-condition effect.
E
We also do the kind of attribution studies that Nick was just talking about, and in particular we use an ensemble framework built on a very common technique for streamflow prediction called ESP, or Ensemble Streamflow Prediction, in which you have a "perfect" initial condition, obtained by spinning up a model to initialize the state, and an uncertain future condition, obtained by driving it forward with boundary forcings from various sources.
E
Those could be historical climatologies, or they could come from a climate model. What you see in this kind of framework, so if this is predicting April-through-July spring runoff, and you start predicting it in October and move forward all the way through July, is how the balance between the initial conditions and the climate information shifts over the year.
E
We can take this into analyses where we look at gradients of forecast skill as they relate to gradients of uncertainty in the seasonal climate forecasts or in the initial conditions. Just note that there are times of year, such as early in the year if you're forecasting out into April, when the seasonal climate forecast uncertainty has almost no impact on the forecast skill, whereas as you get into the spring melt period, the climate forecast uncertainty ends up having a big influence on how uncertain the streamflow forecasts are.
E
I don't really have time to dwell on that kind of study, but it's been picked up by a lot of forecasting agencies to try to understand where investments in seasonal climate forecasts versus initial-condition improvement will have a greater benefit for runoff forecasting skill. In any case, the fact that seasonal climate forecast skill is important in the spring is good, because that's when a lot of the forecasting agencies are trying to make predictions for spring runoff, and so they're very interested in climate forecasts.
E
At that time. So I'm going to talk about one study that was statistically based, in the Rio Grande, which is south of us, in southern Colorado and northern New Mexico. This was work led by Flavio Lehner, where we looked at prediction for a number of gauges in this basin and noted that, over time, if you look at forecast error, there tended to be an over-forecasting tendency in the recent period of history versus an under-forecasting tendency earlier on.
E
So the question was: to what extent is this influenced by climate, and potentially by trends in climate, such as in temperature, and their impact on runoff efficiencies in these basins?
E
We did a study following another technique that's commonly used for runoff prediction, which is simply regression using land predictors, such as snow water equivalent and prior rainfall or flow, to predict future runoff, which in hydrology terms is Q. In this particular framework, we asked whether you gain by including information about climate, which you can take from the North American Multi-Model Ensemble (NMME) and/or other forecasting systems, like ECMWF's System 5, which arguably capture this temperature trend to some extent.
E
I'll talk about another general approach to forecasting, which is to use seasonal climate forecasts from climate models, a dynamical approach; in this case we focused on subseasonal-to-seasonal predictions. One of the first things to note is that, as these predictions are produced by operational centers, they come in a very particular space-time product set, such as the one on the right.
E
And yet these water managers are focused on very small basins, at the largest a regional scale versus a local watershed scale. They're focused on watersheds, and on how the watersheds behave and respond to climate, and in this particular watershed right now there's a very high-priority need for information to predict inflows in the Colorado River basin.
E
So one of the first steps we took was to create watershed-scale analyses that we ran in real time, using CFS version 2 and North American Multi-Model Ensemble forecasts, quantifying their skill at watershed scales and doing different kinds of post-processing, which I'll mention really quickly.
E
We did work to assess whether post-processing could add some skill. Rather than looking at the precipitation and temperature outputs of the climate forecasts directly, we decomposed the climate outputs into different principal components, using a partial least squares regression technique, and then trained models. In general we find that post-processing can add some skill, particularly in this case for temperature, where in this particular watershed you can definitely capture more signal using this approach, less so for precip.
E
If you look around the West, or around the country, at this kind of technique (and there are many, many ways of post-processing), you can definitely see that there are places where you can strongly add predictability, and then there are other places, shown in gray here, where the benefits of post-processing are more mixed or don't help. That's just for this particular technique; it wasn't a large-scale study, but more a proof of concept.
E
So then, moving on from there: with these post-processed climate forecasts, can we condition the reservoir inflow forecasts in a basin at the scale of, say, the Colorado basin? Here, this is showing the upper Colorado River, with inflows into Lake Powell, one of the big storages serving the Southwest's water needs. The question is whether we could simply condition these streamflow forecast ensembles using information from the North American Multi-Model Ensemble, and we used an analog approach that I won't go into.
E
I also looked at the influence of using climate information at regional scales versus even smaller scales, to see whether the climate models would have successfully resolved gradients in climate forecast anomalies in the basin that would be helpful, such as north-to-south, wet-to-dry, or warm-to-cold conditions.
E
In general, what we found in this kind of study is that, compared with the basic ESP technique I mentioned earlier, which doesn't use climate information, these analog techniques do tend to lower the error in the forecasts of the inflows at this regional scale for this reservoir; the blue and purple scatter plots of the skill metric, RMSE, are lower than the ESP ones. And then the question from there is: so what? What if you can improve the inflow forecasts for these large-scale reservoirs?
E
She has been building this Colorado basin streamflow forecast testbed, in which you can intercompare different streamflow forecasts and operating policies, run them through a management model called MTOM, and then assess, or sort of benchmark, different approaches. This is just a schematic showing what that model looks like: it resolves 12 major reservoirs in the basin, you feed in time series of predictions and reservoir storages, and you look at how those evolve under current operating rules. So we used this basin to look at that.
E
This is an example of the kind of thing you look at in a reservoir-operating testbed. The black line is the observed lake level, or pool elevation, in Lake Powell over the years; you can see it varies quite a bit, and there are certain thresholds, shown as dashed lines, that are really critical for the operations.
E
For instance, this one here on the bottom triggers certain kinds of releases that can take place in shortage conditions. So it's really critical to be able to know when you're going to cross one of these thresholds, because of the large-scale impacts on stakeholders, and in general, running these tests, we find that the climate-informed inflow forecasts do lead to improved predictions of pool elevation.
E
For instance, again with the colors from before, we find that we can lower the error of these pool elevations even forecasting out two years, and that's a significant result. So I'll just stop with that finding and note that there are really ample application needs for climate forecasts with the stakeholders we deal with, in the S2S-to-sub-decadal range; there's a lot of interest in forecasts going out two years, even five years. Forecasts do have usable skill in the context of water management.
E
You have to think of it in terms of hydrologic predictability, and of when initial land conditions trump the climate impacts or climate forecast information. Temperature predictions alone, which are more skillful than precip, can be impactful in snowmelt-dominated systems such as the western US. This kind of tailoring and post-processing can provide some benefit, so it's worth doing these formal testbeds for benchmarking.
E
These exercises, I think, are really useful for raising awareness about, and helping to quantify, these benefits for stakeholders, and having integrated teams, where you work with climate experts, hydrology experts and water-systems experts, is key to keeping this research moving forward. So that's it for me.
B
I'll ask a question, Andy: do you have a good idea of the format in which stakeholders like to get the information? You know, we're running weekly forecasts and we put the data online. How do they get their information, and how do you think they would like to get it, if we have information to give?
E
A lot of the stakeholders are quite sophisticated, so they will have computing systems set up to extract data in an automated fashion, but they tend to get it from providers that are almost intermediaries to the climate forecast group. So if it comes down in the form of streamflow, it will come through a streamflow forecasting center that is able to download and process the climate forecasts as inputs into its systems.
E
I do think there is a need, going back to these kinds of tailoring applications; I think there's a lot of value in bringing climate forecasts that are just generally served up on a global or national basis down into a watershed context, adjusting for watershed climatologies and things like that, so the anomalies are calculated against something that's a little more familiar to the user. In fact, NOAA ESRL, a group over there, has started to pick this up and try to make products like this.
E
So I think tailoring is very helpful, but I also think there's a lot of technical capability among the stakeholders to pull data in any format.
F
G
One... I don't know. Yeah, that's perfect, thanks. Thank you so much for giving me the opportunity to speak. I thought I should talk about a Python-based verification package for initialized forecasts that we're developing.
G
We are developing this in the context of S2S forecasts, but we plan to also use it for some of the multi-annual runs. The work that I'm going to show, these Python-based Jupyter notebooks, has been almost completely done by Abby Jaye, but it wouldn't have happened without Yaga and Sasha, and on the software side we are leveraging previous efforts by Aaron Spring and Riley Brady.
G
The verification package is being developed using NOAA funds, under the grant shown below. I think all across NCAR, but also at other centers, there is a new approach to diagnostics, and maybe even to doing science, and the idea is to have distributed development that is open to the public. Typically the software is somewhere on GitHub, where you can share it, develop it, and make a pull request so that your development goes back into the trunk and can be distributed to the greater community.
G
There's a person, or hopefully not a person but some group of people, that has control of what goes in or not. Then Python-based Jupyter notebooks are being developed as a series; a nice thing about the notebooks is that they're very good for debugging and for looking at intermediate steps, and they are used for exploration.
G
There is now the opportunity to program parallel code with Dask, including parallel I/O, so the processing is a lot faster than what was done with last-generation diagnostic tools. Obviously there are related efforts: one is ESDS, the Earth System Data Science initiative here at NCAR; then there was a session in the Software Engineering Working Group yesterday about diagnostics; and in AMP we're developing the AMWG diagnostic framework.
G
Adf
and
brian
gave
a
talk
about
this
in
the
awg
working
group,
and
so
this
is
to
develop
a
mean
and
variability
diagnostics
for
the
atmospheric
component,
but
we're
very
happy
to
also
work
then
across
the
lab
to
have
also
the
other
system
components
included.
G
So
what
is
different?
Why
do
we
are
we
developing
something
else?
Well,
we
have
in
initialized
forecasts
we
have
that
e-time
dependence-
and
so
here
I
just
randomly
grabbed
two
plots,
but
on
the
left
side
is
anomaly.
Correlations
or
skill
score
shown
as
function
of
focus
lead
forecast
day.
G
So
these
are
the
forecast
lead
times
and
then
here
on,
the
right
is
from
a
paper
by
yaga,
led
by
yaga,
where
we
have
here
the
anomaly
variation
here
of
the
neo
index,
and
this
is
done
as
a
function
of
forecast
weeks
and
here
in
the
sos
com
context.
We
often
have
bi-weekly
averages
of
a
metric,
and
so
this
lead
time
dependent
is
typically
not
included
in
climate
diagnostic
packages,
and
so
we
we
want
that
here
and
we're
developing
it
here,
but
we
definitely
want
to
feed
into
the
greater
effort
in
a
collaborative
way.
G
So
what
does
this
package
do?
Well,
it
reads
in
and
pre-possess
its
forecasts
and,
and
so
the
idea
is
that
all
of
the
heavy
lifting
should
be
done
so
that
a
user
graduate
student
or
whoever
wants
to
do
science
with
it,
can
then
not
spend
the
time
with
all
of
the
pre-processing
and
reading
the
data,
but
write,
go
and
and
look
at
the
signs
they
have
in
mind.
So
what
we
want
to
do
is
provide
a
minimal
number
of
deterministic
and
probabilistic
skill
and
the
idea
is
to
have
the
workflow
there.
G
Another
thing
is
that
there's
temporal
averaging
so,
for
example,
in
the
sos
world,
you
typically
compute
weekly
or
bi-weekly
averages,
and
so
so
you
have
some
temporal
averaging
and
then
we
are
computing,
deterministic
and
probabilistic
skill
scores
and
we
want
to
have
a
minimal
set,
which
is
a
nominee
correlation
root,
mean
squared
error.
So
this
is
done
in
a
sample
setting.
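As a rough illustration of these deterministic metrics, here is a minimal NumPy sketch (not the package's actual code) that computes the anomaly correlation and RMSE of the ensemble mean as a function of lead time; the array layout is an assumption for the example.

```python
import numpy as np

def verify_leads(fcst, obs):
    """Deterministic skill as a function of lead time.

    fcst : (n_inits, n_members, n_leads) forecast anomalies
    obs  : (n_inits, n_leads) verifying observed anomalies
    Returns (acc, rmse), each of shape (n_leads,).
    """
    ens_mean = fcst.mean(axis=1)  # ensemble mean, shape (n_inits, n_leads)
    # anomaly correlation across initializations, one value per lead
    fa = ens_mean - ens_mean.mean(axis=0)
    oa = obs - obs.mean(axis=0)
    acc = (fa * oa).sum(axis=0) / np.sqrt((fa**2).sum(axis=0) * (oa**2).sum(axis=0))
    # RMSE of the ensemble mean, one value per lead
    rmse = np.sqrt(((ens_mean - obs) ** 2).mean(axis=0))
    return acc, rmse
```

With synthetic data whose noise grows with lead, the curves behave as in the talk's plots: correlation drops and RMSE grows with lead time.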
G
So
we
want
to
have
the
spread
we
want
to
have
and
a
typical
metric
used
in
s2s
is
the
rank,
probability
skill
score
for
tercei
forecasts,
so
that
is
to
assess,
if
I
say
my
week,
four
six
forecast
is
either
in
the
highest
tertile
or
in
the
lowest
hostile,
so
it's
above
or
below
normal.
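A minimal sketch of a tercile ranked probability skill score, again in plain NumPy rather than the package itself: category probabilities are taken as member counts per tercile, and the reference forecast is the climatological 1/3-1/3-1/3.

```python
import numpy as np

def rpss_tercile(fcst_members, obs, lo, hi):
    """Ranked probability skill score for tercile forecasts.

    fcst_members : (n_cases, n_members) forecast values
    obs          : (n_cases,) verifying values
    lo, hi       : climatological tercile boundaries
    """
    edges = np.array([lo, hi])
    # forecast probability of each category = fraction of members in it
    cat_f = np.digitize(fcst_members, edges)          # 0, 1 or 2 per member
    p = np.stack([(cat_f == k).mean(axis=1) for k in range(3)], axis=1)
    # observed category as a one-hot vector
    o = np.zeros_like(p)
    o[np.arange(len(obs)), np.digitize(obs, edges)] = 1.0
    # ranked probability score: squared distance of cumulative probabilities
    rps = ((np.cumsum(p, axis=1) - np.cumsum(o, axis=1)) ** 2).sum(axis=1).mean()
    p_clim = np.full(3, 1.0 / 3.0)                    # climatological reference
    rps_clim = ((np.cumsum(p_clim) - np.cumsum(o, axis=1)) ** 2).sum(axis=1).mean()
    return 1.0 - rps / rps_clim
```

A perfect forecast scores 1; a forecast no better than climatology scores near or below 0.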
G
So
here's
a
oops
here
is
a
snapshot
from
abby's
github
page.
So
it's
a
number
of
that.
The
workflow
is
it's
a
number
of
different
jupyter
notebooks,
the
first
one
launches
the
dusk
cluster.
G
The
second
one
is
we're:
building
we're
reading
in
the
forecast
and
we're
building
our
own
tsar
file
on
tsar
store
by
combining
all
the
forecasts
that
are
typically
in
net
cdf
forecast.
G
This is normally where people in the ESDS world would share their Jupyter notebook, scroll through it, and show live results. I did not dare to do that, plus I'm personally having lots of technical problems with JupyterHub, because there's a bit of a learning curve with new software. But this here is the top of the verification script: first you read in the libraries you need; then again you have this Dask server running; and here is where the user can specify which verification data they want, which domain, and which metric to compute.
G
It
then
goes
and
opens
these
pre-processed
files,
which
I
know
is
our
files
and
does
the
verification
and
something
I
will
talk
later
about
what
you
see
here
is
so
here
we're
looking
at
three
models:
csm1
csm2
and
wacom-
to
compare
them,
but
some
of
them
are
started
on
mondays
and
some
of
wednesdays.
So
we
had
to
work
around
these
different
start
times
and
and
have
here
verification
data
sets
that
are
aligned
with
those
different
lead
times.
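The Monday/Wednesday alignment problem can be sketched with plain Python dates: map each start date to the valid dates it reaches, then verify only on dates both systems cover. This is an illustrative sketch, not the package's implementation.

```python
from datetime import date, timedelta

def valid_dates(inits, leads):
    """Map each initialization date to its forecast valid dates."""
    return {init: [init + timedelta(days=l) for l in leads] for init in inits}

def common_valid(inits_a, inits_b, leads):
    """Valid dates reachable by both sets of start dates, for aligned verification."""
    a = {d for v in valid_dates(inits_a, leads).values() for d in v}
    b = {d for v in valid_dates(inits_b, leads).values() for d in v}
    return sorted(a & b)
```

Restricting the verification to the common valid dates keeps the comparison fair even though the two systems reach a given date at different lead times.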
G
So
I
think
you
all
know
what
jupiter
books
are,
but
basically
they
have
these
different
cells
and
then
you
can
click
through
them
and
then
you
can.
While
you
click
through
them,
you
can,
for
example,
visualize
a
map
of
a
anomaly
correlation,
so
this
was
computed
with
this
package.
It's
in
the
paper
yaga
submitted
about
csm2
performance
to
james,
and
so
here,
for
example,
we
show
spread
and
error.
G
The RMS error is shown in the solid bars and the spread in the dashed bars, for the three different models in three different colors, and we show both five and ten members, because in this particular comparison some, WACCM, were only run with five members, since it would have been too expensive otherwise. So you can see, for the different lead weeks, the skill score for global land and for North America; the top row is the winter season and the bottom row the summer season.
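The spread-error comparison in those bars can be sketched as follows, a NumPy toy with an assumed array layout: in a statistically reliable ensemble, the ensemble spread and the RMSE of the ensemble mean should roughly match at each lead.

```python
import numpy as np

def spread_and_error(fcst, obs):
    """Compare ensemble spread with ensemble-mean RMSE per lead.

    fcst : (n_inits, n_members, n_leads) forecasts
    obs  : (n_inits, n_leads) verifying values
    """
    ens_mean = fcst.mean(axis=1)
    # RMSE of the ensemble mean, one value per lead
    rmse = np.sqrt(((ens_mean - obs) ** 2).mean(axis=0))
    # mean intra-ensemble variance, converted to a spread per lead
    spread = np.sqrt(fcst.var(axis=1, ddof=1).mean(axis=0))
    return spread, rmse
```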
G
In the S2S context these are large-scale atmospheric indices like the NAO, the PNA, the Madden-Julian Oscillation, and maybe certain SSWs, sudden stratospheric warmings. Here's a plot by Ferranti et al.: this is skill as a function of forecast day, and you can see that if the initial state projects onto an NAO-minus state, then you have longer-lasting forecast skill, you're more skillful at day 15, than if you start in a blocked state, in this particular case. So we plan to implement that.
G
So
how
are
we
going
to
use
this?
So
we
will
use
this
at
the
asp
summer
colloquium
this
year,
and
so
at
this
point
I
think
we
want
to
get
something
ready
and
then
students
will
be
using
it
and
I'm
sure
they
will
be
fantastic,
beta
testers,
finding
our
problems
with
it.
G
We
will
also
leverage
that
for
the
wmai
s2s
challenge.
So
it's
a
price
challenge
to
improve
sub-seasonal
to
seasonal
predictions
using
ai
that
some
of
us
plan
on
participating
in
and
then
we
have
started
initial
conversations.
It
would
be
really
nice
to
adapt
it
to
the
smile
forecasts
and
to
multi
annual
forecasts.
G
You want people to get involved early, so you have more capability and more people contributing; but obviously we all know there are plenty of bugs and difficulties when you're developing code like this that's meant to be general, and so I find it really hard to decide at which point to share the code.
G
Another
thing
about
this
distributed
way
of
working
is
how
to
make
sure
contributors
get
credit.
We
are
leveraging
here
a
lot
of
work
that
has
been
developed
by
other
people
and
so,
for
example,
the
large-scale
indices
have
been
coded
by
someone
and
the
alignment
of
the
verification
with
the
forecast
data
has
been
programmed.
So
I
think
this
is
a
more
general
issue
that
we'll
need
to
somehow
address
as
a
community,
and
then
here
I
have
the
clean
code
with
this
useful
code.
G
So
part
of
it
would
be
nice
to
make
very
clean
code
where
you
just
maybe
compute
one
metric,
and
you
make
one
plot.
However,
when
we
actually
using
it,
we
almost
almost
always
want
to
compare
several
models
and
and
look
at
the
differences.
G
So
so
you
end
up
making
code
that
was
clean
and
it
becomes
more
and
more
complex
and
less
and
less
readable.
I
mean
this
is
a
general
issue,
so,
for
example,
what
we
had
is
we
had
very
clean
code
doing
all
these
skill
scores,
but
then
we
went
to
do
bi-weekly
so
based
on
daily
data.
Then
we
went
to
weekly
averages
and
some
of
the
capability
in
klimpred,
which
is
an
existing
python
package.
We
were
leveraging,
didn't
work
any
longer,
and
so
we
had
to
actually
code
everything
rather
than
leverage
existing
packages.
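The weekly and bi-weekly averaging step that broke the original workflow is conceptually simple; here is a hedged NumPy sketch (the window boundaries are just examples, not the package's conventions):

```python
import numpy as np

def lead_window_means(daily, windows):
    """Average daily forecast data into lead windows.

    daily   : (..., n_days) daily values along the last (lead) axis
    windows : list of (start_day, end_day) pairs, end exclusive, e.g.
              [(0, 7), (7, 14)] for weeks 1 and 2, or [(14, 28)] for weeks 3-4
    Returns an array with a trailing axis of length len(windows).
    """
    return np.stack([daily[..., a:b].mean(axis=-1) for a, b in windows], axis=-1)
```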
G
So
so
this
is
sort
of
a
challenge
then
I
already
showed
through
the
monday
wednesday
start
date.
So
even
while
this
code
was
very
clean
now
that
it
is
useful
to
make
a
plot
that
we
might
want
to
put
into
a
paper,
it
has
become
pretty
lengthy
and
then
finally
funding
and
support.
B
G
Yes, we want to make these publicly available to everyone, including the universities. I'm hesitant to give a definite date, but after the ASP summer school would maybe be a good time. And this is not promising any support: obviously we'd be happy to tell you how to use the notebooks, but if something needs to be fixed, we might just not have any funding to fix it, or it would somehow need to be bounced through the community.
G
Yes,
I
definitely
want
to
coordinate
this
with
the
pandeyo
community,
and
maybe
we
can
talk
offline.
If
there's
anything
I
want
to
do,
but
I,
if
I
need
to
write
somewhere
or
how,
how
to
link
it
exactly
but
ryan,
abernathy
and
and
and
people
in
the
panjayo
community
are
aware
of
this
effort.
G
Oh,
the
next
yeah
I
just
looked
at
the
yeah
so
as
part
of
it
we're
also
trying
to
zara
file
the
s2s
simulations
with
csm,
and
we
have.
We
have
space
on
the
aws
cloud
and
that
would
further
make
it
faster
to
access
those
initialized
forecasts
and
for
people
to
then
compute
cloud,
possibly
in
the
cloud
or
not
at
other
centers.
G
I
think
that
would
be
very
interesting
for
people
in
developing
countries
who
often
just
don't
have
the
high
performance
computing
to
to
do
some
of
this
heavy
lifting
en.
G
And
so
but
again
we
don't
have
any
funding,
so
we
have
to
all
sort
of
do
it
bit
by
bit
with
existing
resources.
H
One thing I was wondering: you showed us plots of anomaly correlation that are spatially averaged. If you think about what Dr. Wood was showing, spatially distributed skill, say in one river basin versus a Colorado basin, is there a way these skill scores can be generated in a spatially resolved way, either on the regular grid or on a sub-basin grid?
G
Yes,
no,
so
this
should
be
readily
available.
The
idea
of
these
books
is
that
we
first
compute
all
the
skills
for
as
a
map
with
spatial
dependence
and
then,
after
that
we
are
subsetting
and
and
looking
at
different
regions
and
spatial
averages.
So
this
is
clearly
part
of
the
workflow.
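Subsetting a skill map to a region and averaging it is straightforward; here's a hedged NumPy sketch using cosine-of-latitude weights as an area approximation (the region bounds below are arbitrary examples):

```python
import numpy as np

def region_mean(field, lats, lons, lat_bounds, lon_bounds):
    """Area-weighted average of a (lat, lon) field over a rectangular region.

    Uses cos(latitude) weights as an approximation to grid-cell area.
    NaNs (e.g. ocean-masked skill values) are ignored.
    """
    la = (lats >= lat_bounds[0]) & (lats <= lat_bounds[1])
    lo = (lons >= lon_bounds[0]) & (lons <= lon_bounds[1])
    sub = field[np.ix_(la, lo)]
    w = np.cos(np.deg2rad(lats[la]))[:, None] * np.ones(lo.sum())
    w = np.where(np.isnan(sub), 0.0, w)   # drop masked cells from the weights
    return np.nansum(sub * w) / w.sum()
```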
G
What
we
had
to
do
is
I
was
saying
how
fast
it
is,
but
I
mean
we
are
not
software
engineers
we
had
for
some
of
the
plots.
We
actually
had
to
subset
the
space
like
north
america
before
because
it
became
computationally
so
expensive.
But
the
idea
is
these:
these
notebooks
you,
you
can
go
in
and
you
can
plot
your
spatial
maps
all
along,
and
then
you
say
this
is
the
region
I
want
to
average
over
at
the
very
end.
So
yes,
that's
all
available.
H
Yeah, thank you, and thank you for this opportunity. Today I'm going to talk about seasonal to multi-year soil moisture forecasting. I've talked about pieces of this manuscript at a couple of previous meetings; we finally decided to publish it, and now it has been published, so if you need more details you can look into that paper. I thank all the co-authors for their contributions.
H
I've
designed
this
talk
more
for
what
looking
talk.
So
what
are
the
remaining
question
and
what
what
opportunity
that
we
can
tap
into
in
terms
of
improving
the
forecast?
So
what
you
see
in
this
plot
is
the
potential
predictability
of
the
soil
moisture,
as
well
as
the
precipitation
of
north
america.
That
was
computed
using
the
dple
forecast,
and
you
see
the
the
brown
line.
That
is
a
small
mass
of
potential
predictability,
skill
that
is
significantly
greater
than
the
precipitation
predictability,
skill,
so
yeah,
while
taking
this
paper
to
the
review
process.
H
What
I
learned
that
yeah
this
this
is
known
to
the
community
that,
yes,
all
master,
has
a
higher
predictability
than
precipitation.
But
what
is
little
not
very
clear
is
why.
Why
is
that?
Why,
so
all
monster
has
higher
predictability
than
precipitation,
so
I
will
be
spending
more
than
fifty
or
six
percent
of
my
time
in
explaining
at
least
ends
and
not
the
answer.
H
And
then,
when
we
go
to
the
next
I'll
try
to
answer.
Have
we
reached
the
upper
limit
of
predictability
or
what
would
be
like
if,
if
we
get
a
little
bit
model,
a
perfect
model?
Okay,
so
we'll
we'll
go
ahead
and
get
started
with
the
with
the
first
one
if
the
voice
almost
has
a
higher
predictability
than
precipitation.
H
So
what
we
said
that
paper
is
the
land
processes,
including
soil,
moisture
memory,
emergence,
land
atmosphere,
interaction
can
transform
a
less
predictable
precipitation
signal
into
a
more
predictable
soil,
moisture
signal.
So
there
are
a
couple
of
different
hypothesis,
so
we'll
I'll
go
through
one
by
one
and
finally
also.
I
will
also
go
into
like
why
the
precipitation
is
somewhat
less
predictable,
at
least
the
time
skill
that
we
look
into
like
a
seasonal
to
sub
seasonal
time
scale.
H
Okay.
So
what
is
the
reemergence
hypothesis?
So
many
of
you
are
expert
in
the
ocean
science.
So
you
think
about
there
is
a
surface
layer
and
then
there
is
a
deep
layer
where
a
lot
of
memory
resides
and
that
gives
a
low
frequency
climate
variability.
So
similarly,
we
can
also
think
about
the
soil,
massive
processes
where
there
is
a
root
zone
where
lot
of
dynamics
take
place,
and
then
there
is
a
subsurface
layer
where
the
memory
is
stored,
and
this
has
been
confirmed
using
the
observation
so
one
on
the
top
right
panel.
H
You
see
the
soil,
moisture
observation
from
the
illinois
climate
network,
and
what
do
you
see
that
I
I
saw
the
climatology
using
the
contour,
color
contour
and
then
the
line
contour
or
the
variability,
and
then
the
depth
is
shown
on
the
left
hand
side.
So
you
see
there
is
a
lit
yeah.
The
on
the
in
the
deeper
soil
layer
that
is
0.5,
meter
and
below
variability
is
smaller,
and
then
there
is
a
memory
that
goes
like
more
than
a
year
and
when
we
get
into
the
top
soil
layer,
that's
a
root
zone.
H
There
is
a
lot
of
variability
and
then
there
is
a
seasonal
drying
as
we
go
into
the
summer
and
what
we
think
that
some
of
these
memories
that
that
get
stored
in
the
deeper
soil,
layer
or
subsurface
wall
there
can
be
get
tapped
into
during
the
early
spring
to
summer
season
to
give
the
year
two
year
memory,
and
we
did
some
of
the
statical
analysis,
anomaly
correlation,
and
we
do.
We
do
find
that
as
we
go
into
this
bottom
panel,
that
yeah
anomaly
correlation
decreases
as
we
expect
from.
H
They
are
one
process,
but
after
a
year
it
is
start
increasing
and
then,
on
the
right
hand,
side
it's
the
same,
but
it's
a
seasonal
cycle.
So
we
we
have
some
suggestion
that
there
is
a
soil,
moisture
emergence,
but
it
remains
a
hypothesis
means
we
have
not
proven
it
or
disapprove
it.
So
it
that's
a
concept
that
we
are
good
in
a
paper
two
years
ago.
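The diagnostic behind that panel, the lag autocorrelation of a soil moisture anomaly series, can be sketched in a few lines of NumPy. For a pure AR(1) process the curve decays geometrically, so a later upturn is the reemergence signature. This is an illustrative sketch, not the paper's code.

```python
import numpy as np

def lag_autocorr(x, max_lag):
    """Autocorrelation of an anomaly series at lags 0..max_lag.

    For an AR(1) process this decays as r(lag) ~ r(1)**lag; an upturn at
    longer lags would hint at reemergence from a slower reservoir.
    """
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    r = []
    for lag in range(max_lag + 1):
        a, b = x[: len(x) - lag], x[lag:]
        r.append((a * b).mean() / np.sqrt((a * a).mean() * (b * b).mean()))
    return np.array(r)
```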
H
Second
thing
you
can
think
about,
as
if
why
I
have
a
land
and
I
pour
a
water.
It
will
quickly
evaporate,
particularly
in
this
kind
of
heat
wave
going
around.
But
if
I
put
some
sort
of
vegetation
or
of
plants,
then
it
will
offer
the
resistance
so
that
will
decrease
the
flow
water
flow
into
the
atmosphere.
H
Similarly,
as
soon
as
the
land
will
evaporate,
it
will
decrease
the
gradient
of
of
the
vapor
pressure
deficient.
In
both
cases,
it
will
decrease
the
rate
of
flow
to
the
to
the
atmosphere.
In
that
case,
if
the
rate
of
flow
decreases,
then
there
will
be
more
residence
time
in
the
in
the
land
surface
and
then
that
that
giving
the
more
predictability
of
the
soil
moisture
compared
to
the
precipitation.
H
So
that's
landed,
transparent
coupling
hypothesis,
at
least
this
hypothesis.
We
have
tested
it
using
the
climate,
modeling
experiment
where
we
run
two
set
of
ensemble
experiment,
couple
and
uncoupled,
and
in
this
plot
you
see
the
difference
in
the
soil,
master
memory
between
couple
and
uncoupled
experiment
and
in
terms
of
the
like
one,
auto
correlation,
and
we
do
find.
There
is
a
increase
generally,
particularly
in
the
mid-latitude
region,
both
central
north
america
and
part
of
mediterranean
or
southern
europe
due
to
the
coupling.
H
Okay,
coming
back
to
the
why
precipitation
is
less
predictable.
So
in
this
case
we
we
looked
into
a
non-parametric
metric
that
is
called
affordant
entropy.
This
is
very
interesting
metric.
We
wanted
to
compare
both
precipitation
and
the
soil
moisture
using
similar
metric
and
if
the
each
the
number
is
smaller,
this
means
seasonal,
video,
sub,
seasonal
to
seasonal
variability
is
much
smaller,
whereas
if
number
is
larger,
then
that
means
it's
very
hard.
H
First
thing
I
wanted
to
bring
to
you
notice
is
this:
color
scale
is
one
order,
magnitude
greater
than
the
what
is
color
scale,
I'm
showing
for
the
small
moisture
in
the
bottom
final,
because
the
things
were
not
fitting,
so
I
simply
multiplied
it
by
the
time
and
you
see
there
is
a
lot
of
subseasonal
to
seasonal
variability
in
the
precipitation,
whereas
yeah
the
west
coast,
you
have
the
central
north
america,
whereas
the
soil,
moisture,
variability,
subseasonal
procedural
variability.
H
The
order
of
magnitude
is
smaller
than
the
precipitation
and
that
that
that
gives
some
situation
like
yep
because
of
very
high
variability
in
precipitation.
H
It
looks
like
it
is
less
predictable
in
our
prediction
system,
so
what
we
did
it
it
we
averaged
the
precipitation
forecast
or
for
all
the
12
months
and
in
the
top
panel
you
see
the
precipitation
potential
predictability,
skill
for
the
at
the
annual
time
scale.
This
is
year
one
to
year,
10
in
the
dple,
and
in
that
case
we
interestingly
found
that
this
precipitation
for
potential
predictability.
H
Skill
is
comparable
to
the
soil,
moisture
skill.
So
at
the
annual
time
scale
there
is
predictability
of
the
precipitation.
So
a
way
I
think
about
this
thing
is:
if
something
happens
in
the
pacific
or
the
specific
temperature
changes
occur,
then
precipitation
respond
to
that
forcing.
But
if
yeah,
that
those
changes
in
the
specific
are
small,
then
yeah,
there
is
no
signal
in
the
precipitation
forecast.
H
So
that's
that's
what
it's
showing
in
terms
of
the
annual
scale,
but
in
in
the
bottom
panel
you
find
that
what
fraction
of
the
precipitation
variability
is
is
explained
by
the
inter
annual
or
the
annual
time
scale.
H
So
how
does
the
land
surface
process
is
contribute
to
improving
the
predictability
of
the
soil
moisture?
So
we
started
with
simple
water
balance.
Equation
change
in
this
all
moisture
is
equal
to
precipitation,
managing
vapor
transportation,
minus
runoff.
We
did
some
algebra
and
you
can
come
up
with
the
budget
of
the
soil,
moisture
variants
and
then
it
it's.
It's
combination
of
four
different
terms.
So
one
is
the
soil
moisture
memory
here.
This
is
where
the
precipitation
factor
comes
in
precipitation
and
the
soil
moisture.
H
Coupling
soil,
moisture
and
et
coupling
and
then
soil,
moisture
and
run
off
coupling
time
and
what
it
turns
out
to
be
like
we,
we
computed
this
term
using
the
land
model
data
and
we
found
the
other
terms
are
also
kind
of
quite
quite
big,
at
least
seasonal,
substantial
to
seasonal
time
scale.
Here
are
some
of
the
number
memory
term
is
0.88.
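Written out, the starting point is the bucket water balance, and the variance budget the speaker describes can be sketched as a covariance decomposition. This is a schematic reconstruction from the talk, not the exact equation in the paper:

```latex
% Water balance for soil moisture S
\frac{dS}{dt} = P - ET - R

% Discretized, S_{t+1} = S_t + (P - ET - R)\,\Delta t; taking the covariance
% of S_{t+1} with both sides gives a four-term budget:
\operatorname{Var}(S_{t+1}) =
    \underbrace{\operatorname{Cov}(S_{t+1}, S_t)}_{\text{memory}}
  + \underbrace{\operatorname{Cov}(S_{t+1}, P)\,\Delta t}_{P\text{--}S\ \text{coupling}}
  - \underbrace{\operatorname{Cov}(S_{t+1}, ET)\,\Delta t}_{S\text{--}ET\ \text{coupling}}
  - \underbrace{\operatorname{Cov}(S_{t+1}, R)\,\Delta t}_{S\text{--}R\ \text{coupling}}
```

Normalizing each term by Var(S) turns the memory term into a correlation-like number, consistent with a quoted value such as 0.88.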
H
Okay,
so
I
almost
finished
with
my
first
question:
I
think
yeah
it
can
be.
A
answer
is
more
predictable
than
prejudice
now
I'll
go
into
the
second
question:
have
you
reached
the
upper
limit
of
predictability
in
terms
of
the
multi-year
forecast?
Yeah,
I
put
very
kind
of
in
a
very
clear
sense.
My
answer
is
no.
Why
do
I
say
that
is
the?
What
in
this
plot
is
saying
that
you
have
the
forecast
lead
time
in
a
month
and
then
what
is
realized
forecast
for
the
soil
moisture
in
this
brown
color?
H
That
is
our
realized,
forecast
skill,
whereas
if
we
can
compute
a
similar
scale
using
the
observation
means
the
offline
land
model
simulation,
and
you
see
there
is
a
big
gap
between
what
is
what
these
anomaly
correlations.
So
in
terms
of
the
observation
that
is
shown
in
the
black
line
versus
what
is
realized
in
in
terms
of
the
dple
forecast-
and
almost
I
would
say
like
one
third
has
been
realized
and
remaining
two
third
remains
to
be.
They
realized,
of
course,
there
are
uncertainty
in
the
observation.
H
These
two
black
line
are
two
of
the
clm
experiment
and
then
the
third
green
line
is
from
the
amsterdam
model,
that's
another
soil,
moisture-based
product
and,
and
there
are
uncertainty.
H
So
all
these
reconciling
these
uncertainty,
as
well
as
improving
the
forecast
skill,
is
what
is
the
potential
for
improving
the
skill
where,
where
does
this
kind
of
big
gap
comes
from?
H
We
have
some
idea
one
one
thing:
we
know
that
from
our
analysis,
we
we
found
that
in
in
the
dple
system,
the
soil
moisture
initialization
does
not
improve
the
precipitation
skill,
so
precipitation
is
killed
shown
by
the
block
line
here
and
that's
basically
one
among
even
if
we
initialize
the
random
initial
condition
that
is
shown
by
the
gray
line
and
whereas,
if
you,
if
you
compute
the
same
using
the
observation,
you
see
the
black
line
that
is
much
higher
than
what
is
a
blue
line.
H
Again,
there
is
a
gap
between
what
is
realized
into
the
system
and
then
what
we
can.
We
are
finding
from
the
observation.
So
this
is
not
to
me
the
first
showing
like
there
is
a
weak
soil
moisture
to
precipitation
feedback
into
into
the
csm
climate
model.
This
has
been
here
for
for
a
while,
like
this
is
2012
paper
where
they
have
compared
a
nod
with
the
cam
4
and
clm
4,
and
then
again
you
see
there
is
a
weaker
skill
in
here,
cam,
4
and
clm4.
H
Similarly,
there
is
another
paper
in
2016
and
again
the
author
did
not
find
improvement
in
the
skill
due
to
soul,
match
and
initialization
for
the
precipitation.
H
So
yes,
there
is
opportunity,
despite
all
these
challenges,
this
is
the
forecast
for
the
2012
drought
and
in
the
california
nevada
region,
and
it
shows
a
good
skill
like
you
can
compare
the
blue
line.
That
is
a
forecast
line
compared
to
the
black
line.
That
is
observation
and
it
matches
very
well.
So
there
is
a
opportunity
to
develop
a
skillful
drought
prediction
system
using
salt,
massage
state
and
which
can
be
relevant
for
for
the
society.
H
What
are
the
challenges
we
know
like?
We
can
trade
in
the
past
yeah
some
some
of
the
drought,
so
good
skill,
but
at
some
other
drought
do
not
so
good
skill,
so
yeah.
Why
is
that?
I
do
not
know
the
high
the
long
memory
land
process
are
represented
in
the
climate
model.
Most
of
the
land
model
have
been
used
to
provide
the
flux
to
the
atmosphere
system,
so
yeah.
H
There is opportunity to improve the forecast skill. I don't know if I am bold enough to pose a challenge for CESM3, but if I were to draw the same graph after two or three years, what I would compare against is: did this brown line improve a little bit? Not necessarily all the way, but at least from this 35 percent, did it go to 50 percent? I'll end it here. Thank you.
B
Thanks so much. We have a raised hand from Isla.
D
I'm just a bit confused about why you think we have the potential to get up to that black line, because doesn't that rely on skillfully predicting precipitation, which we have a limit on, right? Or am I misunderstanding what the black line is showing: isn't that offline land simulations driven with observed precipitation?
H
That's certainly a part of the equation, I would say even a major part of the equation. But it's not like a single process will get us there; that's why I framed it as a CESM3 challenge. It's land-atmosphere interaction, it's teleconnection processes, it's land processes...
H
It requires coordination among different working groups to push that line up to even 50 or 60 percent on the y-axis.
A
Yeah, I want to follow up on sort of the same question. I'm confused by what you're showing here; the DPLE looks green to me. Is that the skill verified against actual soil moisture observations? And have you done the potential predictability analysis correlating DPLE with the land model simulation that it was actually initialized from, which was drawn from the CESM1 large ensemble? Actually, it wasn't initialized to any historical estimate; it was initialized from a free-running historical simulation, so to quantify potential predictability you'd have to correlate DPLE with that particular land model simulation's evolution.
H
Yeah,
so
what
we
do
it
here
in
this
brown
line,
yeah,
I'm
somewhat
color-
is
not
very
good
for
me,
so
this
brown
line,
we
we
use
that
particular
number
34
land,
initial
condition
to
compute
the
anomaly
correlation
of
that
initial
condition,
with
the
ensemble
average
salt
moisture
anomalies,
and
in
all
these,
like
thin
gray
line,
I
use
the
remaining
39
large
ensemble
initial
condition
to
correlate
with
the
forecasted
kit.
That's
why
I'm
able
to
say
like
yeah
that
34
initialization
did
improve
the
forecast
skill.
H
We
can
do
the
same
analysis
using
the
offline
land
model
simulation
and
the
the
there
are
different
years.
You
can
take
like
initial
condition
and
then
how
it
projects
into
the
several
years,
like
12
months
after
for
all
the
year,
taking
from
1980
to,
I
guess,
2010
not
or
something
like
that
one
I
did
okay.
Does
that
answer
your
question.
A
I
All right, so I'm going to be talking about initialization methods and relate that to model bias, drift, and the effect of trends on calculating the skill of seasonal-to-decadal, or S2D, initialized hindcasts; that's the first part. The second part will be some preliminary results from CESM1 and E3SMv1 on initializing these models with two different methods and seeing how they drift. I want to acknowledge Ben Kirtman and Sasha, who've been working on that second part, the drifts in CESM1 and E3SMv1.
I
If you look at the literature, there are a lot of different ways to initialize S2D predictions; these are four of them, ranked from cheapest to most expensive. The first is the so-called brute-force method: you just take reanalysis or other observed products, for example the CFSR reanalyses for ocean and atmosphere, and basically put them right into the model; you interpolate the reanalysis data to the model grid, those are your initial states, and you're good to go.
I
So
that's,
obviously
the
cheapest
and
easiest
method
to
use
one's
a
little
more
expensive,
expensive
is
there's
various
variations
on
this
method.
The
nudging
method
so
you're
trying
to
relax
the
model
state
to
some
reanalysis
product
or
some
other
observational
source,
so
you're,
not
exactly
at
the
observed
initial
state
but
you're
somewhere
close
to
it
in
your
model,
initial
states,
the
third
which
is
a
bit
more
expensive
from
that,
is
the
one
that
we've
used
in
ncar.
I
Quite
a
bit
called
the
heincast
initialization
and
you
run
the
ocean
model
through
five
cycles
of
20th
and
early
21st
century
climate,
with
the
time
evolving
observed
forcing
from
the
atmosphere
and
that
forcing
the
atmosphere
gets
kind
of
imprinted.
Then
on
the
upper
ocean
and
the
fifth
cycle
that,
when
you've
run
through
this
four
times,
the
fifth
cycle
you
take
that
those
is
your
atmosphere
or
your
model.
I
Initial
states
for
initialized
time
casts
and
predictions-
and
this
is
sometimes
referred
to
referred
to
as
the
forced
ocean
sea
ice
or
fosse,
and
that
acronym
is
what
I'm
going
to
be
using
today
to
initially
to
refer
to
this
heinkes
initialization
and
then
there's
the
kind
of
the
cadillac
of
all
initialization
methods
using
couple
data
assimilation,
whereas
assimilating
all
available
observations
and
atmosphere
ocean
into
the
model.
So
that's
kind
of
four
different
methods,
and
I'm
going
to
be
talking
about
two.
F
I
...today: the brute-force method and the hindcast initialization, the FOSI method. I showed this plot on Monday, but I wanted to say a little more about it, because it illustrates the bias and drift problem. Here the observed states, in this case global mean surface temperature, are the black line; you've got some trend over the period from the 1950s to basically the present, which is the straight black line; and then you've got these drifted states. This is from the DPLE.
I
So
each
of
these
dots
is
the
ensemble
average
for
the
year,
one
hein
cast
year,
two
and
so
on,
and
so
for
when
you're
going
from
blue
to
red
that's
going
from
year,
one
to
year,
ten
and
so
the
year
ten
drifted
states
are
out
here
and
you
can
see
that
they
drift
somewhere
near
the
uninitialized
or
free
running
simulation
from
the
csm1
large
ensemble,
which
is
this
green
line.
I
And
you
see
a
couple
of
things
in
this
plot,
which
I
pointed
out
on
monday.
One
is
these
drifts
are
pretty
large
and
they're
pretty
rapid.
So
the
model,
even
though,
if
you
started
out
close
to
the
observations
it
really
wants
to
drift
away
to
its
preferred
state,
which
is
somewhere
here
in
this
uninitialized
kind
of
realm,
which
is
a
product
of
its
systematic
errors.
The
systematic
errors
in
the
model
are
making
it
want
to
go
away
from
where
you've
started
it
out
close
to
the
observations,
that's
one
thing.
I
The
other
thing
is
that
if
you
look
at
the
trends
in
this
case
from
the
dple
from
the
observations
and
from
the
drifted
states
to
see
that
the
magnitude,
the
trends
are
different
now,
these
trends
introduce
a
lot
of
different
issues
and
they're
more
acute
or
most
acute
for
the
std
time
scales,
because
if
you
imagine
you're
computing
anomalies
for
predictions
or
heinkes
relative
to
the
average
model,
climatology
of
the
drifted
states,
the
average
that
model
climatology
is
going
to
represent
something
here
around
1990.
I
This isn't as big a problem on the S2S or S2I time scales: S2S is often only using a climatology of 18 years, so the trends aren't that big, and for S2I, for example the NMME project, they were using a climatology of 30 years. But for a big initialized hindcast set on the S2D time scales, like the DPLE, you've got 62 years, so you've got a lot of time for these trends to develop and affect how things come out.
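The standard remedy sketched below removes the lead-dependent drift by differencing each hindcast against a per-lead climatology; as the speaker notes, over a 62-year hindcast set that single climatology still aliases the trend (it represents roughly mid-period conditions). A NumPy toy with an assumed layout:

```python
import numpy as np

def drift_corrected_anomalies(hindcasts):
    """Remove lead-dependent model drift from a hindcast set.

    hindcasts : (n_inits, n_leads) ensemble-mean hindcast values
    Anomalies are taken relative to the lead-dependent hindcast
    climatology, so the systematic drift at each lead is subtracted out.
    """
    lead_clim = hindcasts.mean(axis=0)   # one climatology per lead time
    return hindcasts - lead_clim
```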
I
So
I'm
going
to
be
talking
about
bias
as
the
difference
between
model
initial
state
and
observations.
This
is
literally
what
what
the
states
are
the
day
you
start
running
the
model
and
the
dple.
These
are
november.
First
start
dates.
So
if
you
take
the
model
state
on
november
1st
and
the
observed
state
of
november
1st
and
take
the
difference,
that's
going
to
be
the
bias.
I
The
drift
is
going
to
be
the
differences
that
develop
after
the
model
starts
running.
So,
as
the
model
runs,
it's
going
to
drift
and
that's
these
kind
of
represented
by
these
dots
here,
so
those
that
is
how
I'm
dividing
bias
and
drift.
So
if
you
look
at
the
bias
now
in
the
dple
this,
these
are
for
the
november
first
start
dates
for,
in
this
case,
1979
to
2010,
where
we
have
pretty
high
quality
sea
surface
temperatures.
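That split, initial-condition bias versus post-initialization drift, can be written down in a few lines; again a hedged NumPy sketch with assumed array shapes:

```python
import numpy as np

def bias_and_drift(model_init, obs_init, hindcast, obs_evol):
    """Split initial-condition bias from subsequent drift.

    bias  : model initial state minus observed state on the start date
    drift : error development after the start, with the initial bias removed
    model_init, obs_init : (n_space,) states on, e.g., November 1st
    hindcast, obs_evol   : (n_leads, n_space) evolution after the start
    """
    bias = model_init - obs_init
    drift = (hindcast - obs_evol) - bias   # broadcast bias over leads
    return bias, drift
```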
I
It
would
be
similar
for
other
time
periods
as
well,
but
this
is
just
to
illustrate
that
the
bias
on
november
1st,
with
the
model,
initial
states
and
the
observations
from
that
start
period,
have
this
kind
of
a
pattern,
and
you
can
see
these
biases
are
pretty
big,
they're
kind
of
approaching
one
degree
c
off
the
west
coast
of
the
continents.
Here.
South
africa,
south
america
and
north
america
got
some
big
bias.
Positive
biases
in
the
north
north
atlantic.
H
I
...and the North Pacific, and you get this banded structure here, which is probably related to the position of the Antarctic Circumpolar Current. This is what you get because you haven't initialized the model directly with observations: you're initializing it with the FOSI method, the hindcast initialization, so by definition you're not going to match the observations directly; you're going to have some drift or bias already built in. And this is that bias, and its pattern.
I
It
has
a
pattern
and,
of
course,
when
you're,
looking
at
brute
force
by
definition,
brute
force,
this
field
would
be
nearly
zero
because
you're
initializing
the
model
to
literally
the
observed
initial
state.
So
if
you
relate
this
now
to
the
drift,
so
here's
this
plot
again,
I
reproduce
it
over
here.
This
is
the
bias
for
the
dple.
I
Now
these
are
the
drifts
for
the
early
period
from
1955
to
75
in
the
dple
the
first
month
first
year,
third
year
fifth
year
and
seventh
year,
and
this
is
for
the
first
kind
of
part
of
the
hind
cast.
This
is
for
the
1980
to
2010
part
of
the
hind
cast,
and
this
is
the
difference
between
the
early
and
the
late
periods
to
assess
kind
of
what
the
difference
in
the
trends
are,
what
the
trend
differences
mean
to
this
pattern
of
drifts.
I
So if you first look at this early period, you can see it's basically the same pattern you see here. In the first month you haven't gone too far from the initial bias, but by the time you average the drifts over the first year, you start to see these anomalies growing in the mid-latitudes: negative in the northwest Pacific and northwest Atlantic, negative in the subtropical Atlantic, and developing this negative drift here in the equatorial Pacific as you get to year three.
I
Now of course the colors get darker, because the magnitudes of the drifts get bigger, but the pattern sets up early and stays pretty much the same. The model is going really quickly to its preferred systematic-error state, and that just grows as you go along.
I
And when you look at the later period, you see a very similar pattern to the early period. What's interesting is the differences between the early and the late periods: if there were no difference in the trends between the two, this field would be nearly zero for all of these time periods. But of course it's not, because there are differences in the trends, and you can see the year-one difference.
I
You actually have these warmer anomalies in the tropical Pacific and in some areas of the Indian Ocean, but as you go along, things go more negative. That's because earlier in the period, for early minus late, you're going to have more positive differences, and because the drifts in the early period are more negative, when you take the early-minus-late differences you're going to see more negative anomalies here.
I
So when you're actually computing anomalies, and this relates to what Judith was talking about, there are various ways of doing this. The more typical way, used at all time scales from S2S to S2I to S2D,
I
is to difference your prediction from the appropriately drifted model climatology. So if you have a hindcast average for years three to seven, you average all of your year-three-to-seven drifted model states to form a climatology over the entire time period, and do your differences that way.
I
Another method we've used at NCAR, to try to get away from the effects of the trend, is to bias-adjust the predictions and then difference them from the previous 15 years of observations. The idea being that you've bias-adjusted the predictions back to something close to the observed state, you've taken the bias out, and differencing from the previous 15 years of observations addresses this problem of trends in the model record.
I
But you can also introduce unrealistic skill if the low-frequency variability in observations is large compared to the hindcast on time scales greater than 15 years, so that's a problem with that method. So we propose something that takes elements of both: take the differences relative to the previous 15 years from each start date, using the model initial states. This gives you somewhat lower skill, because you're taking away any artificial enhancement you may get from those other two methods, but it does remove the problem of differences in long-term trends, and we can actually see how that works here.
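The first and third anomaly methods might look like this in code (a toy numpy sketch; the synthetic trend, bias, and drift are assumptions, not the DPLE):

```python
import numpy as np

# Toy hindcast with a trend, an initialization bias, and a lead-dependent drift.
n_starts, n_leads, k = 40, 10, 15
years = np.arange(n_starts)
rng = np.random.default_rng(1)
obs = 0.02 * years + rng.normal(scale=0.1, size=n_starts)  # trending "truth"
hcast = obs[:, None] + 0.3 + 0.05 * np.arange(n_leads)     # bias + drift

# Method 1 (standard): remove the lead-dependent model climatology
# computed over the entire hindcast period.
anom_clim = hcast - hcast.mean(axis=0)

# Method 3 (proposed): difference each start date from the mean of the
# previous 15 model initial states, so long-term trend differences drop out
# start date by start date. The first k starts have no reference period.
anom_prev15 = np.full_like(hcast, np.nan)
for s in range(k, n_starts):
    anom_prev15[s] = hcast[s] - hcast[s - k:s, 0].mean()
```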
This is another plot I showed on Monday, showing the verification for an initialized prediction we made out into the future, one that was going to verify after the paper was actually published. And indeed it did; that prediction was with CCSM4.
I
We were predicting a transition to the positive phase of the IPO, which actually ended up happening. So luckily for us, our prediction actually verified after several years of waiting around. But this is just to show the differences in the verification patterns using different methods of calculating anomalies. This is the standard method, where you calculate the anomalies relative to the entire model climatology.
I
You can see that a lot of these colors are red, indicating that there's probably a trend coming in here from greenhouse-gas warming, and you see this in the observations as well, because you're calculating them the same way and there's a lot of trend in there. But the verification is pretty good, 0.86, because you're getting a lot of skill just from the trend: the fact that everything has warmed up in the model and in the observations.
I
When you take the effects of this trend out, you actually get a nicer pattern here; we can see the positive phase of the IPO, so you're not getting the effects of the trend. However, the pattern correlation is a bit lower, because you've removed the effect of the trend. And this is the method we were using before, comparing to the previous 15 years.
I
The observations show something similar. It's a little bit larger in amplitude compared to this other method, but it still verifies quite well at 0.82, which is close to the 0.86 pattern correlation we got against the entire climatology. The middle one is probably the more accurate representation, for the S2D time scales, of what the skill actually may be.
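The pattern correlations quoted here (0.86, 0.82) are correlations between two maps; a generic sketch, assuming the usual cosine-latitude area weighting (the test fields below are made up):

```python
import numpy as np

def pattern_correlation(field_a, field_b, lats):
    """Area-weighted (cos-latitude) pattern correlation of two lat-lon fields."""
    w = np.cos(np.deg2rad(lats))[:, None] * np.ones_like(field_a)
    a = field_a - np.average(field_a, weights=w)
    b = field_b - np.average(field_b, weights=w)
    cov = np.average(a * b, weights=w)
    return cov / np.sqrt(np.average(a * a, weights=w) *
                         np.average(b * b, weights=w))

# Sanity checks on a synthetic field: identical maps correlate at 1,
# sign-flipped maps at -1.
lats = np.linspace(-89, 89, 90)
lons = np.linspace(0, 358, 180)
f = np.sin(np.deg2rad(lats))[:, None] * np.cos(np.deg2rad(lons))
```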
So now we're going to look at the effects of these two initialization methods in two Earth system models. We're going to look at CESM1, the DPLE version, a one-degree-class Earth system model, and E3SMv1, which is the new DOE Earth system model. It has an atmospheric model that's comparable to the atmospheric model
I
CAM5 that was in CESM1. They've made some changes to it, but it's still relatively close to CAM5. The biggest difference between the two models is in the ocean: E3SMv1 has MPAS-Ocean, which is a variable-resolution ocean model, with resolution from 60 down to 30 kilometers. So we're going to use each model with the two initialization methods, giving four combinations, all initialized on November 1st of each start year. The CESM1 FOSI is the DPLE, which we've already shown results for.
I
Of course we have a lot of ensemble members for that, but the three new ones we're running now are the CESM1 with the brute-force method, E3SM with the FOSI method, and E3SM with the brute-force method. So far we've only got four start years for these, with five ensemble members each; these are in production, so I've got an asterisk here. These are now running with more hindcasts, with more start years; we're going to have six more start years at least.
I
We'll have a total of ten start years, with five ensemble members each, at least initially. One of the goals of this exercise is to see if we can actually learn something by using fewer start years, and see where we can put in more variations, so we don't have to run an entire DPLE for every initialized-hindcast sensitivity experiment. We're hoping that we can actually learn something using a smaller number
I
of start years, so we can save on computer time and still learn something from these experiments. So let's see how they're behaving. These are going to be the month-one drifts.
I
This is the CESM1 brute force, the E3SM brute force, the CESM1 FOSI (which is the DPLE, which we've already seen), and the E3SM FOSI. If you look at the two brute-force anomaly patterns here, compared to the two FOSI methods, you can see the anomalies are a lot smaller in the brute force, because the model has only run a month: you haven't gone that far from your initial state, which was by definition initialized to the observed state.
I
The two FOSI methods, of course, have bigger anomalies, because you're starting further from the observed state. And you can see right here that these patterns are already somewhat different between the CESM1 and the E3SM versions, so it does seem to make a difference which ocean model you're using, and how the drifts are manifested. Now, these are the year-one drifts. The CESM1 brute force and the E3SM brute force are still fairly similar;
I
they still haven't drifted very far from the observed initial states. The two FOSI models' patterns are somewhat different, but you can see the anomalies are still a lot bigger. And this pattern that has now set in here in the E3SM FOSI, you can see it showing up here, pretty much the same pattern
I
that's now set in here in the E3SM brute-force version. So it looks like in the E3SM, at least by the time you get to the third year, no matter which initialization method you use, you're going to end up with about the same drift anomaly pattern. But for the CESM1, comparing the brute force with the FOSI, it's a little bit different: you're getting warmer anomalies here in the Southern Hemisphere and in the tropics, compared to the FOSI, with its cooler extratropical SSTs and cooler tropics.
I
And if you now go to year five, the two E3SM versions are still pretty much the same pattern, so that pattern hasn't changed a whole lot, but there does seem to be a little bit of a difference between the CESM1 brute force and the CESM1 FOSI. The big caveat here is that we still haven't finished running all these simulations with the brute-force and FOSI realizations, so these are giving us at least some initial indications that the brute force definitely has smaller drifts early on.
I
So to summarize: the drifts are definitely one of the biggest challenges, and they're a product of the systematic errors in the models. The biases, the drifts, and the differences in the trends can cause issues with how you calculate anomalies to verify skill in S2D predictions.
I
We looked at three different ways of calculating anomalies, and we're advocating that the differences from the previous 15-year average from observations may be the most accurate of the three. Then we showed preliminary results with these two models and two initialization methods, showing that there's different behavior in the models with the two different ocean models, and that the E3SM seems to have very similar drifts with the two initialization methods, but there seem to be some differences in CESM1. Thanks.
D
I was just wondering about the North Atlantic with your 15-year method. It seems like the skill is way down there, and I guess you'd expect that, because it's low frequency, so removing the 15 years you're kind of removing some of the signal. But is that the right way to judge the skill? I guess it's the skill relative to persistence that's more what you're showing with that method. Yeah.
I
The IPO, and kind of a cooler North Atlantic, north of 20 North, say, whereas all the verifications show that the North Atlantic cooled here and warmed up here in the subtropics. And this is what we can see in this method, where we're looking relative to the previous 15 years: we're getting a better look at what's going on in the Atlantic and in some areas of the Pacific. But you know, again, all of these...
A
Unmuting here. Thanks, Jerry, that was really interesting. I'm wondering: you mentioned that you're doing these small sets of sensitivities so you don't have to rerun the full DPLE. But how then do you compute a robust drift correction from, say, five starts?
I
Yeah, well, hopefully we're going to have ten starts, but yeah, this is an issue, right. This is what we're going to test. We have the DPLE to compare to, so for the CESM1 at least, we're going to have the smaller number of starts for the brute force, and then we can subsample the DPLE to get the smaller number of starts for the FOSI. But yeah,
I
this is where we're going to see if this works. That's obviously going to be a caveat for any of these kinds of methods, where we're trying to figure out ways of learning something about initialized hindcasts without having to run a full DPLE and spend all those computer resources. If we can actually get something out of a smaller number of start dates, and still learn something, I think that would be worthwhile.
F
Jerry, this is Tony Rosati. I just have a question: why can't you do it the same way we do in seasonal forecasting? Just use the last 30 years, or 20 years, or something, as the reference period for the forecast, for the decadal thing that you're looking at.
I
That would be a bit different from using the whole 62-year climatology from the DPLE, which is this top panel here, and that's where you really see the effects of the trend coming in, because it's warming up everywhere, and that's just coming from the trend. When you take the effects of the trend out, you still have a little bit of the trend in here, but you're getting more of a pattern with positive and negative anomalies. So yeah,
I
this method would be closer to what is being used for S2S and S2I.
F
If you look at it with the lead-dependent drift removal, what you're saying is that basically it's linear, and that is an assumption we're making in that anyway. Yeah.
I
Yeah, so I think initially you'd say, okay, this is a linear trend, but trends usually aren't linear, so you could come up with ways, some kind of quadratic fit, to figure out if you can capture the non-stationary base state a bit better. So I think
I
we're going to treat it as a linear trend initially, and then maybe think about better ways of calculating that as we get further into it, and see if we can get anything out of it.
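As a toy illustration of that idea (the synthetic drift curve below is an assumption, not DPLE output), a quadratic fit picks up curvature in a non-stationary base state that a linear fit misses:

```python
import numpy as np

# A non-stationary drift "base state" across 64 start years, plus noise.
t = np.arange(64, dtype=float)
true_drift = -0.2 + 0.01 * t + 0.0002 * t**2
noisy = true_drift + np.random.default_rng(5).normal(scale=0.05, size=t.size)

lin = np.polynomial.Polynomial.fit(t, noisy, 1)    # linear-trend assumption
quad = np.polynomial.Polynomial.fit(t, noisy, 2)   # captures the curvature

def rmse(fit):
    # Root-mean-square misfit of a fitted polynomial to the noisy drift.
    return float(np.sqrt(np.mean((fit(t) - noisy) ** 2)))
```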
A
Okay, well, we're almost on time, five minutes late. Yaga, should we go ahead with a 15-minute break now, come back at 10:35, and hope to make up time?
A
All right, we can come back at 10:30, and we might get started a few minutes late, but all right.
A
Well, it's hard to tell if people are back from break yet or not, but maybe we can get started with Dan Amrhein's talk.
J
Okay, excellent, all right. Well, thanks a lot, everyone. I wanted to take this opportunity to share some recent work with data assimilation in CESM, for the questions of initialized predictability that have already come up several times today. I want to acknowledge a long list of collaborators here. This is a cross-lab effort at NCAR that's benefited from interactions with folks both in CGD and in the Data Assimilation Research Section, part of CISL.
J
I'll talk about some efforts we have slated for the near term, in terms of what we'll call a loosely coupled data assimilation product for initializing S2S studies and ocean extremes. I'll show some initial results comparing some of these DA results to the FOSI cases, and then talk about future directions, emphasizing that this is hopefully an opportunity to solicit feedback from folks about what they most want to see
J
from some of these efforts. In this community we've talked a lot about how initialization is a potentially important component of predictability across climate time scales. I say potentially because there's an awareness that at some time scales, and for some problems, where you start the model is really important, and at long time scales maybe it becomes less and less so. A demonstration of this came from the FOSI-initialized predictability runs.
J
This is a figure from the 2018 DPLE paper, showing that forecast skill is increased in a statistically significant way for ocean heat content in some parts of the ocean, relative to an uninitialized ensemble. So when we think about how to improve initialization, or why initialization is important (and Jerry already talked about some of these things), these challenges have at least three components for how we want to initialize a predictive ensemble for ensemble forecasts.
J
On the one hand, we want to start off with an initial state that is relatively accurate relative to the system we're trying to predict: if you're trying to predict the Earth system, you'd like an initial state that reflects that system as well as you can. But at the same time, we want to make sure we're also reducing initial shocks and drifts when we let our model go.
J
And third, we want an ensemble that reflects our uncertainties in the initial conditions, because we want those uncertainties to be propagated into our forecast. These things, taken together, motivate what Jerry called the Cadillac: data assimilation approaches that incorporate observations into the predictive model and give us explicit uncertainties. And yeah,
J
I usually drive this to work, but on my first day at home I just park it in the driveway and work with the air conditioning on. There's been an ongoing effort to do data assimilation in CESM, including for the goal of initializing decadal predictability, and predictability at other time scales, in a 2015 paper by Alicia Karspeck and co-authors.
J
Really the only places you see changes in skill are coming most strongly in the North Atlantic subpolar gyre, and in particular, there weren't a lot of strong advantages of data-assimilation-initialized states as opposed to the sort of hindcast, or FOSI, runs. The authors pointed out a number of caveats here, and I've added to the list a little bit: there's a relatively low number of predictive ensemble members
J
in this case; there are persistent problems with the model being biased, which could reduce the efficacy of data assimilation; and this is also a time period, 1961 to 2005, where the ocean data (I'm sorry, I should have said this was for an ocean data assimilation product) are relatively sparse. So you can imagine that as you sample the system better and better, the data assimilation would improve. But you know, in a way,
J
I think this poses a challenge for us moving forward: to identify where data assimilation, which we know theoretically should be the best way of initializing, really shows up. And I want to highlight one possibly rosier picture, when you initialize not with a data assimilation product but with a sort of perfect-model predictability experiment. This is from a recent paper by Liu and co-authors.
J
You can see higher skill in larger parts of the ocean, although with the caveat that they use a different significance criterion here, so it might not be completely apples to apples. But I wanted to raise the possibility that, in a way, there's a reduced impediment for this study, because it's a perfect-model experiment.
J
From the conversations we've had over, I guess, my past 18 months at NCAR, I've come to think that there are a few objectives we can be looking at right now for data assimilation and Earth system prediction. First, to evaluate which processes and time scales benefit from data-assimilation initialization for predictability, and to think about why.
J
I think there are a lot of interesting science questions there, but also acknowledging that we don't expect this to be a one-and-done, where there's a single experiment and we say, oh, data assimilation works, or maybe we don't need it. Acknowledging that this is an evolving process, we want to maintain a test bed to continue these experiments moving forward.
J
And then also acknowledging that this is part of some broader efforts on the part of the ESPWG to think about seamless prediction, having a unified framework across time scales, and that this dovetails naturally with data assimilation efforts that are ongoing at NCAR for other purposes.
J
So I wanted to focus a little bit on those first two questions, and to think about what this actually looks like: what bases should we have for testing data assimilation efficacy? One of the things that's come up, maybe the first thing you would like to do, is to have a DA-initialized ensemble hindcast skill measure, a sort of data assimilation DPLE.
J
The advantage of that acronym is that it's not only pronounceable, it also sounds like "tadpole" when you're holding your nose. Over a large span of time, this would give us statistics that we could compare to the DPLE, but there's a major challenge: the hindcast experiments are certainly expensive, but the data assimilation experiments,
J
run coupled, online, fully EnKF, are quite onerous. It's a large cost and a lot of person-hours to make that happen. So while we could hypothetically run this experiment if we got the computer time, it's difficult to iterate on, and hard to think about as a framework
J
for, as we improve data assimilation or tweak some knob, how we can look at the predictability there. So at least in the short term, what we've landed on is the notion of leveraging existing processes to generate shorter reanalyses, focusing on shorter prediction time scales and on some test cases of particularly representative or thorny problems, and in the background thinking about data assimilation approaches that would allow us to iterate more quickly, to turn the crank and do some of this development work, and turn these things around faster in the future. What these goals look like in the short term is what we're calling a loosely coupled ocean-atmosphere reanalysis. This term was coined, at least as far as I've seen, in the Karspeck et al. 2013 paper. Loosely coupled means that you are, in this case, forcing an ocean model in which you're doing data assimilation with an atmospheric state for which data assimilation has also been performed. This is in opposition to weakly and strongly coupled assimilation.
J
And there are some questions we would like to address with this run. First of all, Jerry has talked a lot about the challenges of computing drifts.
J
One question we have is whether or not this is a long enough run to compute those drifts accurately. We're then thinking of using this as a basis for doing predictability experiments from some of the S2S protocols, and also looking, over this time period, at a couple of test cases which might be particularly apt.
J
One of them is the 2015 North Atlantic subpolar gyre cold blob, which, in a recent publication by Liz Maroon and co-authors, is a feature that evaded the DPLE, and also a number of other ensemble members that these co-authors ran to try to get it. Some of the questions posed by that paper are: is this a function of initialization error?
J
Is this a rare event, just extremely rare, given that the large ensemble is able to produce things like this? So that's an interesting test case we can go after, and we can think about other test cases, which are also, unfortunately, blobs of one form or another.
J
I do think, and Yaga was mentioning maybe a test case for shorter time scales, that a great community conversation would be what a battery of cases might look like for these longer-term prediction time scales. And just a quick reference to the plenary talk on the first day about justice and climate science: maybe we should think about test cases that aren't mostly affecting people in, you know, England and the University of Washington.
J
So a large part of my point here, our goal, is to communicate the workflow we're arriving at for studying these problems. Acknowledging that I'm short on time, I'll just say briefly that we're using the ensemble adjustment Kalman filter in both the atmosphere and the ocean. Schematically, you have your model simulations running in parallel;
J
when you have observations, you sample the models like the data, compute model-data misfits, and adjust the model state by adding an increment, and then forecast the model forward in time until you encounter another observation. In the case of the CAM6 DART reanalysis, this covers the time period 2011 to 2019; as I mentioned, we're doing six-hourly assimilation of millions of observations a day, from a range of different sources.
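That observe-adjust-forecast cycle's adjustment step can be sketched for a single scalar state variable (a toy update, not the actual DART EAKF code; the numbers are made up):

```python
import numpy as np

# 80-member prior ensemble for one state variable, plus one observation.
rng = np.random.default_rng(2)
prior = rng.normal(loc=1.0, scale=1.0, size=80)
y_obs, obs_err_var = 2.0, 0.5

# "Sample the model like the data": here the observation operator is identity,
# so the sampled values are the prior members themselves.
prior_var = prior.var(ddof=1)
gain = prior_var / (prior_var + obs_err_var)   # scalar Kalman gain

# Compute model-data misfits and adjust each member by an increment.
increment = gain * (y_obs - prior)
posterior = prior + increment
```

The posterior mean shifts toward the observation and the ensemble spread tightens, mirroring the increment step described above.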
J
This is nicely documented in a white paper by Kevin Raeder and colleagues that I'd encourage you to look up. For atmospheric predictability, I would emphasize that this data assimilation product is in CAM6, so it's an exciting tool in its own right. For the ocean data assimilation component, we're looking at POP2. I want to emphasize that a lot of really awesome work has been done by the DART group to document doing data assimilation; it's a great gateway drug for data assimilation,
J
if that's something you're looking into. We're looking at the nominal one-degree POP simulation with 80 ensemble members, forced by the 80 CAM6 ensemble members, and these assimilations are slated to be completed this fall as part of a CSL proposal, along with some of these initialization coupled experiments.
J
So in this case, I just grabbed a state from the upper ocean. This is upper-ocean temperature: on the top I'm showing the POP state from this loosely coupled data assimilation, and on the bottom, the state from the FOSI simulation. By eye, these things look extremely similar. If we look at the differences, we can see differences on the order of up to maybe five degrees C between the states.
J
But another thing we want to look at is how this difference compares to the standard deviation, the spread, across the ensemble members that we get from data assimilation, and so here I'm showing an example of this spread at this particular time.
J
In this case, the spread among the DA ensemble members is quite tight, and so if you plot the normalized difference between POP and FOSI, the differences are extraordinarily large. So at least in this case, I think we can make the argument that FOSI and POP are statistically different, and we might expect that, at the very least, this initialization procedure using the POP DA, as opposed to FOSI, should give us some different results, though not necessarily improved ones.
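The "statistically different" argument can be read as a normalized difference: divide the FOSI-minus-DA-mean difference by the DA ensemble spread (toy fields below; the grid sizes and scales are assumptions):

```python
import numpy as np

rng = np.random.default_rng(3)
# A tight 80-member DA ensemble of upper-ocean temperature on a toy grid...
da_members = rng.normal(loc=15.0, scale=0.05, size=(80, 40, 60))
# ...and a single FOSI state that sits well outside that spread.
fosi = 15.0 + rng.normal(scale=0.5, size=(40, 60))

da_mean = da_members.mean(axis=0)
da_spread = da_members.std(axis=0, ddof=1)

# Where |norm_diff| >> 1, the FOSI state lies many ensemble standard
# deviations away from the DA mean: the two states disagree.
norm_diff = (fosi - da_mean) / da_spread
```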
J
So I think we'll have to stay tuned for that. All right, I'm at the end of my time. I just wanted to emphasize some future directions and put these on the map; it would be really fantastic if people are interested in working on any of them. I think the best way of moving forward is finding cool problems that need some of these solutions, and collaborating on those.
J
In particular: what tools do we have to crack the problems we're looking at? A recent data assimilation effort by CSIRO is doing some unorthodox things in terms of sampling atmospheric data for a coupled reanalysis, or only doing monthly increments. There are less expensive data assimilation approaches we're interested in applying, maybe leveraging capabilities from other data assimilation approaches, and there's also a lot of interesting work to be done on diagnosing and reducing model bias, which is a persistent problem in data assimilation. All right.
B
Questions? Here's one; go ahead, Sanji.
H
Yeah, thank you, this was a very good introduction. Have you thought about collaborating? I know there is a new scientist who has started in the land group on the land part of the data assimilation. Have you thought about including the land part of the data assimilation as well? Particularly on the land side, there is some interest in how incorporating the observed vegetation state would impact things like drought prediction or heat waves.
J
Yeah, absolutely. I know there's a lot of really active work being done between DART and CLM, so it's exciting, especially because my impression is it wouldn't add very much to the cost. It's exciting to think about incorporating the land state assimilation, potentially alongside the ocean DA. That's not a conversation I've had explicitly with anyone, but especially for some of the reasons you're mentioning, that would really...
I
Hi Dan. You mentioned there may be some methods that would reduce the computational expense of coupled data assimilation, or of the methods you're using. I was curious whether there were any new developments there that would make this more feasible to use.
J
I mean, I think one of the more promising approaches (I wouldn't say it's a new development) is what was used by Fred Castruccio for the tenth-degree POP ocean-only reanalysis, which is to do ensemble optimal interpolation. So instead of having error covariances that are updated through time, you have a static error covariance.
J
So that's a direction we've been pursuing. One thought, or interesting thing I've been looking at, is whether we can leverage large-ensemble simulations to get ensemble covariances; that would help that problem. So yeah, I think that's maybe one of the more interesting ones. I would say that at that point, maybe it's not the Cadillac; maybe it's kind of the Buick.
J
You know, we expect the skill to be lower, but basically you reduce the cost to, in this case, 1/80th, because you're only running a single ensemble member. So that's something I'm really excited about.
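A sketch of the ensemble-optimal-interpolation idea: one running member plus a static covariance built from an archived set of states (for example a large ensemble). Everything below is a toy setup, not the POP configuration:

```python
import numpy as np

rng = np.random.default_rng(4)
n_state, n_static = 50, 200

# Static error covariance B from a fixed archive of states, computed once.
static_ens = rng.normal(size=(n_static, n_state))
X = static_ens - static_ens.mean(axis=0)
B = X.T @ X / (n_static - 1)

# Observe a single element of the state vector, with known error variance R.
H = np.zeros((1, n_state))
H[0, 10] = 1.0
R = np.array([[0.25]])

x = rng.normal(size=n_state)   # the single running ensemble member
y = np.array([1.5])            # the observation

# The gain is fixed (B never updates), so it can be reused at every cycle.
K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)
x_analysis = x + (K @ (y - H @ x)).ravel()
```

Because B is static, no extra ensemble members need to be integrated forward, which is where the cost reduction comes from.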
B
I think we need the Tesla, though. Anyway, we're at 10:54, so why don't we go ahead and move on to the next talk, which will be by Matthew Simpson.
K
So this project is a collaboration between NCAR and Scripps, and I've got all the team members listed here to make sure they're all acknowledged. We do have one team member who recently had a job change and is now working with the National Institute of Water and Atmospheric Research in New Zealand; that's Peter Gibson, and we want to make sure he gets acknowledged as well. It's great having such a large team.
K
The result is that you get a diverse background and expertise: obviously climate modeling, but also mesoscale dynamics, and expertise in high-performance computing to help with the large number of simulations we're going to be making. So it's great being part of such a large team, with so much expertise. Moving on to the motivation for the research: existing seasonal forecast skill can be low over the western United States.
K
So anything that can be done to improve the seasonal forecast, particularly for precipitation and drought, is going to be highly advantageous to water resource managers. To highlight that point, I've got an image on this slide of the lake powell water level. I took this image just two days ago, showing the massive drop in water level there, and so any prediction of whether it's going to improve or get worse with time is highly valuable to the water resource managers.
K
If we increase vertical resolution in dynamical models, does that improve our seasonal forecast skill? To demonstrate our logic for the motivation, on the bottom we've got a three-panel image showing a time series of the zonal mean flow in the tropics that demonstrates the qbo development and oscillation. On the far left we've got the observational baseline from the era5 reanalysis, and so again, what we're looking at is the zonal flow as a function of height, or pressure, with time, and in the observational baseline you're
K
seeing a well-developed qbo signature. In the middle we've got the waccm model, and this represents kind of the current high-top model, with a weak qbo signature, especially in the lower stratosphere, and one reason behind that could be simply insufficient vertical resolution to resolve the full amplitude of the qbo at the relevant height. And on the far right we've got increased vertical resolution with a new grid.
K
So this is higher vertical resolution in the stratosphere, and it's showing a much greater qbo signature, particularly in the amplitude. So our thinking is that we can improve on this and perform a large number of simulations with this increased vertical resolution, to start to understand the feedbacks on qbo development. Okay, so getting into the specifics of our research methodology: for the run setup we're going to be using the cesm2 model, the same version as smile.
K
In terms of initialization data, we're going to be using the same ocean and land data that's used to initialize smile, but for the atmosphere we're going to be using era5 instead of the jra-55 data set, and our reasoning there is that we need data going up sufficiently high to initialize our high-top vertical model configuration. The period of our hindcasts: we're going to be going from 1970 to 2020, so covering five decades.
K
Okay, so moving on specifically to the vertical configurations in our research methodology: we're going to be running two experiments, which we're calling the low top and the high top, and to demonstrate the difference in these configurations, the image on this slide shows the vertical grid spacing as a function of pressure and height. For the low top model, or experiment one, the vertical grid spacing is shown in gray, and this represents the default cesm2 configuration.
K
This configuration is now shown in red on the image, and there are two key differences here from experiment one. For experiment two we've increased the model top up to 80 kilometers, and the vertical grid spacing has been significantly reduced in the troposphere and lower stratosphere, to 500 meters, making it far better resolution than the default cesm2 and the waccm model, which is shown in black here. I also wanted to make a note that we are adjusting the vertical grid spacing in the troposphere and lower stratosphere.
K
Okay, so obviously making so many simulations over a large period of time is going to require a large amount of computational resources, and this is where the supercomputing that's available to cw3e comes into play. So cw3e and the san diego supercomputer center kicked off an agreement, beginning on april 1 of this year, that we would obtain majority use of one of their supercomputers, called comet.
K
So initially we only had 65 percent capacity, but that's going to be increasing to 100 percent just two weeks from now. Comet is a large supercomputer with 440 million cpu hours, which would translate to about 12,000 years of cpu time on your average laptop, and we currently have 20 projects running on comet, so it is being used significantly as we speak.
K
So as part of getting access to the supercomputing, a supercomputing advisory group was set up at cw3e with numerous collaborators, one of which includes ncar, and the goal of this working group is really just to make informed decisions and recommend high-impact research that aligns with cw3e's mission for the use of the supercomputing resources.
K
Okay, so we're hoping that there are going to be numerous benefits to the s2s community from this research. From a generalized standpoint, improved understanding of the impact of vertical resolution on s2s forecast skill is, I think, a good finding and beneficial in itself, and even more specifically, understanding how that vertical resolution translates to qbo predictability over a 12-month forecast horizon could be of great benefit.
K
Obviously the southwest is going through drought right now, and we've got a current heat wave going on, so understanding the relationship between the qbo and how that translates into the quantities we care most about could be a benefit coming from the project. We also hope that we could really set a standard for an optimal run design for s2s predictions by performing our analysis with the higher vertical grid.
K
Studies
on
increased
vertical
resolution
and
to
hopefully
improve
simulations
of
qbo
and
ngo
connection
and
then
lastly,
this
large
data
set
that
we
generate
could
also
be
used
for
a
training
data
set
or
machine
learning
based
algorithms
on
s2s
studies,
so
really
hoping
that
there
are
multiple
areas
where
this
research
could
benefit
the
community
so
moving
on
to
the
current
status
of
our
project,
so
our
in-car
collaborators
have
successfully
ported
the
csm2
model
to
the
comet
supercomputer,
because
we're
making
so
many
simulations
and
taking
advantage
of
the
supercomputer
in
car
collaborators
installed
workflow
manager
silk
on
comment-
and
this
is
just
really
to
to
automate
the
runs
as
much
as
possible
and
to
optimize
the
use
of
the
high
performance
computing
resources
that
we
have
next
we've
downloaded
and
regretted
all
the
required
initial
conditions
to
perform.
K
We've conducted a series of test simulations, which have been successful, just to demonstrate that the model is running correctly, that the workflow is set up fine, and that everything seems to be working. And then, lastly, the current status is that we've finalized our list of output variables.
K
This has really been kind of an iterative procedure, thinking about what variables we want for this project, but also what variables could be used for analysis in other projects going on, so it's really taken a lot of input from different scientists. Again, it's been an iterative procedure, but I think we've got a good list of output variables to work from now. And then, moving on to next steps:
K
We want to submit and complete the november runs, just specifically the november runs for all years of experiment one, the low top experiment, and then perform initial quality control on these simulations. Our logic here is that we just want to start with the november runs and verify that everything is working, that we're getting good results and seeing the signals we expect, before moving on to the full suite of simulations. From a rough estimated timeline, we hope that we can accomplish this in july and august of this year.
K
Done? I would hope so. I feel like we're still working on a long-term storage plan for the data. It's no issue storing all the runs currently on our comet system, but where will the data eventually be stored? I hope it's community based, and it really could lead to a lot of collaboration with groups outside of ncar and scripps. So absolutely, excellent.
A
I was just going to make a comment. It wasn't clear from your talk, but your low top experiments will essentially mirror smile, just with different atmospheric initial conditions, right? So the low top that you're running uses era5, whereas smile used jra-55. That's the only difference.
B
There's a question in the chat from hyemi: how does the qbo act as a source of seasonal predictability for the western u.s., and through what process? Could you please share references?
B
Yeah, so hyemi is one of the people who also looks at qbo teleconnections. I think she's asking because the mechanism for the western u.s. in particular may not be clear. So there are lots of qbo teleconnections to the mjo, so possibly through the mjo, possibly through the polar vortex and the nao, but hyemi, maybe you can just jump in, since where that focus on the western u.s. comes from may not be very clear.
F
Yeah, I guess we are clear that the qbo can have a subseasonal impact for the most part, but for the seasonal impact...
F
But we have some studies showing that the qbo can modulate the storm track, or the jet, directly, so that may also impact the western u.s. through the storm tracks or the jets. But that's work that has been done by me and hyemi, and I'm not sure if other groups have other studies about the qbo impact on seasonal predictability.
B
Yeah, and I think this will be one of the frontier studies, because there are not that many seasonal prediction models that actually have a qbo in them that is any good, so we'll find out whether there are any impacts. All right, in the interest of time, why don't we go ahead and move on. Steve will give a co-chairs update, and then we'll follow up with discussion.
A
Okay, so here's what we'd like to update you on: first some s2s updates, then bring everybody up to speed on where we're at with smile, and finally talk about the csl allocation, current usage, and future plans.
A
So I wasn't sure, yaga, if you were going to present these s2s slides or if you want me to run through them.
A
We've got two versions, one with cam6, one with waccm6, both at one degree horizontal resolution, and these are weekly hindcasts that span 1999 to present, and I think both are being updated in real time (correct me if I'm wrong). For cam6 these are 11-member ensembles, and for waccm, five-member, and they are 45-day hindcasts. The plan is to keep going through the end of summer, I think, with real-time updates, and thereafter as human resources permit.
A
Yeah, so this is the documentation paper for this data set, which is already, I guess, available on campaign store at these dois, and maybe we can make this available to the whole working group so you can get access to this data.
A
We brought up at the winter working group meeting the idea of an analysis registry, just to try to bring some coordination to the analysis of these large community data sets. So this is the s2s analysis registry.
A
Maybe yaga can put the link into the chat. So several people expressed interest, but not all are actively working. Now that this data set is soon to be documented, it's really more urgent to get serious about putting your name down, to put a placeholder on a particular topic that you'd like to look at, so that we can avoid duplication of effort. And now that these base experiments are done, there's a plan for some additional simulations, so isla and yaga have a snapsi project.
A
So three case studies, and these are almost done, with 50-member ensembles, and they will be submitted to the snapsi project. And then we had put in the csl proposal the idea of additional s2s experiments to investigate sources of predictability. I think this is work that's underway; the idea will be to repeat the full s2s hindcast set but with climatological conditions, in this case a climatological ocean, to investigate the role of ocean variability in s2s predictability.
A
I thought I would give a broad overview of the smile experiment. I went over this in winter, but there may be new people in the meeting today who haven't heard what smile is, and we've heard it invoked several times now. It's our new seasonal-to-multi-year large ensemble using the cesm2 model, all components at a nominal one degree resolution, and it includes biogeochemistry in the ocean component.
A
So marbl is turned on, and the design is that we initialize four times per year, so november, february, may, and august, from 1970 to present. We wanted, as a requirement for initial condition generation, for it to be extensible both forwards and backwards, ideally back to pre-1960 and forwards to near real time, and we're running these as 20-member ensembles, integrating them for 24 months. I'll get to where we're at soon.
A
But I wanted to mention some configuration differences from the cesm2 cmip6 runs, the historical ssp370 runs. Smile uses a different ocean vertical mixing parameter; it's got other bgc tuning parameters changed relative to those historical runs, a modified sea ice albedo, a bug fix in clm5, and it uses the smoothed biomass burning forcing that we've heard about this week. But there are fewer differences with the latter 50 members of the cesm2 lens ensemble: those last two differences, in red, are eliminated when comparing smile to those members of lens.
A
As far as initialization, we're trying to use a consistent set of states based on the jra-55 reanalysis, which starts in 1958, so pretty far back, and is updated in near real time. It's relatively high resolution, at 55 kilometers, and it's the basis of the new omip2 forcing for ocean and sea ice models. So what we're doing in smile is just a brute force initialization in the atmosphere, interpolating jra-55 to the cam grid, and then for the land...
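The brute-force atmospheric initialization described here is essentially interpolation of each reanalysis field onto the model grid. A minimal one-column sketch (the profile values and level sets are made up, and the real workflow is a full three-dimensional regrid, not a single column):

```python
import numpy as np

# Reanalysis temperature profile on its native pressure levels (hPa), top last.
p_src = np.array([1000., 850., 700., 500., 300., 200., 100., 50., 10.])
t_src = np.array([288., 280., 272., 255., 230., 218., 205., 210., 225.])

# Hypothetical target model levels; brute-force initialization interpolates
# each reanalysis field onto these levels.
p_dst = np.linspace(1000., 10., 32)

# Interpolate in log-pressure, the usual vertical coordinate choice; np.interp
# needs monotonically increasing x, hence the reversals.
t_dst = np.interp(np.log(p_dst[::-1]), np.log(p_src[::-1]), t_src[::-1])[::-1]
```

The same one-dimensional operation, applied column by column after a horizontal regrid, is all a "brute force" interpolation of a reanalysis state onto a model grid amounts to.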
A
So I'm happy to tell you that the base set of hindcasts from 1970 to 2018 was completed last month. Huge thanks to nan and sasha, who were the real heavy lifters here, but I also wanted to call out everyone who's helped in the design and implementation of this experiment; in particular, who kim, keith lindsay, and danica lombardozzi really helped to create the initial conditions that were used. So currently underway, we're extending the november start dates to 10-member.
A
But here's an analysis that xian wu did looking at nino3.4 skill and comparing to dple skill. So this is looking at the smile november initializations, and as was mentioned in the plenary, we see some evidence of improved skill following strong el nino: we get a higher correlation in smile, which is black, compared to dple, red, and we get a reduction in the root mean square error.
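The two scores being compared here, anomaly correlation and root mean square error, are simple to compute once the hindcast and verification series are in hand. This sketch uses synthetic numbers in place of the actual nino3.4 series:

```python
import numpy as np

rng = np.random.default_rng(2)
obs = rng.normal(size=49)                    # verifying anomalies, one per start year
fcst = obs + rng.normal(scale=0.5, size=49)  # ensemble-mean forecast at a fixed lead

# anomaly correlation coefficient and RMSE, the two scores on the slide
acc = np.corrcoef(fcst, obs)[0, 1]
rmse = np.sqrt(np.mean((fcst - obs) ** 2))
```

In the real comparison, both series are anomalies with the lead-dependent climatology already removed, and the scores are computed separately for each lead time and start month.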
A
And here's a hovmoller that kind of shows this skill improvement for two-year la nina events. These follow strong enso events, and they're listed here, 1972 and 1982 all the way up to 2015, and you can see that for individual events, like this 1982 two-year la nina, smile seems to be doing a better job of predicting it, initialized in this november state of an anomalously warm tropical pacific, while dple did less well.
A
And this is a good sign, because xian wu has a nice paper being prepared for submission that identifies this cold tongue bias as one of the main sources of error in enso-related prediction skill, at least in the hindcasts that we've done with cesm1.
A
And if we look at how that bias develops as a function of start month, we see that the strongest cold tongue bias comes from the february initialization, but the late spring and summer initializations, may and august, have a cold tongue bias that doesn't exceed -1, and this is a big improvement over the cesm1 hindcasts that xian has written up in her new paper. So there's hope that when we dig deeper into smile's nino skill, we're going to see some benefits of this lower mean bias.
A
And here's another analysis that xian wu did, looking kind of at the broad picture of sst skill as a function of forecast season, again comparing directly to dple. This is the difference of correlation skill, smile minus dple, again from just the november start date, and you can see fairly high skill out to the second SON (september to november) initialized from november, and the comparison to dple looks good in the early seasons and then shows more of a mixed bag.
A
Sea ice seemed to be doing a slightly better job in smile, but if we look at summer we see the reverse. So, dple sea ice extent: this is actually the raw value, not the anomaly.
A
The mean sea ice extent was actually quite good in dple in summertime, and so we made a concerted effort to try to maintain our sea ice thickness in smile, and it looks like we might have overdone it, because now our summer sea ice extent is showing too high a bias in year two.
A
Here's an analysis that isla produced. It looks at seasonal nao skill from the november initialization and compares to dple, and this was a bit disappointing to see: smile has lower year-one skill, essentially zero, compared to dple, which at least had something. So neither of these two systems shows any skill in the second winter of the forecast.
A
So I wanted to mention the forward extension. We can easily extend the atmospheric initial conditions up to near present, so we've got those for may 2021. What we're trying to do, in collaboration with land model working group people, is extend this smile trendy-style reconstruction of the clm5 state through may 2021, but we don't have the trendy forcing, so we're thinking that we'll just switch over to jra-55 forcing, which is available, and maybe try to do an anomaly forcing approach.
A
I mentioned this at the winter meeting: as far as the data release, we're going to follow the cesm policy, which says that this experiment will be available to any cesm working group no later than six months following the conclusion of the experiment, so that was last month, and then it becomes fully public as soon as the scientific paper has been submitted, or one year after the end of the simulation. And so a group of us have started putting together a smile documentation paper; this is in preparation.
A
It will give some basic experiment description and a broad overview of skill, and we're hoping that this will also come with a library of analysis scripts, so that people can build off of the tools that we're developing in our initial assessment of this experiment. And as I mentioned, the smile analysis registry is becoming more important now that the data are close to becoming available. So please sign up if interested; we can make this link available, and we can try to facilitate coordination within the group.
A
There we go. So we already have some people who've signed up, thank you for doing that, and as you can see, several people are interested in enso, so hopefully we can get people to talk to each other and carve out some distinct research foci. If you fill out this form on the spreadsheet, the more detailed you can make it, probably the better; I would say maybe something as general as "s2s" might be too broad.
A
But you know, we need to start having these discussions as we get close to working on this data set in tandem.
A
Finally, I just want to mention some things about the esp working group csl allocation for year one, which spans november to october 31st of this year. We were awarded 17.6 million core hours on cheyenne, and we're underutilizing this resource so far, in part because the smile production was much cheaper than anticipated.
A
We
out
we
planned
on
having
to
use
10
million
of
the
esp
working
group
allocation,
complete
smile,
but
we
were
able
to
run
it
quite
efficiently
and
just
use
the
end.
Car
strategic
capability
award
and
we've
had
slow
progress
on
case
studies
and
planned
development,
experiments
of
the
sort
that
dan
amrein
and
yaga
have
mentioned.
A
So we started a discussion on what we can do, and what we can prioritize over the summer and fall, to use up these resources. As I mentioned, we're planning to extend the november start dates to start to form a cesm2 decadal prediction large ensemble; right now this is using catalyst resources, so we're not using the esp working group project code. And this small forward extension I mentioned is going to be quite cheap, probably less than a million core hours. Yaga's s2s sensitivity
A
experiments will, if we run them over the summer and fall, eat up about three to eight million; it remains to be seen exactly how much those are going to cost. And so another option is to just start a backward extension of smile towards 1958, and that would cost about 0.2 million core hours per year that we extend it backwards in time.
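As a quick sanity check on the numbers quoted here, extending from 1970 back to 1958 at roughly 0.2 million core hours per start year works out to a few million core hours total. The per-year figure is taken from the talk; the arithmetic is just illustrative:

```python
cost_per_year = 0.2e6          # core hours per additional start year (from the talk)
years = 1970 - 1958            # number of years in the full backward extension
total = cost_per_year * years  # 2.4 million core hours
```

So the complete backward extension is comparable in cost to the small forward extension, and well inside the unused portion of the allocation.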
A
We do have the initial conditions all the way back to 1958, so this is definitely an option, or we could add more ensemble members to the existing data set. So that's a discussion to have, and not to be prescriptive, but I thought it would be useful to at least put down some topics of discussion that we think it would be nice to talk about.
A
Are there additional experiments that should be prioritized that we haven't mentioned? We're interested in finding out who's willing to lead on some of these esp working group sensitivity experiments that can leverage our new control experiments, the s2s and smile experiments. And finally, how best to organize esp working group diagnostics development within the broader cesm diagnostics and earth system data science initiatives.
F
Yeah, hi, thank you, that was a really great overview. I have a question about the s2s and smile runs, and this is just more general thinking ahead, whether this is on your radar or not. I know there's been an effort for higher vertical resolution, but I was wondering if there's going to be an effort down the road for higher horizontal resolution.
A
That's
definitely
something
that
is
high
priority
in
the
ihes
project,
which
is
the
international
laboratory
for
high
resolution
or
system
prediction.
Many
of
us
in
cgd
are
involved
in
that,
and
so
we
are
doing
fosse
initialized
decadal
predictions
at
10th
degree.
Ocean
resolution
quarter
degree
atmosphere,
those
those
first
sets
of
pinecasts
are
completed
and
probably
we're
going
to
need
the
remainder
of
2021
to
fill
out
that
that
kind
of
base
experiment.
I
don't
know
that.
A
B
Well,
I
think
one
of
the
issues
is,
we
don't
have
a
high
resolution.
Csm
that's
ready
to
go
and
I
think
for
the
subseasonal
hind
cast
to
be
useful.
Everything
is
done
on
anomaly,
so
you
would
have
to
complete
the
entire
heincats
set,
although
if
there
was
a
model
ready,
we
could
try
to
do
a
few
case
studies,
but
they
would
then
have
to
be
done.
F
Yeah, the usual problem. I'm kind of new to this, but I didn't hear anything about volcanic eruptions. Do you put volcanic eruptions into your forcing for these hindcasts, and are you ready to put the next big volcanic eruption into the prediction runs that you're doing? Volcanic eruptions produce the largest forcing on seasonal to interannual time scales.
A
Absolutely
yes,
we
do
include
them
in
these
control
heim
cast.
So
it's
part
of
the
forcing-
and
this
is
done
so
that
we
can
directly
compare
to
our
uninitialized
set
that
uses
volcanic
forcing
as
well.
So
we
can
isolate
the
impact
of
initialization
relative
to
that
control
set.
A
There's
no
doubt
that
large
volcanic
eruption
would
add
predictability
to
the
system.
You
know,
there's
a
debate
about
whether
the
skill
we
do
have
in
our
decay
prediction
system
is
really
coming
from
the
volcanic
forcing
not
not
directly
in
the
model,
but
at
least
through
the
initial
condition
that
were
you
know,
the
ocean
initial
condition
has
the
imprint
of
that
volcanic,
forcing
which
is.
Perhaps
you
know,
a
really
important
part
of
the
predictability.
A
Yeah, so we have done a set that is parallel to our dple, using cesm1, and this was a 10-member set that did not include the volcanic forcing, and so this is a set that we're actively analyzing. Xian wu has a lot of really cool results that hopefully she'll share, maybe at the next esp working group meeting; very interesting sensitivities there of whether the volcanoes are included or not.
F
But
if
I
use
the
amip
runs
where
it
specified
ssts,
then
it
did
an
excellent
job,
and
that
was
because
the
models
weren't
doing
el
nino
very
well
or
doing
the
ocean
very
well.
So
I
I
was
just
one
and
I
I
also
was
wondering:
what
forcing
do
you
use
to
use
the
data
set
that
mike
mills
put
together
with
anya
schmidt
for
the
past,
and
what
data
are
you
gonna
ingest
for
the
future.
L
I mean, it's just the standard one that we get from cmip6, but I think my understanding (correct me, allan, if I'm wrong) is that it's not necessarily the data set itself, but how individual models (and jerry can chime in too) are free to inject those emissions in different places or at different heights, and I believe how you do that can make a big difference, even though the modeling groups are using, let's say, the same data set.
L
But just to clarify: we actually get the cmip6 forcings for some of the fields from our waccm simulations. In other words, we do the waccm high top simulations first, with all of those emissions, then we get some of the data sets from waccm, in addition to the cmip6 data sets, for our low top version. Thanks.
B
And
I'll
just
add
to
that
allen
that
we
were
thinking
with
mike
mills
to
doing
the
exactly
studies
you
you
suggest,
but
the
problem
is
for
wacom.
We
have
hindcast
on
the
substitutional
time
scale,
only
for
initial
dates
between
november
well
september
and
march,
because
the
focus
was
on
the
winter
season,
where
a
lot
of
the
eruptions
happened
during
the
summer.
So
we
do
not
have
the
historical
record
so
yeah,
I'm
hoping
for
a
volcanic
eruption
that
happens
in
the
winter
season.
This
time
around.
B
One that lasts for a couple of years. But it would be nice to get them initialized: you know, if there was an eruption, do it around the eruption time and do some simulations with and without.
L
Yeah, I mean, this is in response to maria also, and just to add to what steve said: in the cesm working groups, and in particular the ssc, there was a conscious decision made not to create a high-res version of 2.0, or of the cesm2 series, because the thinking was that work was already going on with cesm 1.3, with asd and ihesp, and the thinking was that it's going to cost too much time.
I
Yeah, I just wanted to chime in on this volcano issue, because it's really turned out to be a pretty interesting aspect of the s2d prediction problem, especially for the pacific. When we were first doing this, we thought including the volcanic eruptions in hindcasts would actually increase skill, because we'd have a known forcing, and so there were arguments about leaving the volcanic eruptions out when we were putting the cmip5 dcpp protocol together. We put them in. As it's turned out, there have been some studies done since then, analyzing the cmip5 and cmip6 models, showing that the ensemble average model response to a typical explosive
I
Tropical
volcanic
eruption
is
kind
of
a
weak
el
nino
kind
of
the
first
winner
afterwards
and
followed
a
couple
years
later
by
a
la
nina.
So
that's
what
the
model
is
trying
to
produce
on
a
in
a
kind
of
ensemble
average
sense.
So
if
the
actual
sequence
that
occurs
after
a
volcanic
eruption,
the
observations
goes
along
with
that.
I
If
there's
a
weak,
el
nino,
followed
by
a
lania
a
couple
years
later,
you're
going
to
get
great
skill
for
the
pacific
seas,
temperatures,
but
if
the
actual
sequence
in
the
observations
doesn't
follow
what
the
ensemble
average
model
is
trying
to
produce
you're
going
to
get
really
terrible
skill,
and
this
is
what
happened
after
pinot,
tubo
and
after
tavern,
so
in
the
initialized
hind
cast
after
pinatubo,
we
got
terrible
skill
in
the
pacific
for
like
the
ipo
ssts
after
travervar
the
same
thing,
and
so
these
no
volcano
simulations.
We
thought!
I
I
Well, if you leave the volcanoes out, the verifying observations still have the effects of the eruptions in there, so that may cause problems too. But I think xian wu's results, initially, are that in the no-volcano runs you actually get a little bit better skill on average, because you don't have these kinds of conflicting influences in the tropical pacific after volcanic eruptions.
I
So
I
think
it's
it's
a
really
kind
of
a
challenging
problem
because
you
may
get
better
skill
in
some
areas,
maybe
in
the
stratosphere
and
other
types
of
things
in
the
northern
hemisphere,
high
latitudes,
but
for
areas
in
the
tropics,
tropical
pacific
and
even
connections
to
the
asian
monsoons
it
really
kind
of
complicates
complicates
the
whole
thing.
So
I
think
this
is
a
really
active
area
of
research.
I
That's
pretty
interesting,
and
I
think
you
know
one
way
to
get
at
it
is:
are
these
no
volcano
runs
and
matt
long
just
sent
me
something
where
he
or
somebody,
I
think
goku?
Maybe
you
know
they
just
did
a
a
no
pinatubo
set
of
hind
casts
just
for
that
period.
Do
you
know
more
about
that?
Goken.
I
F
I
can
speak
to
that.
That's
the
for
a
separate
project.
We
have
we're
trying
to
do
30,
ensemble
members,
where
there
was
no
pinatubo
run
from
cesm1le
to
compare
with
biogeochemistry
turned
on
in
the
ocean
to
compare
the
ocean
tracer
distributions,
because
we
took
a
lot
of
our
observations
in
the
early
1990s.
Our
first
decade
of
observations
occurred
then,
and
so
we're
worried
about
trends
being
biased,
yes,
but
that's
available,
I
think
to
everybody
who
wants
to
look
at
it.
I
Actually, the opposite happened after pinatubo and el chichon. So a couple of years after the pinatubo eruption there was an el nino, when the model response was trying to produce a la nina, and after el chichon the same thing happened: there was an el nino a couple of years after that eruption, which is when the model ensemble average was trying to produce la nina. So that's why those eruptions produced really bad skill for tropical pacific ssts.
I
That
one
was
better
because
it
it
synced
up
to
better
to
what
the
model
was
trying
to
produce.
You
didn't
get
a
huge
reduction
of
skill
after
detailing,
but
it
was
just
kind
of
tuba
into
verve
and
just
depends
on
the
volcano
and
what
the
internal
variability
is
doing
in
relation
to
what
the
model
ensemble
average
is
trying
to
produce.
D
Yeah, I was just wondering, and I'm not sure if I fully grasped all the implications of jerry's talk, but as we start to analyze smile, do we need to... is removing a lead-dependent climatology insufficient? Is that the conclusion? Or is it only if we want to isolate the internal variability? Like, do we need to come up with a plan for how we're defining our anomalies in smile?
A
I
I
think
you
know
we
should
proceed
with
kind
of
the
standard,
lead,
dependent,
climatology
removal
for
this
initial
assessment.
I
think
that
you
know
is
standard
practice
and
is
good
enough
for
kind
of
a
first
order.
Look
at
the
skill
and
it's
easier
to
coordinate
across
everyone,
who's
looking
at
different
fields-
and
you
know,
leave
kind
of
the
second
order.
Investigation
of
that
for
later.
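For reference, the standard lead-dependent climatology removal mentioned here amounts to averaging over start years and members separately at each lead, so that model drift is removed along with the mean. A minimal sketch with made-up array sizes (49 start years, 24 lead months, 20 members):

```python
import numpy as np

rng = np.random.default_rng(1)
# hindcast array: (start_year, lead_month, member);
# the added ramp mimics a lead-dependent model drift
hind = rng.normal(size=(49, 24, 20)) + np.arange(24)[None, :, None]

clim = hind.mean(axis=(0, 2))        # climatology at each lead: shape (24,)
anom = hind - clim[None, :, None]    # anomalies with mean and drift removed
```

Isolating internal variability more carefully (the second-order question above) would need more than this, but for a first-order skill assessment the per-lead subtraction is the usual approach.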
H
While making the presentation for this workshop, I went to the u.s. drought monitor, and it looks like there is a big drought going on in the california and nevada region. And what I have learned over the last couple of years, doing different predictability analyses, is that at least the cesm model has very large predictability in that particular region, california and nevada, compared to other regions.
H
So has anybody looked into, or is there some interest in, the question of whether this drought was predictable using 2017-2018 initial conditions, and when this drought is going to go away? That gives a kind of societally relevant angle to this project, and if we find something, it could be a quick or small paper advertising this new set of simulations.
A
Yeah, so I don't know if Isla wants to jump in with an answer. It's definitely on our radar that, if SMYLE is skillful in the western U.S., we're in a position to actually maybe shed some light on what to expect in the coming year, right? We just had a La Niña this past winter, and with our May 2021 forecast we can maybe say whether we're going to see a two-year La Niña and what the probability of that would be.
A
That would be quite relevant and actionable, so we just need to dig more into the data. It's a bit too fresh to give you a hard answer.
F
Liz, I just wanted to briefly comment that I'm really excited to see Judith's diagnostic package starting to come together, and it would be great to see that evolve, I think, into some sort of ESP Working Group repo that's a bit more flexible for different kinds of workflows, for example.
F
I don't need [unclear] capability because I'm mostly working on Cheyenne, but some sort of package similar to pop-tools, which I use for certain simple things that I don't need to calculate all the time, would, I think, be great: stuff that can read in any type of CESM ESP data set and then do all the difficult stuff under the hood as simple functions. I'd love to see that happen.
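The kind of package being asked for would wrap common calculations as plain functions, in the spirit of pop-tools. A hypothetical example follows; the `global_mean` helper is invented for illustration and is not part of pop-tools or any existing package.

```python
import numpy as np

def global_mean(field, lat):
    """Area-weighted global mean of a (lat, lon) field.

    Hides the cos(latitude) weighting 'under the hood' so users
    don't have to re-derive it for every analysis.
    """
    w = np.cos(np.deg2rad(lat))               # area weight per latitude band
    weighted_sum = (field * w[:, None]).sum() # weight each row, sum all cells
    return float(weighted_sum / (w.sum() * field.shape[1]))

# Sanity check: a spatially constant field averages to that constant.
lat = np.linspace(-89.5, 89.5, 180)
field = np.full((180, 360), 2.5)
print(global_mean(field, lat))
```

A collection of small, well-tested functions like this, able to read any CESM ESP output, is the low-barrier design the speaker is describing, as opposed to a monolithic framework.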
A
I totally agree. You know, I tried to get the ball rolling on that a little bit with just the internal SMYLE group, to set up a repo where we could collaboratively develop performant Python scripts, but it's hard. I think it kind of works when you're in a small project group that's meeting regularly to do that.
A
But when you're trying to co-develop with a large group of people, it gets complicated quickly, and I think we're still learning what's feasible in that area, certainly.
F
A
I think that would be a big win, and that's the sort of generic tool that could maybe go into an ESP Working Group GitHub repo, and then we build from there.
H
Yeah, that is a very good question. I'll try to answer from my experience. In terms of the obstacles, it's kind of lining up the resources, and it requires a little bit broader push, not only from the individual PI but also at the management level: when these big runs go on, is there a kind of smaller project allotted where individual PIs can get a graduate student or a postdoc to work on at least two or three interesting projects?
H
So from my side, it's kind of a resource limitation; that's one thing I can bring up. Lining things up with either existing NSF funding or, if I go to NSF, yeah, some projects get funded and some others don't, so dependence on the funding cycle becomes an issue.
B
So I understand that Eric DeWeaver is very open to proposals on Earth system predictability, as long as they are focused on research on sources of predictability (not skill, but understanding processes, etc.), and we would be happy to, you know, give you a support letter and make all the data accessible. We've had one or two proposals, I think, go in to Eric on this topic in the last six months. And just to comment that we at NCAR don't have... well.
B
We have a few large efforts, so it's not so easy, I guess, for us to piggyback additional university collaborators, but that would be something to maybe think about in the future, how to do this, because a lot of the efforts at NCAR are already a little bit underfunded, and they rely on NOAA funds, which have been super helpful but have not provided enough funding for us to involve university collaborators.
B
L
H
B
F
Interested in the extension through the present day, in particular because of the COVID pandemic and the effect it may or may not have had on the climate system. I think it would be interesting to investigate predictability in that context, just because it was such an unusual external forcing situation.
L
So can I say something with respect to that, Nikki? As I mentioned in the plenary talk on Monday morning (and John, I don't know whether Fasullo actually followed up on it or not), we actually have, it's not predictability, it's not prediction experiments, but we have 50 ensemble members of COVID plus Australian fire, and then the actual GFED fire simulations as well. And they are with the full CESM, if I'm not mistaken, so biogeochemistry is actually included in those.
L
They are not, though, prediction simulations; they are sort of short-term, five-year, 50-ensemble-member simulations, if that's...
F
Of use to you. Yeah, I think that's interesting. One of the things I've noticed is that certain modeling centers decided to do prescribed atmospheric CO2 and certain modeling centers decided to do prescribed emissions, and I think the prescribed emissions are preferred for this particular situation. So I don't know how that fits into SMYLE.
F
So I think there's something to be learned from the prescribed runs too, and it's something I want to think about, but I just wanted to put in my two cents that that period of time is particularly interesting.
L
And then another thing that I should probably mention. I mean, it's not necessarily an ESP Working Group thing, it's on the Climate Variability and Change side, but with the CESM2 Large Ensemble (and Isla can chime in as well) we now have quite a few ensemble members, with the same model, of the all-but-one-forcing cases, so those are also with biogeochemistry. That can be of use, perhaps.
D
A
Yeah, so that's actually what I was going to mention. They would be using the SSP3-7.0 scenario, but John Fasullo had indicated interest in doing a sort of GFED sensitivity run parallel to SMYLE that would actually include the Australian wildfires. Indications are that's a much bigger signal than the COVID emissions reduction, which we'd be missing in the SSP, but it would be really interesting to see how the forecasts change if you add in these large aerosols from the wildfires.
L
I
A
There is... you know, we could use the Working Group resources to support that kind of work. We just need someone to lead it.
A
F
A
Can we start to do some attribution of particular events on the multi-year time scale, in addition to S2S?
L
Yeah, I didn't join this working meeting until the last hour, but another opportunity, if people want to propose experiments and there is no computer time, is the upcoming ASD call that we can pursue. Those can be up to, whatever, 50, 60, 80 million core-hours on the new machine, and they can be essentially dedicated to particular experiments. But somebody has to lead those, and everything needs to be...
L
B
All right, well, it's a few minutes after noon. What do you think, Steve? Should we wrap it up?
A
Yeah, we can leave the Zoom call on for anyone who wants to chitchat, but if there aren't any...
A
F
A
So, Sanjiv, just get in touch, you know, if you can. I don't know.