Description
The 27th Annual CESM Workshop will be a virtual event. The Workshop will begin with a full-day schedule on 13 June 2022, with presentations on the state of the CESM, presentations by the award recipients, and two presentations from our invited speakers in the morning, followed by 15-minute highlight and progress presentations from each of the CESM Working Groups (WGs) in the afternoon.
To learn more:
https://www.cesm.ucar.edu/events/workshops/2022/
B
All right, well, it looks like I see 40 participants showing up on my Zoom here, so I think it's about time for us to get started. I'm Kathy Pegion, one of the co-chairs of the Earth System Prediction Working Group; the other co-chairs are Yaga Richter and Steve Yeager. We're going to start off today with just a brief introduction, reminding you of NCAR's respectful dialogue guidelines and kind of rules of engagement here.
B
So I'll remind you that, as we have our presentations and discussion, please offer constructive feedback, share the air, acknowledge teamwork, encourage innovation, show appreciation, and consider new ideas. I think this is really just in the spirit of a working group meeting, so please engage in discussion and, you know, let's discuss some really exciting new ideas and things we can do in the ESPWG.
B
All right, so it's already almost time for our first presentation, so I really just want to welcome you to our meeting and thank you for coming. I know we may all be getting tired of virtual meetings, but it really is a great opportunity for all of us to come together no matter where we are, and that really is the wonderful benefit of this virtual platform.
B
So thank you all for joining us from wherever you're sitting. Yaga, Steve, is there anything else you want me to remember to say before we get started with our presentations?
C
B
Yes, thank you. Can you put that link in the chat window as a reminder?
B
Yeah, so this afternoon, in our discussion, we'll be talking about future simulations and ideas for future simulations for the ESPWG, and we really want this to be a community effort, with community ideas that we can coalesce around. So if you have ideas around that, please enter them into the Google document; that will help to facilitate discussion this afternoon. Steve just put that in the chat window, so please click on it and enter your ideas for discussion.
B
All right, well then, we will move forward with our presentations. I'll just remind you that you can put questions or comments in the chat window, or you can raise your hand and I'll call on you.
D
Awesome, can you all see?
B
We can see it, perfect; now it's in full-screen mode. Thank you.
D
Awesome, so yeah, thank you so much for having me speak today. I'm going to present some of the work that I've been doing in my PhD with Dr. Elizabeth Barnes here at Colorado State University. We've been looking at predicting sea surface temperatures in the CESM2 pre-industrial control with artificial neural networks, and specifically at this idea of looking for state-dependent predictability in our predictions.
D
So I've been looking at decadal prediction, which is how the climate evolves around two to ten years in the future, so we're at this sort of area in this figure. We're looking at predictability associated with the ocean, so patterns of ocean variability such as AMV, the Atlantic multidecadal variability, or Pacific variability in the form of Pacific decadal variability or the Interdecadal Pacific Oscillation. It's also worth noting that external forcing can provide predictability as well.
D
So rising temperatures due to greenhouse gas emissions, or volcanic eruptions, can also provide predictability, and these also interact with our internal variability in the ocean.
D
So as we are thinking about predicting on decadal time scales, recent studies have looked at how predictability can be influenced by the initial state of the system: some initial states are more predictable than others. Here I'm showing a figure from this paper by Borchert et al. from 2018, who used MPI-ESM initialized hindcasts to predict SSTs in the North Atlantic subpolar gyre, and they found that their SST predictions were more skillful following periods of anomalously strong ocean heat transport into the gyre region.
D
So this figure has lead year on the x-axis and anomaly correlation coefficients, so basically forecast skill, on the y-axis, and these red forecasts here are the strong-ocean-heat-transport forecasts, showing that they were more skillful. And then this study here, using the CESM DPLE, the Decadal Prediction Large Ensemble, by Kristen Citadel, showed that information retained by initialized hindcasts in the North Atlantic corresponded to the state of the AMO.
D
Down here we have the sort of information retained by the hindcast, and we can see that it corresponds fairly well to these high-amplitude phases of the AMO.
D
So these studies both demonstrate that there is state-dependent predictability on decadal time scales, and studies of predictability across time scales, so not just decadal but also shorter time scales, have encouraged this focus on state-dependent predictability. Here we're going to present a data-driven approach to identifying state-dependent predictability, and we're going to use the CESM2 pre-industrial control to do it. We are specifically interested in state-dependent predictability of internal variability.
D
And we're going to use an artificial neural network for this problem. An artificial neural network is a collection of nodes and connections which map some input to an output. So we can think of having our inputs here on the left, and each of the points on these maps is connected to our artificial neural network, which is basically a collection of nonlinear functions, and then we have these outputs here.
D
So in our particular problem, we are inputting three maps of ocean heat content (OHC), integrated from the surface to 100 meters, the surface to 300 meters, and the surface to 700 meters, and these are averaged over the previous five years; these are our inputs to our neural network.
D
The output is the average anomaly in years one to five. So just to really go over this: we're inputting ocean heat content averaged over the previous five years, and then we're outputting a prediction of the SST anomaly in one to five years at some point in the ocean, and I want to specifically dig into what we're doing here on our output layer.
D
So this output is a prediction of SST in one to five years, and for this example we're going to do a prediction here in the North Atlantic subpolar gyre. We train our artificial neural network to predict what we call a mu, an anomaly, but also a sigma, an uncertainty range, so that the ANN has two outputs: the SST anomaly and an associated uncertainty. During the training process the ANN is optimized so that it assigns a lower uncertainty value to initial states that are more likely to result in lower error.
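As a rough sketch of how such a two-output network and its loss could be wired up (the layer sizes, activation, and the use of PyTorch here are illustrative assumptions, not the configuration from the talk), one common choice is to train mu and sigma jointly with a Gaussian negative log-likelihood:

```python
import torch
import torch.nn as nn

class TwoHeadANN(nn.Module):
    """Map a flattened stack of OHC input maps to (mu, sigma) for one SST target."""
    def __init__(self, n_inputs, n_hidden=16):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(n_inputs, n_hidden), nn.Tanh(),
            nn.Linear(n_hidden, 2),          # two outputs: mu and log(sigma)
        )

    def forward(self, x):
        out = self.body(x)
        mu, log_sigma = out[:, 0], out[:, 1]
        return mu, torch.exp(log_sigma)      # exp keeps sigma strictly positive

def gaussian_nll(mu, sigma, y):
    # Negative log-likelihood of y under N(mu, sigma**2): the error term is
    # scaled by the predicted sigma, so the only way to lower the loss on hard
    # samples is to report a larger sigma, which is what lets the network flag
    # the more predictable initial states with a small sigma.
    return (torch.log(sigma) + 0.5 * ((y - mu) / sigma) ** 2).mean()
```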
D
So in this way the neural network is able to learn more predictable states in the data with its uncertainty value. In this figure here I'm showing again this ANN trained to predict SST in the North Atlantic, and on the y-axis here we have the mean absolute error in the predictions. For a neural network which has properly learned predictability in the data, we should find that predicted uncertainty tracks prediction error.
D
So, for example, here we are looking at the 100 percent most confident predictions, so this is just all of the predictions in our testing set, and the error for these samples is 0.52. This is the error we would, you know, quote for our neural network: across all of our testing data it has a mean absolute error of about 0.52.
D
We could say, you know, we're only going to keep the 20 percent most confident samples, and now we find that our error has dropped to about 0.42. So this is showing that the neural network has identified which samples are more predictable, and it also achieves lower error on those samples.
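In code, this discard-by-confidence test is just a sort on the predicted sigmas; a minimal sketch (the function and variable names are mine, not from the talk):

```python
import numpy as np

def mae_most_confident(mu, sigma, y, keep_frac=0.2):
    """Mean absolute error over the keep_frac most confident
    (lowest predicted sigma) test samples."""
    order = np.argsort(sigma)                     # most confident first
    n_keep = max(1, int(keep_frac * len(sigma)))
    kept = order[:n_keep]
    return np.mean(np.abs(mu[kept] - y[kept]))
```

Calling this with keep_frac=1.0 gives the all-samples error (the 0.52 above), and shrinking keep_frac toward 0.2 traces out the confidence curve down to the 0.42.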
D
And so what we have done in this study is, I just showed one neural network, which is somewhere up here in the North Atlantic, but we actually trained a neural network at every point in the ocean. This color scale is showing mean absolute error, and brighter colors indicate more skill.
D
These black regions are where the neural network did not outperform climatology, but we do have regions where the neural network is finding some skill, and we also find that this spatial spread of skill is in regions where the ocean is considered more predictable. So, for example, the North Atlantic shows good skill, as does the North Pacific, and then there's some skill across these deeper-mixed-layer areas in the Southern Ocean.
D
But then what we can do, as I did before, is look at only the 20 percent most confident predictions at each grid point. So again we take our testing set, we ask which are the 20 percent lowest uncertainty values, and we plot the error for just those, and we find that in many places our skill increases. A nice example is here off the west coast of Africa: we have little skill across our entire testing set, but when we keep only the most confident predictions, skill emerges there as well.
D
So we can composite the initial states that led to confident predictions in the North Atlantic, and here we find this anomalously positive ocean heat content, or sea surface temperature, sort of in the subtropical to mid-latitude North Atlantic. Then we can look at the composite one to five years later, so at the time of prediction, and what do we see here?
D
We find that this blob of heat has moved northward into the gyre region, and so this is suggesting that there has been this northward transport of heat and that has led to the neural network's increased confidence. So this initial state is one that the neural network has identified as more predictable.
D
We can also link it to large-scale ocean variability to sort of get a general picture of how the ocean is behaving at the time of input. So here I'm showing histograms: on the left is the AMV, Atlantic multidecadal variability, index and on the right the IPO, Interdecadal Pacific Oscillation, index. The shading is the distribution of these indices across the entire testing set, and the outline is where we've chosen only the 20 percent most confident samples corresponding to this initial state.
D
And we find that for this state we have a slightly positive AMV, which means that we have, you know, a positive SST anomaly in the North Atlantic.
D
But, interestingly, we also find that there is this leftward shift of our distribution in the IPO index, and we can see that here in this SST input figure as well, suggesting that these skillful initial states coincided with a negative phase of the IPO, which may suggest some sort of inter-basin teleconnection that led to the ANN's skillful predictions.
D
And then we can play this game again in the North Pacific Ocean. So I grab this neural network at this point here, and we're doing the same thing: I'm going to look at the composite SST map at input, and we find that it is showing a fairly strong positive IPO/PDV pattern at the input time.
D
So it's saying that this positive pattern is a skillful initial state for predicting SSTs in this region, and then again we can look at the composite SST map one to five years later, and this shows persistence in the IPO. We have again this IPO pattern, particularly these negative anomalies in the Kuroshio Extension.
D
So it seems that the ANN is confident about predicting positive-to-positive persistence in the IPO. An interesting result that we had was that the neural network appeared to be much more confident about positive-to-positive persistence than negative-to-negative persistence, which suggests some sort of nonlinearity in the way that the neural network was viewing persistence in the IPO.
D
So I have some future work that I'm interested in with this study. First of all, I think we need to think about state-dependent predictability in a hindcast study; it's already been done a little bit with the CESM DPLE, but I'm interested in seeing, you know, if we're looking at forecasts that have higher skill, what initial states can we back out, and then what can we learn about our models in this paradigm?
D
You know, if we've got a state that performs better against observations, does this mean that our model is particularly good at recreating this process, or does it mean that this was a more predictable process that had more memory?
D
Also, in this study that I've shown, we do predictability due to internal variability; how does this perform in forced simulations, where we have external forcing, and/or in observations? Do we still find that we are having this IPO teleconnection? And then, as I just hinted at there:
D
Is there a link between the state of the Pacific Ocean and North Atlantic predictability outside of this control run? The mechanisms behind North Atlantic variability are still pretty highly debated, and I think this is an interesting lens to view this through. And so I'm going to finish; I'll just leave up these key points. This work is currently in review at GRL, but we have a preprint available online for your perusal.
E
That's really interesting, Emily. I was curious: you're training on different layers of ocean heat content, right? Is there any way you can narrow down kind of which layer is producing most of it? I assume it's the upper-ocean part, but it could be the Atlantic, it could be different, it could be deeper. I mean, I suppose you could retrain it based on different inputs from the ocean heat content layers, but have you thought about that, trying to narrow it down a little bit as to what's producing the predictability?
D
Yeah, so in the paper we do a little bit of neural network explainability, which is sort of attributing different, you know, blobs of heat in the ocean and how they corresponded to predictable outputs, and from what I can recall, for the North Atlantic:
D
We found that it was in the upper ocean where a lot of the skill came from. But because we were doing this sort of study, where we wanted to use the same inputs and do every point in the ocean, we wanted to have all three layers to make sure that, you know, this would work everywhere, if that makes sense.
F
Yeah, so I had a question about the observations you might use, and whether using a field like sea level, which is observed from satellite very well, and given that upper ocean heat content seems to be so important, would be a very good indicator of that. Have you thought about using sea level from satellite?
D
Not sea level in particular, but I should have actually put this in future work. I think this is really, really important, and I'm starting to think about grounding this in observable quantities, because OHC is such a difficult thing to measure in observations.
B
All right, thank you so much, Emily, very interesting talk and great questions. There's also a question in the chat window, if you'll follow up when you get a chance. Absolutely. All right, okay, our next talk is subpolar North Atlantic cold extremes in CESM initialized predictions, from Elizabeth Maroon.
G
Great, things are working, that's encouraging. So hello from the University of Wisconsin. This is a continuation of something that I think I last presented here, to the ESP Working Group, maybe a year or two ago. We've been looking at this challenging 2015 cold anomaly prediction in the subpolar North Atlantic and trying to take a forensic approach to figure out what went wrong in the DPLE.
G
So this talk is looking more comparatively across a bunch of other decadal prediction systems, from all the collaborators listed here, and I want to thank them for being generous with their feedback; their providing the output from their systems has been starting to become useful for figuring out what's going on differently. The motivation is that extreme oceanic events, such as marine heat waves, here on the left, or these marine cold spells or cold blobs, are the most impactful on ecosystems and society.
G
So being able to predict these on monthly or longer time scales would be valuable; stick around in about, you know, 30 minutes for Evan Meeker's talk, where he'll talk about the predictability of marine heat waves in SMYLE. But the focus of my talk is the 2015 cold blob, and this isn't just, you know, a DPLE issue; it is a societally relevant issue, because this event contributed to the heat wave that summer in Europe.
G
So what was the issue in 2015? There was a cold anomaly in the North Atlantic. I mean, it was near record, and you can see that in the time series below, but something else that made it quite prominent is just its large spatial extent, which you can see, if you can see my arrow.
G
So the CESM Decadal Prediction Large Ensemble totally failed with this event, despite, you know, otherwise having really high skill in subpolar North Atlantic SST. I mean, this is where the DPLE really shines; this is where it adds a lot of initial-value predictability over the uninitialized CESM Large Ensemble. On the top here we have summertime 2015 from observations, and you can see the cold blob there; the box is what I'll be using for subpolar North Atlantic SST going forward. The bottom is the nine-month prediction from the CESM1 DPLE.
G
You can see that while there is still some indication that there had been some sort of cold anomaly there closer to initialization, if anything, now it's warm. None of the ensemble members captured this; the ensemble mean is warm.
G
It was a real miss, and this is just for a nine-month prediction; if you use earlier hindcasts, so two-, three-, four-year predictions, they're even worse. So the goal is to try to figure out what went wrong in the DPLE, whether there are things we can figure out for how to improve it, and we're going to be looking at this earlier six-to-nine-month prediction, because this is where things seem to go wrong earliest.
G
So let's talk a little bit about the characteristics of this event, so we know the sort of things that we might be looking for in terms of predictability.
G
You can see here in observations an already cold anomaly that gradually becomes more intense and larger as you go through winter and spring of 2015. It really peaks in extent, quite large, during 2015's summer, right here. We've got Hovmöllers of upper ocean temperature and salinity, and this event wasn't just a cold blob at the surface; it really extended to depth.
G
So here you can see this cold blob, especially starting January of 2014, extends down to near a thousand meters depth, and it was part of this more long-lasting fresh blob that was also present in the subpolar North Atlantic, which you can read about in Holliday et al. 2020.
G
So, in terms of the atmospheric forcing that was really key for amplifying and sustaining that initial cold anomaly that was there in fall 2014, there were two periods worth looking at. The first: there was about a 0.2 degree Celsius drop from January to April of 2015. This was associated with a very positive winter and spring NAO; it was very persistent and caused a lot of cooling, but because of the deep winter mixed layer depths, this cooling only translated to a little bit of surface temperature change. It was important, though, for sustaining, and then for the later intensification of, the cold anomaly. The big drop, however, came during spring and summer of 2015: from May to July there was a drop of 0.6 degrees Celsius. This was associated with a relatively modest cooling flux, about 10 watts per meter squared, but with shallower summer mixed layer depths it didn't take much cooling to really amplify that strong surface cooling.
G
So, in terms of whether these different atmospheric forcings are predictable: there's some indication that first-winter and second-winter NAO may be predictable in some systems, particularly the DePreSys3 system, but I'd argue that nine months in advance you're not going to be able to predict May and June's modest surface heat fluxes. So for this latter period, what we're going to be hoping for in the DPLE is that it has sufficient spread to capture this modest cooling and, hopefully, no bias.
G
So that means, based on the event we've got here, going through this, there are a couple of features that we're thinking are probably needed for a skillful 2015 prediction. You need a reasonable initialization, so you have to have the cold blob in the right place; hopefully you also have the salinity anomalies, both at the surface and at depth, somewhat accurate.
G
You might hope that you have a positive wintertime NAO prediction; some systems show that, most don't. So hopefully you encompass that very strong positive NAO within your spread, even if you can't predict its signal.
G
Finally, by the time you get to May and June, you'd hope that you'd have sufficient spread and little bias in the surface heat fluxes that were necessary for that amplification of the cooling. And then, you know, maybe finally, the fourth ingredient would be that you want a model that represents North Atlantic climate processes just well enough.
G
So maybe it's only the CESM1 DPLE that's very much struggled with this event; what about other systems? Thanks to all of our Blue-Action collaborators, we've collected 12 different hindcasts from eight different systems.
G
These are all initialized in late 2014 or early 2015. So we've got the DPLE and SMYLE, and I'm using three initializations from SMYLE and examining more; we've got DePreSys3 and DePreSys4 from the Met Office, EC-Earth, two initializations from IPSL, and three initializations from NorCPM, one with their CMIP5 configuration and two with their CMIP6 configuration. As you can see from this table, there are a bunch of different initialization techniques, ensemble sizes, numbers of years present, and resolutions.
G
These are very different systems, so hopefully there will be something that stands out depending on how these systems do. First, here I've got two metrics for the summer cold blob skill: on the x-axis I've got cold blob intensity, so just in that subpolar North Atlantic box, how cold was it, and on the y-axis I've got cold blob extent.
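A minimal sketch of how two such metrics could be computed from a gridded SST anomaly field (the box bounds, the -0.5 degree extent threshold, and the function name are illustrative assumptions; the talk does not spell out its exact definitions):

```python
import numpy as np

def cold_blob_metrics(sst_anom, lat, lon, box=(45, 60, -45, -15), thresh=-0.5):
    """Intensity: area-weighted mean SST anomaly (deg C) inside `box`.
    Extent: fraction of the box area colder than `thresh` (deg C).
    box = (lat_min, lat_max, lon_min, lon_max); sst_anom is (nlat, nlon)."""
    lat2d, lon2d = np.meshgrid(lat, lon, indexing="ij")
    in_box = ((lat2d >= box[0]) & (lat2d <= box[1]) &
              (lon2d >= box[2]) & (lon2d <= box[3]))
    w = np.cos(np.deg2rad(lat2d)) * in_box      # area weights, zero outside box
    intensity = np.sum(w * sst_anom) / np.sum(w)
    extent = np.sum(w * (sst_anom < thresh)) / np.sum(w)
    return intensity, extent
```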
G
It's probably not too surprising to see that there's some, you know, relationship between extent and intensity. Up here in the left-hand corner are our observations, and down here are all of our CESM family of hindcasts: the DPLE initialized in November 2014 and then the three SMYLE initializations that are prior to summertime 2015.
G
So you can see that the November initialization from SMYLE is actually an improvement on the DPLE, which had a warm cold blob (well, that doesn't even make sense) had a warm anomaly where you should have cold. So already there's been improvement, and then these other two dots are, as you might imagine, February and May: as you get closer to the event, things are getting colder, though there is quite a gap between this May SMYLE initialization and the observed 2015 summer, even only two months in advance.
G
So here are the other families of systems. You'll notice there are a few that are kind of like the DPLE, warm when they should be cold, and then there are some on the other side. None of these ensemble-mean predictions really get you close to what the observations were.
G
One thing I do want to point out, though, is that all three of the NorCPM initializations that we have, with, I think, two different initialization techniques and two different model configurations, are all negative. So that's kind of encouraging: there's something in that configuration that might be working better with this event. We can take a look next at all of the ensemble members to see if this event was encompassed in the spread.
So now I've pulled apart all 12 of these hindcasts by intensity and extent. The dark dots are ensemble means; the crosses are each ensemble member. You'll notice that some of these May SMYLE simulations get pretty darn close to what was observed; there are also a couple of the DePreSys ensemble members getting close to what was observed, and the NorCPM ensemble members are also starting to get close in some cases.
G
So some of the ensemble members do have a cold blob similar to what was observed, even if some of these ensemble means, sorry, are too warm. I think the biggest issue that we've noticed with the DPLE going forward is that, if this is a time series of the DPLE sea surface temperature, it has these upward trends, also present in SMYLE, almost immediately from initialization.
G
So maybe this is the null hypothesis for a prediction system: if you have a cold anomaly, it's going to want to damp back towards climatology, or if you have a warm anomaly, it can damp back towards climatology, and maybe this is just what we should expect. But it is present in all of the CESM initializations for this event; however, there are a couple of the other prediction systems that manage to sustain the cold anomaly.
G
So I took just a quick look at upper ocean temperature; I've got all four systems again as time series. In general, the CESM systems are kind of missing, within their spread, the upper ocean heat content in the summertime. And then something I also noticed that's a little interesting is that the spread in the upper ocean salinity is much narrower in the CESM hindcasts than in the other ones, and this could be suggestive of the initialization for CESM being very different. I'd also like to point out that salinity is pretty well predicted in SMYLE relative to CESM1.
G
So, a few things, as I'm running out of time, very quickly: a few hindcast members were able to predict a springtime NAO as positive, as observed; probably we shouldn't expect that, so that ingredient is missing.
G
There's also a tendency for the May and June 2015 surface heat fluxes to be more positive, to have a positive bias. The black line is the observed value in this histogram of surface heat flux anomaly in both May and June; most of the ensemble members, for the entire multi-model hindcast, are skewed a bit positive. So that's something I think worth looking into a bit more.
G
So, in summary, I think the most notable issue going forward that needs figuring out is why the DPLE and SMYLE have this warming-up during this event. SMYLE also has this, but NorCPM, and to some extent DePreSys, does not. SMYLE shows a lot of improvement in the cold blob prediction over the DPLE; upper ocean salinity has been notably improved. Something interesting to note, though, is that the DPLE and SMYLE tend to have narrower spread in salinity compared to the other hindcast systems, likely due to how they're initialized. In general, the NAO that winter was not really predicted by any system; spread in the surface heat fluxes encompassed the observed modest summer cooling, but may have a positive bias. And, as I finally finish this very long-lingering study up, there are a couple of remaining possibilities to track down across these multiple models.
G
One is a little closer look at whether there might be any biases in mixing or stratification, and then also taking a little closer look at what kind of drift there is, comparatively, in each system. So thank you for your time, and I'm happy to take a couple of quick questions.
B
Thank you, Elizabeth. I think we have time for maybe one quick question, or people can put questions in the chat window so that Elizabeth can respond to those as well. I'm not seeing any raised hands.
I
All right, hopefully you can all see my screen and that the slide transitions happen as they have in the past, so I'll get started. Hi everyone, my name's Zach Labe. I'm currently a postdoc that I just started at GFDL and Princeton University, and the work that I'll be presenting today was done at Colorado State University, working with Dr. Elizabeth Barnes. Something that I really like to do in my free time is make visualizations of climate data, so I thought I'd start off with this graph; probably a lot of you recognize it, because it's one of the most familiar graphs of climate change data. It is the global mean surface temperature record by year, and, like many climate variables, even though it is globally averaged, you still see lots of interannual variability and decadal variability, or variability on longer time scales.
I
Of course, this is not a new topic by any means. In fact, there was a whole part of the IPCC AR5 report devoted to this topic, and there have been plenty of studies trying to evaluate what really was the cause of that early-2000s slowdown or hiatus period, and then trying to sort of assess, you know, not only the physical mechanisms in the climate system but also the observational data and the record for those. And it was not only interesting from a scientific perspective.
I
So I want to point out, as I've just shown some examples, there have been hundreds of papers that have looked at this early-2000s slowdown period, and here's just another graph, using multiple reanalyses and observational station-based data sets, all sort of showing this period again in the early 2000s.
I
But when you look, you know, it overall fits within our understanding of internal variability in the climate system, and I also want to point out that, using some of the most recent global mean surface temperature data sets, there actually was still an upward trend over this period.
I
So where I'm coming at this approach, and the reason to revisit this period, was thinking about it from a predictability standpoint, specifically through machine learning. I want to point out that while we look at predictability, we also apply some explainability methods to try to understand, you know, if our machine learning model is able to predict something like this type of slowdown, where is it looking? And, of course, many of these studies have proposed all sorts of different possible mechanisms for the early-2000s slowdown.
I
Some people have even proposed that it's a statistical construct, you know, of the different data sets used to measure the global mean surface temperature, but there are also other types of internal variability and external forcing in the climate system. Specifically, things like the Interdecadal Pacific Oscillation, or IPO, and things like aerosol forcing or solar forcing are all possible proposed mechanisms in those hundreds of papers that have evaluated this, including papers by many of the people attending this call and the CESM workshop.
I
From that record we then calculate 10-year moving trends. Again, this is a somewhat arbitrary definition that we're using here, 10 years, and it could be adjusted for whatever purpose, or for however you want to define a slowdown. We can then graph their slopes, and again you can see, from 1990 to 2000, there's of course this interannual variability in the trend, and then we can define some sort of threshold for when the slope of those linear trends falls below a certain value.
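A small sketch of that bookkeeping (the synthetic series and the zero-slope threshold are stand-ins; the talk leaves the exact threshold unspecified):

```python
import numpy as np

def moving_trend_slopes(gmst, window=10):
    """Slope (deg C per year) of a least-squares line over each
    `window`-year span; slopes[i] covers years i .. i + window - 1."""
    t = np.arange(window)
    return np.array([np.polyfit(t, gmst[i:i + window], 1)[0]
                     for i in range(len(gmst) - window + 1)])

# Example with a synthetic warming series plus noise:
rng = np.random.default_rng(0)
gmst = 0.02 * np.arange(100) + 0.1 * rng.standard_normal(100)
slopes = moving_trend_slopes(gmst)
slowdown_starts = np.where(slopes < 0.0)[0]  # assumed threshold: flat or cooling
```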
I
We also, importantly, want to try to evaluate our framework using observations, so here I'm just showing one reanalysis data set, ERA5, showing again that we can calculate the global mean surface temperature and then the ten-year linear trends, and plot their slope there on the bottom; the dashed line indicates some sort of threshold for a slowdown event. How we're defining the slowdown events is by the year each begins, and I'll get into that in more detail.
I
And then it's going to go through two hidden layers; that's where all this sort of black-box type idea goes. It's a fully connected neural network, and then it's going to output something that's a yes or no, so this is a binary classification problem: it's going to say, in the next 10 years, will the 10-year trend be a slowdown period based on our threshold, yes or no? And then we're going to apply an explainability method to really understand.
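For concreteness, a fully connected two-hidden-layer classifier of this shape could look like the following minimal PyTorch sketch (the hidden-layer widths, the input size, and the names are my assumptions; the talk does not give the exact architecture):

```python
import torch.nn as nn

n_gridpoints = 72 * 144          # assumed size of one flattened OHC anomaly map

# Two hidden layers, one logit out: "will the next 10-year trend be a slowdown?"
slowdown_classifier = nn.Sequential(
    nn.Linear(n_gridpoints, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 1),            # apply a sigmoid to turn the logit into P(slowdown)
)
loss_fn = nn.BCEWithLogitsLoss() # the standard loss for binary classification
```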
I
Okay, so if the neural network can make accurate predictions of these slowdowns, where is it looking in these global maps of upper ocean heat content? To provide a quick proof of concept for how this explainability method works, let's think about something we already know. Imagine you set up a simple neural network that takes in maps of sea surface temperatures, and we ask it whether or not the current time period is observing an El Niño or a La Niña, another binary classification problem.
I
So then we can use this visualization method; specifically, we're going to be using layer-wise relevance propagation, but there are many methods for explainability. In the upper map here you can see this global map: the output of the explainability method is a heat map, so the brighter colors indicate that a region is more important for the neural network to make its prediction, in this example whether it's an El Niño or a La Niña. And then there's the bottom map.
I
You can compare that with a composite of the actual sea surface temperatures, and you can see that they nicely align: the explainability method is looking in the correct region to make its correct predictions. As I've already mentioned, there are many methods for explainability in machine learning, but they're not perfect; you know, they carry their own user interpretation, and it's often very valuable in machine learning to compare many of these methods. Again, they can be very useful for understanding what's going on in that black box and potentially learning new climate science along the way. So I'll return again just to emphasize what our neural network is: we input maps of upper ocean heat content from different ensemble members of the CESM2 Large Ensemble, we ask the question whether the next 10 years will be a slowdown event, yes or no, and then we look at the explainability methods.
I
So how well does the network do? What I'm providing here is actually six different ensemble members, from 1990 to the end of the 21st century. These are six ensemble members from testing data; what that means is that the neural network trains on a set of ensemble members, but we leave these six aside, so the neural network hasn't already seen them. Essentially, we're inputting data that it hasn't seen before and then asking our question of whether it's a slowdown. Wrong predictions are in red, and the blue predictions are the correct ones.
I
So now we can apply the explainability method to understand, when it's making correct predictions, where is it looking. These are heat maps from that method, called layer-wise relevance propagation; in the upper left, brighter colors indicate that that region of ocean heat content was more important for the correct prediction of a slowdown.
I
We can then compare these, on the right, with actual composites of what the inputs look like for upper ocean heat content. The yellow contours, again, are those areas that the explainability method is showing were important for the neural network to make its decision. You can see that they often align with sort of this pattern in the tropical Pacific, but there are also other regions of the ocean, such as the Indian Ocean and Southern Ocean, that were clearly important for the neural network to make its predictions.
I
So what about observations? We know that there is some skill, actually, in this setup, and again we're just inputting maps of ocean heat content anomalies; we remove the forced trend from those maps. So now we're curious: does this work on observations? We can test this on the so-called hiatus period.
I
Also, what's interesting is, sort of on the right-hand side of this upper panel, the dashed line is even higher, indicating that the neural networks are essentially even more confident that a slowdown event for this sort of ten-year period has actually begun, right around the 2016 super El Niño and then going forward into the next 10 years.
I
So that's interesting, because we now have a few extra years since 2016 to compare with the global mean surface temperature record, and it's actually been somewhat flat, as I pointed out earlier in the talk, so it'll be interesting to see how this performs going forward. But one of the takeaways here is that, clearly, our neural networks, across many of the ensembles of networks, correctly predicted the early-2000s slowdown, and they're even more confident that we are currently in one right now. We can also return to that explainability method and compare, using observations. On the left-hand side is a composite map; the shaded colors are ocean heat content anomalies.
I
The yellow contours outline, from the explainability methods, the regions of those ocean heat content anomalies that were important for the neural network to make its prediction in observations, and then on the right-hand side we sort of ask for the future prediction there. We can see many of the familiar patterns we found from analyzing the climate model data also appearing in observations when it makes its prediction. And I'll just wrap up here, giving a broad takeaway of this.
I
Although we presented sort of a simple task, just thinking about the global mean surface temperature record, we think there are a lot of exciting opportunities for using neural networks, particularly coupled with explainability methods, for things like decadal prediction. One really exciting part is that, by using the increasing and growing number of large ensemble simulations, you have plenty of data to play with for training and testing, which is really excellent for these types of methods. So my email's there; feel free to reach out, and I'm happy to take any questions. Thank you.
B
All right, thank you, Zach. We are running right at our time limit, but there is a question in the chat window, and if others also have questions, please do put them in the chat window.
B
We'll move on to our next talk, the influence of biomass emissions on ENSO and its teleconnections in CESM2, and our speaker is John Fasullo. Go ahead, John.
F
How does that look? Good, good. Okay, thanks, Kathy. So this is an extension of work that we did last year and published on last summer, looking at the climate response. Initially, the work was to look at the climate response to the COVID-19 emissions reductions and also the Australian wildfires, but we found the really interesting part of the story was the Australian wildfire response, and so this is delving into some questions that were raised in that paper.
F
My co-authors on the work are Nan Rosenbloom, who has been a powerhouse in producing these simulations, and Rebecca Buchholz, who has done a lot with the wildfire emissions and their prescriptions in CESM2.
F
So, as we go into what might be our third La Niña year in a row, it's interesting to provide some perspective on where we've been; it may be a distant memory.
F
But at the start of this event, it actually wasn't clear at all that we were going into a La Niña event, and so I highlight here, for example, the NOAA discussion from June of 2020, and, you know, the MEI of what we know happened, bottom right here. So this is right on the cusp of what might be a three-year La Niña event, and the prediction was that we would have ENSO-neutral conditions, actually, through the following year.
F
So it was not a very well forecast event; it was given about a 60 percent chance of ENSO-neutral. It ended up being a very strong and prolonged La Niña event, and part of that poor forecast may have come from the unique onset of the event; obviously this La Niña did not follow on the heels of an El Niño event.
F
But rather, you know, instead of having this typical evolution in the ocean, of Rossby waves hitting the western boundary and reflected Kelvin waves, upwelling Kelvin waves, coming across the Pacific, what happened was that the trade winds kind of intensified out of nowhere. And so a major question would be why: why did this happen in the absence of any Kelvin wave coming through, and also in the absence of major SST anomalies?
F
And I think that our simulations, which I'll show, actually provide an answer to this: there was an atmospheric connection between the Australian wildfires and conditions in the eastern Pacific that may have increased the chances, at least, of the onset of a La Niña. And just to drive this point home, this is a tweet from Matt England.
F
Last week Mike McPhaden had given a talk at the University of New South Wales in which he was looking at current ENSO conditions, and April 2022 SST anomalies in the central and eastern Pacific were at their coldest since 1950, and, of course, that's despite the underlying global warming signal. So a very unique event, a very strong event, and our understanding as to where it's actually come from has been pretty poor.
F
So, to what seems to be unrelated but, as I'll argue, is not: this is a paper we did on the Australian wildfires.
F
The bushfire season was the worst on record, so we have a real, exceptional event here, and I think that's important in the sense of providing context for this event. It may be that you don't need to have biomass emissions in a prediction system to get these effects most of the time, but during unique circumstances, perhaps they're important. In this season, over 60 million acres burned; for context, we have around 10 million in North America during a large season.
F
We found that, although the clear-sky effects were actually pretty marginal, you know, on the order of a tenth of a watt per meter squared, the all-sky effects were actually quite strong, two to three watts per meter squared through the Southern Hemisphere, and that actually resembled a major volcanic eruption.
F
So, just quickly, some of these features were, you know, just nakedly obvious in the observations. Top left, you can see aerosol optical depth for the globe, the Northern Hemisphere, and the Southern Hemisphere, and those maxima you see in December of 2019 and into 2020 are the Australian wildfires. So it was a maximum; these are averaged throughout the hemisphere, and these are maxima in the hemisphere for aerosol optical depth and cloud aerosol radiative effect, and that's to be expected with this huge emission of biomass into the Southern Hemisphere.
F
We could equally see it in the all-sky data. This figure on the bottom left looks at the contrast in the net flux between the Northern and Southern Hemispheres, and we reach an extreme value in December of 2019 as well; it's coincident with that MODIS peak. So there's the suggestion, at least, that the all-sky effects are quite significant, even in the hemispheric means, and that maximum was driven by Southern Hemisphere cooling in the observations. And then, lastly, on the right:
F
You can see how much noisier it gets when you look at regional features, but that highlighted region in the eastern Pacific is a region of enhanced albedo, and I'll argue that that's actually pretty important for the connection between the wildfires and ENSO. So, just quickly, the simulations that we're using are kind of a modified version of the SMYLE setup. I won't go through the SMYLE details.
F
I think people are familiar with it, but our focus here is on the August 1, 2019 simulations, which we increased from 20 to 30 members and also extended to 36 months in length, in addition to the default SMYLE setup. We've also created this SMYLE Australian-fire ensemble that's meant to match the SMYLE setup, so we have 30 members of that, extending for 36 months from August of 2019, and that's our real focus here.
F
Beyond what the SMYLE ensemble itself predicted, there was some cooling predicted by the SMYLE ensemble, but a significant addition to that when we included the Australian wildfire emissions, and the forecasts match better with the eventual SST anomaly in 2021 when we include the biomass emissions. Our goal here really is to try to get at what the mechanisms might be.
F
So, the biomass burden: on the left-hand side you see a progression of biomass burden anomalies for October, December, and February, and this gives you a sense, at least, of the evolution of the basic forcing and also its spatial extent. The emissions were greatest off the east coast of Australia, but really were quite high and detectable throughout the Southern Hemisphere, even in the Indian Ocean and Atlantic Ocean, and they reached their peak generally in December of 2019.
F
And that's actually in close agreement with what we see from MODIS, where we can make a fairly good comparison between the observed fields and what we simulate. That pulse is really pretty short-lived, and it's largely gone by March of 2020.
F
Associated with that pulse, when those aerosol burdens reach the subtropical stratocumulus, is this brightening of the cloud decks. So this is something that we've computed called cloudy-sky albedo; it's basically backing out the albedo of clouds from the all-sky and clear-sky fluxes, taking into account the cloud amount. What you find is that when those aerosols reach the stratocumulus in the eastern part of the basins, even in the Indian Ocean actually, you see a brightening of the cloud field.
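As a sketch of what backing out a cloudy-sky albedo could look like, if one assumes the all-sky albedo is a cloud-fraction-weighted mix of clear and cloudy albedos (the exact formulation used in the talk is not given, so treat this as one plausible reading):

```python
import numpy as np

def cloudy_sky_albedo(sw_up_all, sw_up_clr, sw_down, cld_frac, f_min=0.05):
    """Infer cloud albedo from all-sky and clear-sky upward shortwave,
    insolation, and cloud fraction, assuming
    alb_all = (1 - f) * alb_clr + f * alb_cld and solving for alb_cld."""
    alb_all = sw_up_all / sw_down
    alb_clr = sw_up_clr / sw_down
    f = np.clip(cld_frac, f_min, 1.0)   # guard against nearly clear scenes
    return (alb_all - (1.0 - f) * alb_clr) / f
```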
F
You can see a residual brightening really through the Southern Hemisphere, off of Australia over through to the eastern Pacific, and this is primarily low clouds that are brightening, because that's the vertical level at which the aerosols reside; they do not really extend in significant quantities to the upper troposphere. But these anomalies on the right actually last longer than the burdens themselves, and I'll argue that that's related to some feedbacks that probably occur in the eastern Pacific Ocean.
F
So, radiatively, these pack quite a punch, and, like I mentioned before, in the hemispheric mean we see anomalies of two to three watts per meter squared, but regionally the anomalies are actually quite a bit higher. In the eastern Pacific, in this area really highlighted here, the anomalies are 15 to 20 watts per meter squared, and they persist for a while.
F
So even after the aerosols are gone, these seem to trigger feedbacks between the stability of the lower troposphere and the SST cooling, so they stick around for quite a while and have broader environmental effects. On the left is the radiation and on the right are the SST anomalies, and, you know, some of the first regions where we see SST anomalies emerge in response to these biomass emissions are these stratocumulus decks: you can see them in the Indian Ocean.
F
You can see them in the Pacific Ocean, and the SST anomalies actually get quite intense, almost a degree. They are statistically significant; the stippled regions are the regions where it's not significant at greater than twice the standard error of the ensemble. And you can see, over time, the negative SST anomalies kind of emerge from this area where the clouds were really juiced by the biomass aerosols.
F
Now that has an impact on the boundary layer moist static energy as well, and I'll show you the spatial structure of this in a bit, but this is the evolution of boundary layer moist static energy at that circled dot in the east Pacific. As soon as the aerosols arrive, basically, you can see a pretty drastic drop, 2,000 joules per kilogram of boundary layer moist static energy, and that continues for quite some time, and the argument I'm making is that this is important for ENSO.
F
This region is basically upstream, in the trade winds, of the deep tropics, so these anomalies get injected into the deep tropics and have an effect on both the ITCZ and also the MJO, and I think those can influence the onset of ENSO. Just to give you a sense of the spatial structure, the left-hand panel here is the near-surface specific humidity.
F
That's strongly tied to the moist static energy, and you can see the anomalies begin to emerge in this eastern Pacific region, but then they are advected westward and eventually into the deep tropics, so by August of 2020 you really see this whole span of anomalies that began in the southeast Pacific but was carried along by the trade winds after that, and no doubt has begun to feed back onto the Walker circulation, the MJO, and the ocean by August.
F
But the initial indication is that this region in the southeast Pacific is really critical and kind of the source region for these anomalies. On the right-hand side, what you see is CAPE, the convective available potential energy, for the ITCZ and other deep convective phenomena like the MJO. Initially there are very few anomalies, but you can make a connection between the anomalies as they propagate in low-level humidity and moist static energy on the left and these CAPE anomalies on the right, and eventually you can see them propagate all the way over to the warm pool. And, like I say, these are going to, in effect, displace the ITCZ northward, which we see; I don't have time to show it, but the rainfall belts go to the north of these negative anomalies in CAPE. And then also, one would imagine:
F
We haven't actually looked at high-frequency data yet to nail down the MJO behavior, but you can imagine that that's going to curtail the MJO propagation and the upwelling Kelvin waves associated with it. So, just in summary, we see this connection between the wildfires and aerosol burdens kind of throughout the Southern Hemisphere, and when those aerosol burdens impinge on the stratocumulus decks, it increases their albedo.
F
That may explain why the 2021 La Niña event was so poorly forecast, because it wasn't kicked off by waves in the ocean, and then perhaps why it was so long-lasting and why it continues to this day. And I'll just stop there. Thanks.
B
All right, thank you, John. Any questions for John, either by raising your hand or in the chat window?
E
Yeah, John, it's very interesting. So it is an interesting chain of connections across the Southern Hemisphere subtropics, up into the stratocumulus, and then kind of going across the tropical Pacific. And I presume then you start invoking changes in the Walker circulation, right, which would then intensify the La Niña? And I think you're arguing that it would have been a weak La Niña anyway, and this just made it stronger.
F
Yeah, so I've highlighted it better in my text than I've said it, but one of the challenges here is that this is, you know, the plausible connection of events, but certainly it's difficult to rule out, and even to weigh, the relative contributions of the various components. In one respect, the SST cooling in the eastern Pacific may directly be weakening the Walker circulation, and that may be the root driver of it, with these other anomalies being secondary. But yeah.
F
I think there are at least two effects at play, and that is the direct cooling in the eastern Pacific and the moist static energy anomalies that go over. And yeah, there is some component that is likely predictable from initial conditions alone; I don't want to downplay that, and that may even be more important than these effects. There's a ton of internal variability, right, in this region, and it's really difficult to validate, you know, the model and/or the prediction and say that one is necessarily right over another. But this does seem to have juiced the event, and, you know, it's consistent with what we've seen with volcanic eruptions in models as well; the mechanisms are probably different, but the overall energy imbalance between the hemispheres is probably the same. So I think it makes sense to conclude that both are at play.
J
Hi everyone, thanks for the introduction, Kathy, and thank you everyone for being here today. My name is Evan Meeker, and I am a first-year graduate student at the University of Wisconsin-Madison working with Professor Elizabeth Maroon. Today I will be presenting on the predictability of persistent marine heat waves, specifically looking at the 2013-2015 northeast Pacific event in the Seasonal-to-Multiyear Large Ensemble, SMYLE.
J
So I wanted to start by just giving an overview of marine heat waves, which can be described as discrete, prolonged, and anomalously warm events. This is a pretty broad definition, and it means that marine heat waves can occur over a wide range of spatial and temporal scales. On the right we have a schematic from Holbrook et al. that shows this wide range, where you can have marine heat waves that are as short-lived as a few days and as long-lasting as over a year.
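A common way to make "discrete, prolonged, and anomalously warm" operational, in the style of the widely used Hobday et al. definition (an illustrative sketch of mine; the talk does not commit to a specific detector), is to flag runs of days above a day-of-year percentile climatology:

```python
import numpy as np

def mhw_mask(sst, clim_p90, min_days=5):
    """True on days inside a marine heat wave: SST above the day-of-year
    90th-percentile climatology for at least `min_days` consecutive days."""
    hot = sst > clim_p90
    mask = np.zeros(len(sst), dtype=bool)
    start = None
    for i, h in enumerate(np.append(hot, False)):  # trailing False closes any run
        if h and start is None:
            start = i
        elif not h and start is not None:
            if i - start >= min_days:
                mask[start:i] = True
            start = None
    return mask
```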
J
I'm interested in these persistent marine heat waves, which are the longest-lived and tend to be the largest spatial-scale marine heat waves, and these events overlap with modes that we know contain predictability, such as ENSO and oceanic Rossby waves. Marine heat waves also have dominant physical drivers that are spatially dependent and often time-varying; for example, strong El Niño events are actually considered marine heat waves, and we know that the physical drivers of those are going to be different from something that's happening in the middle latitudes.
J
Marine heat waves are also important because of their negative impacts, and the largest and most intense marine heat waves are associated with a huge number of negative impacts. This figure is from Smith et al.; I'm not going to go over it in detail, but I think it gets the point across that these events are well studied, and that there's a large number of negative consequences of these events, so being able to predict them is a useful endeavor.
J
The event starts around summer of 2013, has a peak in January of 2014, and then lasts for over two years, depending on how you define it. I've got an animation of weekly SSTs shown on the right here, and this is the same region that we're getting the SST anomalies from. I want to point out that the blob is a non-stationary event: it starts over the Gulf of Alaska region, but then, over the course of its existence, it progresses to the eastern boundary of the Pacific, and this coincides with a PDO phase change from 2013 to 2014.
J
So I wanted to highlight a couple of previous predictability works that have looked at the same thing. The first is this paper from Capotondi et al. that examines northeast Pacific marine heat waves in general, using a linear inverse model, and, interestingly, they actually find a strong off-equatorial sea surface height signal 12 months prior to the maximum marine heat wave strength in the northeast Pacific. This is interesting because they don't find nearly as strong an SST signal, but they do have the strong sea surface height signal.
J
Additionally, they find that northeast Pacific marine heat waves may be a precursor to central Pacific El Niño events. Second is this paper from Jacox et al. that examined marine heat wave predictability using the North American Multi-Model Ensemble. They find that, globally, the ENSO region has the greatest skill in marine heat wave predictability, with significance at at least a 10-month lead. Notably, though, the blob specifically was not predicted above a 10 percent chance, even just one month before the event, using the NMME.
J
So my main research questions are: first, how predictable are persistent marine heat waves, where I want to look at the intensity, size, and spatial pattern of the marine heat waves, as well as at their predictability prior to the onset of the event versus if you initialize after the onset of the event; and, second, what are the physical drivers that lead to marine heat wave predictability? So, like I said before, I'll be looking at a case study of the blob from 2013 to 2015 using the CESM2 SMYLE prediction system.
J
This
consists
of
24-month
heinkes
simulations
that
are
initialized
in
february
may,
august
and
november
from
1970
to
2019
and
I'll
be
using
these
colors
to
signify
those
seasons.
Each
ensemble
is
a
or
each
initialization
is
a
20
member,
ensemble
initialized
with
jra
55
atmospheric,
forcing
and
a
forced
ocean
sea
ice
or
fosse
model.
I'm
using
a
standard
drift
correction
with
a
30-year
drift
climatology
with
the
seasonal
cycle
removed
and
I'm
going
to
quantify
the
intensity
of
the
blob
using
rigid
region,
average
sst
anomalies,
as
well
as
the
extent
and
spatial
pattern
using
pattern
correlation.
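
As a rough illustration of the diagnostics just described, here is a minimal sketch (not the study's actual code) of a lead-time-dependent drift correction, the region-averaged SST anomaly used for intensity, and a pattern correlation. Variable names and the simple climatology handling are assumptions.

```python
import numpy as np

def drift_correct(hindcasts, clim_mask):
    """Remove the lead-time-dependent model drift.

    hindcasts : array (n_starts, n_leads, ny, nx) of raw hindcast SST
                for a fixed initialization month
    clim_mask : boolean array (n_starts,) selecting the starts inside
                the 30-year drift-climatology period
    Subtracting the lead-dependent mean drift also removes the
    seasonal cycle for a fixed initialization month.
    """
    drift = hindcasts[clim_mask].mean(axis=0)      # (n_leads, ny, nx)
    return hindcasts - drift[None, ...]

def region_mean(anom, lat, lon, box):
    """Cosine-latitude-weighted mean anomaly over box = (s, n, w, e)."""
    s, n, w, e = box
    sel_lat = (lat >= s) & (lat <= n)
    sel_lon = (lon >= w) & (lon <= e)
    sub = anom[..., sel_lat, :][..., sel_lon]
    wgt = np.cos(np.deg2rad(lat[sel_lat]))[:, None] * np.ones(sel_lon.sum())
    return (sub * wgt).sum(axis=(-2, -1)) / wgt.sum()

def pattern_correlation(a, b):
    """Centered spatial correlation between two anomaly maps."""
    a, b = a - np.nanmean(a), b - np.nanmean(b)
    num = np.nansum(a * b)
    den = np.sqrt(np.nansum(a**2) * np.nansum(b**2))
    return num / den
```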
J
So I want to start with some general results. This is just an SST time series, with the observations from ERSSTv5 shown in the black line and the FOSI in the red line, and the start of the blob is right here in May of 2013. I want to point out that the FOSI does a very good job overall, but you can see that during these periods of extreme surface temperature anomaly there's a larger bias in the FOSI compared to some of these other times.
J
And so I wanted to look into this a little bit more, so I went ahead and calculated the climatological sea surface temperature bias, and we find a cold anomaly over the central Pacific Ocean as well as a slight warm anomaly over the eastern boundary. I want to point out that this is still a very small bias, around 0.1 degrees Celsius, but it does, interestingly, have a similar pattern to the CESM2 uninitialized biases, although with a different signal in the Kuroshio Extension region.
J
However, looking at the biases during some of these peak events, we see that, as opposed to 0.1 degree Celsius changes, where the blob is situated there can be SST biases that are over 1 degree Celsius. I haven't run the statistics on the difference here, but I think it's interesting to think about how these biases are represented specifically during extreme events, as opposed to just in the general run. So, continuing with general results,
J
I'm going to take a look at the anomaly correlation coefficient, which sort of measures the skill in predicting the temporal variability, and we see that in the northeast Pacific region we have an ACC over 0.5 at the 10-month lead in all of the initializations except for August. We also see a re-emergence of ACC skill in the following fall for each of these ensembles, so first we have November, and then you see these peaks.
J
We also see the same sort of signal at the beginning, with February having the best spatial skill in the long-term average, and we also see a skill retention over winter months, so we see a flat line in skill corresponding with the winter of each of these initializations.
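
For readers who want the anomaly correlation coefficient spelled out, here is a minimal sketch of ACC as a function of lead, computed between drift-corrected ensemble-mean hindcast anomalies and verification anomalies across start dates. This is the textbook formula, not code from the talk.

```python
import numpy as np

def acc_by_lead(fcst_anom, verif_anom):
    """Anomaly correlation coefficient at each lead time.

    fcst_anom  : array (n_starts, n_leads), ensemble-mean forecast anomalies
    verif_anom : array (n_starts, n_leads), verifying anomalies for the
                 same valid times
    Returns an array (n_leads,) of correlations across start dates.
    """
    f = fcst_anom - fcst_anom.mean(axis=0)
    v = verif_anom - verif_anom.mean(axis=0)
    num = (f * v).sum(axis=0)
    den = np.sqrt((f**2).sum(axis=0) * (v**2).sum(axis=0))
    return num / den
```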
J
So, moving to the blob specifically: this is the same SST anomaly that I'm showing, with the January 2014 peak here, and these lines here are the member-mean SMYLE hindcasts for each 2013 initialization. What we see is that all of these initializations, even the November one, initialized two months before the peak, don't show any growth towards this peak, and in fact the May and August initializations, which do start higher, come back down to an anomaly of around 0.5. And this isn't just true in the mean:
J
if you look at the spread of all the members, you see that even the spread misses this peak by quite a wide margin, not just in the observations but even if you're looking at the FOSI; it still misses the FOSI. And it's not just that this is true here: if you go look at the full 50-year time series, this is the only time where the spread of the ensemble members misses the peak in this region.
J
In contrast, we can see in 2014, once the event has been underway, we get a pretty good representation of the blob in the SMYLE hindcasts, with the means staying at around a one degree Celsius positive SST anomaly in basically all of the initializations in 2014, and the spread encompassing the observations fairly well.
J
So, moving on to the pattern correlation, I'm going to be comparing the November 2013 initialization to the May 2014 initialization. On the top here we have the FOSI representation every three months, so this is starting in November 2013 and then going forward in three-month intervals, and then here's the hindcast representation.
J
We also see that there's a central Pacific anomaly that shows up throughout this first year that's completely unrepresented in the hindcast, and going forward in time, what ends up happening is that this cold anomaly, this cold, negative-PDO-like signature that is shown in the first month of the hindcast, just kind of continues all the way throughout these two years, and at a certain point this becomes the opposite of what actually happened.
J
So this transition is completely unrepresented in the November 2013 hindcast, and the result of this is that our pattern correlation, especially as you get around the one-year-lead mark, is the lowest pattern correlation of all 50 November starts. So this is a pretty significant miss. Trying to get a little bit at why this is happening, let's contrast with the May 2014 hindcast. So again, this is the FOSI, and now I've just moved six months forward.
J
So here's the May 2014 and then moving forward, and we can see that when the blob is initialized further east, and when we initialize this cold anomaly that has started here, we actually get a very good representation all the way through two years. Looking forward, we see that the evolution, although the values aren't quite matched, the pattern is very well matched, and even out at lead month 22 we're getting a pretty good transition here. I think this goes along with Emily's talk earlier about this transition to the positive IPO or PDO phase.
J
So, in summary, we find that, in agreement with previous studies, the intensity of the blob has low predictability prior to the January 2014 peak in SMYLE. However, after the blob has migrated to the North American coast, so after that PDO sign change, SMYLE hindcasts seem to lock in the tropical and extratropical Pacific SST anomaly evolution, which corroborates previous work. And, I don't show it here, but preliminary examination of SMYLE skill in the tropics suggests that this long-lasting skill is not solely attributable to the tropics:
J
you need some sort of extratropical skill as well to be able to get this long-lasting skill. And finally, the inability to predict the blob before it settles on the North American coast suggests that there is an issue in predicting the transition from a negative PDO to a positive PDO.
J
And with that, I want to thank you all for coming to my talk, and I'll take any questions, if we have any.
B
All right, thank you, Evan. We're right at our time limit, and I do want to respect people's time for having a break as well. So if you have questions for Evan, please put them in the chat window.
B
K
This is the ability of any statistical method or modeling exercise to work not only in your favored conditions, but also if there is a slight departure or a slight update in the model configuration; it should still work. It's not a complicated concept: if you work with a non-normal data set, you can think about how the median is a more robust estimate than the mean.
K
So this is the overview of my talk; I'll go ahead and get started. Okay, so: climate modeling works. Two things I wanted you to pay attention to here: this paper was published in 2008 (yeah, I'm not a co-author here), and the study found, back in 2008, 15 years ago, that there was a 50-50 chance of Lake Mead going dry by 2021, and here we are. I'll show you some of the hard numbers as we go through this presentation in a second.
K
It integrates a number of different climate processes, particularly related to land, and I will be focusing on the root zone, roughly 0 to 1 meter, which has more dynamics involved than the full layer, which is a deep layer plus the root zone. So I will be focusing on the root zone variability with these two contexts.
K
The objective of this study is first to investigate how North American hydroclimatic variability and predictability are changing, then to understand what their drivers are, and then finally to assess, if we understand what their drivers are, how that can impact both drought and pluvial risk under a changing climate. We are using two of the large ensemble data sets, CESM-LE and GFDL-CM3-LE, and in the top panel you see how the precipitation variability is changing.
K
So you see there is a big decrease in the soil moisture variability, particularly in the higher latitudes, and CESM-LE has more of a mixed signal. So this is what we wanted to understand: why does the soil moisture variability change differently than the precipitation variability? To understand these soil moisture variability changes, we will be using a model called the reddened ENSO framework, where we model the soil moisture variability at the annual time scale as a function of the previous year's soil moisture and the current year's ENSO.
K
So here is our little formula, and then there is the second term. We take the SST in the tropical Pacific, including the Indian Ocean, and then we calculate the EOF1, and you see the ENSO signal there. You also see in panel (b) there are two vertical lines: those are two major transitions that we have seen in the last 70 years in the Pacific.
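
The "little formula" is, as described, an AR(1)-style red-noise model with an ENSO forcing term, something like SM'(t) = alpha * SM'(t-1) + beta * ENSO(t) + noise. A minimal sketch of fitting such a model by least squares follows; the variable names and the plain ordinary-least-squares estimator are my assumptions, not the speaker's code.

```python
import numpy as np

def fit_reddened_enso(sm, enso):
    """Fit SM'(t) = alpha * SM'(t-1) + beta * ENSO(t) + residual.

    sm, enso : 1-D arrays of annual-mean anomalies (same length)
    Returns (alpha, beta, sigma), where sigma is the residual
    standard deviation, needed later to generate synthetic samples.
    """
    y = sm[1:]                            # current-year soil moisture
    X = np.column_stack([sm[:-1],         # memory term
                         enso[1:]])       # current-year ENSO forcing
    (alpha, beta), *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ np.array([alpha, beta])
    return alpha, beta, resid.std(ddof=2)
```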
K
If you are not convinced by those vertical lines, in the next panel I've shown the soil moisture variability from the ERA5 data set, and you see how there was a multi-decadal dry period before 1976-77, and then after that we went into the wet period, and then again we are into the dry period. If you overlay the Lake Mead water record, that pretty much corresponds to that multi-decadal wet, dry, wet, and again dry period.
K
So to model this hydroclimate we need this memory component. If you calculate the one-year-lag anomaly correlation, for ENSO it is 0.17, for soil moisture 0.41, and if you look into the Lake Mead record it is 0.91. So hydroclimate needs that memory component.
K
So with that, yeah, if we compare the ENSO-only model and the memory-only model (memory is the soil moisture part), you see there is some skill coming from the ENSO component, but, surprisingly, the memory component is contributing more of the skill than the ENSO component.
K
There is a strengthening of predictability, at least in CESM-LE, in the southwest, and there is somewhat of a weakening of predictability in the Canadian plains, and that weakening of predictability is more evident in the GFDL-CM3-LE data set; again you see that high-latitude predictability decreases considerably in the GFDL model.
K
Since I had a little formula, I can see which of the coefficients are increasing and decreasing. Again, in CESM-LE, alpha is the memory component, and you see, yeah, there is a decrease in the alpha component.
K
Okay, what I am showing you here, this is a very complicated plot, but let me walk you through it. I'm showing each of the ensemble members in both CESM-LE and GFDL-CM3-LE, and in the top panel it is how ENSO is changing; in the bottom panel there are the soil moisture memory and the potential evapotranspiration.
K
So I think a big part of the story is, yeah, when we take the ensemble mean it provides some robust estimate, whereas an individual ensemble member can show you very large or very small changes; that is totally possible due to the internal variability. Both large ensembles show similar changes: there is an increase in ENSO variability and a decrease in the soil moisture memory that is related to the increase in the PET.
K
So this is the final part of my presentation: we'll be going into the hydroclimate extremes, and here there is a limited sample size problem. For these kinds of events, like 22 years of drought, if you look into either observations or climate model data, you will find a very limited sample in that data.
K
So what we can do, using this little equation, is generate 100 random samples, and then we can use these 100 samples to investigate the robust changes in both drought and pluvial risk. To see how well that equation is working, here I'm showing the power spectra for both the large ensemble and, in the star symbols, what we get out of the respective climate model, and in the shaded region what we get out of that equation, or the reddened ENSO framework.
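
As a sketch of that resampling idea (my illustration, with hypothetical event definitions): once alpha, beta, and the residual sigma are estimated, the fitted equation can be run forward many times with random noise to build a large synthetic sample, and drought or pluvial risk can be counted from runs below or above a threshold.

```python
import numpy as np

def synthetic_samples(alpha, beta, sigma, enso, n_samples=100, seed=0):
    """Generate surrogate soil-moisture series from the fitted model."""
    rng = np.random.default_rng(seed)
    n_years = len(enso)
    sm = np.zeros((n_samples, n_years))
    for t in range(1, n_years):
        noise = rng.normal(0.0, sigma, n_samples)
        sm[:, t] = alpha * sm[:, t - 1] + beta * enso[t] + noise
    return sm

def event_risk(sm, thresh=-1.0, min_years=3):
    """Fraction of surrogate series containing a dry spell of at least
    min_years consecutive years below thresh (illustrative definition)."""
    below = sm < thresh
    hits = 0
    for series in below:
        run = 0
        found = False
        for flag in series:
            run = run + 1 if flag else 0
            if run >= min_years:
                found = True
                break
        hits += found
    return hits / len(sm)
```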
K
Some of that reddening is absent in the GFDL, and this may be related to the lower memory in the GFDL. If you compare both the historical and the future climate, there is a slight strengthening of the ENSO-related variability in CESM-LE, not so much in the GFDL, and particularly if you go to the high latitudes, the low-frequency variability component is decreasing.
K
So that brings me to the robust changes in drought and pluvial risk. Here again, blue is the historical, yellow is the future, detrended. When we compute the variability we can remove the trend; however, we can also add back the trend to assess which of the components is driving the future changes.
K
So that is what is shown in the red color, with the trend: we add back the trend into that reddened ENSO framework, and what we find, surprisingly, or maybe not so surprisingly, is that the yellow and the blue colors mostly overlap each other, and only when we add the trend do you get the changing drought and pluvial risk. This is also depicted by these two numbers.
K
The first number corresponds to the changes in the risk due to the variability component only, and the second number represents the changes in the risk due to both the variability and the mean component. So only when you include that mean component do you see most of the drought or pluvial risk changing. We did the same for all three regions, and here for both drought and pluvial conditions.
K
Two characteristics, the increasing ENSO variability and the decreasing soil moisture variability, may affect how North American hydroclimate variability and predictability are changing, and one of the conclusions we can draw from this modeling exercise is, yeah, future drought and pluvial risk are primarily driven by the mean-state changes. So if you are doing infrastructure planning, if you know how much the mean state is going to change, you can go ahead and incorporate those changes, and it is likely to be robust in the coming decades.
B
All right, so we are now at our break time, and so if you have questions for Sanjeev, please put those into the chat window. We will return for the rest of our talks at 10:30, and before we do that I'd like to thank all of our morning speakers in the first session here. I will see you all again at 10:30, and don't forget, also, at 11:30 we will be having a discussion brainstorming new simulations.
L
Oh, thank you so much. I'll also give you a heads up; well, I'll share this with everyone when I start. I'm
M
L
at an in-person conference I'm co-chairing, so hopefully.
L
B
All right, you can go ahead and share your screen, and I will start off our session.
B
All right, welcome back, everyone. I see we have, looks like, 52 participants logged in, so hopefully everyone's returned from the break. Our speaker is Maria Molina, and we're talking about machine-learning-based assessment of the representation and predictability of North American weather regimes. Go ahead, Maria.
L
Thank you, Kathy, and hello, everybody. It's so wonderful to see everyone virtually here for the CESM event, and I want to share that I am currently in Kansas City, Missouri, for another workshop; this is the AMS Early Career Leadership Academy.
L
So this is why I'm wearing a mask, and hopefully there are no interruptions, but please be aware that that may occur, as I'm co-chairing this event and sometimes things come up last minute. But I'm really excited to be here with you today and share this work again.
L
As Kathy mentioned, it's titled "Machine learning based assessment of the representation and predictability of these North American weather regimes," and essentially the motivation for this project came about because we know that predicting temperature, and especially precipitation, on longer lead times such as weeks three, four, five, six can be really challenging, and there are really some high-impact events that have occurred in the past,
L
some extremes on those time scales. And so here we're asking whether we could potentially gain some additional insight, and hopefully predictability, into these precipitation and temperature anomalies on those longer time scales by assessing North American weather regimes. This is work conducted in collaboration with Yaga, Sasha, Katie, Judith, Aishu, and Jerry, many of whom are on this call today. But here I have these figures; these are from Yaga's recent paper assessing the prediction skill of CESM1 and CESM2 and various different versions of these models,
L
looking at the skill as we head into these longer lead times, so again from weeks three to six. We can generally see that, as we know, temperature skill is generally higher than precipitation skill, and precipitation
L
skill really is poor once we get into these longer lead times. But as Kathy has actually so eloquently mentioned in a recent paper in 2019 in BAMS, you know, we really could gain a lot of knowledge, or, yeah, provide a lot of value, to stakeholders such as water resource managers, energy sectors, et cetera,
L
if we were to have skillful prediction of these anomalies as we head into those longer lead times. So this has substantial societal value, and, as we know, historically marginalized communities really do experience disproportionate vulnerability to extremes, and this could be extremes in the form of temperature anomalies, with heat waves or cold snaps, and also precipitation anomalies, you know, if they experience extremes in terms of flooding or other events.
L
Really, we know that these marginalized communities do experience the brunt of this disproportionately to others. But again, if we can provide additional value with knowledge of these anomaly events that are forthcoming, that would be of really high impact. And so essentially, what we're doing here is, you know, instead of predicting these precipitation or temperature anomalies outright, we wanted to ask whether we can assess the predictability of these large-scale patterns in the atmosphere, and this is not a new idea.
L
This has actually been around for quite some time, several decades. We have numerous publications here, and that is not an exhaustive list, but we do know that these persistent large-scale patterns, in terms of when we look at geopotential height anomalies at 500 hPa,
L
these patterns are very persistent, and because they're so persistent, we do end up seeing these anomalies projected onto the surface in terms of temperature or precipitation anomalies. So again, maybe there's some value that we can gain there. Essentially, what we did is we followed a lot of the previous literature looking at k-means clustering algorithms: essentially just taking some reanalysis product, and in this case we're focusing on ERA5, so using numerous decades of data, actually from the 1990s, and training this algorithm using this data to extract what these persistent large-scale patterns are over North America. And so here we have the plot on the right-hand side.
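
A minimal sketch of that regime-extraction step follows, assuming scikit-learn and illustrative array names; the study's actual preprocessing and domain are not specified here, and the cluster count of four is inferred from the four regimes described, not a confirmed choice.

```python
import numpy as np
from sklearn.cluster import KMeans

def fit_weather_regimes(z500_anom, n_regimes=4, seed=0):
    """Cluster daily 500-hPa geopotential height anomaly maps into regimes.

    z500_anom : array (n_days, ny, nx) of anomalies over the North
                American domain (climatology already removed)
    Returns the fitted KMeans object plus centroid maps of shape
    (n_regimes, ny, nx); each centroid is one weather regime pattern.
    """
    n_days, ny, nx = z500_anom.shape
    X = z500_anom.reshape(n_days, ny * nx)       # one flattened map per day
    km = KMeans(n_clusters=n_regimes, n_init=10, random_state=seed).fit(X)
    centroids = km.cluster_centers_.reshape(n_regimes, ny, nx)
    return km, centroids

# Hindcast days can then be assigned to the nearest regime with
# km.predict(hindcast_anom.reshape(n_days, ny * nx)).
```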
L
We see that we have numerous patterns that emerge, and these are very consistent with past literature. So we generally see this Alaskan ridge pattern, where we have high pressure that's very persistent over Alaska and then a shift into negative geopotential height anomalies. Then we have this Greenland high pattern, where high pressure is very persistent over Greenland, and despite how far north that is, we still see a pattern projected onto the US. And over the Pacific trough:
L
this is our third weather regime here, again low pressure over the Alaska region and North Pacific. And we also see this pattern emerge, which is high pressure along the west coast of the US. And so what we do here is, first, we can assess the representation of these patterns in our numerical prediction system, so in this case we're using an Earth system model, CESM2.
L
These are forecasts initialized and then going out to about week six, and we can assess the percent occurrence of these patterns in the CESM prediction system. We see that overall the percentages are generally very similar when you compare them to the frequency of occurrence in ERA5, and we also see that, generally, spatially, these patterns are also very consistent, although we will note that as we head into later lead times, we do notice that the anomaly magnitudes decrease.
L
When we look at the surface and precipitation anomalies for weeks three-four, we see drier conditions where we have generally very persistent high pressure, and above-average precipitation anomalies over the southeastern US, so again providing us with some value. Overall, CESM2 does a really good job at representing this, but we do see some anomaly biases, where we have weaker anomalies in the CESM prediction system, and that, again, is also partly a function of lead time: during earlier lead times we see magnitudes for these anomalies are higher, and then, as lead time increases,
L
we see dampening of those anomaly magnitudes. But again, here we have NOAA CPC, and we have our ERA5 product, and then a comparison with CESM, and generally we're seeing at least the sign of the anomalies co-located in the correct locations, and very similarly as well for temperature.
L
So, for example, we can ask, okay, if we started in a certain weather regime in terms of our initialized forecast, how likely is it that we will stay in that weather regime as we go forward in time? Because, as we said, you know, these are very persistent patterns. What we did here is we just went ahead and looked at the duration of time that we spent in a weather regime, and then we looked at the frequency of how often that duration occurred, and overall it turns out that the west coast high pattern generally tends to have a higher frequency during shorter durations.
L
But also we can see these events here at the tail end sticking out as compared to other weather regimes. So, for example, again, to state it more clearly: for weather regime four we look at longer durations, and we can see certain events that actually can last for multiple weeks. So in the past, at least in our CESM data and ERA5 product, there are certain weather regimes that can persist beyond two weeks, so that would offer us potentially some predictability were we to start our initialized prediction at that weather regime.
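
A small sketch of that duration bookkeeping (my illustration): given the daily regime labels from the clustering step, count how long each uninterrupted regime spell lasts, from which a duration-frequency histogram per regime follows directly.

```python
import numpy as np
from collections import defaultdict

def regime_durations(labels):
    """Length of each uninterrupted spell spent in one weather regime.

    labels : 1-D integer array of daily regime assignments
    Returns a dict mapping regime -> list of spell lengths in days.
    """
    durations = defaultdict(list)
    current, run = labels[0], 1
    for lab in labels[1:]:
        if lab == current:
            run += 1
        else:
            durations[current].append(run)
            current, run = lab, 1
    durations[current].append(run)
    return dict(durations)

# Example: a duration histogram for regime 4 would be
# np.histogram(regime_durations(labels)[4], bins=range(1, 30)).
```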
L
Hopefully that made sense. And so here we can also assess the skill in predicting these weather regimes: we can compare our CESM2 forecasts to ERA5, and as we head into later lead times we again generally see a decrease in skill, as we would expect, but overall not too bad, I guess. Unfortunately, by week two you do see a big drop in ACC when you're comparing to ERA5. To further assess the representation of weather regimes, and also predictability:
L
here, if we were to look at day zero, this could be our assessment of the skill in CESM2, and we see that, overall, the percentage of days that are spent in a weather regime for either one of these products, so again ERA5 or CESM2, is generally very consistent. So this gives us some confidence in the representation of these patterns within CESM2.
L
But as we head into later lead times, we see a bias emerge, right? So here we have our west coast high weather regime over-occurring during later lead times.
L
So this is a bias that emerges from our analysis. And when we're looking at predictability, we can also look at this in a little bit of a different way. In these previous plots I've been showing you the 11-member ensemble mean for CESM2, but we can also look at the weather regimes within individual ensemble members, and then we can look at that as a function of lead time. And so here's what I'm showing, and this plot is a bit confusing:
L
here on the top we have the Alaskan ridge; on the bottom we have the west coast high regime. What we're seeing here on the x-axis is the lead days, so we're going forward in time, and here we have a frequency, so we're seeing how many times the ensemble members agreed, frequently, and so during earlier lead times.
L
So this offers us some insight into the predictability of different regimes, and we learned that our west coast high regime tends to have higher ensemble agreement, so there's lower spread among the 11 ensemble members, and we also just generally are, you know, again seeing higher predictability.
L
The blue and the black bars show our comparison between CESM2 and ERA5; here, the higher the black lines are, the more agreement there is with the CESM2 prediction.
L
So again, we're seeing some correct forecasts even during later lead times, and then we can look at individual forecasts as well. This is also a bit confusing of a plot; I'm sorry, I was a little too ambitious with the limited time I have and how to explain these. But here we're looking at one forecast that was initialized on November 9th, 2015, or using data from November 9th, 2015, and during early lead times,
L
we see that CESM2 predicted our west coast high regime, so we're starting in that fourth weather regime, and that prediction is correct, because ERA5 also has a west coast high regime. Then we see spread, so a reduction in ensemble agreement, and we see an incorrect forecast eventually.
L
So our CESM2 and ERA5 products are not agreeing in the shade of the weather regime, but suddenly, as we head into weeks, you know, beyond lead day 20, and we're looking at now about week three or four or so, we see that ERA5 and CESM2 agree on the weather regime. So it's some sort of ensemble realignment, and while it is true that this could potentially happen by chance, we decided to go ahead and look at fields that are located upstream of North America.
L
And so here we're considering a North Pacific region and also, you know, an area of interest for the MJO, and we actually see an increase in ACC skill during the same time period. So this would suggest that, you know, if we have outgoing longwave radiation better represented over the North Pacific, upstream of a North American weather regime, then perhaps we could actually see some lower spread in our ensemble members. But again, looking further upstream,
L
as far as, like, the source of this higher skill in outgoing longwave radiation over the North Pacific, I'm still not sure; I can't give you an exact answer on that. But it was promising to see that this ensemble realignment, or ensemble agreement, occurring during a later lead time, despite there having been a lot of spread during earlier lead times, did coincide as well with some other physical fields showing better skill. And so here this is another plot just generally looking at what these fields look like upstream.
L
So here we're looking at outgoing longwave radiation, the North Pacific region that we're considering in blue, and our tropical region for the Madden-Julian Oscillation in this black polygon. But there are other ways to think about predictability, of course; this is just one way, and you may still ask, you know, potentially, what's the value of looking at weather regimes if that may not offer you skill at all locations throughout North America or other areas of interest, and that would be true, right?
L
We may not necessarily have certain anomalies projected onto the surface in terms of temperature or precipitation, and so we can also think about bias-correcting our CESM2 output. We do have a project that is spinning up, and this is in collaboration with Katie and others that are listed on this project, as well as across other labs of NCAR, and so we're going to be working on using a U-Net to bias-correct these CESM2 predictions.
L
And so we look forward to sharing updates on that once that continues. So with that, I will go ahead and stop talking, and apologies if that was a bit chaotic or if you heard a lot of shuffling in the back, or setting up lunch here. But I will also say that I have a newfound ton of respect for workshop and conference organizers, and just want to.
L
B
N
Okay, great, thanks. Hi, I'm Nick Davis; I'm from the Chemistry Observations and Modeling Laboratory here at NCAR.
N
This is work that I've done with Yaga, Sasha, Jim, and also Emerson LaJoie from NOAA CPC. I presented part of this work last year, focused on the sudden stratospheric warming in 2021, but this year I'd like to focus on the extreme cold air outbreak in February 2021, with a similar set of experiments.
N
This blob of extreme low temperatures extended.
B
Nick, I'm sorry to interrupt, but we did not see the slide advance, so we're still seeing your title slide.
N
All right, so can you see a figure, and did it advance? Okay, great. So this cold extended, over the period in February, down from Alaska and Canada towards Texas, and in some cases plunged temperatures over 30 degrees C below their climatological normals.
N
Now, in terms of weather regimes, cold air outbreaks tend to be more likely after a sudden stratospheric warming, and there was one in January of last year, but they are most likely when the polar vortex is stretched. So this is a disturbed polar vortex; you can think of it as a wave-one kind of shape, where it's stretched out and elongated over the Atlantic, and this did occur in February 2021: the polar vortex was stretched out over the Atlantic, and it looked to have reflected planetary waves back down to the troposphere.
N
The mechanism is, I think, relatively simple. The idea is that these waves are generated over Eurasia, they go up, they reflect off the vortex, and then they deposit their, they apply, drag on the flow and increase wave amplitudes in the troposphere, which invigorated the trough and made it intensify the cold air outbreak. This mechanism is also hypothesized to connect extreme weather to Arctic amplification, and so we really wanted to take an experimental approach to understanding this extreme event, rather than a statistical approach.
N
We essentially deny mechanisms from occurring. So, for instance, when we scramble the tropospheric initial conditions, we let a new troposphere develop, but with the same vortex stretching and wave reflection as in observations, and that gives us some insight into the direct impact of vortex stretching and wave reflection.
N
On the other hand, if we let the stratosphere drift, and we prevent that vortex stretching and wave reflection from occurring, what that gives us is insight into the impact of the tropospheric circulation in total isolation from that mechanism. So what you can think of is: the scrambling procedure is like a direct hypothesis test; you're leveraging the model to unveil the hidden physics in the system.
N
Basically, what would have happened during the cold air outbreak if there was no vortex stretching and wave reflection, but everything else was the same? And what I think that does, and it's important to highlight, is it aligns our experimental scope at the process level that we're interested in. If we're arguing about the troposphere versus the stratosphere, we should be doing experiments on the troposphere and the stratosphere.
N
So here's what the forecasts over the North American land surface looked like during the peak of the cold air outbreak. On the far right is MERRA-2 from 1980 to 2020, and then the dotted red line is the event. So the event had an average minus 5.5 C anomaly over all of North America; that's Canada, Alaska, and the United States.
N
You have no impact on skill for the event, which is kind of interesting. When you actually let the tropospheric initial conditions drift, you end up with extreme warmth, which seems to suggest that the direct impact of that vortex stretching and wave reflection would have instead been extreme warmth, given a different troposphere.
N
So let's figure out what's going on. For those that are not dynamically inclined, I'm not going to get into any big details here; I'll keep it really simple, and I think a simple perspective, anyway, is all you need to understand what exactly is going on. So in the top panels here, in (a) and (b), I'm showing you the 100-millibar geopotential height anomalies, and also, in shading, showing you where there's net upward and downward wave activity.
N
So, where are planetary waves traveling upward from the troposphere and then going downward back down to the troposphere? You can see that the vortex had some very high anomalies over Siberia and some low anomalies, especially later in the period, February 7th-12th, over the Atlantic, and the wave activity flux is upward out of Eurasia and downward over North America, okay, consistent with the mechanism, and we see that continue later into the period.
N
So there is no wave reflection in the forecast with scrambled stratospheric initial conditions. What that means is our mechanism denial experiment is effectively preventing this mechanism from occurring. But if there's no wave reflection, the obvious question is: how is the surface forecast unchanged? And I think it's actually a really simple story.
N
I don't even think you need these special forecasts to understand, perhaps, why this wave reflection process wasn't relevant for this event. So here I'm showing you a sort of zonal slice, averaged between 45 and 75 north, of this wave activity flux. So the vectors are showing you where the wave activity is going.
N
The streamlines are also showing you that wave activity flux, and I'm going to argue that the streamlines are the most important part of these plots. The tropopause here is shown in the white lines, and then, of course, the background zonal-mean zonal wind is shown in the shading. So you can see that in MERRA-2 there was definitely a lot of upward wave activity flux out of the troposphere.
N
However, the streamlines just above the tropopause in that trough over North America are sort of a capping streamline for where the wave activity flux is coming from, and what it indicates is that the wave activity that went on to intensify the cold air outbreak in the trough never traveled above 200 hPa. It never really reflected off the tropopause, sorry, off the polar vortex; it just sort of propagated up from the troposphere, traveled along the tropopause, and then deepened the trough.
N
So essentially, yeah, there was wave reflection, but that wave reflection resulted in waves that traveled downstream of the trough, and you can see that we do shut down most of that mechanism in our initial-condition scrambling experiments. But again, because that mechanism doesn't seem to be important, shutting it down doesn't have any impact on the surface temperature forecast.
N
You get the same temperature forecasts, but I think it's also supported by the MERRA-2 verification output. And the key, I think, is to make sure that when you analyze a vector field, you have to properly scale it, not just geometrically for your plot but physically, especially when you use different vertical coordinates, and I would say that streamlines really help constrain your interpretation of vector fields.
B
Thanks, Nick. Please put questions for Nick in the chat window, and we'll move on to the next speaker.
B
Yes, it's in edit mode! So if you'll put it in presenting mode, that'd be good.
M
Hello, everyone. My research topic is that land surface initialization contributes most to the subseasonal soil moisture forecast skill. Thanks to the USDA grant for supporting our research, and thanks to the collaborator, Dr. Richter, for helping with data and for sharing her slides at the CESM co-chairs meeting. The soil moisture data we use are the SubX CESM2 hindcast and forecast experiments.
M
The table shows three different initialization methods for the CESM2 CAM6. The first method is the control experiment: we initialize the atmosphere, ocean, and land models with corresponding reanalysis data sets. The atmosphere model is initialized using the CFSv2 analysis; the ocean part is initialized using the adjusted Japanese reanalysis (JRA) state fields and fluxes.
M
The land model is initialized by a 700-year spin-up with the model CLM5, with atmosphere forcing from the CFSv2. In the second method, we initialize the model's ocean part and land part from the climatology, and for the atmosphere model they use climatology to initialize it. The last method is land initialization only: they use the ocean and atmosphere climatology to initialize the ocean and atmosphere models. The CAM6 model hindcast forecast was launched on each Monday.
M
The ERA5-Land soil moisture is a reanalysis data set based on the numerical integrations of the ECMWF land surface model; the core model is the Carbon Hydrology Tiled ECMWF Scheme for Surface Exchanges over Land. The advantage of the ERA5-Land soil moisture is the model resolution compared with the former ERA5:
M
the resolution is 9 kilometers, and they have an hourly data set. The forecast evaluation method is to standardize soil moisture and then calculate the anomaly correlation. The standardization method consists of four steps. In the first step we calculate the ensemble-member mean, and we calculate the 14-day moving average along the forecast lead dimension; then we calculate the mean and the standard deviation along the year dimension, with lead and initialization time dependency; and then we calculate the standardized forecast anomaly and calculate the anomaly correlation. So here's a result.
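
A minimal sketch of those four standardization steps, assuming an xarray DataArray with dims (member, init, year, lead, lat, lon); the dimension names and the rolling-mean call are my assumptions, but the sequence mirrors the steps just described.

```python
import xarray as xr

def standardized_anomaly(sm):
    """Standardize hindcast soil moisture following the four steps:
    ensemble mean -> 14-day running mean along lead -> remove the
    lead- and init-dependent climatology -> divide by its std.

    sm : DataArray with dims (member, init, year, lead, ...)
    """
    em = sm.mean("member")                              # step 1
    smooth = em.rolling(lead=14, center=True).mean()    # step 2
    clim_mean = smooth.mean("year")                     # step 3 (keeps
    clim_std = smooth.std("year")                       #  lead/init dep.)
    return (smooth - clim_mean) / clim_std              # step 4

# The anomaly correlation is then the correlation with the verifying
# (similarly standardized) ERA5-Land anomalies across the year dimension,
# e.g. xr.corr(fcst_anom, verif_anom, dim="year").
```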
M
We have, for the CAM6, the root-zone soil moisture correlation, the forecast skill. In this figure we divided the forecast into three columns: the first column is the control experiment with all components initialized; in the middle part we only have the ocean initialization; and in the last column we only have the land initialization, and we have the ocean and atmosphere climatology to initialize those specific model components. So, for each row:
M
for each, we show the first two weeks' forecast skill, the second two weeks, and the third two weeks. We find that the land initialization provides most of the soil moisture predictability, and, compared with the other parts, it seems like the ocean model initialization provides predictability along the coastal regions, and part of the coastal regions.
M
The difference is the different model initialization method for the CAM6 root-zone soil moisture forecast skill. In this figure, we find that the atmosphere initialization provides predictability along the coastal region and the central continent; outside of the central continent region, the land initialization provides most of the soil moisture predictability in the four seasons.
M
We got the conclusion from the third column, since the model is only initialized with the land initialization, and the soil moisture correlation is greatest in the Great Plains and the central-southern US. We find that, for the four seasons, the soil moisture correlation distribution is relevant to the rainfall zone in the four seasons.
M
Based on the former figure, we calculated the area-weighted correlation for each ecoregion. We find that for ecoregion number six, the agricultural regions are most predictable, and almost all the predictability is coming from the land sources. Similarly, the second-largest correlation is found in site number nine, which is located in the northern Great Plains in the US. Because the three lines are from the three different initialization methods, there is no obvious difference, especially in site number six, so it means, no matter.
M
To make sure our comparison is robust and does not depend on the specific SubX model we selected, we also compare the CAM6 forecast with other SubX forecasts, the two other models, ESRL FIM and RSMAS CCSM4.
M
So for the specific NEON regions, soil moisture in agricultural regions, the core region number six, is the most predictable, and almost all the predictability is coming from the land sources. Compared with the two other SubX models, the ESRL and the CCSM4, both of them have similar forecast trends with CAM6. Okay, that should be it from me. Thank you.
B
I don't see questions. We do have a little bit of time, so I'll ask a question. I see that you compared with two other SubX models; is there a reason why you chose those particular ones, and have you considered including some of the other models?
M
And the reason is that I checked the IRI library for the SubX models, and these two models are the only ones I could download. I don't know why; I met some access problems when I wanted to download another, a third model, so these will be the two I have. Okay, yeah.
B
And so do you have any thoughts on why the ESRL FIM model seems to have higher skill?
M
Oh yeah, you mean the reason why some models have higher skill than other models? Yeah, that would be, that should be a good question, because the research is halfway done, so I will continue working on that problem and find out the reason why this model has some priority, if the computed results are right. Okay, thank you.
B
C
M
Oh yeah, that is a very good question. We want to study some of the predictable skill and see if it comes from the internal variability or the signal variability. To calculate this, I arranged all the data together, so it should be the annual data we use here; I only removed the climatology. Maybe I should actually detrend the data for each season and see if the predictability comes from there or from something else, yeah. That would be very good.
B
All right, thank you very much. We'll move on to our next speaker. The talk is titled "State-dependent predictability of S2S forecasts using the Python package climpred," and our speaker is Judith Berner. Go ahead, Judith. Hi.
O
So this is work that has been done by myself and Abby Jaye at NCAR, but I would like to acknowledge Aaron Spring, Heiner, Christian, and certainly Yaga for the S2S runs. So Yaga produced these S2S runs as an extension to the SubX project.
O
It was led by Kathy. And so here we look at spread and error between different models, and we see that, overall, the models are under-dispersive. We also noted, when we looked at sort of the ordering, that sometimes we got different orders between RMS error and forecast skill, and so I tried to dive a little further into this.
O
Since we had these simulations, there was really a need for a verification package, sort of an in-house verification package, to look at initialized forecasts on the S2S time scales, and so we developed this with money from NOAA, which I would like to acknowledge. Basically, we have a number of notebooks, developed by Abby Jaye, which start a Dask cluster.
O
They remove the lead-time-dependent bias, they remove the climatology, and then produce a number of deterministic and probabilistic skill scores, leveraging the package xskillscore, and the forecast alignment is done with the Python package climpred, which is right now maintained and developed by Spring at the University of Hamburg. These have been developed and used by the ASP summer colloquium last year, and we are happy to share those with anyone who is interested; the idea was to get rid of all of this tedious pre-processing that we have to do.
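
For orientation, a minimal sketch of what that verification flow can look like with climpred; the object and method names (HindcastEnsemble, add_observations, verify) are part of climpred's documented API, but the metric and alignment choices here are illustrative, not necessarily the ones used in these notebooks, and the file names are hypothetical.

```python
import xarray as xr
import climpred

# hind: xr.Dataset with dims (init, lead, member, lat, lon)
# obs : xr.Dataset of the verifying observations with a time dimension
hind = xr.open_dataset("cesm2_s2s_hindcasts.nc")    # hypothetical file
obs = xr.open_dataset("era5_verification.nc")       # hypothetical file

hindcast = climpred.HindcastEnsemble(hind).add_observations(obs)

# Deterministic skill: anomaly correlation of the ensemble mean,
# aligned so that all leads verify over the same dates.
acc = hindcast.verify(
    metric="acc",
    comparison="e2o",        # ensemble mean vs observations
    dim="init",
    alignment="same_verifs",
)
```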
O
We want to look at the influence of bias on forecasts, because normally these verifications are done on anomalies, and we removed the lead-time-dependent forecast bias, which is a form of model error. And then, when we looked into this, the signal-to-noise paradox popped up on the S2S time scale.
O
So here we're looking at the error. The first column is forecasts for weeks one-two, the middle one weeks three-four, and the right one weeks five-six, for ECMWF in the top row, CESM2, and then NCEP, just as a comparison; since we had some NOAA funding, we thought we should include it in the comparison. And here are some perfect-model results, so here we're not verifying, or computing, the error with regard to observations.
O
In the perfect-model scenario, spread and error have to be the same, and I give you the formula, and it turns out that they are only the same if the RMS error is computed exactly the same way as the spread, and that means that the sum over all ensemble members has to be taken. So you take your ensemble member, you remove the mean, you take the square, but you have to do this for each ensemble member; otherwise you don't get the identity in the perfect-model scenario.
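
A small numerical illustration of that point (my sketch, not the package code): if each member in turn is treated as the verification and the squared error of the ensemble mean is accumulated over all members, the perfect-model mean-square error matches the ensemble variance exactly; verifying against a single designated member instead only matches in expectation.

```python
import numpy as np

rng = np.random.default_rng(1)
M = 20                                   # ensemble members
x = rng.normal(size=(M, 10000))          # perfect-model ensemble, many cases

xbar = x.mean(axis=0)

# Spread: mean squared deviation of members about the ensemble mean.
spread2 = ((x - xbar) ** 2).mean()

# Error computed "the same way": every member in turn is the verifying
# member, and the squared error is summed over all members. This
# reproduces the spread exactly, case by case.
mse_all = ((xbar - x) ** 2).mean()

# Error against a single designated member only matches in expectation.
mse_one = ((xbar - x[0]) ** 2).mean()

print(spread2, mse_all, mse_one)   # first two identical, third only close
```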
O
So what is the difference between the spread and error? Here I'm just removing the spread of ECMWF, or the error of ECMWF, for each forecast lead time, and we find here that CESM has more spread than ECMWF for the week one-two forecast. This probably has to do with how we initialize the model, which is sort of an anomaly initialization, as opposed to using data assimilation.
O
But then, in the later forecast lead times, we have sort of a mix, with regions that are over- and under-dispersive, but overall it's sort of similar, whereas NCEP actually has less spread at all lead times, and this is probably because it has fewer ensemble members. So here's the ACC; oh yeah, and I should also say, no, I'll say it later. So here now we look at the anomaly skill, and we see that the skill, again in the perfect-model scenario, is highest in the tropics, in the tropical belt, and, interestingly, we see that CESM has more skill than ECMWF in the perfect-model context for weeks three-four and longer. So this is CESM:
O
we see here, over the whole northern hemisphere, sort of darker colors, and then also especially in this tropical belt, and we could hypothesize that this has to do with a better representation of ENSO.
O
We also saw, when we looked at the error, that it was largest over the northern hemispheric regions, whereas here, if you look at skill, what really pops out is the tropics. So the large error over the northern hemispheric land really is an expression of the large amplitude of the anomalies, but not necessarily predictive skill, and that was one of the questions we had in the summary paper that Yaga led.
O
And now we see that ECMWF has better forecast skill than CESM for weeks three-four and longer. So what's going on here? I'm looking actually at the difference between the actual ACC and the perfect ACC; if we take the ratio, it's the RPC that's typically taken for the signal-to-noise paradox in the signal-to-noise-paradox community, but here I'm showing the difference, because if you divide by zero you get ugly plots. So here's the difference: wherever it is zero,
O
the actual predictability reaches the predictability limit as estimated in the perfect model, and wherever it is positive, the actual predictability is actually higher than that estimated by the perfect model. So wherever it is red here, for weeks three-four and five-six, the actual predictability is actually higher than that in the perfect model, and this holds for ECMWF and also for NCEP, but again, NCEP was under-dispersive,
O
probably. So, interestingly, we don't see this paradox for CESM, and so it's a good question what's going on: is the intrinsic predictability of CESM actually higher than that of the ECMWF model? And obviously this is linked to the spread.
O
You would think that if you have an ensemble system and you have diverging trajectories, and another ensemble where they stay closer together, you would think that the sort of intrinsic predictability is higher for the model that is less dispersive. And that obviously has implications, because we want to re-run all of those with stochastic parameterizations, and this is because the spread is too small, but we definitely don't want to introduce artifacts by becoming artificially over-dispersive.
O
So here again are these differences between the spreads, and we see that, overall, the CESM spread is somewhat smaller, but there are regions where it's higher, and so the fact that CESM is just under-dispersive is not the only reason that contributes to the signal-to-noise paradox on the S2S time scale; it's typically reported for seasonal and longer time scales.
O
Also, the signal-to-noise community is typically looking at geopotential height; I have been showing two-meter temperatures up until now, and you see you get the same message if you look at geopotential heights.
O
So if we compute the ACC for a perfect model, the ensemble mean needs to be taken across all members, including the verifying member.
O
On these time scales the skill is overall really small. This does not show up if your skill is really high, and so if you do a bar plot and you just take the average over it, you can get wrong messages, because it'll average over negative and positive skill, and a negative skill doesn't make sense.
O
However, if we actually verify against actual data, observations, then a negative ACC is an expression of model error and can happen.
O
Okay, so with this, I just want to show you some highlights from the actual verification.
O
So this one is the skill for these three models for, actually, precipitation, and I was very pleased to see that there's a lot of noise, but there is really, and this is actual hindcasts, not potential ones, so really, regionally, the precipitation can be captured up to six weeks. I know quite a bit of work has been done over North America, but the strongest signal here, in this global perspective, is over the tropics and, for example, over Australia, where CESM is able to capture precipitation better than ECMWF.
O
Next, we wanted to look at state-dependent predictability, and Maria has nicely introduced this. It's been long known that these regime patterns are indicative of persistent and recurrent patterns in the atmosphere, and the Ferranti et al. paper showed in 2015 that those don't only pop up sort of in a seasonal investigation, but, if you stratify initialized forecasts by how the initial state projects onto these large-scale patterns,
O
she shows here that if you project at initial time onto the negative NAO, you have extended predictability for Europe.
O
So, inspired by this, we introduced this state-dependent predictability, for example for the NAO, where, if you have the negative NAO, you have sort of a more blocked pattern here, and it is more of a persistent pattern. If we now look at all the forecasts that project at initial time onto the negative NAO, we show these on the right side, and if we look at the forecasts for the neutral NAO, we show this on the left.
O
The sample size on the right is much smaller than on the left, but we show significance here by using bootstrapping with regard to members, in stippling. So here are shown the differences, and I apologize: now the CESM is in the bottom row and NCEP is in the middle row. But we see, for example, that here, in the northeast, we have higher predictability for negative NAO states than for neutral states, and I can't see this because, this is also, sorry.
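
A minimal sketch of that kind of significance test (my illustration): resample ensemble members with replacement, recompute the skill difference between the two stratifications for each resample, and stipple grid points where the bootstrap distribution stays on one side of zero.

```python
import numpy as np

def bootstrap_skill_diff(score_a, score_b, n_boot=1000, alpha=0.05, seed=0):
    """Bootstrap (over members) a difference in a per-member skill score.

    score_a, score_b : arrays (n_members, n_points) of some per-member
                       score (e.g., ACC) for two stratifications
                       (negative-NAO vs neutral-NAO starts)
    Returns a boolean array (n_points,) flagging points where the
    (alpha/2, 1-alpha/2) bootstrap interval of the difference excludes 0.
    """
    rng = np.random.default_rng(seed)
    n_members = score_a.shape[0]
    diffs = np.empty((n_boot, score_a.shape[1]))
    for i in range(n_boot):
        idx = rng.integers(0, n_members, n_members)   # resample members
        diffs[i] = score_a[idx].mean(axis=0) - score_b[idx].mean(axis=0)
    lo, hi = np.percentile(
        diffs, [100 * alpha / 2, 100 * (1 - alpha / 2)], axis=0
    )
    return (lo > 0) | (hi < 0)
```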
O
Okay, here it is. And so this is also shown if you average over the whole domain. And here is ENSO, which obviously has been investigated in terms of state-dependent predictability, and if you look here for La Niña states, again CESM is at the bottom, you have here a much higher predictability for weeks three-four over ECMWF.
O
Okay, so my summary is: it's hard to develop good verification code. We really meant to do this as a community project; we're happy to share. And it's nice to leverage other packages, but as soon as you want to do science, there's a lot of good reason to develop everything from scratch, so you really know what you do, for example with regard to how the ensemble mean is removed.
O
Interestingly, the CESM does not exhibit the signal-to-noise paradox on the S2S time scale, but the other models do, and ECMWF does, although it has the same number of members. Details of the ACC computation matter in low-skill scenarios; otherwise you get physically unrealistic results. And we have also shown that skill can be larger during certain flow regimes, which is an extension of Ferranti et al. to the northern hemisphere. Future work:
O
well, we want to re-run this suite with stochastic parameterizations, which is why it was important for me to look at the spread and error, and, down the line, when SIMA has fully incorporated MPAS into CESM, I'd be interested to rerun some of those simulations with MPAS, and maybe down the line with MPAS physics. Thank you.
B
Thank you, Judith. Do we have questions? You can raise your hand or put those questions in the chat window. Not seeing any, I have a question, and I may have just missed this, but you talked about the signal-to-noise paradox and you talked about the stochastic parameterizations, so basically addressing those issues of under-spread. But I'm curious if you have a clear sense of whether the signal-to-noise paradox is occurring because of under-spread or over-signal.
O
Now, that's an excellent question. Yes, I don't know, and obviously we want to introduce stochastic parameterizations to get a better spread-error ratio, but we don't know what the true spread and the true error are. What has been shown is that in some cases stochastic parameterizations can remove noise, and I've shown this for ENSO and understood it, where random perturbations can actually act as an additional damping, and so it is not clear yet whether the stochastic parameterizations might actually get there. There is a potential.
O
B
Yeah, I'm really curious about the S2S timescale for this, because for the seasonal timescale, in the previous work I've seen with the NAO and the signal-to-noise paradox there, it seems to be that the models have way too strong of a signal, which is what they're attributing the source of that paradox to. But I haven't seen a clear explanation of that for the S2S time scale, so I think there's the potential for very interesting results from that. So thanks, Judith. Thank you, all right.
B
All right, then, I think we are done with our talks. So I'd like to say thank you to all of our speakers, and I'd like to say especially thank you.
B
I really enjoy the opportunity to see all the interesting things that people have done with combining the initialized predictions from the Earth System Prediction Working Group and the SubX experiment and, you know, S2S project with the many CESM simulations, and I really find it super exciting to see how bringing those things together can answer some really great questions about predictability and prediction across a wide range of time scales.
A
A
And
where
would
you
like
the
working
group
to
go
so
after
we
go
through
just
a
handful
of
slides
I'll,
encourage
you
to
put
on
your
cameras
if
your
internet
connection
allows
and
engage
in
the
discussion,
so
we
can
shape
the
working
group
for
the
entire
community
all
right
so
in
our
update,
primarily
I'll
summarize
with
steve
some
of
the
datasets
that
are
available.
So
we
may
have
new
people
online
who
are
not
aware
of
the
breadth
of
the
simulations
that's
already
out
there,
so
starting
with
csm1.
A
So, starting with the subseasonal-to-seasonal reforecasts: we have reforecasts for the time period 1999 to 2015 that followed nearly the same protocol as the ones with CESM2; they just ended in 2015. Those are documented in Richter et al. (2020), and the data is in the IRI SubX library, together with the other SubX models.
A
Then we have the Decadal Prediction Large Ensemble, a 40-member ensemble going back to 1954 with November 1st starts, documented by Steve in the 2018 BAMS paper. And you heard a lot about the CESM2 reforecasts; those are, as we say here, 2000 to 2020. Those are just the reforecasts, but real-time forecasts have been running since then, and they are available every week: we run Monday's forecasts usually on Tuesday or Wednesday.
A
Usually you can find them on our site by Thursday morning, because that's when they're due to NOAA, so we just upload them to an FTP site, and we also have instructions on the working group website for how you can get them in real time. That holds for the low-top model as well as for WACCM, though the real-time forecasts with WACCM we only run for the winter season, roughly September through March, because it was computationally too expensive to run the summer season as well.
C
Sure. So we've already heard several talks using the new SMYLE data set. We made that available in the winter; you can go to the Earth System Prediction Working Group website and find out how to access those data if you have an account on Cheyenne. We've recently uploaded all that data to the Climate Data Gateway, so it can be accessed from there as well, and the documentation paper for that is available from Geoscientific Model Development.
C
It is still in revision, but it should be coming out within a month or two. We've also extended the SMYLE November starts, 10 members, out to decadal time scales, and we've done this for the time period from 1958 through 2019.
A
Typically these simulations are in campaign storage, and if you're willing to help analyze the data, we welcome any collaborations. That set right now includes the ones with climatological atmosphere, climatological ocean, and climatological atmosphere and ocean together, with land variability being the only source of predictability coming into those simulations.
A
The point of these simulations has been to really verify this accepted cartoon that Paul Dirmeyer created. I've been emailing with him back and forth, and he's coming out to Boulder, so if you're in the Boulder area at the end of July you can come visit with him here too; but I've also been getting some of the history of the diagram, because it's been repeated in many publications and reports, and it's shown in pretty much every S2S talk.
A
He made this diagram to show where predictability may be coming from, and I think there's a piece of its history that he says gets left out.
A
The set of experiments that we've been carrying out here is meant to expand on that and verify the contributions from the atmosphere and the ocean, and also to look at how that varies across different continents, different seasons, etc. So far, the figure on the right is what we have inferred from these experiments, and surprisingly, it's showing us little predictability from the ocean in our model.
A
It's growing as you get to weeks five and six, but in the subseasonal window that we typically look at, weeks three and four, it's really land and the atmosphere, at least in CESM2, that seem to be providing the majority of the predictability. We're working on a climatological-land experiment, and that one has been challenging to set up.
A
We've been working with Sanjiv Kumar on how best to do that, because unlike the atmospheric initial conditions, which are really easy to make a climatology out of, we can't just do that for the land model, because of the different configurations of its variables. So we've tried an experimental suite where we spun up the land model with climatological atmospheric conditions, and we're trying that now, but I'm not sure that really works, so Sanjiv is going to put up some other suggestions during the discussion for how we can perhaps do that better.
C
Yeah, so Judith's talk was a nice introduction to the challenges associated with analyzing initialized prediction data sets: they're very large, and there are huge amounts of data to ingest. So we've been putting effort into developing shared Python tools for efficient interactive analysis of these data sets, and I mentioned on Monday that we've put together this package called ESP-Lab, which will be rolled out with the SMYLE overview paper, so the time frame for that is of order months. It includes tools for data ingestion and skill computation.
C
What we found putting together this set of tools, specifically for analyzing the SMYLE data set, is that the real obstacle is ingesting the data and getting it into a nice format for analysis. So one of the key functionalities this package gives you is to get data directly from GLADE, without any pre-processing, and structure it into a dask array like the one you see on the upper right here.
C
That has the dimensions you want for analyzing hindcast data. For example, this is a precipitation dask array that is dimensioned lat and lon, and then it has additional dimensions for initialization year, lead time, and ensemble member; here M is 40, as this is a DPLE dask array. It was put together using some functions that Daniel Kennedy had created for looking at uninitialized large ensembles, and with help from Liz Maroon and Teagan King.
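For readers who want a feel for the kind of restructuring being described, here is a minimal sketch; the file-name template and variable name are hypothetical, and this is not the actual ESP-Lab API:

```python
# Sketch: lazily assemble per-start, per-member hindcast files into one
# dask-backed array with dims (init_year, lead, member, lat, lon).
# Paths and variable names are hypothetical.
import xarray as xr

def load_hindcast(years, members, template="hindcast_{y}_{m:03d}.nc", var="PRECT"):
    per_year = []
    for y in years:
        per_member = [
            xr.open_dataset(template.format(y=y, m=m), chunks={"time": 12})[var]
            for m in members
        ]
        da = xr.concat(per_member, dim="member")
        # re-index time as an integer lead relative to the initialization date
        da = da.rename({"time": "lead"})
        da = da.assign_coords(lead=range(da.sizes["lead"]))
        per_year.append(da)
    out = xr.concat(per_year, dim="init_year")
    return out.assign_coords(init_year=list(years), member=list(members))

# e.g. a DPLE-like array, 40 members with November starts from 1954:
# pr = load_hindcast(range(1954, 2018), range(1, 41))
```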
C
We got this into a documented function that, I have found in my own work, greatly speeds up analysis of these large data sets. We've created a CESM ESPWG GitHub organization; that's where ESP-Lab sits today. You can already go there and use it, although use it with caution, because we're still changing it, and you can contact me or Teagan if you'd like to get involved. We'd love to have contributors expanding our toolkits for hindcast analysis.
B
I do not see any questions in the chat. Oh wait, there is one about the predictability figure, from David Lawrence, asking: what field does that figure represent, and is the CESM one global, as the title implies? And have you tried to focus on land mid-latitudes, as Paul's figure is meant to represent?
A
Yes, it's looking at surface temperature, and yes, we have this diagram for different continents, and we could do it for different regions. There are some differences, but it looks mainly the same over the majority of regions; South Africa is a little bit different, with a little more predictability there from land. But yes, we are definitely looking over different regions, and mainly at surface temperature.

It is. And, you know, we ran an entire suite by accident, when we had a software engineering error such that the ocean we initialized from was different by one year, and then we could not tell the new runs apart from the old runs. So we kind of verified the result a second way, which we didn't put in this paper yet. But it was really surprising to us that we could have an ocean that was a year old and our skill did not change very much, and that's looking over the entire hindcast period.
B
Okay, so Maria's question is: could the way the climatology for the ocean is computed also be affecting the low predictability contribution coming from the ocean? Did you use JRA or CESM?
A
We made the climatology out of the JRA files, but yeah, as I said before, we've done the experiment where the ocean is a year old, and it also did not change our skill very much. So this is what brought up the question of how much predictability there is on these shorter time scales. We thought that perhaps we just need the hindcasts to be run longer, but unfortunately we do not save the restarts; ideally, that's how we would find out.
A
Yeah, that's one of the things we can think about for our last discussion item: do we carry some of these experiments out on the longer time scale, and also with the climatological ocean, and potentially also with climatological land? I think that's one of the suggestions we'll get to in the Google doc.
B
Yeah, sure. So we've been contacted by the UFS community; UFS is the Unified Forecast System, the new NOAA model, and in particular S2S is what they contacted us about. They contacted us to see if there's interest in collaborating and interacting with them in terms of S2S Earth system prediction, and I think it's worth having a discussion about that. I'm just thinking that one of the benefits of doing these initialized predictions and Earth system prediction work is that ultimately it tells us about our models: we can learn something about ways to improve our models, about errors that occur in our models, and hopefully that ultimately leads to model improvement. So I think there might be some value in having some UFS experiments that are parallel to some of our Earth System Prediction Working Group experiments with CESM. That would allow us to do similar analyses between the two models, which, I think, would ultimately help in model improvements for both modeling systems. So I guess the question I was thinking to launch to the community is: if there were parallel experiments with UFS that were consistent with the S2S experiments from the Earth System Prediction Working Group, would there be interest in doing similar analyses across both the UFS and the CESM models?
B
So yeah, that's what I'm proposing as an idea. We're certainly not asking or expecting the ESPWG to do any of the UFS experiments, but I'm saying there's potential for experiments from UFS that could be done in a parallel context to what's been done with CESM. Then we'd have two different models, but with similar experiments, that we could do analysis across. Those experiments don't exist yet, but I think there's potential for them to exist and contribute to the broad scope of Earth system prediction experiments available to the community.
O
Judith, yeah. I think one other thing is that to get the results we have, we have to remove the lead-time dependent bias. I haven't shown this, but if you leave it in, the skill is much, much lower, or negligible. And my understanding is that with the UFS runs we can ask for certain dates and so on, but they are clearly nowhere near having 20 years of hindcasts. So I think it would be great to engage the community; it's a potential funding source, and it really helps to compare to the next-generation NWP models in this context. But I think the emphasis would need to be on case studies, as opposed to anything we do, which involves removing a lead-time dependent bias averaged over 20 years.
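The lead-time dependent bias removal mentioned here is, in its simplest form, just a climatology computed as a function of forecast lead, averaged over all start dates. A minimal sketch, assuming the hindcast layout from the earlier example:

```python
# Sketch: remove a lead-time-dependent climatology (model drift). The mean is
# taken over all ~20 years of starts at each fixed lead, which is why short
# hindcast records make this reference noisy -- the concern raised above.
import xarray as xr

def lead_dependent_anomalies(fcst: xr.DataArray) -> xr.DataArray:
    """fcst dims assumed (init_year, lead, member, lat, lon)."""
    climo = fcst.mean(dim=["init_year", "member"])   # -> (lead, lat, lon)
    return fcst - climo                              # broadcasts over starts/members

# If starts occur in several calendar months, compute one drift curve per
# start month (assumes a hypothetical 'init_month' coordinate):
# anoms = fcst.groupby("init_month").map(lead_dependent_anomalies)
```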
B
Yeah, I realize that those experiments, a large hindcast ensemble of experiments with UFS consistent with what we have with CESM, do not exist yet. But I think there is potential for that; those will exist eventually, and I think there's interest amongst the UFS community in bringing in the community to work with UFS to do some of those kinds of experiments. I myself am interested in that, so if I were to do experiments with UFS, I would certainly think about making them parallel to the ESP Working Group experiments, so that they would be useful more broadly to the community, and I'd address that issue of the lead-dependent climatology.
O
Yeah, and just as a comment, we're doing this for certain case studies with regard to stochastic parameterizations.
B
Sure. So SubX continues, with the real-time forecasts every week. That data is posted on the IRI data server, and as new models come online, the modeling groups provide us the new versions of their models, and those hindcasts get posted as well. So those data sets continue to be provided. We've been doing some new things with the graphics for the forecasts, and there's definitely individual PI research going on with the SubX data. I myself have planned to incorporate the real-time CESM forecasts into the webpage I have that publicly posts the forecast images, so people could see the real-time CESM data and forecasts, how they compare with other models, and, if they wanted to, use those real-time forecasts for anything. That would help to show where CESM fits in with the other models and how it contributes to the multi-model ensemble.

I think something we will need to communicate to the UFS community is that, for us to engage with them, the data sets need to be easily accessible and easy for us to use in combination with the other data sets we're using. At this point, this is a preliminary discussion about whether this community is interested in collaborating and interacting with the UFS community, so this is the opportunity to provide our feedback on what is needed in order to facilitate those collaborations. We will make sure to share that issue of data access as a critical component.
O
Yeah, in this context we looked into putting the S2S data on the cloud, for both the CESM S2S and the SubX S2S, and we're working with the data library, Aaron Kaplan at IRI. Especially with these tools, as Steve said, if you use xarray and dask, which we're using and Steve is using, it is sort of trivial to access data in the cloud; the trick is to have it in the right format. So I think it would be really nice for the community to put all the S2S data into the cloud. I think NOAA is ahead of us in this regard, but maybe they're not using the best formats: you really want to have a big Zarr store. I'm not sure whether it's a software engineering project to work out how best to do this, but I think it would be really nice to have all the data in the cloud, so that if you don't have it locally, you go directly to the cloud, and maybe down the line we will have cloud computation as well. I think this would be great for us and great for the community, together with these pre-processing tools, parallelized using dask, that we can send out, to really let people do the science they want to do. And this would hold not just for universities but also, you know, for developing countries: once they also have cloud computing, they can do really relevant research without having HPC. So I think there's a really big opportunity here, down the line, to make this research available across the globe.
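To make the Zarr point concrete, here is a minimal sketch of the access pattern being described; the bucket URL and the variable and coordinate names are hypothetical, since no such consolidated store has been published yet:

```python
# Sketch: lazy access to a consolidated Zarr store in object storage. Only
# the metadata is read up front, and only requested chunks are transferred,
# so no local copy or HPC allocation is needed. Names are hypothetical.
import fsspec
import xarray as xr

mapper = fsspec.get_mapper("s3://some-bucket/cesm2-s2s-hindcasts.zarr", anon=True)
ds = xr.open_zarr(mapper, consolidated=True)

# e.g. a Nino-3.4 index without ever downloading the full data set:
nino34 = (
    ds["SST"]
    .sel(lat=slice(-5, 5), lon=slice(190, 240))
    .mean(dim=["lat", "lon", "member"])
    .compute()
)
```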
H
Coming back to the potential benefits of this effort to CESM: there are other S2S systems out there, some as close as possible to ours and some as different as possible. What would be the specific benefit that we would get? You indicated that it is in model improvements, but that's so general, and we can probably tackle that with existing system comparisons as well.
B
Yeah, I just see potential; I don't think there's a clear and specific benefit at this point. I think it's just the idea that their community is potentially interested in collaborating with us, and we may be interested in collaborating with them. Like I said, this is a preliminary discussion about whether there would even be interest in furthering that conversation, so I don't think I have a direct answer for that particular question at this point. But I find value in working with different models and understanding how different model biases affect answers to questions about predictability and prediction, when you have these case studies where some model seems to get something really right and some model seems to get something really wrong, or where many models, or all the models, are getting it wrong. I think there's value in seeing that and then using it as a way of trying to understand why some models are getting this and some are not, or why everybody got it wrong. So in a broad sense, I find working with many models actually very useful for answering these kinds of predictability and prediction questions.
B
I'm trying to see if there's anyone from NOAA who might be involved in UFS on our call. I see Deepti; are you somebody who can speak to this?
H
Hi Kathy, yeah, I can find out the link for the UFS workshop. I'm not directly involved in that, but I'm definitely aware of the workshop going on. And about the data sets: yes, I was actually trying to look up some of the S2S data sets that we have available for the prototype runs that are already on the cloud. I'm not particularly sure about the format and things like that, but like what Judith was saying, the format of the data may not be the most user-friendly. So we are really interested in learning more about what kind of format would be more modern and, I mean, technically more easily accessible, so we can learn about that and I can pass that information to others who are uploading the data. We do have a couple of prototypes, I think two or three, the most recent ones, on the cloud. I can find out the link and post it in the chat.
B
All right, thank you, Deepti. So I just want to acknowledge this broad scope of data accessibility issues. I think that's one of the things the Earth System Prediction Working Group is doing an excellent job of, providing all these data sets, and NCAR as a whole, as an institution, does an excellent job with this. From the S2S perspective, this issue of data access for very large data sets is an ongoing problem for the whole community, and so, while we as a group can help provide insights on how to address it, I think this is a much broader issue than just the ESP Working Group community. I myself will continue to work with program managers to push on this as an important aspect of working with initialized prediction experiments.
K
Yeah, I just have one comment supporting what Kathy said. One of the potential benefits that I see is that it may increase the user base: the CESM model would be more known to the operational community, and that is likely to give you better feedback, or more critical areas for improvement. That is the kind of potential benefit that I see. I don't know if you have tested whether it is a real benefit or not. Thank you.
C
Yeah, sure. Do you want to share this slide again? So this is a topic of discussion that leads into the question of what simulations to propose for the next round of the CSL allocation, and that's this largely unanswered question of what the minimum set of runs is that we need to do sensitivity studies to test for skill improvements. Of course, you can envision that the answer will change with time scale and with the field of interest, but we really don't have a good sense of what a minimal experiment is to perform that we can compare to what are now our foundational data sets, DPLE, SMYLE, and S2S, if we want to start to explore ways to improve skill for CESM forecasts. So I guess we're in a sense looking for people interested in this question, and for volunteers to either push something forward or work with us to get answers to it.
A
And just to put it in a little more context: we'll have a fixed amount of computing resources that we'll need to decide, in the next item, what to do with, and we're in the process of developing CESM3, so there will be different versions of the model. If we answer this question, it allows us to better plan how we can help out with the model development process and see, for example, whether the MJO and its predictability change with the physics changes in the model. So, the smaller the required set of runs, the better.
A
The good news is, we have the data sets that we can start subsampling, right? We have the basic sets, and I guess the question would be: how many years of either the S2S or SMYLE or S2D runs do we need to actually get to a basic skill score that we trust? That's how we were thinking of starting; and then, if you subsample a different set of years and it completely changes the answer, it's perhaps not a viable approach. Go ahead, Gokhan.
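One way to operationalize the subsampling test just proposed is to recompute a basic score, the anomaly correlation here, over many random subsets of start years and see how much the answer moves. A minimal sketch, assuming anomaly arrays that share an init_year dimension (names are assumptions):

```python
# Sketch: sensitivity of an ACC estimate to which hindcast years are used.
# fcst(init_year, member, ...) and obs(init_year, ...) are assumed anomalies.
import numpy as np
import xarray as xr

def acc(fcst, obs):
    """Anomaly correlation of the ensemble mean, over initialization years."""
    return xr.corr(fcst.mean("member"), obs, dim="init_year")

def subsampled_acc(fcst, obs, n_years, n_draws=500, seed=0):
    rng = np.random.default_rng(seed)
    years = fcst["init_year"].values
    draws = []
    for _ in range(n_draws):
        sub = rng.choice(years, size=n_years, replace=False)
        draws.append(acc(fcst.sel(init_year=sub), obs.sel(init_year=sub)))
    return xr.concat(draws, dim="draw")

# A wide 5th-95th percentile range across draws suggests the subsampled
# record is too short to yield a trustworthy score:
# spread = subsampled_acc(fcst, obs, n_years=10).quantile([0.05, 0.95], dim="draw")
```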
H
Well, just to start the discussion, since everybody's silent: one thing related to that question is how the model bias impacts the prediction skill; that can fall within that category as well. But based on Steve's recent results, well, I guess he's still working on them, it looks like the number of ensemble members that one needs, or the minimum, will depend upon what you're after, and it will depend on the fields and all that. But let me rephrase the question, or maybe back up a little bit: what Steve's results are showing is that you need a minimal ensemble set, and it may really be a handful of members, maybe five, but anything you add on top of that is literally just beating down the noise.
It's not adding any new spatial areas of skill; it is not necessarily adding, or taking away, any skill from existing areas. So if the goal is to get a preliminary assessment of the impacts of certain changes, maybe some artificial model bias improvement or some parameterization, then, based again on Steve's results, that might be a way to pursue it: with minimal ensembles, maybe five members, and then looking into many different physics or bias-correction techniques, rather than trying to beat down the ensemble spread to get more signal out of it. I'm not saying we shouldn't do that, but as a first step.
C
Well, I think you're maybe exaggerating a bit the results that I've shown. It seems like the overall pattern of where there's positive and negative skill is fairly insensitive to ensemble size once you have a minimal set, but the actual skill scores themselves are quite sensitive, and that matters if, ultimately, your goal is to do experiments that establish with some confidence that you have enhanced the skill of the system.
C
And obviously we've got the experiments in hand to explore these issues, like DPLE; we have saturated the skill for oceanic variables with 40 members, and so we can certainly get a quantification of what a minimal set is, and what a minimal verification window is, for looking at, say, SST skill, which would be a key metric for decadal-scale predictions.
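The same machinery gives the member-count sensitivity described here: recompute the ensemble-mean correlation for random subsets of members and watch where the curve flattens. A sketch under the same assumed array layout as the example above:

```python
# Sketch: correlation skill as a function of ensemble size, via random member
# subsets. Where the curve flattens, extra members mostly beat down noise.
import numpy as np
import xarray as xr

def acc_vs_size(fcst, obs, sizes=(2, 5, 10, 20, 40), n_draws=100, seed=0):
    rng = np.random.default_rng(seed)
    members = fcst["member"].values
    curves = {}
    for n in sizes:
        if n > len(members):
            continue
        scores = [
            xr.corr(
                fcst.sel(member=rng.choice(members, n, replace=False)).mean("member"),
                obs,
                dim="init_year",
            )
            for _ in range(n_draws)
        ]
        curves[n] = xr.concat(scores, dim="draw").mean("draw")
    return curves  # mapping: ensemble size -> mean skill map
```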
B
So are you thinking that, if there are people willing to volunteer, they'd do a set of skill evaluations for some common types of metrics, to see the sensitivity to the number of ensemble members and the sensitivity to which years are used? Is that what you're thinking?
C
I think what this working group needs is a set of metrics that are key for prediction on different time scales. So maybe for S2S you're most interested in the MJO: what's the minimal number of hindcasts that are needed, with what minimal ensemble size, to get some sense of MJO skill? I feel that, to make progress, we need some answers to those questions, so that we don't have to replicate the full set every time we're testing model changes or initialization changes.
B
Maybe people are, of course, hesitant to volunteer just sitting here on Zoom, but from the S2S perspective I will volunteer to coordinate volunteers. So on the S2S time scale, if people are interested in having a little side conversation or group to discuss how we might go about doing this, please contact me, and I'm happy to organize that.
A
All right, well, let's move on then to the last discussion topic. It's not unrelated: the proposed simulations for the next CSL proposal. Gokhan, when is that due? Over to you.
K
Yes, so I was working with Yaga on some of that land climatological initialization, and it looked like we have some issues generating those files, so I thought about doing the all-but-one climatological initialization. In that case we take the initial conditions from different years: for example, if we are running a start for 2015, we take the initial conditions from all the other years except 2015. We have tested this methodology a couple of different times, and we find that all-but-one initialization gives you a sufficiently different, significantly different, initial condition, at least with respect to the land state. So I thought that maybe, if the community is interested, for the sensitivity or climatological study we could test this methodology in the CESM suite.
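For clarity, here is the all-but-one scheme sketched in code, as I understand the description above; the file paths are hypothetical, and this glosses over the land restart-file complications discussed earlier:

```python
# Sketch of "all-but-one" land initialization: for a start in year Y, the
# ensemble draws its land states from every hindcast year except Y, so the
# land carries no information about the target year. Paths are hypothetical.
def all_but_one_ics(target_year, years, pattern="land_ic_{y}-11-01.nc"):
    donors = [y for y in years if y != target_year]
    # one member per donor year, e.g. 21 members for a 22-year hindcast set
    return {member: pattern.format(y=y) for member, y in enumerate(donors)}

# all_but_one_ics(2015, range(1999, 2021))
# -> {0: 'land_ic_1999-11-01.nc', 1: 'land_ic_2000-11-01.nc', ...}
```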
O
Yeah, I did put something in, but I'm not sure whether it's helpful. I understand you're asking which simulations we should do, but I think, from a scientific perspective, there are two outstanding questions. One is: what are the causes of coupled model drift? I think that's a huge one, and I can't think of a single simulation that we can do that addresses it, but it is really something where we can improve the modeling efforts to understand it better. And the second one is, on the S2S time scale specifically: does data assimilation help you for forecasting? I am impressed by how well CESM is doing with the anomaly initialization you put in there, and I think that is sort of an outstanding science question.
A
Dan, are you on the line to comment about some of the experiments you still hope to do this year?
P
Yeah, absolutely. And right, Judith, I think those are important questions, and also related ones. So in the suite this year we've been looking at ocean data assimilation forced by an ensemble reanalysis in CAM6, and particularly looking for strategies to increase spread among the ocean states, which has classically been one of the underlying problems with doing ocean DA; for instance, using a hybrid DA formulation to try to bring in more spread within the deep ocean, which I think has been one of the underlying issues there. But moving forward, there is, I think, a strategic choice to make on the DA side. I can talk about that now, or, I don't want to derail this conversation.
P
Yeah. As an aside, we recently had a WCRP workshop on data assimilation for climate, and Yaga actually posed an interesting question there in her talk, which was: what approaches can we use to move away from having to compute climatologies in order to do anomaly predictability studies? This actually gets at the crux of what we can do in DA, because it determines whether we need to be in the business of producing long reanalyses in order to really study the impact of DA-initialized predictability. So, thinking about lightweight ways to do DA: longer-term reanalyses are one way forward. An alternative would be to think about DA approaches to estimating model bias; these are used, for instance, by the forecasting centers to estimate model bias online in DA contexts. So those are maybe complementary approaches.
P
Yeah. To start with, in terms of a long-term reanalysis, I think a way to do it would be to leverage the existing atmospheric reanalyses and couple them to an ocean EnOI, and then we can think about different strengths of coupling between the ocean and atmosphere components. This is getting at the weakly coupled versus strongly coupled DA problem. And then on the bias adjustment side, yeah, there are these approaches.
A
So just one follow-up question, Dan: out of the workshop, were there specific common experiments, or any other outcomes, that would help guide how we answer these questions?
P
It was just looking at some of the distinctions between DA used for numerical weather prediction and DA as it applies to climate models. I think one of them was just that the forecast time scales are much longer, and that the model biases on those time scales are also much larger. And so an emergent focus of some of the work, which is something I think we can do more of here, is to bring the observations into a DA framework to think about objective procedures for bias correction.
C
So if I could interject: I think one thing that's important to communicate here is that the working group has resources, resources that are available to the community, and that's really what we're doing here: we're now soliciting your ideas.
If you want to keep them private, you can contact us co-chairs privately, but we would like people to know that we can provide the compute resources needed to realize your ideas. The only ask is that there's some eventual tie-in to, or benefit for, the working group. So if you're hesitant to speak up, that's understandable, but you can contact us later if you have an idea that requires compute resources from this working group.
B
So it makes me wonder if we can't sort of, and I don't know how to do this, but I've had this thought of maybe using shorter-time-scale predictions to help estimate the model biases for the longer time scales, and then somehow adapting those to the model. I don't really know how to do that; it's just sort of a half-baked, quarter-baked idea in my head.
E
Yeah, we're actually starting to look at that, and the idea is that maybe you could use the large ensemble simulations, which by definition are already drifted: you can use those as your reference climo to calculate anomalies, as opposed to a drifted climatology, which is kind of the conventional way of calculating the anomalies. Looking at these results right now: as you can imagine, when you look at the drifted climatologies, the drift patterns are all pretty similar. But when you look at the drifts in the actual initialized hindcasts, it takes about five years to get to the magnitude of the anomalies that you see in the drifted large ensemble. So I think you could use the large ensemble as a drifted climatology if you're looking at the longer lead times, like, say, years four to seven or four to eight or something like that. That would save you having to run an entire hindcast set just to get the drifted climo; you could use the large ensemble, by definition, as the drifted reference. Because, like you say, Kathy, the drifts show up very quickly, but of course they build with time in terms of their magnitudes, and if you're actually looking at the magnitude of the drift in the climatology for computing anomalies, the time scale, I think, really becomes important.
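A sketch of the two reference climatologies being compared in this turn; the alignment of the large-ensemble climatology to the hindcast grid and season is assumed to have been done already:

```python
# Sketch: anomalies against the hindcasts' own lead-dependent climatology
# (conventional) versus against a large-ensemble climatology (the idea above,
# arguably valid only at leads of roughly 5+ years, once the hindcasts have
# drifted onto the attractor that the large ensemble samples).
import xarray as xr

def anoms_vs_hindcast_climo(hcst: xr.DataArray) -> xr.DataArray:
    """hcst dims assumed (init_year, lead, member, ...)."""
    return hcst - hcst.mean(dim=["init_year", "member"])

def anoms_vs_lens_climo(hcst: xr.DataArray, lens_climo: xr.DataArray) -> xr.DataArray:
    """lens_climo assumed already aligned to hcst's lead/season coordinates."""
    return hcst - lens_climo

# At years ~4-8 the two should give similar anomalies; at short leads they
# will not, because the hindcasts are still drifting.
```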
C
Yeah, so on the S2I front, we would like to do SMYLE pacemaker experiments as part of the working group activities in the coming year. This is in conjunction with the CLIVAR Tropical Basin Interaction panel; we've got the SMYLE control already completed, and the idea would be to repeat hindcasts with SST restoring in the tropics of the different basins.
So that's an idea that's on the table. I also really like the sort of attribution studies that we saw from John Fasullo and Nick Davis, and I think there's a lot of potential to build off of the hindcast sets we already have to understand the influences of particular forcings, so that's definitely something people can propose for the next CSL allocation. And then, on the decadal time scale, there's an option of further expanding the CESM2 DP set from 10 members to 20 members; we might want to first take a look at the skill we're getting with ten members and see whether it's worth expanding on that experiment. And then Liz Maroon put in an idea; I don't know if she wants to elaborate. She might not still be here.
G
I'm still here, and I'm not sure I have a clear idea; it's kind of half-baked. But something I've been noticing in DPLE and SMYLE is that the ocean's pretty underspread, and granted, I haven't looked much outside the subpolar Atlantic, but I think it'd be interesting to do some testing of initialization strategies to get more spread in the ocean. I think Dan just covered that pretty well, so maybe I should just go and talk with Dan and see what the two of us think.
H
One possibility could be: one can run maybe 10 ensemble members, and perhaps additional information can be contributed from some sort of machine learning technique, to produce maybe the equivalent of 20 ensemble members in terms of the signal-to-noise ratio. Is there any work on that, or is it even possible?
O
So I'm part of M2LInES, which is using machine learning to reduce coupled model biases in a research context, and then obviously there's LEAP, which is doing it as an NSF center on a larger scale. I think both of these projects really try to address the issue of model error, and that could be by learning the model error itself, or by learning a complete parameterization instead; so I think that will be interesting. I think it's too early for the next two years, but the hope is that we will actually address some of the issues leading to coupled model bias and drift, and so some of these things could be rerun with model error parameterizations that were informed by machine learning. In terms of the signal to noise, I'm not aware of anything, but, and I think Maria left, it's certainly possible to artificially increase the ensemble size using an emulator that is trained on, let's say, 10 ensemble members, and see what that brings. I'm not aware of any work that's doing this, but I think it's very feasible to do.
C
Well, the other method for increasing your ensemble size is to do a lagged ensemble, right? You use members from your previous initializations; you do more of a temporal average, with a larger ensemble as a result.
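A minimal sketch of the lagged-ensemble construction just mentioned, assuming integer leads in the same units as the start interval (e.g. weekly starts and weekly leads); the array layout is an assumption carried over from the earlier examples:

```python
# Sketch: pool members from the previous start(s) at correspondingly longer
# leads, so they verify on the same dates. Dims assumed (init, lead, member).
import xarray as xr

def lagged_ensemble(fcst: xr.DataArray, n_lags: int = 1) -> xr.DataArray:
    pools = [fcst]
    for k in range(1, n_lags + 1):
        prev = fcst.shift(init=k)                 # start k intervals earlier
        prev = prev.isel(lead=slice(k, None))     # drop leads before the target window
        prev = prev.assign_coords(lead=prev["lead"] - k)  # re-align verification dates
        pools.append(prev)
    pools = xr.align(*pools, join="inner")        # keep only the common leads
    return xr.concat(pools, dim="member")         # e.g. 2 starts x 10 members -> 20

# Note: shift() leaves NaNs for the first n_lags starts, which should be
# dropped before computing skill.
```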
A
All right, well, we're coming to the end of the hour here. So please feel free to just email the co-chairs or put things in the Google doc; I think we're going to start to work on that proposal sometime in July. And then, lastly, maybe I'll put up one question: is there anything else that the community needs, or any other thoughts people have on the working group, from the university perspective?