From YouTube: CESM Climate/Land Ice/Earth System/Polar Climate/Paleoclimate Working Group Meetings Day 2
A
Hello, everyone, welcome to the second day of this Climate Variability and Change and Earth System Prediction meeting. We're having a joint session to begin with, with five talks between Climate Variability and Change and Earth System Prediction on ENSO and its predictability. A reminder: this is the code of conduct, so please be respectful and constructive in your feedback. And then here's just the schedule for the day. So, like I said, we have a joint session between Climate Variability and Change and Earth System Prediction. And then we have a number of talks on sub-seasonal to seasonal time scales and then decadal time scales, and then there'll be a discussion for the Earth System Prediction working group at the end. So we're going to kick things off with an in-person talk by Nathan Lenssen on mechanisms of multi-year ENSO predictability.
B
Great, thanks. Yeah, thanks to everyone for making it up the hill. I'm going to be talking today about some work I've been doing over the past few years, in my PhD at the IRI with Lisa Goddard and others, and now at CU Boulder with Pedro DiNezio, and I want to recognize all the other people who've helped me along the way through discussions.
B
Right, so I'm going to hop right into basically my main result, explain it, and then talk through some of the interesting features and curiosities in this figure. And so what I've done here is run perfect-model ENSO predictions in CESM1, using the model analog technique of Ding, Newman, et al., and what's exciting about this is we're seeing significant event-detection skill out to three years in this perfect-model setting. So this is not necessarily a new result.
B
We've seen perfect-model ENSO skill at leads that long. But this is exciting for a few reasons. One, it uses a method that is incredibly computationally cheap, which I'll get into a little bit more in a bit. And two, it allows us to do some really cool experiments, because we're able to run a bunch of different experiments on our laptops and think about them. And so I'm going to be using this ROC area-under-curve skill as my primary skill metric throughout this talk.
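As an aside on the metric: the ROC area under the curve for event forecasts can be computed from the rank statistics of the forecast probabilities. A minimal sketch with made-up numbers (an illustration, not the speaker's code):

```python
import numpy as np

def roc_auc(event_observed, forecast_prob):
    """ROC area under the curve via the rank-sum (Mann-Whitney) identity:
    AUC = P(forecast prob for a random event case > that for a random non-event case)."""
    event_observed = np.asarray(event_observed, dtype=bool)
    pos = np.asarray(forecast_prob)[event_observed]
    neg = np.asarray(forecast_prob)[~event_observed]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()  # ties count half
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

# A perfectly discriminating forecast gives AUC = 1.0; no skill gives ~0.5.
obs = np.array([1, 1, 0, 0, 0, 1])
prob = np.array([0.9, 0.8, 0.2, 0.1, 0.3, 0.7])
print(roc_auc(obs, prob))  # 1.0: every event got a higher probability than every non-event
```

Here an AUC of 0.5 is the no-skill baseline, which is why the bar plots discussed later are read relative to 0.5.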
B
And
so
there's
been
a
lot
of
work
over
the
past
few
years
and
trying
to
understand
these
potentially
predictable
multi-year
Enzo
events,
particularly
up
here
at
ncarin
in
this
working
group,
and
so
we
have
kind
of
three
classes
of
events
that
we
think
we
might
be
able
to
get
longer
skill
on
just
because
of
how
the
enso
system
generally
evolves,
and
so
the
first
one
of
these
sequences
of
events
is
been
well
known
since
the
80s
put
the
recharge
oscillator
paper
up
there.
But
many
people
have
written
about
this.
B
More recently, DiNezio and Wu and others have spent a lot of time looking at other sorts of sequences of events and durations of ENSO events, and have shown that large El Niños generally lead to double La Niñas, and so that's another chance of getting a few years of predictability. If we're sitting in the middle of a large El Niño, we might be able to actually get a pretty reasonable 24-month forecast. And then a third one that's been written about more recently is that a late...
B
And so the idea here is to use these model analog perfect forecasts to explore the predictability of some of these classes of events and see how much they could be contributing to this high skill at 12, 18, and 24 months. And so here we're going to be using the model analog forecasts. I'm going to give a brief introduction, which might be useful for the rest of this session as well. And so the idea here is really simple.
B
And so then all you have to do is take those states in your library, your analogs, and advance them forward in time to get a forecast. And so here I'm going to be using just Indo-Pacific sea surface temperature and sea surface height, following the original Ding papers. And just to highlight, probably everyone in this room has seen these results by now, but these forecasts are doing a really reasonable job, up there with the state-of-the-art forecasts.
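The analog procedure just described, matching the current state against a model library and letting the library's own evolution supply the forecast, can be sketched in a few lines of toy code (hypothetical arrays and function names, not the Ding et al. implementation):

```python
import numpy as np

def analog_forecast(initial_state, library, lead, n_analogs=10):
    """library: (n_times, n_space) archive of model states (e.g. flattened SST + SSH maps).
    Pick the n_analogs library states closest to initial_state by RMSE, then
    average what the library itself did `lead` steps later."""
    usable = library[:-lead] if lead > 0 else library
    rmse = np.sqrt(((usable - initial_state) ** 2).mean(axis=1))
    best = np.argsort(rmse)[:n_analogs]
    # Ensemble forecast = mean of the analogs' own future evolution
    return library[best + lead].mean(axis=0)

rng = np.random.default_rng(0)
lib = rng.standard_normal((500, 20))            # toy library of 500 states
state = lib[3] + 0.01 * rng.standard_normal(20)  # "today" resembles library state 3
forecast = analog_forecast(state, lib, lead=2)
```

The cheapness the speaker emphasizes comes from this structure: no model integration is needed at forecast time, only a nearest-neighbor search over an existing archive.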
B
On the top row, we have the NMME six-month anomaly correlation for SST and precip, and on the bottom we have the analogs, and if I had removed the labels, it'd be pretty hard to tell the difference between these two forecast skills.
B
Okay, I don't know if those online could hear that: I had the author himself correct me that these are all analogs, and the NMME is a little lower than these. So that's just to motivate that these analogs are performing like state-of-the-art forecasts, so we can be confident in using them to answer some more science questions. And so here I'm using analogs selected from a thousand years of pre-industrial control and running forecasts on the remaining 800 years of the CESM1.1 pre-industrial control. And like I said, this is really exciting from a science perspective, and for someone with a limited budget on Cheyenne, because I'm able to run all of these 800 years of hindcasts in one hour on my computer, and so we can start thinking about more ambitious experiments to do in the future. And so here I'm back to my skill figure.
B
So this is skill shown over all months, or excuse me, this is skill for DJF, and there were kind of two areas of interest I wanted to investigate in this study. First, we have the hypothesis that this one-year larger La Niña skill over El Niño skill is partially driven by the predictability of El Niño into La Niña, and second, that the two-year La Niña skill is partially driven by the predictability of strong El Niños into double La Niñas. And so what's really nice about being able to run a thousand years of hindcasts is we're able to stratify our forecasts, reassess skill, and have some sort of robustness in our results. And so here I'm showing the 12-month ROC skill, again looking at La Niña prediction, and I've stratified by the ENSO state at initialization.
B
So this first bar is showing our skill at forecasting La Niñas if we're initializing in no El Niño 12 months prior, then a weak El Niño, and then a strong El Niño. And as expected, at least in CESM1.1, when we're initializing in DJF in the middle of a strong El Niño, we're basically always going to have a La Niña afterwards, and the forecast system picks up on that and gets nearly perfect skill, as shown by this bar being really close to 0.5. And so this is a pretty simple but powerful way to check that a lot of, or at least a significant amount of, this one-year La Niña skill...
B
What's also cool about this large sample size is we can do retrospective analyses as well. So here now I'm starting at a La Niña and looking backwards. And so here, in blue, is the composite of all La Niña events in this hindcast period, and as we can see, on average they're followed by, or excuse me, preceded by some sort of El Niño event, and often quite a strong one. And so what we can do is look at 12-month forecasts of this event and composite them, and we can see, kind of lining up with our prospective analysis previously, that most of the forecasts of these types of events are following the trajectory correctly, coming out of an El Niño into a La Niña. And as we look at longer leads, we can see how we do trying to predict that El Niño into the La Niña, and you can see that at 18 months, we're still doing quite a good job.
B
And so then I extended this stratification analysis out to 24 months, hoping to see that a strong El Niño would lead to good two-year La Niña predictability, because of the double La Niña that often follows a strong El Niño. And there's maybe some evidence for this, but when I was first making this figure, I expected to see this really strong red bar here significantly higher than these two other ones, but I think more work needs to be done.
B
So here's the composite two-year La Niña event in this hindcast period, and it's really quite strong and robust in CESM1, and has been written about considerably before. And we can see that our forecast does a pretty good job, out of the first La Niña, forecasting the second La Niña.
B
It really struggles, as you might expect, 18 months out, but it does an okay job at two years out. So I think there's still quite a shift below zero in these two-year retrospective composites.
B
And so some quick conclusions: there is at least three-year perfect-model ENSO skill in CESM1 using model analog forecasts, and this has been confirmed in quite a few other models at this point; I just stuck with CESM1.1 to make the story a little simpler today. When we're able to make 800 or a thousand years of hindcasts, we're able to do some cool statistical things to really understand where our skill is coming from, and we were able to show that strong El Niño events really do lead to predictable La Niña events, as we would hope. But we didn't really see the double La Niña predictability, at least in this more simple analysis, so it'll be interesting to continue to dig into that. And then I just showed some Niño 3.4 retrospective composites, but we'll also be able to composite entire fields of the ocean and atmosphere using this technique, so it'll be really interesting to dive into some of the precursors of these...
B
...of these two-year La Niñas and understand if there's predictability there as well. And so, yeah, going forward, I'm going to be working on this this spring and am excited to talk to all of you about it as it moves along. I'm really excited to compare model-analog and initialized ENSO dynamics and predictability, because a lot of these model configurations were also included in the CMIP6 decadal prediction experiment, and also to investigate the skill transitioning out of multiple La Niña events, like, if you've been following the news...
D
When it might matter, by the way. I think he's off the mic; they should hear us through the ceiling. Well, okay, so yeah. One thing you might try, then: I mean, we used SSH because that's what you can get for the real world, but in a perfect model you could use other measures of heat content. It might be interesting to see if that's what you're not capturing with these double events. Yeah.
F
Sounds good. Hi, everybody. Thanks for braving the blizzard to get up here, and thanks to those online that joined us today. I changed my title a little bit: I am going to be thinking about future ENSO predictability, but I also want to think more broadly about climate predictability; I think this is a broader question. I was happy to work on this project with a group of folks from NOAA, CIRES, CU Boulder, and NCAR.
F
So basically all of Boulder is involved, but I'm really excited to be talking with you all today about this. When thinking about seasonal climate prediction, it's important to remember that ENSO is by far the main source of deterministic seasonal forecast skill for most things we care about, and this is just one way to illustrate that. This is taken from Jacox et al. 2022, a paper that looked at marine heat wave forecasts in seasonal forecasting systems, and this is just a time series showing the global average marine heat wave forecast skill at three-and-a-half-month lead, where the shading of the color of the line is showing you the ONI index value at each time.
F
For example, these are five large-ensemble simulations showing you the Niño 3.4 standard deviation in 30-year running windows out into the future for boreal winter, and you can see that some models are increasing and some models are decreasing. CESM2 is doing this funny thing where there's a peak in the Niño 3.4 variance...
F
...in the mid-21st century, and MPI is showing basically no change going into the future. But it's safe to say that if the character of ENSO were to change in the future, we might expect similar changes in our ability to predict the things that are most influenced by ENSO on seasonal time scales. So this is the overarching question that we're after today, and one way that we can address this issue is by using perfect-model analogs. So thank you, Nathan, for the awesome introduction to the concept.
F
I won't go into too much detail again about how model analogs work; I thought Nathan did a great job there. I'll just mention that our technique will be a little different, in that Nathan was drawing analogs from a pre-industrial control simulation, whereas we are going to be drawing analogs instead from large ensembles, which provide enough data to give you robust statistics about your forecast skill and predictability, but also allow you to draw analogs from time-varying climate states that are affected by radiative forcing changes.
F
So that's the goal today: we're going to assess time-varying predictability using perfect-model analogs from large ensembles, and we're going to look at changes in ENSO and climate skill, or predictability, across time in these different climate states. So, just very quickly, about how this works. This is our workflow: we're going to be drawing model analogs from a large ensemble based on global maps of what I'm calling SSTA*, which, if you were at Clara's talk yesterday, is the same thing as iSST.
F
Having done that, within a given large ensemble we're going to take turns treating each ensemble member as the truth, or as our observations, and we're going to draw analogs from the remaining ensemble members. So, for example, in CESM1 you might want to find analogs for January of 1944 in the first ensemble member.
F
You can find your 10 best analogs from the other remaining ensemble members, from a library that includes all of the Januaries from across all of the ensemble members. In this case we're able to get 10 good matches on a global scale, and these are our forecasts. So for CESM1, as an example, we end up with 40 climate ensemble members, 10 forecast members, for 12 months and a 30-year period.
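The leave-one-member-out library described here can be sketched quickly; the array shapes follow the numbers in the talk (40 members, monthly states over 30 years), but the function itself is an illustration rather than the actual code:

```python
import numpy as np

def leave_one_out_library(ensemble, truth_member):
    """ensemble: (n_members, n_times, n_space) array of model states.
    Treat one member as the 'observations' and pool every state from all
    OTHER members into a flat analog library."""
    keep = [m for m in range(ensemble.shape[0]) if m != truth_member]
    return ensemble[keep].reshape(-1, ensemble.shape[2])

rng = np.random.default_rng(0)
ens = rng.standard_normal((40, 360, 50))  # 40 members, 30 years of months, 50 grid points
library = leave_one_out_library(ens, truth_member=0)
print(library.shape)  # (14040, 50): 39 members x 360 monthly states each
```

Cycling the `truth_member` index over all 40 members is what yields the 40 x 10 x 12 x 30 = 144,000 forecasts per 30-year chunk quoted next.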
F
So we end up with 144,000 forecasts to play with in a 30-year chunk, so lots of good data, and we can repeat this cycle for different 30-year periods: look at the prediction skill in the past, look at it in the future, take their difference, and see how predictability may change for a variety of things that we think we care about. So we're going to do this in five single-model initial-condition large ensembles: CESM1, CESM2, GFDL SPEAR, GFDL ESM2M, and MPI.
F
These five models were chosen for a couple of reasons. One, all of their ENSOs are decent; they all have different biases for different reasons, but they're doing something reasonable. Also, they all have different trends in what they think ENSO is going to do in the future as far as its variance goes, and they all sort of sample the different outcomes that we might expect, whether that's an increase, a decrease, or staying the same.
F
So, thanks to Nicola, all of these models have been re-gridded onto a common two-and-a-half by two-and-a-half degree grid from 1920 to 2100, and we're going to be assessing predictability using forecast skill based on the anomaly correlation coefficient, as a function of initialization month and lead time. Just for the sake of time, I'm mostly going to focus on the results from CESM1 today. So take note that in CESM1, future Niño 3.4 variability increases.
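A minimal sketch of the anomaly correlation coefficient tabulated by initialization month and lead time, with synthetic arrays standing in for the analog forecasts (an illustration of the metric, not the actual workflow):

```python
import numpy as np

def anomaly_correlation(forecast, verification):
    """Centered anomaly correlation coefficient between two sets of anomalies."""
    f = forecast - forecast.mean()
    v = verification - verification.mean()
    return float((f * v).sum() / np.sqrt((f ** 2).sum() * (v ** 2).sum()))

# Skill as a function of initialization month (12) and lead time (24), toy data:
rng = np.random.default_rng(0)
truth = rng.standard_normal((12, 24, 100))                 # month x lead x forecast cases
forecast = truth + 0.5 * rng.standard_normal(truth.shape)  # imperfect forecasts
skill = np.array([[anomaly_correlation(forecast[m, l], truth[m, l])
                   for l in range(24)] for m in range(12)])
print(skill.shape)  # (12, 24): one correlation per (init month, lead) cell
```

This (init month, lead) grid of correlations is exactly the layout of the skill-change panels discussed below.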
F
So let's look at some of the results here. This is global forecast skill across the ensemble mean of CESM1 at zero-month lead for 1921 to 1950. So this is just telling you how good of a match your analog is at initialization, relative to the thing you're trying to match, and overall you can see that there is high skill for most of the globe. And I should mention that this is the correlation for all months; we'll break it down into seasons here in a second.
F
But overall the analogs are doing something reasonable, and they have high skill, similar to what Nathan had shown from the Ding et al. papers. But you can extend this out to 24 months, and you can see that as you go out, the forecast skill decreases, as you would expect, but you still have some significant skill in the tropics, associated with that 24-month ENSO forecasting skill that Nathan was highlighting before.
F
But the cool thing is you can play this game again for a future period, 2071 to 2100, and you can take the difference between these two periods and look at the change in predictability of global sea surface temperatures in this model. In this case, sea surface temperature forecast skill increases nearly everywhere, particularly at long leads, and particularly in the tropics. You can see that El Niño-like signature, and you can also see places light up that are typically influenced by ENSO, things like the tropical Atlantic and the tropical Indian Ocean.
F
But it's not just sea surface temperature that we can look at, even though we're drawing analogs through sea surface temperature. The cool thing about the analog technique is that you're just using sea surface temperature as a way to pick a climate state that looks like today, and once you pick that climate state, you can look at all the variables that you're interested in and follow their trajectories as forecasts. So we can start to look at things like surface air temperature.
F
This is showing you the change in forecast skill between the late period and the early period for surface air temperature, where over the ocean it's just SST, but over land it's two-meter air temperature, and you can start to see some significant changes in predictability for air temperature over land. And you can do the same game for precipitation and 500-millibar heights. So there's a lot to unpack here; again, this is all for CESM1 and for all months. Maybe I'll just focus our attention on a couple of features.
F
One, you can see that, regardless of variable, forecast skill increases, again particularly at long leads in the tropics, but you can also see some remote impacts of ENSO that may show some increase in predictability as well. For example, if we focus on the U.S. West Coast, you see an increase in forecast skill along the American Southwest, both for surface air temperature and precipitation. You can also see the Aleutian Low and the Azores High light up in the northern hemisphere, along the path that the PNA would take.
F
So, in this case, it suggests that predictability is increasing for remote ENSO impacts, at least in this model. So this is the predictability of the things that ENSO may influence, but what about ENSO itself? Well, we can look at the predictability of the Niño 3.4 region in CESM1. This is showing you the forecast skill change between the periods 1951-1980 minus 1921-1950, and it's showing you the skill as a function of initialization month and lead time, where the dots are showing a significant shift in the distribution.
F
Basically, we can play the same game not just for the earlier periods; we can work our way consecutively throughout the historical record to see if this change is monotonic, or if there are some interesting shifts in predictability as a function of time. And what you can see in CESM1 is you get this nice monotonic increase in forecast skill from the early part of the record to the late part of the record, basically in all seasons.
F
So what about the other models? Right, I started this talk by showing that all these other models are doing different things with respect to their future ENSO variability changes, and it turns out, if you look at the forecast skill in these other models, you get very different answers.
F
So this is the forecast skill of the Niño 3.4 region for the other four models: CESM2, GFDL SPEAR, GFDL ESM2M, and MPI. But I want to make the argument here that, although the models give you different results for the future, I think they're all doing something internally consistent within their own modeling frameworks. And just to convince you of that, here are their different Niño 3.4 standard deviation changes for wintertime once again, and you can see that the change in skill is going in lockstep with the change in ENSO variance.
F
If
we
jump
down
to
gfdl
esm2m,
you
can
see
that
there's
very
little
change
in
Nino
3.4
variants
and
then
a
sharp
decrease
in
variance
which
corresponds
to
a
minimal
change
in
skill
and
then
a
sharp
decrease
in
skill
and
things
like
MPI,
where
there's
basically
no
change
in
forecast
or
in
variability,
there's
very
little
change
in
scale.
So,
even
though
the
models
give
very
different
answers
in
the
future,
they
all
seem
to
be
doing
something
internally
consistent.
F
So darker colors are the beginning of the record and yellows are the end of the record, and this is specifically for skill at six-month lead. But let's just focus on CESM1, which is the circles, and you can see that as time marches forward, the Niño 3.4 standard deviation increases and you get a corresponding increase in global SST skill. Something more fun might be CESM2, which has an increase in variance and then a mid-century decrease in variance, and that again corresponds to a global increase and then decrease in skill.
F
And ESM2M again is doing this sharp decrease in skill after about 2020. And this is true not just at six-month lead, but at 12, 18, and 24 months as well; you get very similar relationships. So the mantra I'm going with here is that forecast skill goes as ENSO goes. We may not know what ENSO is doing in the future, but skill is going to follow, at least based on these model estimations.
F
So with that, I'll summarize. ENSO and its teleconnections are projected to change in the future, even if the nature of those changes is uncertain. Perfect-model analog forecasts drawn from large ensembles suggest that seasonal climate predictability will also change in the future, but the sign and intensity of forecast skill changes are related to the sign and intensity of ENSO variability changes. So forecast skill goes as ENSO goes. So thank you, and I'm happy to take any questions.
F
When I say they're reasonable, I just mean that they look like observations as far as their spatial pattern, their overall variability, and where they have peak power in their power spectrum. They have different biases, like the westward extension, but just because they have different future projections or radiative responses doesn't necessarily mean that their internal characteristics aren't generally consistent with observations.
E
I guess from what you showed at the end, it's probably easy to... I mean, to say that it's not going to work, right? Yeah, I mean, across all the models, to take CESM1 to predict CESM2, or vice versa.
F
Yeah, I would imagine that there's not much in common between the two. Okay, sure, yeah. I'm not entirely understanding your point, but maybe we can talk about it later.
F
So, yeah, I think the idea here is that as ENSO becomes, let's say ENSO were to increase in the future, it becomes a bigger hammer in the climate system. And as it becomes a bigger hammer in the climate system, it increases our signal-to-noise ratio in a forecast sense, and we're able to forecast something better. We're working on ways to quantify that signal-to-noise ratio, to get a better sense for what that looks like as a function of time and space.
G
I'm just curious if you're able to quantify how good the analogs that you have are as a function of time. I mean, could it be that you've got a less well-matched set of analogs in present day versus in the future, where you've kind of got a better initialized ensemble? Do you quantify that, or look at that at all? Yeah.
F
So you can kind of get a sense for that by looking at the changes in the zero-month forecast skill, where there were differences that sort of suggest that your analogs are better matches in the future. Another way to do that: we chose analogs based on global RMSE, and we could look at the global RMSE of your analog fit as a function of time, and that's something I'm actively calculating right now. Unfortunately, I can't do that in an hour on my computer.
A
All right, appreciate it, thanks a lot. So we'll move on to our next talk. This is a virtual talk by Jacob on using machine learning to characterize ENSO nonlinearity. So, we can see your slides; it looks good.
H
Hopefully I make it through the talk without coughing too much. Yeah, I'm talking today about characterizing nonlinearities in the CESM2 ENSO dynamics using machine learning techniques, and this is a project I'm working on with Antonietta and Matt at NOAA. Currently I'm visiting NOAA for half a year, because I'm actually a PhD student at the University of Tübingen, where my advisor is based. And I brought you a teaser:
H
This is the forecast skill improvement due to nonlinearities in a 12-month forecast, seen here in the sea surface temperature anomalies, but you will see later how I get to this plot. Okay, so ENSO forecasts are generally done by dynamical or statistical models, and one model which has been around for long already is the linear inverse model, which was introduced almost 30 years ago by Penland and Matrosova and has since then been improved and developed, so LIM skill is now comparable to the NMME forecasts.
H
There's the study by Ham et al., where they use convolutional neural networks to forecast Niño 3.4 for 24 months. So the question kind of naturally arises: can we combine these models, like the long-established model and neural networks, and can we improve forecasts with this? And can we forecast not just Niño 3.4, but maybe also whole fields, because we may also be interested in ENSO diversity and its impacts? I'm not the first one doing this; there's a study by Rodriguez in 2021 already, but we do things differently there and improve upon their results, I think.
Okay, so why would we do this? What's the reasoning behind it? I think we can take the perspective of seeing ENSO dynamics from a stochastic differential equation perspective, where the changes of our state vector are described by a deterministic part and a stochastic forcing, both of which could generally be nonlinear. The deterministic part would be the slowly varying ocean dynamics, which are stochastically forced by the atmosphere, like the higher-frequency atmosphere.
H
In linear inverse models, the deterministic part is basically assumed to be linear, the dynamics just described by this linear operator acting on the state, and the stochastic forcing is just assumed to be Gaussian white noise, so not state-dependent and not nonlinear. And so if we do a forecast with a LIM, we would have this data here, this time series, and t = 0 would be our initial state.
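The two modeling assumptions just described can be written out explicitly; with state vector x, a sketch of the general stochastic system and the LIM special case is:

```latex
% General state evolution: deterministic dynamics plus stochastic forcing,
% either of which could in principle be nonlinear:
\frac{d\mathbf{x}}{dt} = \mathbf{F}(\mathbf{x}) + \mathbf{S}(\mathbf{x})\,\boldsymbol{\xi}(t)

% LIM assumptions: linear dynamics L x, additive Gaussian white noise,
% which gives the matrix-exponential forecast at lead time tau:
\frac{d\mathbf{x}}{dt} = \mathbf{L}\,\mathbf{x} + \boldsymbol{\xi}(t),
\qquad
\hat{\mathbf{x}}(t+\tau) = e^{\mathbf{L}\tau}\,\mathbf{x}(t)
```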
H
We can do a forecast, let's say 24 months, like he has shown us, for instance the blue curve, and this will be the first part of our model.
H
But we can be more general by saying the deterministic part does not necessarily only have to be linear; it can also have a nonlinear part here, and we now try to model this nonlinear part using a neural network. And so here again this figure: let's say this would be our blue curve, which is again our LIM forecast of the black, the data, and we try to find some nonlinear part of it. It is important here that we still have a...
H
So it has this kind of structure where different time scales can be captured in internal variables, so we kind of get a memory in the network over time, and it is therefore able to capture nonlinear and non-Markovian dynamics. This will be our second part of the model. So, looking at this all together: we would have our initial conditions, where we use the linear inverse model to do predictions, and then these are the inputs to our LSTM neural network, which learns the residuals of this.
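The hybrid just described, a linear model whose residuals are learned by a nonlinear network, can be illustrated on a toy scalar system. Note that this sketch substitutes a cubic least-squares fit for the LSTM purely for brevity, so it is an assumption about the structure, not the speaker's architecture:

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy 1-D system with linear plus cubic dynamics:
# x_{t+1} = 0.9 x_t - 0.1 x_t^3 + noise
x = np.zeros(2000)
for t in range(1999):
    x[t + 1] = 0.9 * x[t] - 0.1 * x[t] ** 3 + 0.1 * rng.standard_normal()

past, future = x[:-1], x[1:]

# Stage 1: "LIM" analogue -- best linear one-step propagator by least squares.
L = (past @ future) / (past @ past)
residual = future - L * past

# Stage 2: learn the residual as a nonlinear function of the state
# (a cubic polynomial standing in for the LSTM).
coeffs = np.polyfit(past, residual, deg=3)
corrected = L * past + np.polyval(coeffs, past)

mse_linear = ((future - L * past) ** 2).mean()
mse_hybrid = ((future - corrected) ** 2).mean()
print(mse_hybrid < mse_linear)  # the nonlinear residual correction reduces error
```

The design point is the same as in the talk: the linear stage carries most of the forecast, and the nonlinear stage only has to model what the linear stage misses.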
H
So the r-hat here is the nonlinear part, and we train this by using a mean squared error loss on the data and the LIM forecast plus the residuals. We can do a bit better here by also including cyclostationarity, so the seasonality, in this model, and we could do this at two places: we can use the cyclostationary LIM, which was introduced by Shin in 2021, and we can also incorporate cyclostationarity in the LSTM by conditioning on the season, and for this we use a feature-wise linear modulation; if you want to know more about this, talk to me later. Okay, so the data we're using here is the CESM2 pre-industrial control, where we use monthly sea surface temperature and sea surface height in the tropical Pacific.
H
On the right here I just show you the El Niño and La Niña composites in the surface temperature anomalies and sea surface height. And so this gives us 1,200 years of data, which is great, because you know that machine learning techniques tend to be quite data hungry, and we split this into training and test data. One important thing here is we do the forecast in principal component space, because the linear inverse model requires computing the covariance matrix and an inverse.
H
When
it's
indicated
sorry
I
have
models
which
are
only
on
sea,
surface
temperature
anomalies
and
models
which
are
on
CC
surface
temperature,
normally
sensory,
surface
health,
the
evaluation
we
do
in
Grid
space,
and
we
use
the
skill
score
for
this,
and
this
is
basically
one
minus
the
root
b
square
error
divided
by
the
standard
deviation
and
the
nice
thing
is
about
this-
that
zero
is
basically
a
climatological
forecast
and
to
not
only
look
at
Maps.
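The skill score defined here, one minus RMSE over standard deviation, is easy to write down; a small sketch with made-up zero-mean anomalies (an illustration, not the speaker's code):

```python
import numpy as np

def skill_score(forecast, verification):
    """1 - RMSE/std of the verification: 1 is a perfect forecast, 0 matches a
    climatological (zero-anomaly) forecast, negative is worse than climatology."""
    rmse = np.sqrt(((forecast - verification) ** 2).mean())
    return 1.0 - rmse / verification.std()

obs = np.array([1.0, -0.5, 2.0, -1.5, -1.0])  # zero-mean anomalies
print(skill_score(obs, obs))                  # 1.0: perfect forecast
print(skill_score(np.zeros_like(obs), obs))   # 0.0: climatology baseline
```

The zero-equals-climatology property mentioned in the talk holds because, for zero-mean anomalies, the RMSE of an all-zero forecast equals the standard deviation of the verification.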
H
Okay, so let's come to some results. I did a suite of experiments; it's a bit of an overwhelming slide, but I'll walk you through it. What we have here is basically the skill for Niño 3.4 over the different lead times, and this is the skill score. What we have here in black is the persistence forecast, which kind of falls off after six, seven months, and then we have, just as a comparison, the pure LSTM forecasts, where we have the stationary and the cyclostationary ones.
H
We have the LIMs, the linear inverse models, stationary and cyclostationary, and then we have all the different combinations of LIM plus LSTM. And so, if you look at the pure LSTM forecasts, these form basically the lower boundary of the skill curves we have here. One reason for this is that we think that learning the whole dynamics purely from data with a neural network is much harder than having these priors. We think this might improve with more data, but we have to check this.
H
Then we have the linear inverse models, which already do better. We have here, in light blue, the stationary linear inverse models, and the inclusion of the season, here the purple and dark blue, already leads to higher skill, whereas adding sea surface height anomalies also increases the skill.
H
And then we have the best models, which are the combinations of the cyclostationary LIM and the LSTM — for instance, in red, the best one. And because we have all these different models, we can compare them and see what the role of nonlinearities is in the skill. This is also a bit of an overwhelming slide.
H
We have differences in skill score of the cyclostationary LIM plus LSTM minus the cyclostationary LIM alone, meaning we see what the improvement due to the LSTM is. We show this for lead times from 1 to 24 months for different verification months, and we definitely see that after nine months we get an improvement in skill. Let's have a closer look at one of the plots: we see, for instance for a 12-month lead time, the skill improvement for forecasts verifying in July.
H
We clearly see that the nonlinearities seem to improve the skill in the western-central Pacific. We can also look at this with different Nino indices from east to west, where we see a clear increase in skill due to nonlinearities between 9 and 18 months lead in the western Pacific, and this improvement shows up for forecasts initialized in July and December — and we know this because we did all these experiments.
H
So the question is: what could that be? We now think that this is due to El Nino-La Nina asymmetry in forecast skill. Here again is the same plot as before, in the upper row.
H
And then, if we only look at the skill for El Ninos, we don't see this pattern in the western Pacific. But what we see is that when we look at La Ninas, we pick up a similar improvement in skill for La Ninas as in the overall improvement. So one of our hypotheses is that, at least partially, the nonlinearities can be explained —
H
— the improvement due to nonlinearities can be explained by ENSO asymmetry, meaning that we can do a better forecast for La Ninas than for El Ninos, which aligns well with what Nathan already showed before. So, to conclude: with a limited amount of data, ENSO prediction is hard for pure neural network methods, and that's why it makes sense to look at residuals.
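The residual-learning setup described here — fit a linear model first, then train a nonlinear model on what it misses — can be sketched as follows. This is our own minimal illustration: a cubic polynomial stands in for the LSTM used in the actual work, and the one-dimensional AR fit stands in for the LIM.

```python
import numpy as np

def fit_residual_model(x):
    """Fit a linear one-step model to a 1-D series, then fit a cubic
    polynomial corrector (a stand-in for the LSTM) to its residuals."""
    a = (x[1:] @ x[:-1]) / (x[:-1] @ x[:-1])   # linear one-step fit
    resid = x[1:] - a * x[:-1]                 # what the linear model misses
    coeffs = np.polyfit(x[:-1], resid, 3)      # nonlinear corrector
    return a, coeffs

def predict(x0, a, coeffs):
    """Combined forecast: linear step plus nonlinear correction."""
    return a * x0 + np.polyval(coeffs, x0)
```

Because the corrector is trained only on residuals, the combined model can never do worse than the linear model on the training data, which is the appeal of this design with limited data.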
H
Nonlinearities and non-Markovianity seem to improve predictions for leads greater than nine months, and the predictable nonlinearity is likely tied to El Nino-La Nina asymmetries. As an outlook, we would like to disentangle the nonlinearities further using idealized experiments, and we would like to apply this to observational data using transfer learning. With this, I'd like to thank you for your attention, and Antonietta, Matt and Vidarta for their advice and good feedback, and I'm happy to answer questions via mail, in the chat, or now.
E
H
So yeah, we checked for this. We did an ablation study where we varied the number of principal components for both sea surface temperature and sea surface height anomalies and looked at whether the results change, and there's a threshold beyond which you don't gain any more by using more principal components. More components also make the problem harder, in the sense that you have more degrees of freedom, but you capture enough of the autocorrelation.
H
E
Yeah, I'm just trying to see whether there's a more physically based criterion to select the number of EOFs, or whether you have thought of using rotated EOFs, for example — because you're retaining 20 EOFs, and I don't know how much variance they explain in the end. I was wondering whether the rest is just sampling noise, basically.
H
So, even if there were noise: for the neural network, if you regularize it well enough and train it to the minimum on the validation set — because we do validate our model against data which hasn't been seen during training — it should not necessarily pick up that noise.
A
H
Yeah, what we definitely see, at least in the linear case, is a clear improvement in skill from adding ocean heat content or other variables which capture the deeper ocean. For instance, here we use sea surface height, and we clearly see an improvement between three and nine months lead for forecasts initialized in the spring. So basically, what we see is that adding surface height overcomes the spring predictability minimum.
H
Basically, there's a predictability minimum in the spring, and adding these ocean variables improves upon this. But what we actually see is that the neural network does not pick up this signal: if you only look at sea surface temperature anomalies, the neural network improves the forecast skill at later lead times than including this ocean variable does. So there is an improvement with the ocean variable, but that's not what we see with the neural network — it's a different effect.
I
Can you hear me well and see my screen? Today I'm going to talk about ENSO forecast skill in a changing climate, using the model analog technique. As Nathan's and Dylan's talks have shown, model analog is a simple but powerful tool that not only provides forecast skill comparable to that generated by numerical models, but also provides a very straightforward way to evaluate the performance of state-of-the-art models, for example CMIP-class models. Without further ado, I'd like to start with the motivation of this work.
I
The screenshot here was taken from the PSL seasonal forecast webpage. Basically, it shows the real-time ENSO predictions using a couple of empirical dynamical models, for example the linear inverse model and the model analog technique. The ENSO prediction was taken last month, in January, and we can see we are currently in a La Nina condition, followed by an El Nino condition in the upcoming winter of 2023-24. In order to evaluate the accuracy and skillfulness of the real-time ENSO predictions, we utilize the model analog technique and make 20th-century ENSO hindcasts to evaluate the ENSO
I
predictions and predictability. Previous studies, for example Ding et al. (2018), show that model analog skill is actually comparable to the skill generated by numerical models, for example the NMME, shown in the bottom row. However, this previous work only goes back to 1982, which might not be sufficient to diagnose and understand the long-term variation of ENSO forecast skill.
I
Therefore, one of the motivations of this work is to extend the previous ENSO hindcasts, to see how the ENSO forecast skill might evolve over time. This plot shows the anomaly correlation of Nino 3.4 at three different lead times:
I
6, 12 and 18 months. Basically, I divided the entire period into 30-year moving windows and computed the anomaly correlation skill in each window for seven different SST verifications. The gray shading here indicates cross verification, which means we use one of the SST datasets to select the model analogs, which are then verified against the other six SST datasets — this gives us an estimate of how much of the spread in ENSO forecast skill is due to uncertainties arising from the observations. So we can see the ENSO
I
forecast skill here is quite robust across different verifications, and another point is that ENSO forecast skill underwent multidecadal variation, with a minimum in the middle of the 20th century.
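The 30-year moving-window anomaly correlation described above can be sketched as follows (our own illustration with assumed function names; the inputs are taken to be yearly forecast and verification values):

```python
import numpy as np

def moving_window_correlation(forecast, observed, window=30):
    """Pearson correlation between forecast and verifying anomalies,
    computed in successive overlapping windows (e.g. 30-year windows
    of yearly values). Returns one correlation per window."""
    n = len(observed)
    scores = []
    for start in range(n - window + 1):
        f = forecast[start:start + window]
        o = observed[start:start + window]
        scores.append(np.corrcoef(f, o)[0, 1])
    return np.array(scores)
```

Repeating this for each of the seven SST verification datasets gives the family of curves on the slide.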
I
Besides the deterministic skill metric I showed in the previous slide, we also explored probabilistic skill metrics for the categorical La Nina and El Nino conditions. Here I show the ROC score, which basically measures the hit rate versus the false alarm rate. For a perfect system the ROC score is equal to one, which means there are no false alarms; if the ROC score is less than 0.5, it means our predictions are no better than random guesses. Then we can see our predictions for La Nina and El
I
Nino conditions are generally skillful, as most of the values are greater than 0.5. And then, if we have a look at the long-term variation of the ENSO forecast skill, we see — similarly to the previous slide — that the probabilistic skill metric also underwent multidecadal variations, with the minimum skill in the middle of the 20th century.
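The ROC score just described — hit rate versus false alarm rate, 1 for a perfect system, 0.5 for random guessing — can be computed as follows (a minimal sketch with our own function names, not the authors' code):

```python
import numpy as np

def roc_score(prob, event):
    """Area under the ROC curve for probabilistic forecasts of a
    binary event (e.g. La Nina occurrence). 1.0 = perfect
    discrimination; 0.5 = no better than random guessing."""
    prob = np.asarray(prob, dtype=float)
    event = np.asarray(event, dtype=int)
    pos = event.sum()
    neg = len(event) - pos
    hits, falses = [], []
    # sweep the warning threshold from high to low
    for t in np.unique(np.concatenate(([0.0, 1.0], prob)))[::-1]:
        warn = prob >= t
        hits.append((warn & (event == 1)).sum() / pos)
        falses.append((warn & (event == 0)).sum() / neg)
    # trapezoidal area under the (false alarm rate, hit rate) curve
    area = 0.0
    for i in range(1, len(hits)):
        area += (falses[i] - falses[i - 1]) * (hits[i] + hits[i - 1]) / 2
    return area
```

Computing this separately for the La Nina and El Nino categories gives the two curves whose difference is hatched on the slide.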
I
Another point I want to make here: the hatched area over here indicates where the difference between predicting the El Nino condition and the La Nina condition is statistically significant. As indicated by the green circle, we can see that in recent periods La Nina has been more predictable than El Nino. However, if we go back a century, there was a pronounced period where El Nino was actually more predictable than La Nina.
I
The question now is what might be driving the ENSO forecast skill changes during the historical period. On the right-hand side here, I show the 30-year moving variance of the Nino 3.4 index based on the seven different SST observations, and we can see the ENSO forecast skill on the left-hand side corresponds well with the variance change — that is, the greater the variance, the more skillful the ENSO forecasts. However, a question still remains.
I
Now we try to figure out how much we can attribute the past multidecadal variation of ENSO forecast skill to climate change versus internal variability, and to address this question we defined three experiments in our work. The first experiment uses the pre-industrial control simulation to select the model analogs — and it's worth noting that the results shown in the previous slides are all based on pre-industrial control simulations.
I
The second experiment uses the large ensemble. By comparing the pre-industrial control simulation, shown as the blue curve, with the large ensemble simulation, shown as the red curve, we can see ENSO forecast skill increases, especially at longer leads, under a changing-climate scenario, which suggests that climate change might have a positive impact on ENSO forecast skill. Then we have a look at the seasonality of the Nino 3.4 forecast skill for the different experiments, and we can see that, for the recent period,
I
the second-year ENSO skill becomes higher under a changing-climate scenario, which again suggests that ENSO forecast skill might be increasing with climate change. To eliminate the uncertainties from the observations, and to separate internal variability and external forcing, we also considered a perfect-model experiment.
I
So, on the left-hand side I show the 30-year moving standard deviation of Nino 3.4 in the CESM2 large ensemble, where the red curve represents the ensemble mean. We can see that most of the ENSO variability has been smoothed out, leaving only a monotonically increasing trend that reflects the externally forced ENSO response during the historical period in the CESM2 large ensemble.
I
If we compare this variance change with the observed variance change, we tend to conclude that the ENSO variability is mostly internally generated, as the ensemble mean fails to reproduce the minimum variance in the middle of the 20th century.
I
Then, if we compare the standard deviation with the corresponding forecast skill in the perfect-model experiment, we can see there is an evident relationship between the standard deviation of Nino 3.4 and the forecast skill. To illustrate this relationship a bit more, here I show scatter plots with the anomaly correlation skill on the y-axis and the standard deviation on the x-axis for three different lead times. We can see that, as we increase the lead time,
I
the ENSO forecast skill becomes more correlated with the corresponding variance. For example, in this case, if we increase the standard deviation of ENSO by one sigma, the corresponding ENSO forecast skill increases by 0.36 at an 18-month lead.
I
Finally, since each ensemble member simulates the ENSO trajectory quite differently, we carefully selected a few that could correctly reproduce the minimum variance observed in the middle of the 20th century, as indicated by the colored curves here. We then checked the corresponding forecast skill for those members, and we saw that the ENSO forecast skill decreases due to the reduction of variance in the middle of the 20th century, which is quite similar to what Ding concluded in his work.
I
To sum up: we first showed that ENSO forecast skill underwent multidecadal variations, in both the deterministic and the probabilistic skill metrics, with a minimum skill in the middle of the 20th century. Then, separating the La Nina and El Nino conditions, we showed that in the recent period La Nina has been more predictable than El Nino; however, this difference in the recent period might just be by chance. In the follow-up work, we try to address a question:
I
how much can we attribute the past multidecadal variation of ENSO forecast skill to climate change versus internal variability? Based on the current results, we found that ENSO variability is mainly internally generated. If externally forced ENSO variance increases in a changing climate, as indicated by the CESM2 large ensemble, we might expect that the forecast skill of ENSO will increase correspondingly. With that, I'm going to stop here and take questions, if there are any. Thanks.
B
A
B
That was a great talk, thanks. I'm curious about this second conclusion: you had this period in, like, the early 20th century where it looked like El Nino skill was larger than La Nina skill. Have you looked into why that might be the case? And secondly, have you seen this result — El Nino skill greater than La Nina — in any of your other, more perfect-model experiments?
I
Yeah, the conclusion is based on this, and first of all, I haven't conducted this analysis for the perfect-model experiment.
I
A
I
First of all, my analogs are selected based on 30 South to 30 North, and my focus is on the variability in those regions, so right now I'm not sure about the PDO and the Atlantic regions.
H
A
All right, we'll move on to the last speaker of this session. That's an in-person talk by Steve.
C
G
All right, so I'm going to talk about some work that a group of us have been doing to address this question using CESM prediction systems. These are preliminary results — I really want to emphasize preliminary, because these experiments are only a couple of weeks old — and we're following a CLIVAR Tropical Basin Interaction protocol, doing Atlantic hindcast pacemaker experiments.
G
So, just some brief background. This is not comprehensive by any means, but I wanted to give you a flavor of where this question is coming from. In 2007, Keenlyside and Latif showed, based on observational analysis, that the tropical Atlantic is correlated with ENSO, with the former leading the latter by about six months, and then later they did some pacemaker seasonal prediction experiments with and without restoring in the tropical Atlantic.
G
The standard experiment here is contrasted with the experiment where they do pacemaking in the tropical Atlantic, and they highlight this improvement in skill for these large ENSO events compared to the control, suggesting that the Atlantic is adding some information for these large positive El Nino events. Then in 2012, Ding et al. did an Atlantic pacemaker experiment using the MPI model, with full SST restoring in the tropical Atlantic, and they were able to recover this anti-correlation between the tropical Atlantic and the Pacific. Now the lag has switched, so now
G
this is the Atlantic leading the Pacific by about six months — so some evidence that this is evident in coupled model simulations. In 2021,
G
here's a paper that did an analysis of available seasonal prediction systems, including some that were using nudging in the Atlantic, and they highlight this correlation between skill in summertime Atlantic 3 — the Atlantic Nino — in year one and how that correlates with the following winter (NDJ) Nino3. So there's some evidence that models that do a better job of predicting the tropical Atlantic also do a better job at ENSO prediction.
G
Another paper came out in 2019 — an analysis of observations and CMIP5 historical and projection runs. The top plot here shows the running correlation of Atlantic 3 with Nino3 from observations. You see that this negative correlation really kicks in after 1980, and many CMIP models are able to simulate this connection. CESM1 is in here somewhere, and the obs are in black here.
G
So this is a teleconnection that models are somewhat able to represent, and there are hints that ENSO predictability could benefit from increased skill in the tropical Atlantic. But then Ingo Richter came out in 2022 with a paper — I don't have a plot, but this quote is from that paper — saying that the results indicate that, in this particular GCM, the tropical Atlantic mostly acts as a negative feedback on ENSO by accelerating the decay of events; it has little impact on the development of ENSO events.
G
So there's some uncertainty in the literature regarding what role the tropical Atlantic plays in ENSO seasonal prediction, and that's the background for the coordinated pacemaker experiments organized by the CLIVAR Tropical Basin Interaction research focus panel — we're calling this TBI COEX.
G
We've kind of decided on a common protocol: a common time period for running experiments, a common SST restoring target, and an SST restoring region primarily in the deep tropics, with linear tapering out to the subtropics. We want at least 10-member ensembles, we'll use CMIP6 forcing, and for the initialized hindcast pacemakers we really want to emphasize the February 1st initialization, with other start dates if possible. But there are still several undecided elements to this protocol:
G
what's the method for generating a hindcast control, whether to use full-field or anomaly restoring, and what the restoring strength should be. So we've kind of been the guinea pigs: we have pushed forward and are testing this protocol, and maybe it will become finalized.
G
Our control is CESM2 SMYLE, our seasonal-to-multi-year prediction system, initialized quarterly using the CESM2 model. I do want to point out that CESM2 SMYLE has good — I would say competitive — ENSO prediction skill. This shows the skill for DJF Nino34 as a function of lead time from different initialization months, and I really want to highlight the February initialization down here: this is a 10-month-lead prediction of DJF Nino34, and we get correlation scores above 0.6.
G
You can see the time series here: quite small ensemble spread and good prediction of these large El Nino and La Nina events at this lead time. So this is our control. And then, just to further highlight the skill here: our February initialization, which is the blue line, outperforms all of the models that contributed to the NMME, which is the red shading. So, for this initialization in particular, we think we've got a skillful system. All right.
G
So here's the hindcast pacemaker experiment that I'll be showing results from. It's the equivalent setup to CESM2 SMYLE, except SST anomalies are restored to observed anomalies in the tropical Atlantic. The target SST is the CMIP6 AMIP SST, and we're doing anomaly restoring: we restore to the SMYLE lead-dependent climatology plus the observed monthly SST anomalies. What that means is that this experiment has the same lead-dependent climatology as SMYLE — we haven't changed the climatology of the prediction system; we've just added the observed anomalies in the tropical Atlantic.
G
Again, the region is 10 South to 10 North with linear tapering, and the restoring is pretty strong — 10 days over 50 meters, if that means anything to you. This experiment is a small ensemble: five members initialized each February 1st, from 19 to 2018, and integrated out 23 months.
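The anomaly restoring described here — a nudging tendency with a 10-day timescale applied over the mixed layer, confined to 10S-10N with a linear taper toward the subtropics — can be sketched as a simple tendency term. This is our own illustration: the taper reaching zero at 20 degrees latitude is an assumption, not stated in the talk.

```python
import numpy as np

def nudging_tendency(sst_model, sst_target, lat, tau_days=10.0):
    """Anomaly-restoring tendency (K/s): (target - model) / tau,
    weighted by a mask that is 1 within 10S-10N and tapers linearly
    to 0 by 20 degrees latitude (assumed taper width)."""
    tau = tau_days * 86400.0                        # timescale in seconds
    weight = np.clip((20.0 - np.abs(lat)) / 10.0, 0.0, 1.0)
    return weight * (sst_target - sst_model) / tau
```

With anomaly restoring, `sst_target` is the model's own lead-dependent climatology plus the observed monthly anomalies, so the mean state is left untouched.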
And just to show you that the experiment is working: here I'm showing forecast month five, so these are June monthly anomalies. This is Atlantic Nino; black is observations, blue is the SMYLE control. So we've got no skill at predicting Atlantic Nino at forecast month five.
G
And here's TBI Atlantic, closely matching the observed anomalies. Both systems have pretty good skill predicting Nino34 at a five-month lead. So here's the result — here's the main slide.
G
What you see is that in SMYLE, the Atlantic Nino skill starts off modest, a little over 0.4, and drops pretty rapidly; the summer skill, which is what we think of as the primary influence on subsequent winter ENSO development, is pretty low. When we do TBI Atlantic, of course by design we've got very high skill in the Atlantic Nino region, but we're comparing a five-member ensemble to a 20-member ensemble, so the gray curve here shows SMYLE resampled.
G
So this is the mean five-member skill from SMYLE, with the uncertainty in gray shading, and below we see Nino34 skill. This was the shocking, surprising result to me: almost no improvement in Nino34 skill in DJF of year one. The much improved representation of tropical Atlantic variability in TBI Atlantic is not having an impact on winter ENSO. What it does have an impact on is the subsequent decay of ENSO skill in the spring.
G
So, just showing you the time series to hammer that point home: we're looking at DJF Nino34. Here's the SMYLE five-member ensemble, and here's TBI Atlantic five-member — it's hard to see any difference. SMYLE five-member is very skillful at getting these large events, and TBI Atlantic is not showing major improvement there; no obvious improvement to my eye.
G
So we can look then at the broader skill. These are maps of correlation skill for surface air temperature; on the left is SMYLE and on the right is the TBI Atlantic experiment. Of course, by design, we've got high skill in the tropical Atlantic, and this is shown as a function of lead going down. But we don't want to compare to 20-member SMYLE; we want to compare to five-member SMYLE, so when I switch back and forth you can see how drastically the skill drops
G
when you only have five members. Now you can see that TBI Atlantic is outperforming five-member SMYLE, but how much of this performance gain is coming through interesting dynamics, as opposed to the imposed tropical Atlantic forcing? That's the question. So here, just to show you the skill differences: this is the ACC skill difference, TBI Atlantic five minus SMYLE five, for surface air temperature, precipitation, and sea level pressure. In the interest of time, I'm just going to focus on winter, lead month 10. So you can see some
G
skill improvement in regions of interest like western Canada, and we also see skill improvement in precipitation. But, as I pointed out, there's no real improvement in ENSO skill. So where is this skill improvement coming from? If it's not coming from ENSO — if it's just coming from the restored SST — maybe we don't care about it. So, to try to get at that question, I looked at the ENSO teleconnections.
G
These are all regressions of DJF fields on DJF Nino34, at lead month 10 for the prediction systems, and the left column shows the observed teleconnections for SST, surface air temperature, sea level pressure and precipitation. There's a lot of information here, but the main takeaway is that these two prediction systems are much more similar to each other than they are to the obs. All of the teleconnection biases that are in SMYLE, like the excessive cooling over much of the continental US, are also present in TBI Atlantic.
G
All right, to summarize: we've done a successful test of this hindcast pacemaker protocol with CESM2. The evidence for improved ENSO skill appears to be weak in this system; there's less skill reduction in spring of forecast year two, and that's consistent with Ingo Richter's recent work. Maybe there are minor improvements in ENSO teleconnections in forecast year two, but I'm grasping at straws to find them. The strong Atlantic influence on DJF of year one may be limited to specific events.
G
So maybe we need to focus in on case studies with a larger ensemble, and it could be that the high skill of the SMYLE control might explain some of the discrepancy with earlier studies that have shown a significant impact of the tropical Atlantic. I think it's questionable whether these forecast-year-two improvements are realizable, because they're probably beyond the tropical Atlantic predictability limit anyway — so should we care? And one other comment on the experimental design: it's difficult to disentangle these regional skill enhancements in surface climate. Are they —
A
L
G
That's kind of future work, right. So we've done February initialization; I think we'd like to try to expand this to May and August, and I think it's
G
actually more interesting to do November and see whether ENSO decay into the spring — in a lead-time window that is potentially predictable from the tropical Atlantic — would give us skill improvement. But that remains to be seen.
A
There's another question in the chat from Evan Meeker: there have also been studies showing that the North Pacific can be a precursor to ENSO events at similar time scales to the tropical Atlantic, on the order of six months. Would it be worthwhile to run a Pacific pacemaker in this way, following the same line of reasoning?
G
The biases there are large, and certainly a lot of my effort exploring this experiment was testing whether we should do full-field restoring and try to eliminate the bias. That would probably improve the teleconnections, but if you hammer down bias in one region, it pops up in another — it's a really problematic experiment to design. So in the end, I felt that anomaly restoring was the way to go: just don't try to mess with the climatology.
A
E
M
Can everyone hear me and see the slides clearly? Yes.
E
M
Good. Hi everyone, I'm Danny, and I'm currently a fourth-year PhD student at the University of Colorado Boulder. Today I will be presenting some results from our recent project:
M
the potential increase in MJO predictability under global warming. At the beginning, I would love to acknowledge the contributions from my advisors, Aneesh Subramanian and Weiqing Han, as well as our two other collaborators on this work, Will Chapman and Jeff Blice.
M
Usually, when we are monitoring MJO activity, we use the Real-time Multivariate MJO index, the RMM index. It's computed using EOF analysis on three combined fields: the zonal wind at 850 millibars, the zonal wind at 200 millibars, and the outgoing longwave radiation, which is a good proxy for deep convection over the tropics. After doing the EOF analysis, we obtain the two leading EOF modes and their corresponding principal component time series, and these are RMM1 and RMM2.
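The RMM construction just outlined — EOF analysis on the combined u850, u200 and OLR anomaly fields, keeping the two leading PCs — can be sketched as follows. This is a simplified illustration with our own function name: the operational Wheeler-Hendon index also removes the seasonal cycle and interannual (ENSO) variability and uses specific normalizations, which we only gesture at here.

```python
import numpy as np

def rmm_indices(u850, u200, olr):
    """Two leading PCs of the combined (u850, u200, OLR) anomaly
    fields, in the spirit of the RMM index. Inputs: (time, lon)
    anomaly arrays, assumed meridionally averaged with the seasonal
    cycle already removed. Each field is normalized by its own
    standard deviation before combining, then an EOF analysis is
    done via SVD of the combined matrix."""
    fields = [f / f.std() for f in (u850, u200, olr)]
    X = np.concatenate(fields, axis=1)           # (time, 3 * nlon)
    U, s, Vt = np.linalg.svd(X - X.mean(0), full_matrices=False)
    pcs = U[:, :2] * s[:2]                       # two leading PCs
    return pcs / pcs.std(0)                      # standardized RMM1, RMM2
```

The MJO amplitude is then `sqrt(RMM1**2 + RMM2**2)` and the phase angle of (RMM1, RMM2) gives the eight phases of the diagram.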
M
With RMM1 and RMM2, we are able to determine the amplitude and the location of the MJO active convection. Here on the right we show an MJO phase diagram, with RMM1 on the x-axis and RMM2 on the y-axis. In this diagram, the further away a point is from the origin, the greater the MJO amplitude. Also, we can clearly see that RMM1 and RMM2 divide the entire phase diagram into eight phases, each representing a physical location in the tropics.
M
Here is an impression of how we can use the RMM phases to characterize the location of MJO active convection. The blue color denotes more rain, meaning active MJO convection, and on the upper right we can see that as the RMM phase changes, the MJO signal propagates from the Indian Ocean to the Maritime Continent and then to the western Pacific. Under global warming, we know that the MJO behavior is changing.
M
So we ask whether there is a systematic change in MJO predictability under global warming as well. To investigate this question, we would like to evaluate MJO predictability over the past century. Currently, the most common practice for estimating MJO predictability is to use ensemble forecasts from numerical models, with the bivariate anomaly correlation coefficient (ACC) as the metric. Usually, when we want to assess the prediction skill of the MJO, we compare the RMM1 and RMM2 from the model output with the observational RMM1 and RMM2.
M
However, when we are evaluating MJO predictability, we compare one model ensemble member with the ensemble mean of the rest of the ensemble members, using this ACC method. Using a 30-year window, with initializations each year and 51 ensemble members from the ECMWF seasonal forecasts,
M
we also show the ACC time series at a forecast lead of 40 days, and we can see that at a 40-day lead the ACC time series is also increasing. The two lines are telling us the same story: MJO predictability is increasing over the past century. However, this method requires the existence of this kind of forecast — a large ensemble size and enough initializations.
M
Therefore, we seek an alternative method to answer our question of whether the increasing MJO predictability is caused by global warming. This new method is the weighted permutation entropy: the lower the weighted permutation entropy, the less random our time series is, and therefore the higher the predictability of our time series. The mathematical details of how to compute the weighted permutation entropy are recorded in many existing references; here we'll just use a very simple example to illustrate how to compute this WPE quantity.
M
So, given such a simple time series, the first step is choosing the embedding length and the time delay, m and tau. m is the length of our delay vector, and tau is the time delay we use to pick elements from the original time series to construct the delay vectors. In this example, we use a time delay of 1 and an embedding length of 3.
M
Step two is to assign each segment to a permutation according to the order of the elements in the segment. The first segment from this time series is (1, 2, 3), and we are using ordinal analysis: the smallest element is first, the second smallest is second, and the largest is third, so it corresponds to the ordinal permutation (0, 1, 2), as shown here. Then the second delay vector follows in the same way.
M
After doing this, we can weight the probability distribution by the variance of each segment and obtain the weighted probability of each permutation, shown here as P_w(pi). Then we follow the formula to calculate the weighted permutation entropy, which is just a weighted version of the well-known Shannon entropy.
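The weighted permutation entropy computation just described can be sketched end to end as follows (a minimal illustration with our own function name):

```python
import math
import numpy as np

def weighted_permutation_entropy(x, m=3, tau=1):
    """Weighted permutation entropy of a 1-D series: Shannon entropy
    of the ordinal-pattern distribution, with each delay vector
    weighted by its variance. Lower values mean a more regular
    (more predictable) series. Normalized to [0, 1] by log(m!)."""
    x = np.asarray(x, dtype=float)
    n = len(x) - (m - 1) * tau
    weights = {}
    total = 0.0
    for i in range(n):
        seg = x[i:i + m * tau:tau]           # delay vector of length m
        pattern = tuple(np.argsort(seg))     # ordinal permutation
        w = seg.var()                        # variance weight
        weights[pattern] = weights.get(pattern, 0.0) + w
        total += w
    h = 0.0
    for w in weights.values():
        p = w / total
        h -= p * math.log(p)
    return h / math.log(math.factorial(m))
```

With m = 3 there are at most 3! = 6 distinct permutations, matching the "six permutations at most" mentioned for the reanalysis application.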
M
And using this new method of weighted permutation entropy, we analyzed the coupled ERA 20th-century reanalysis (CERA-20C) with an embedding length of 3, which corresponds to six permutations at most. With the CERA-20C data, we further used a 10-year running mean time window to compute the weighted permutation entropy. In this figure, we have three
M
rows and four columns. Each column corresponds to one MJO-related time series — the RMM1 time series, the RMM2 time series, the MJO amplitude, and the MJO propagation — and each row corresponds to a different time-delay parameter we chose to compute the result. The first row corresponds to tau equal to two days.
M
The second row is tau equal to three days, and the last row corresponds to tau equal to four days. For each subplot here, the y-axis is simply the weighted permutation entropy and the x-axis is the year. So we can see that during the past century, among all four MJO-related time series and using different time-delay parameters, there is a consistent decreasing trend in the weighted permutation entropy. Recall that in the previous slide we said that the lower the weighted permutation entropy, the higher the predictability, so such a decreasing trend in the weighted permutation entropy translates into an increasing trend in MJO predictability, which is also consistent with the results we obtained from the model ensemble forecast method.
M
We can then further answer the following question: is the increasing MJO predictability caused by global warming, or is it just centennial internal variability? Previously, because we don't have ensemble forecasts initialized from pre-industrial conditions, we could not answer this question. But now, since the weighted permutation entropy is more flexible and can even be applied to simulations from the CMIP6 models, we can easily answer this question by picking a good model that can simulate the MJO behavior.
M
And here we choose the CESM2 model to test our assumption, because we know that the CESM2 model has a good representation of the MJO structure and the MJO propagation. We analyze the control run from CESM2, as well as the historical runs, with 10 ensemble members from CESM2 and three ensemble members from CESM2-WACCM, and also the SSP5-8.5 future projections, with three ensemble members from CESM2 and five from CESM2-WACCM.
M
By comparing the historical runs with the control run, we will be able to see how the MJO predictability changes under more severe global warming. The first step in such an analysis is to compute the weighted-permutation-entropy time series in each simulation and future projection. The second step is to estimate the spread of the weighted-permutation-entropy slope from the control run, which tells us the range of internal variability in how the MJO predictability can change; recall that a negative weighted-permutation-entropy slope corresponds to increasing MJO predictability, while a positive weighted-permutation-entropy slope corresponds to decreasing MJO predictability. The third step is to compare the historical runs, or the SSP5-8.5 future projections, with the control run, to see the influence of the external forcing relative to the internal spread.
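The slope-spread estimate in step two can be sketched in a few lines; the window length, the non-overlapping segmentation of the control run, and the function names are my assumptions about one reasonable implementation, not the authors' code.

```python
import numpy as np

def trend_slope(y):
    """Least-squares linear trend of a 1-D series (units of y per time step)."""
    t = np.arange(len(y))
    return float(np.polyfit(t, y, 1)[0])

def control_slope_spread(wpe_control, window):
    """Slopes fitted in consecutive non-overlapping windows of a control run.

    The min-max range of these slopes approximates the internal-variability
    spread; a forced run whose WPE slope falls outside it has 'emerged'."""
    n = len(wpe_control)
    return np.array([trend_slope(wpe_control[i:i + window])
                     for i in range(0, n - window + 1, window)])
```

A strongly forced decline (a negative WPE slope, i.e. increasing MJO predictability) would then fall below the minimum of this spread, mirroring the comparison of the dots and triangles against the blue bars in the talk.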
M
Here in this figure we show the historical runs versus the control run. The blue bars are the spread of the weighted-permutation-entropy slope estimated from the control run, and the dots and triangles are the weighted-permutation-entropy slopes fitted from each historical-run ensemble member. The vertical blue line here is the mean WPE slope estimated from the control run, while the vertical red line is the mean weighted-permutation-entropy slope computed from the historical-run ensemble members. We can see that the dots and triangles are basically still within the spread of the blue bars, which means the influence of the external forcing, in this case global warming, has not yet emerged from the internal variability of the MJO predictability change, especially for RMM1, RMM2, and the MJO amplitude.
M
However, for the MJO propagation, the dots and triangles are not all within the spread, and they are also noticeably biased in this figure, so our conclusion would be that the increasing MJO predictability during the past century might be a result of both the internal variability and the global-warming forcing. To further separate the impact of the internal variability from that of global warming, we compare the control run with the SSP5-8.5 future projection, which has more severe global warming, similarly to the previous slides.
M
To explain why the MJO predictability is increasing under global warming: the physical mechanism behind this question can be very complicated. However, here we'd like to give a simple answer from a mathematical perspective, because the weighted-permutation-entropy formula basically just extracts information from the weighted occurrence frequency of each permutation in the ordinal analysis of the time series.
M
Therefore, in this figure the x-axis is time, while the y-axis is the weighted occurrence frequency of the different permutations. Here we can see that the occurrence frequency of the dominant permutations is becoming increasingly dominant: for RMM1, RMM2, and the MJO amplitude it's the increasing and the decreasing patterns becoming more and more dominant, and for the MJO propagation it's the eastward-propagation pattern becoming more and more dominant. Intuitively, the more dominant these patterns become, the more predictable the time series is.
M
We also show a mathematical proof here, by taking the time derivative of the WPE and attributing the contributions to different permutations. By doing this kind of analysis, we can conclude that the increasing MJO predictability is indeed contributed by the increasingly dominant patterns in the MJO behavior itself.
M
So, in summary: during the past century, both the ensemble subseasonal forecasts and the reanalysis indicate an increase in MJO predictability, and when examining the CESM2 model ensemble, we find such an increase in MJO predictability is caused by both internal variability and the external global-warming forcing. Under more severe global warming, the MJO tends to be more predictable, and the MJO gains more predictability through showing more organized patterns, within a range of about 10 days.
M
G
N
Good morning everybody, can you hear me? Okay, yes, great, all right. So today I'm going to report on a study that we started quite a while ago on sub-seasonal predictability from atmospheric, land and ocean initial states, and I'd like to acknowledge all the people that contributed to this work, including Sasha, Tegan, Steve, Sanjeev, Nick, Paul Dirmeyer and Gokhan.
N
So we set off on this study a while ago with the basic question, or the basic goal, to quantify how much of sub-seasonal predictability comes from the initial state of the atmosphere, land, ocean and sea ice; so basically to verify this diagram on the right that Paul Dirmeyer made quite a while ago, which is supposed to be representative of the predictability of mid-latitude surface temperature over land. We're going to focus on the sub-seasonal window, which is right in here, so weeks 3-4 and weeks 5-6. We expect the atmosphere to be a huge contributor in the early days, but in the sub-seasonal window we expect contributions from the land, ocean and the atmosphere.
So we ran a series of experimental reforecast suites to test exactly how much predictability comes from these. We started with just a couple, and now we've completed the set, because we were seeing things that were unexpected. So, in addition to the standard reforecast set, which runs from 1999 to 2020 with weekly initializations, basically following the SubX protocol, we ran the suite six more times, and in each suite we changed out the initial conditions of one or two model components to climatology.
N
So what I'm showing here is the anomaly correlation coefficient for the first set of the reforecasts. On the left we have the standard reforecast, and these panels go from weeks 1-2 at the top, to weeks 3-4, down to weeks 5-6, and we see that the skill for two-meter temperature really drops off from weeks 1-2 to weeks 5-6. This is very similar to what all the other sub-seasonal models are doing, and you see the drop is mostly over land.
N
There is a lot more skill over the oceans on these time scales. Okay, so what happens when we take out the realistic atmospheric initial conditions? Well, firstly, we see in weeks 1-2 there is a very large drop in ACC over the land areas, and not so much over the ocean, and then similarly over weeks 3-4; but by the time we get to weeks 5-6, all the areas that are stippled are not significantly different from the standard reforecast.
N
So what this is showing us is that the atmosphere has a really strong impact, not only in weeks 1-2 but also in weeks 3-4. When we go to the clim-land suite, this is where we started seeing our first surprise. When we replaced the realistic land initial conditions with climatology, we do not lose any skill in the first two weeks; and as a matter of fact, we really don't lose much skill in weeks 3-4 either, except over selected regions over South America and parts of Africa.
N
So something is not working the same way as we expected, and we've had lots of conversations about this with Sanjeev, Paul Dirmeyer, and even the land group. What we think it comes down to is that the land-atmosphere coupling in CESM2 may be a little too weak, although we're still investigating that theory. But we're definitely not seeing what we expected to see, this strong coupling between the land and the atmospheric initial conditions, because we're not getting any gain of skill from realistic land initialization. All right, and when we move on to the ocean:
N
We were also a little surprised by what we saw, but not as much as with the land. In the first two weeks we don't expect the ocean to make much difference, so we don't lose any skill from changing to climatology; but we also don't see much loss of skill in weeks 3-4. Everywhere over land where you see stippling, that means there's no difference from the standard reforecasts. So the only areas that lost skill are this north part of South America and central Africa, and it's similar for weeks 5-6 as for weeks 3-4; perhaps there is a little bit more difference on the West Coast of the United States in weeks 5-6. But more or less, in our suite the atmosphere is the main driver of surface temperature prediction skill. Here are also the results from the other complementary suites: as expected, when we initialize all the components with climatology, we get very little skill.
N
When we initialize only the atmosphere, we get back, over a majority of land areas in weeks 3-4, the same thing as the standard reforecast, again reinforcing the fact that it is largely the atmosphere contributing in this sub-seasonal window; and in weeks 5-6 it still makes a very large contribution over the land areas.
N
Now, an interesting thing happens when we run with only the land initialized: we do actually get some skill from the model, especially in weeks 1-2 and a little bit in weeks 3-4, especially over South America. So what we think is happening here is that we actually are seeing coupling between the land and the atmosphere: when the atmosphere is not initialized, it takes a little bit of the memory from the land model, and that gets transferred into the atmosphere through a coupling mechanism.
N
When we only include ocean variability, as we could expect, we don't get much skill until we start getting into weeks 5-6, and primarily we're getting skill from the ocean over the north part of South America and, again, Africa. So those regions keep coming up; we think they are largely associated with ENSO, and I'll talk about that in a minute. But you can see that we recovered the majority of the skill in these regions just by initializing the ocean.
N
So you don't really need to initialize anything else to get the surface temperature skill in these regions. We then quantified the contribution of ENSO a little more. What this figure on the right shows is the change in ACC of two-meter temperature between the active and the neutral ENSO years, and, as expected, these are the regions where you're seeing an influence: the north part of South America, central Africa, and perhaps a little bit on the West Coast of the United States, which is what you would expect from teleconnections.
N
But the changes are not large; they're not very large, but in those regions ENSO does make a difference. One thing I forgot to mention: I'm mainly showing you annual means; we have looked at DJF and we have looked at JJA, but more or less the picture looks the same. All right, so the last thing we've done, and we've struggled with how to demonstrate this really well, so this is, I think, our final attempt: what we've tried to separate is the contributions from the variability of the different components.
N
I call those ATM, land and the ocean. Every one of the experimental suites also has a contribution from climatology; that's just something we cannot take out: if you're going to initialize the model with something, there's going to be a contribution from climatology. And then, we think, there are three terms that make a difference: coupling between the atmosphere and the land, coupling between the atmosphere and the ocean, and there might be coupling between the land and the ocean, but it should be very small for two-meter temperature over land.
N
So for the rest of the presentation we assume that is zero. So, for example, the clim-ATM suite will have contributions from climatology, variability of the land, variability of the ocean, and from these coupling terms. Based on all the reforecast suites, we can calculate the different terms, and we can see if the total skill is a combination of these individual components. So the ACC in the standard simulation is the ACC of all of these terms combined together.
N
So if linearity would hold, we could calculate the ACC sum as the ACC from climatology, from atmospheric variability, from the land, the coupling terms, and so on. So we can do a little bit of math with all these different reforecast sets and, for example, derive the ACC from the variability of the atmosphere; we do that by subtracting the ACC of the clim-ATM suite from the ACC of the standard reforecast suite. Similarly, you can do a little bit more math, which Sasha has mainly done, and also calculate the contributions from these coupling terms.
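The subtraction logic just described can be sketched in a few lines; the ACC helper and the attribution function below are my own illustration of the idea, under the talk's linearity assumption, not the authors' code.

```python
import numpy as np

def acc(forecast, obs):
    """Anomaly correlation coefficient between forecast and observed anomalies."""
    f = np.asarray(forecast) - np.mean(forecast)
    o = np.asarray(obs) - np.mean(obs)
    return float(f @ o / np.sqrt((f @ f) * (o @ o)))

def component_contribution(acc_standard, acc_clim_component):
    """Skill attributed to one component's initial state: the standard suite's
    ACC minus the ACC of the suite where that component is set to climatology
    (e.g. ACC_atm = ACC_standard - ACC_clim_atm)."""
    return acc_standard - acc_clim_component
```

The linearity check then amounts to asking whether the per-component contributions, plus the climatology and coupling terms, add back up to the standard suite's ACC.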
N
So, if you see, that is the reason we needed to run the clim-all suite, because otherwise we couldn't derive those numbers. I'm sorry if the math is a little bit too much here, but the picture will make things a little clearer. What we're showing here, and we'll focus on this top-left panel, which is the global mean: the yellow is the contribution from the atmosphere.
N
So this is the ACC from the variability of the atmosphere, and, as you saw earlier, it peaks in the early weeks, it's still the dominant term in weeks 3-4, and by the time we get to weeks 5-6 it's right on par with the ocean, which is the blue line here. As was clear earlier, the land does not contribute much, so the green line here, the contribution from the land, is just flat; and again we think that maybe has to do with how the coupling is set up between the land and the atmosphere.
N
And then there is this contribution, the dashed green line, which is the coupling between the atmosphere and the land. That primarily comes up when we don't initialize the atmosphere: the surface temperature is then going to be driven by what the land is doing. And finally, this pink line here is the sum of all of these terms.
N
So if you add up all the solid lines and all the dashed lines together, they do add up to the pink, and what you see is that it is almost perfectly the same thing as what you get with the standard reforecast. So the linearity assumption holds everywhere except for this region of South America. If you look at all the other diagrams, we can add up the individual contributions, but there's something different going on over South America, and we've seen it in the diagrams, right?
N
So the atmospheric initial condition does not give you as much skill as in other areas, and the ocean and the land-atmosphere interactions become important almost from the beginning. I think it has to do something with the vegetation in this region, just the different meteorology that happens in that region; but mostly the linearity assumption holds. And we've only talked about two-meter temperature so far.
N
But we've done all of these figures for precipitation as well, and, as you see in this top-right figure, which is the ACC for precipitation over land, there is a very sharp decline, and the skill in the sub-seasonal window is very low. These other panels, h through n, zoom in only on weeks three through six, but you see a very similar message: the atmosphere, throughout the sub-seasonal weeks 3-4 window, is the dominant source of predictability, and the ocean and the land play a very minor role.
N
However, the ocean starts being on par with, or greater than, the atmosphere when we get to weeks 5-6; again with the exception of South America, where the ocean plays a much larger role, and also Southeast Asia and Australia, where the ocean is also playing a much larger role. And all the terms that we calculated from our reforecast suites do add up pretty well to the standard reforecast.
N
So that's pretty cool. Okay, so, in short, our results suggest that the atmospheric initial state is the dominant source of two-meter air-temperature predictability through weeks 3-4 for the majority of land areas. Unfortunately, the land initial condition plays a small role, and we're still investigating this a little bit; Sanjeev, I think the next talk is going to talk about soil-moisture prediction in these runs, and we're looking into the reasons why this is happening, but likely we're just not getting the benefit from the coupling in our system.
N
The predictability from the ocean initial state exceeds that from the atmosphere only after four weeks, and we don't have the restarts for these simulations to run them a little further in time to see when the ocean would be playing a larger role; but the ocean does play a larger role during the active ENSO years. And for the precipitation skill, again:
N
the atmospheric initial state is the main driver, except for these regions of South America, Southeast Asia and Australia. Right, and lastly: the data is already on the Climate Data Gateway, if you would like to look at these reforecast suites; they are also on Casper in this directory, and we're just folding them under this common DOI for all the S2S reforecast suites. If you follow that link, there it all is; basically we have the P1 variables, which is just following the SubX protocol: surface temperature, etc.
N
C
K
Anna, thanks so much, very interesting. I have two quick questions. One of them is: why do you have such low skill from the atmospheric component until week one? Does it have to do with the verification being different from the initialization, i.e. a spin-up problem? And the other one was a comprehension question: these first ACCs on your first slide, is this done against a verification data set like observations, or is this done in the perfect-model world?
N
Okay, so I'll answer the second question: we've done this against the CPC data and we've done it against ERA-Interim. I'm showing you ERA-Interim because we also wanted to see what happens over the oceans; when we were showing the CPC data, people kept asking what's going on over the oceans. But it doesn't really make much difference. And I'm not sure if I understood your first question. For the standard reforecast suite, the ACC values are pretty close to one in weeks 1-2; this one shows it a little clearer. So they're pretty close to one; there is a little shock to the system, so they're not as high as ECMWF, but they're, I think, pretty well as expected. Maybe I didn't understand the question.
K
N
Oh, I see what you're saying. So I think that is due to a little bit of initialization shock in the beginning; that's why, in the first couple of days, we don't get as much benefit as we could, and I think there are also these interactions with the land model. So there's something we don't fully understand in that time period.
D
I was just curious: when you look at the runs where you don't put in an ocean initialization, what's happening to, say, the ENSO tropical convection signal? Presumably that's not just dying, is it? It's got some tail; is that tail long enough that you're still seeing the effects of the ocean, even though you don't have it explicitly?
N
So, in which one of these suites?
N
Yes, so you see that as a drop in skill over the ocean; that's partly why we're showing that. In this clim-ocean suite, if you compare, for example, weeks 3-4, what's going on with the ocean, and then compare it to the standard reforecast: you used to see the drop in skill over the ocean regions, right? So now there is no ENSO signal at all in these.
D
N
D
N
D
D
G
O
I am going to talk on the performance evaluation of real-time sub-seasonal rainfall forecasts over West Africa for the 2020 and 2021 monsoon seasons, for operational use.
O
This is a fallout from a research collaboration called the GCRF program African SWIFT, from 2017 to 2022. It gave us the leverage to access the ECMWF forecasts in real time during the real-time pilot project.
O
Okay, it's going now. The basic aim of this work is to have a scientific understanding of the S2S prediction system in West Africa, the one that is required to provide rainfall forecasts operationally; and the objective here is to assess the performance of the rainfall forecasts from the S2S over West Africa, in order for us to have improved knowledge of how this model represents the monsoon dynamics.
O
So, therefore, we want to know: what is the skill of this S2S prediction system in predicting the monsoon evolution in real time? What is the skill of this S2S in predicting rainfall climatology and its variability on the fly? And is the evaluation of this S2S model sensitive to observational uncertainty?
O
So here, for this experiment, we use the ECMWF S2S rainfall data in real time for 2020 and 2021. The dates of initialization are from March to October of the same year, with the hindcasts, produced on the fly, from 2001 to 2019. To test the sensitivity of this evaluation to the observations, we use the GPM IMERG and TAMSAT for the same period.
O
So, in effect, we have 15 start dates in every year, and for each start date of every year we have four dekadal lead times, that is, in 10-day steps. We then classify the start dates into four seasons: the season from March to April, which we call the pre-monsoon season; May to June; July to August, which we call the monsoon peak; and September to October, which we call the second rainy season.
O
So we want to assess the attributes of the forecast quality of this S2S. In order to evaluate this, we use deterministic and probabilistic techniques. The deterministic evaluation techniques that we use consist of the bias, the anomalies and linear correlation, while the probabilistic evaluation used here is the ranked probability skill score (RPSS), to test reliability for four rainfall thresholds, and the relative operating characteristic (ROC), to see the discrimination ability of the model.
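As a rough sketch of the ROC part of this kind of evaluation, the snippet below builds hit-rate versus false-alarm-rate points and the area under the curve for a binary rainfall event; the probability thresholds and function names are my assumptions for illustration, not the project's verification code.

```python
import numpy as np

def roc_points(probs, occurred, thresholds=np.linspace(0.0, 1.0, 11)):
    """False-alarm rate and hit rate of a probabilistic binary-event
    forecast (e.g. dekadal rainfall > 100 mm), swept over thresholds."""
    probs = np.asarray(probs, float)
    occurred = np.asarray(occurred, bool)
    fars, hits = [], []
    for t in thresholds:
        warn = probs >= t
        hits.append((warn & occurred).sum() / max(occurred.sum(), 1))
        fars.append((warn & ~occurred).sum() / max((~occurred).sum(), 1))
    return np.array(fars), np.array(hits)

def roc_area(fars, hits):
    """Area under the ROC curve by the trapezoid rule (0.5 = no skill)."""
    order = np.argsort(fars)
    x, y = fars[order], hits[order]
    return float(np.sum((x[1:] - x[:-1]) * (y[1:] + y[:-1]) / 2.0))
```

An area well above the 0.5 no-skill line, such as the roughly 0.78 quoted later in the talk's summary, indicates that the model separates events from non-events.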
O
So, in order to be explicit about the way we compute the association in terms of correlation: we compute the correlation from all start dates for each lead time, which we call the pooled temporal; and for each lead time from each start date, which we call the spatial. Then we compute the spatial-temporal, the correlation over all lead times from all start dates; and from the spatial we compute the mean correlation for each lead time over all start dates, and over all leads and all start dates.
O
Then, based on common climatology, we classified West Africa into four regions: the Gulf of Guinea, the Guinea forest, the Sudan savanna and the Sahel. The first map that I'm going to show here is the eyeball verification that any forecaster wants to see, and this is showing the probability forecast from the S2S and the observed dekadal rainfall accumulation of less than 10 millimeters.
O
So what we want to see here, I don't know if you can see this, is the ability of the model to produce low rainfall, and we can see from the upper panel here that the model captures the little dry season over the Gulf of Guinea and its reduction in 2021.
O
So the second map is showing the same thing, but for rainfall greater than 100 mm; this is just to show how well the model captured the evolution of the monsoon. We can see that, if we compare the GPM observations to the forecasts here, the model is able to capture the bimodal characteristics in the south and the unimodal characteristics in the north. The next one is showing the climatological bias of the model in the hindcasts.
O
We can see that the model is wetter during the July-August period in the south, and drier in the north in the same period, and this is more detailed for 2020 and 2021. I do not want to dwell on the TAMSAT results; if you can see the TAMSAT panels on the right-hand side, they are just there to compare how the model behaves using TAMSAT versus using GPM, but I'm not going to dwell on TAMSAT.
O
So this is showing the distribution of the inter-annual variability of the standardized dekadal rainfall anomaly of the model. Here the model ensemble mean is the solid black line, the red boxes are from the GPM observations and the green ones are from TAMSAT. We can see here that the model, irrespective of the lead time and irrespective of the area, can reproduce the inter-annual variability for any lead time on the sub-seasonal scale.
O
This one I won't talk about; it's just to give credence to the kind of outliers that we have here in the TAMSAT, a way of giving credence to the model considering the sign alone and not the magnitude. So this here is the correlation between the ensemble mean of the model and the GPM, for all start dates and all lead times, and you can see that, consistently,
O
the anomaly has the weakest correlation skill, although it increases from the Gulf of Guinea, region one, to region four. But seasonally we can see a sharp drop in skill during the peak of the monsoon. Again, the strongest skill is in the hindcasts and the lowest skill is in the anomaly.
O
I won't talk about the spatial one; it's just to compare those. So this is the correlation skill of the model over all lead times, from each lead time of all start dates, and what this is signifying is to see at what lead time we have the strongest skill; and even if region one has the lowest correlation skill,
O
it does not imply that the last lead time has the lowest skill. So I am going to the ranked probability skill score here. The first thing that strikes you is the fact that the ability of the model to forecast the ranges of events is small during the monsoon peak, and that the lowest skill here is the Sahel again; though in 2021, region 4 has the lower skill.
O
There is this systematic decrease of skill from lead one to lead four, but it is not consistent everywhere, even while it is consistent that the Sahel has the lowest skill and the Gulf of Guinea has the strongest skill. So this is showing the ROC, which indicates the discrimination ability of the model, and we can see here that the model has no skill in discriminating low rainfall over the Gulf of Guinea and heavy rainfall over the Sahel,
O
despite the fact that, in detecting rainfall anomalies, it displays stronger skill over the Sahel, as can be seen here. So, in summary: we have seen that the S2S model is able to capture the evolution of the monsoon, which includes the dryness, the wetness and the monsoon jump. It shows that during the monsoon peak the model is generally wetter in the south and drier in the north, with different attributes when using TAMSAT.
O
We can also see that the decrease or increase of correlation skill across lead times over all regions is not consistent. Again, the correlation skill in the spatial, compared to the spatial-temporal, is stronger. It also shows that the model could predict, on average, eight correct rainfall-anomaly forecasts out of 10 across the regions, regardless of the lead time. The RPSS generally decreases towards the Sahel, but is weakest over the Gulf of Guinea in the TAMSAT verification. It also revealed significant inconsistency in the RPSS across lead times from all
O
start dates. With the area under the curve of about 0.78, the model's ability to discriminate different rainfall thresholds across all regions is strong. And finally, we have seen that the results show that the assessment of the skill of this model is sensitive to the uncertainties in the validating observations. So I will stop here; thank you for listening, and I welcome any additional advice and questions. Thank you so much.
K
G
P
Our main research result is that the land initialization contributes most to the sub-seasonal soil-moisture forecast skill in the U.S. Thanks to the USDA grant for supporting our research; we analyze the CESM2 SubX sensitivity-experiment data. The motivation of our study is the soil moisture. Sorry.
P
The motivation of our study is a set of soil-moisture-prediction-related questions. The first question we want to ask: can we predict root-zone soil-moisture anomalies weeks to months in advance? Based on our previous study, we found that soil moisture has significant predictability from seasonal to decadal time scales, even with limited precipitation predictability. We also want to know which sources contribute most to the soil-moisture predictability: the atmosphere, ocean or land.
P
The model we use is the sub-seasonal forecast experiments, SubX, with the CESM2 large ensemble. It has one control experiment, and we use another four sensitivity experiments. Just like the introduction in Dr. Yaga's presentation, we use different initial states: for example, the land-only experiment means we only have the realistic land initialization, while for the atmosphere and ocean the model has climatology instead of the realistic initial states; and for the control experiment we have all three, the atmosphere, ocean and land, with realistic initial states.
P
So the variable in our study is soil moisture from the surface down to 0.5 meters (the root zone). To remove the drift, we normalize the soil moisture for each grid point, for each forecast initial week, for each month, and across the forecast leads from 0 to 45 days. To reduce the biases in the observations, we calculate the average of four soil moisture observation sources — remote-sensing- and reanalysis-based soil moisture: ERA5-Land, SMERGE, CLM, and the MERRA-2 soil moisture.
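As a rough illustration of the drift-removing normalization described here — standardizing the forecasts separately at each lead — the following is a minimal sketch. The array shapes are simplified assumptions (one grid point, one initial week, one month), not the actual analysis code:

```python
import numpy as np

def normalize_by_lead(fcst):
    """Standardize forecasts separately at each lead to remove model drift.

    fcst: array of shape (n_starts, n_leads) for one variable at one grid
    point -- one row per forecast start, one column per lead day.
    Returns anomalies with the lead-dependent mean and spread removed.
    """
    clim_mean = fcst.mean(axis=0, keepdims=True)  # drifted climatology per lead
    clim_std = fcst.std(axis=0, keepdims=True)    # drifted spread per lead
    return (fcst - clim_mean) / clim_std

# toy check: a strong linear drift across leads is removed entirely
rng = np.random.default_rng(0)
raw = rng.normal(size=(30, 46)) + np.linspace(0.0, 5.0, 46)  # drift grows with lead
anom = normalize_by_lead(raw)
print(np.allclose(anom.mean(axis=0), 0.0))  # per-lead mean is zero
```

In the talk this standardization is additionally done per grid point, per initial week, and per calendar month; the sketch collapses those strata into one for brevity.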
P
We calculate a 15-day moving average around each forecast lead day from 0 to 45, so we lose the first seven days and the last seven days of soil moisture due to the running average. That matches the sub-seasonal time scale — roughly weeks 2 to 4 of the forecast — and because that is also the gap between the short-term weather forecast and the long-term seasonal outlook, we want to test the model performance during that window.
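The windowing above can be sketched as a centered running mean: with leads 0–45 and a 15-day window, the first and last seven lead days have no full window and are dropped. This is an illustrative toy, not the analysis code (and whether the first valid lead is labeled 7 or 8 depends on whether lead counting starts at 0 or 1):

```python
import numpy as np

def centered_running_mean(x, window=15):
    """15-day centered moving average along the lead dimension.

    Valid output exists only where the full window fits, so for leads
    0..45 the first and last window // 2 = 7 days are lost.
    """
    kernel = np.ones(window) / window
    smooth = np.convolve(x, kernel, mode="valid")   # drops 7 days on each end
    valid_leads = np.arange(window // 2, len(x) - window // 2)
    return valid_leads, smooth

leads = np.arange(46)          # forecast lead days 0..45
series = leads.astype(float)   # toy linear series
valid, smooth = centered_running_mean(series)
print(valid[0], valid[-1], len(smooth))  # 7 38 32
```

For a linear series the centered mean returns the center value, so `smooth` equals `valid` here — a quick sanity check that the window is truly centered.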
P
The first result from our study is a comparison across the four observation datasets. The first column is ERA5, the second is SMERGE, the third is CLM, and the fourth is MERRA-2. The forecast skill is weakest against ERA5 and highest against MERRA-2, so we combine them: we calculate the average, normalize the observed soil moisture, and calculate the anomaly correlation between the observations and the forecasts.
P
P
To make the result more straightforward, we show the relative contribution from the three sources as an area ratio. The region we select is NEON region 6, which is the agriculturally dominated eco-region. In the left figure, the y-axis is the model–observation correlation and the x-axis is the forecast lead from 8 to 38 days; due to the moving average we don't have the first seven days and the last seven days of soil moisture. The red solid line is from the land-only experiment.
P
P
It is worth noting that the light blue line is the difference between the ocean-plus-land experiment and the land-only experiment; it is similar for the gray region — the atmosphere's nine percent comes from the difference between the control experiment and the ocean-plus-land experiment. To test whether this result is robust, we also studied the one-source-only SubX experiments, because in the former analysis each contribution is a difference.
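The attribution-by-differencing just described can be written down in a few lines. This is a hedged sketch of the bookkeeping only — the correlation values below are made-up placeholders, not the talk's numbers:

```python
# Attribution by differencing nested initialization experiments.
# Skill values are illustrative placeholders, not results from the talk.
r_land_only = 0.45    # skill with realistic land initial conditions only
r_ocean_land = 0.50   # skill with realistic ocean + land initial conditions
r_control = 0.55      # skill with all realistic initial conditions

land_part = r_land_only
ocean_part = r_ocean_land - r_land_only   # the light-blue band in the figure
atmos_part = r_control - r_ocean_land     # the gray band in the figure

total = land_part + ocean_part + atmos_part  # equals r_control by construction
shares = {
    "land": land_part / total,
    "ocean": ocean_part / total,
    "atmosphere": atmos_part / total,
}
print(shares)
```

The one-source-only experiments mentioned above serve as the robustness check: there each contribution is read off directly instead of as a difference between nested runs.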
P
We use the land-only, ocean-only, and atmosphere-only experiments to see whether land still contributes most to the soil moisture predictability. We find that on sub-seasonal time scales land contributes the most to the soil moisture predictability; the ocean shows very limited skill across the four seasons, and the atmosphere has a significant contribution in the spring season. The dotted regions mark correlations that are significant at the 0.05 level.
P
We also compared the 2-meter surface air temperature with the soil moisture results in SubX. We find that the atmosphere contributes most to the near-surface air temperature in the CESM2 large-ensemble experiments; the ocean has limited contribution, while the atmosphere has a significant contribution during the fall season and the summer season.
P
To make the result more straightforward, we again plot the relative area ratios to compare the predictability contributions from those sources. The left figure is the soil moisture and the right-hand one is the 2-meter air temperature, again for NEON eco-region 6. We find that the land contribution decreases to 44 percent, the atmosphere contribution increases from 9 to 49 percent, and the ocean contributes seven percent to the air temperature predictability. As before, the light blue and light gray regions are the differences between experiments.
P
J
C
P
Okay — in this figure we show the difference between the experiments. If we add the one-source experiments' skills together, the sum is going to be much greater than, for example, the control experiment: if we add the land, atmosphere, and ocean one-source experiment prediction skills together, it will be much greater than this solid black line. That means the soil moisture predictability is not linearly additive in the model, which is different from the air temperature.
P
Yeah, I have not shown that result yet — I'm working on it. It concerns the soil moisture memory: we want to remove it and see how much of the predictability remains.
P
Oh, it works — that is work I'm doing now. We want to remove the soil moisture memory from the model forecast skill and see how much of the forecast skill comes from the model initial state itself, from the model itself, or from the soil moisture memory directly, because the memory has a very significant effect on the predictability, even on decadal time scales. So yeah, that is a figure I'm working on now; it still has some technical problems. Thank you very much.
R
P
For the surface layer — in my limited experience, it is influenced significantly by the atmosphere, especially the precipitation, so the land–atmosphere coupling effect will play a larger role in the surface layer. And yeah, it would be a great point, since the SMAP soil moisture is the surface layer — 0.05 meters — so if we had time we would take a closer look to make the research more complete. Thank you.
S
K
Thank you so much. I'm going to report on the impact of stochastic parameterizations on the S2S time scale in coupled simulations with CESM.
K
So the idea behind stochastic parameterizations is that, especially as resolution increases, the classical bulk assumptions no longer hold, and we want to actually represent the subgrid-scale state — we want to sample the subgrid-scale state rather than just look at its mean. I mean — this is very loud; can I turn this down a little?
K
So one widely used scheme is the SPPT scheme — the stochastically perturbed physics tendencies scheme — where the parameterized tendencies from the physics, sorry, where the accumulated tendencies from the physical parameterizations, are perturbed with a random field, and this random field has temporal and spatial correlations.
K
So this is the simple update equation: the tendency is decomposed into the dynamical tendencies from the dycore and the physical parameterizations, and we perturb the latter with (1 + r). These are Gaussian perturbations at each grid point, but adjacent grid points are correlated. The scheme perturbs the accumulated tendencies of u, v, T, and q, and in particular we do not perturb the radiative tendencies, because for those it has been shown that the perturbations shouldn't be Gaussian.
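The update just described can be sketched in a few lines. This is a deliberately crude stand-in: real SPPT uses a spectral pattern generator for the correlated field, and the grid sizes, smoothing, and AR(1) parameters below are illustrative assumptions, not CESM's or ECMWF's settings:

```python
import numpy as np

def sppt_step(state, dyn_tend, phys_tend, r, dt=1.0):
    """One SPPT-style update: only the physics tendency is perturbed.

    state, dyn_tend, phys_tend, r are 2-D gridded fields. r should be
    Gaussian at each point but spatially and temporally correlated;
    radiative tendencies would be excluded from phys_tend in practice.
    """
    return state + dt * (dyn_tend + (1.0 + r) * phys_tend)

def correlated_field(prev_r, rng, phi=0.9, sigma=0.3):
    """AR(1) in time, crudely smoothed in space (toy replacement for the
    spectral pattern generator used in real SPPT implementations)."""
    noise = rng.normal(scale=sigma, size=prev_r.shape)
    for axis in (0, 1):  # cheap smoothing: average with nearest neighbours
        noise = (noise + np.roll(noise, 1, axis) + np.roll(noise, -1, axis)) / 3.0
    return phi * prev_r + np.sqrt(1.0 - phi**2) * noise

rng = np.random.default_rng(1)
shape = (16, 16)
r = np.zeros(shape)
state = np.zeros(shape)
dyn = np.full(shape, 0.1)   # toy dynamical tendency
phys = np.full(shape, 0.2)  # toy physics tendency
for _ in range(10):
    r = correlated_field(r, rng)
    state = sppt_step(state, dyn, phys, r)
print(state.mean())  # near the unperturbed value 10 * (0.1 + 0.2) = 3.0
```

Because r has zero mean, the perturbation spreads the ensemble without systematically shifting the mean state — which is exactly the property the conservation discussion that follows has to preserve for moisture.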
K
From coarse-graining experiments — in the interest of time I will skip over this — but we need to make sure that if we perturb q, moisture is conserved, and so we actually have to perturb the P minus E term as well, which is not so easy in CESM, because half of the parameterization schemes are executed before the coupling and half of them after.
K
So if you perturb at the end of the physics time step, we need to save that pattern, and then we compute a precipitation correction, which is then added on half a time step later. In the climatological runs that causes some issues with RESTOM — the top-of-the-atmosphere energy balance — that we're currently working on. Here I'm going to talk about coupled simulations.
K
K
First of all, I looked at perfect-model skill. This is where you verify the model not against observations but against an ensemble member, and you have to do this carefully to get the theoretical results: for example, you need to compute the ensemble mean based on all ensemble members, including the verifying member, to recover some of the theoretical results that are valid in the perfect-model scenario.
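The verification setup just described can be sketched as follows — a minimal toy in which each member takes a turn as "truth" and is scored against the mean of all members, including itself, as the speaker notes is required. The signal/noise construction is an assumption for illustration only:

```python
import numpy as np

def perfect_model_acc(ens):
    """Perfect-model anomaly correlation for one variable.

    ens: shape (n_members, n_cases). Each member in turn is treated as
    the verifying 'truth' and correlated with the ensemble mean computed
    over ALL members -- including the verifying one.
    """
    full_mean = ens.mean(axis=0)
    accs = [np.corrcoef(ens[m], full_mean)[0, 1] for m in range(ens.shape[0])]
    return float(np.mean(accs))

rng = np.random.default_rng(2)
signal = rng.normal(size=100)                        # shared predictable component
members = signal + 0.5 * rng.normal(size=(10, 100))  # plus member noise
print(perfect_model_acc(members))  # high, since members share the signal
```

Excluding the verifying member instead would systematically lower the correlation for small ensembles, which is why the "include the verifying member" detail matters for matching the theoretical perfect-model limits.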
K
So I show the same format throughout the slides: weeks 1–2 on the left, weeks 3–4 in the middle, and weeks 5–6 on the right, and I'm comparing ECMWF to CESM2. If we look here at weeks 3–4, we see that the perfect-model world CESM2 actually predicts more skill than ECMWF, so the intrinsic predictability of CESM2 is higher than that of ECMWF.
K
K
Now we're looking at actual model skill, so now we're verifying against observations, and if you look very carefully we're seeing two things. First of all, I was really impressed to see that the CESM model skill on the sub-seasonal time scale out to weeks 5–6 is very similar to that of ECMWF. It means that the initialization that ECMWF does is, at least after weeks 2–3, not the leading cause of model skill.
K
Secondly, we see that CESM now actually has a little less skill than ECMWF, so this perfect-model skill did not translate into actual skill.
K
The perfect-model skill is often treated as the predictability horizon: if a model has no perfect-model skill for a process, then we would not expect to see that skill in actual verification either. There is, however, the signal-to-noise paradox, so here I'm comparing the actual model skill to the perfect-model skill, and even on the sub-seasonal time scale we get these regions —
K
If you see at the top, where the actual hindcast skill is higher than that of the perfect model — and this has been reported on longer time scales, from seasonal to decadal, and it's an area of active research — we also note that CESM doesn't have that, and there could be several reasons for it.
K
It could be that the actual skill is smaller than that of ECMWF, or it could be that the perfect-model skill is estimated too high. Sorry, I cut that off — those are differences from what I showed you before. Typically you would take ratios, as for the signal-to-noise ratio, but then you can divide by zero, so here it actually doesn't matter because I'm just looking at the differences — unless you ask about statistical significance, and I can come back to that.
K
However, there's another reason. Why is the intrinsic skill of CESM2 higher than that of ECMWF? One of the reasons is that there might not be enough spread in those simulations.
K
And so here I'm showing spread and error for 2-meter temperature and precipitation: the solid bars are the errors and the lighter, dashed ones are the spread. If we look at ECMWF — really at all the models — you see that the spread is always smaller than the error, and this is the original reason stochastic
K
Parameterizations were developed: having this under-dispersion will really hurt your probabilistic skill scores, and this is why stochastic parameterizations are now routinely used in numerical weather prediction.
K
And if you look, for example, at weeks 3–4 for temperature, you see that for CESM the error — these are global-mean errors — actually went up for 2-meter temperature, but the spread is the same, and so that means the under-dispersion in the CESM runs is worse than in the ECMWF runs.
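The spread–error comparison behind these bar plots can be sketched directly: for a statistically reliable ensemble, the mean spread should match the RMSE of the ensemble mean, and spread below RMSE is the under-dispersion being discussed. The toy ensemble below is constructed to be under-dispersive on purpose:

```python
import numpy as np

def spread_and_error(ens, obs):
    """Compare ensemble spread with ensemble-mean RMSE.

    ens: (n_members, n_cases) forecasts; obs: (n_cases,) verifying values.
    spread < rmse diagnoses an under-dispersive ensemble.
    """
    mean = ens.mean(axis=0)
    rmse = np.sqrt(np.mean((mean - obs) ** 2))
    spread = np.sqrt(np.mean(ens.var(axis=0, ddof=1)))
    return spread, rmse

rng = np.random.default_rng(3)
truth = rng.normal(size=5000)
# under-dispersive toy ensemble: shared forecast error (std 1.0) is much
# larger than the member-to-member noise (std 0.3)
ens = truth + rng.normal(size=(1, 5000)) + 0.3 * rng.normal(size=(20, 5000))
spread, rmse = spread_and_error(ens, truth)
print(spread < rmse)  # True: spread underestimates the error
```

Swapping the two noise amplitudes would give an over-dispersive ensemble instead, which is the failure mode SPPT must avoid while inflating the spread.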
K
What we then want to do with the stochastic parameterizations is: ideally, we don't want to increase the error, but we do want to increase the spread — that's the holy grail of stochastic parameterizations. And we see we can actually do this for temperature on the sub-seasonal time scales: it's very small, but we can increase the spread and not increase the RMS error. In particular for precipitation we get a very large spread, which is now actually approaching the RMS error.
K
So this is now a different format. I'm showing here only weeks 3–4: on the left side are the perfect-model predictions, in the middle the actual ones, and on the right-hand side the differences. Here you can see that with the stochastic parameterizations, this signal-to-noise paradox — these regions where the model is actually over-dispersive — is reduced, and the regions that pop out are, to first order, similar to those in ECMWF. So I
K
think from this we can conclude that the perfect-model skill in these S2S forecasts in CESM2 is overpredicted because the system is under-dispersive.
K
What is now the actual model skill? This is now DJF only, because the stochastic runs are only done for the extended winter season. At the bottom I show you the difference between SPPT and CESM2, and to first order those are very small differences: where it's blue SPPT is worse, and where it's red SPPT is slightly better.
K
However, if we look at the RPS of these forecasts — the ranked probability score is a probabilistic score that doesn't look at the deterministic forecast but compares the distribution of forecasts to that of the occurrences.
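For readers unfamiliar with the score, here is a minimal sketch of the RPS over ordered categories (terciles, say): the squared distance between the cumulative forecast distribution and the cumulative observed (step) distribution, negatively oriented with 0 for a perfect forecast, as the speaker notes next:

```python
import numpy as np

def rps(forecast_probs, obs_category):
    """Ranked probability score for one forecast.

    forecast_probs: probabilities over ordered categories, summing to 1.
    obs_category: index of the observed category.
    """
    f_cum = np.cumsum(forecast_probs)
    o = np.zeros_like(forecast_probs)
    o[obs_category] = 1.0
    o_cum = np.cumsum(o)
    return float(np.sum((f_cum - o_cum) ** 2))

print(rps(np.array([1.0, 0.0, 0.0]), 0))  # 0.0: perfect forecast
print(rps(np.array([0.0, 0.0, 1.0]), 0))  # 2.0: worst case with 3 categories
print(rps(np.array([1/3, 1/3, 1/3]), 0))  # 5/9: climatological forecast
```

Because it operates on cumulative distributions, the RPS penalizes probability placed in far-away categories more than in adjacent ones, which is what makes it sensitive to the ensemble spread that SPPT adds.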
K
K
So this is a negatively oriented score, which means the smaller the better, and a perfect forecast would be zero. We also have to note this is now shown for weeks 3–4, so these are relatively low overall skills, although I've shown that they were all positive — so I think there is real skill on these time scales, as we all know. If we now look at this probabilistic skill score — the lower it is, the better — then to first order
K
I would also say that if we compare the CESM results, the red bars, to ECMWF, those are worse. But overall I am, over and over, really impressed how well CESM does probabilistically when compared to ECMWF on those S2S time scales, and I think the reason for that is that CESM is really good at capturing internal variability, teleconnections, and processes like that.
K
But with the stochastic parameterization, the green bars, we are not only better than CESM2 but also better than ECMWF, so we have better probabilistic skill — and the reason for this is that we could pump in a lot of spread. This would probably not go through at an operational center, because you cannot reduce deterministic skill; here we are basically trading a little bit of deterministic skill for probabilistic skill. For precipitation the results are similarly good.
K
Finally, we wanted to extend this and look at state dependence. It has been shown that stochastic parameterizations can have beneficial impacts on models in terms of capturing blocking, such as that associated with the NAO and PNA, and so here we wanted to look at state-dependent predictability. This is now focusing only on North America.
K
This is in CESM2, and here I have neutral conditions, cases where the initial conditions project onto the positive PNA pattern, and the negative one. The state-dependent predictability tells you that if your initial condition projects onto the positive PNA, then you have wide regions where your S2S predictability — this is 2-meter temperature for weeks 3–4 — is better than
K
if you look at all states, and likewise in the negative pattern. For the positive one you get this greatly increased predictability in the Northeast — this is ACC — and in the negative one it's really all over the place.
K
This is the plot for SPPT, and if anything there's a small degradation, not really an improvement, so I think the stochastic parameterizations have not improved the state-dependent predictability associated with the PNA. And this is ENSO: again we're looking at forecasts where the initial date projects onto positive or negative ENSO states, and we get
K
this increased predictability in the Northeast in weeks 3–4, which is part of the classical teleconnections; and with the stochastic parameterizations we don't see a big improvement. And this is my talk — thank you.
D
My comment was: you might try looking at reliability and sharpness separately. Or maybe — I don't know if you have; maybe I should ask it as a question — have you tried looking at reliability and sharpness separately, to see, you know, if your RPS —
L
K
K
Right now, I understand — but for the sharpness, I think if the sharpness were better, the ACC would be better, honestly. For me, the biggest challenge in this has been the regionality of it. If you look at Quebec or this sort of Labrador region, obviously the teleconnections have very different impacts on the different regions, and so there's a lot going on — there are a lot of ways to slice these results. Okay.
D
I guess the question I had, which is a little different from what you showed at the end, is: we've been trying to argue that for these systems you should mainly look at forecasts of opportunity, because if you're just improving low skill from, say, bad to mediocre, you're not really helping. So have you tried looking at that here — just stratifying cases of high skill, say in the control, and then asking whether you're doing better or worse under those scenarios? I mean, maybe I'll —
D
D
K
I think that would be interesting to do. I think the idea is that you want to predict when you have extended predictability, and this is why we do this projection onto the initial state — because we want to say things like: oh, if today projects onto the negative PNA, then I think my forecast will be better, right?
D
D
K
Yeah, no, I think this is a good idea. Part of why we started this work is that SPPT improves that —
K
But ours is too strong, so it dampens it. The hope was that the teleconnection patterns would be better, and that this would then be reflected in improved S2S skill. But yeah, maybe a different slicing would help.
K
B
Have you tried — or would it be meaningful — to compare the SPPT run with a CESM2 run where you've post-processed and bias-corrected the ensemble spread, using some of those probabilistic metrics? I know you wouldn't be capturing the benefits in the physics there, but it might be interesting to see how it feeds through to some of the probabilistic metrics.
K
For improving forecasts — I think the case where a stochastic scheme would outperform a statistical post-processing method, or, you know, machine-learned ones, etc., is if you look at the Lorenz attractor and you sort of have two regimes and you just never get to one of them, so you never get the evolution right because you never reached that particular regime; or if you have too-cold SSTs and you never kick off the hurricane.
K
So the idea is that these stochastic schemes do perturb — kick — the model into phase spaces it wouldn't reach otherwise, during run time. I'm sure it would be interesting to compare this, but then the question is: if you post-process both, would the SPPT run still be better? Yeah, but I mean, those are interesting questions for applications. Thanks so much, thanks.
L
Judith, one more talk before lunch: Nick Davis.
T
Great, thanks. I'm just going to briefly talk about this month's sudden stratospheric warming and how that has played out in the real-time CESM2 forecasts. I'll just provide a brief picture of what I'm talking about. I think the last few years there's been a lot of news and science about sudden warmings, and Zach Lawrence, who's now at NOAA in Boulder, has a really great website with visualizations of the vortex. So the potential temperature here — you can think of this
T
as, you know, pressure surfaces vertically, so we're going up basically from the lower stratosphere way up into the upper stratosphere, and these are just some iso-contours chosen to illustrate the vortex. On the left is the vortex in early February: it looks like a vortex — very circular over the pole, very regularly shaped. Then by February 16th, which was the official date of this year's sudden warming,
T
you can see that the vortex has really been displaced and sort of whipsawed off the pole, and then by today — by this morning — the vortex is still in a state of disarray. Basically, there's not really any vortex to speak of in the stratosphere right now. These occur usually every two years or so on average.
T
Just to give you an idea of what the forecasts this year were doing: there are two forecasts, WACCM and CAM, and this is looking back at our February 6th initialization and then our initialization last week, with the verification overlaid in black here. The normal diagnostic folks use is the 60°N, 10 hPa zonal wind; when that hits zero, you call it a sudden warming. Each of the lines here is a member that has predicted a sudden warming, and you can see that basically by February 6th it was pretty much in the cards that we were going to get a sudden warming.
T
So we had something like a 10-plus-day lead time. There was actually some visibility on the sudden warming by the second-to-last week of January, but it just wasn't that convincing — though there were a number of model members predicting one.
T
So, like I said, these usually happen every two years or so, and this shows the weeks-3-to-4 forecast anomalies for the run initialized last week. The temperature anomalies are here on the left, and on the right is the sort of standard pattern that we expect
T
after a sudden warming. The standard pattern that we expect is cold air over Europe and especially Siberia, warm temperatures over parts of the Middle East and the Mediterranean, warm temperatures over Greenland, Canada, and Alaska, and then cold temperatures over the U.S. — so there are commonly cold-air outbreaks in the month following a sudden warming. But you can see that that's basically orthogonal to what the model forecast was predicting for the month following the sudden warming.
T
So the obvious question is: what's going on? And I should say CESM was not an outlier here — other SubX models were predicting this sort of similar pattern on the left. So I'm saying this is not your average sudden warming. One of the ways we look at the dynamics and the propagation of sudden-warming circulation anomalies is to look at polar cap geopotential height, which is really pretty much the same as the northern annular mode index.
T
So in this plot I'm showing, on the left, the CESM2(WACCM6) forecasts initialized last week, along with the GFS forecasts initialized on the same date. Warm colors mean the same thing on the left and the right, and cool colors mean the same thing: warm colors mean a warm polar cap. When there's a sudden warming, we have a really strong warming of the polar cap that tends to descend from the middle to upper stratosphere.
T
What's interesting about this one is, if you look at the forecast on the left — especially because we go out a little further — there's no warm polar cap at the surface, really, to speak of. We've basically got a cold polar cap, which is giving you those opposite-sign temperature anomalies, and GFS seems to be consistent with CESM. So the question is: does that mean there's no coupling between the sudden warming and the surface? I'll talk about that in a second, but as I toggle back and forth
T
here you can see that at some point last week GFS started to hint again at this negative surface NAM that we usually expect after a sudden warming, coinciding with this sort of second disruption around March 1st that was forecast. And then, if I toggle forward to yesterday, GFS backed off from that positive — sorry, negative — NAM at the surface.
T
But what's interesting now is that CESM has converged with GFS on the second anomaly, basically on March 1st: an apparent downward propagation of that anomaly, and then a sort of warm polar cap — negative NAM — at the surface sometime in mid-March. So the two models were a little bit different: CESM didn't want that second disruption, GFS wanted more coupling with the surface, but they've now sort of converged. And the question is: is this a non-coupling sudden stratospheric warming? And I think
T
actually the answer is no — but maybe it's just coupled in a different way than we usually expect. We usually expect to see the warm polar cap. But if I take those February 13th forecasts and split them into ones with an SSW and ones without an SSW, the ones without an SSW had a much colder polar cap in the forecast than you would otherwise expect. So basically the perspective I'd advocate is: just because you don't see the same-signed NAM anomaly at the surface
T
does not mean that the sudden warming is not associated with a shift in surface weather conditions. The polar cap would be colder — and our temperatures in the mid-latitudes would probably be even milder — if the sudden warming had not occurred, or at least co-occurred. So one thing we can do is regress this index, the polar cap geopotential height anomaly,
T
on the strength of, on the left, the sudden warming, and on the right, the secondary disruption. You can see that when we get a stronger sudden warming in the forecast, or a stronger secondary disruption, we get this sort of elevated — I should say warmer — polar cap at the surface. So basically, if you just look at the shift in the polar cap conditions, you see a pretty good reflection of the stratosphere–surface coupled variability.
T
And that's sort of all I had. I wanted to just give an update and show some of the predictive skill of CESM for this event, but also show that just because sudden warmings don't look like the normal composite doesn't necessarily mean they're not coupling with the surface.
T
A
T
T
So like 2021, when we had a week where we barely punched above zero — that's definitely a cold-air outbreak. The odd day or two when a short wave comes through and dumps snow — I don't know how that gets classified. But yeah, I think the trouble is, the smaller-scale the event in time and space, the harder it probably is to draw the connection to the sudden warming. So for sure, when we're saying we don't see the normal surface impacts here, we're talking like two-week
T
averages. I know there are a few folks with papers now looking more at this shift in distributions, which does a better job of recognizing that it's not just — it's not like the ENSO composite, where we go through the same thing: everyone would say, oh, it's El Niño, and so that means the snow is going to do this.
T
Well, it's kind of like a shift in the distribution — I think there's work showing that it shifts the distributions. But yeah, like you said, there are cold outbreaks whether there's a sudden warming or not, so obviously they're not necessarily the ultimate cause.
G
T
Yeah, I mean, I think the dripping-paint picture is like a zero-order perspective. The only reason I wanted to show this is just to highlight the dripping paint on its own — I think when folks just do an absolute composite... And by the way, I should say I don't actually view this as downward impacts. We had a paper last year in Nature Communications — basically, I think this is two-way coupling.
T
Although, you can see, there's not a great sample. I think if you view it as coupling — not just pure downward — then maybe there's something useful in the polar cap geopotential height anomalies. I will say I'm really careful here to say it's not downward, just that when we've got a stronger sudden warming or a stronger disruption across these model runs, we seem to have a more positive polar cap
T
geopotential height at the surface. That could very well mean that the dynamics at the surface are actually the things governing the polar cap geopotential height in the lower stratosphere — it could be a really strong upward effect. Yeah, I'm not going to comment on which direction everything is going.
D
Cool, just a comment: I'd just note that there's a
D
La Niña going on now, and the warming in the Northeast is very much a La Niña sort of signature, with cooling in the West. And I know, when we sit in on the weekly weeks-3–4 CPC discussions — certainly for this period — they've been focusing on questions of a fairly strong MJO also propagating through, along with the La Niña. So it's —
T
Right, so you're pointing to this warm blob over the Northeast and the sort of Great Lakes. Yeah, I mean, I think that was my goal with this: to just say that just because we don't see the usual suspects of sudden warmings, where you get this positive polar cap height at the surface, that doesn't mean there isn't some kind of association going on behind the scenes.
T
Yeah, I agree. When you get down to individual events, you've got a really difficult bag of things to tease apart. But just to address both of the previous questions: I think, at least when we look at the relationship between polar cap heights and the severity of what's going on in the stratosphere, we see evidence that there's an association regardless of the actual values at the surface. So yeah, I agree.
G
I think that wraps it up. Let's thank all our speakers again from the morning.
G
G
G
E
U
So, can you hear me okay?
U
Yes, we can — thank you. Okay, good, all right. So I wanted to start out by acknowledging my collaborators — Ben, Sasha, Yaga, Nan, and Steve — and I'm going to be talking about at least two of the challenges in decadal prediction. One involves how you initialize the model to do your hindcasts, and the other is how you remove the drift away from that initial state.
U
So the usual thing is you run a very large set of initialized hindcasts with a lot of ensemble members to evaluate skill in the hindcasts. The DPLE has 64 start years and 40-member ensembles for each start year, but of course that is very expensive and very difficult to do, and there is rarely enough computer time available to run such a large set of initialized hindcasts. And of course the reason you want to do this
U
is that you want to have enough samples of start years and drifted hindcasts to form a drifted climatology from which to compute anomalies, to compare to observations, and to quantify the skill of the hindcasts. So one of the things we're going to be looking at is:
U
is it possible to run a smaller number of start years for hindcasts — for example, if you want to do sensitivity experiments, test the model out, or maybe look at case studies — and use the uninitialized, free-running historical simulations to form the model climatologies, to compute these anomalies and remove the drift? The reason we think that could be a possibility is by looking at a plot like this in the upper right. This is from the DPLE: globally averaged surface temperatures.
U
The black line is the observations, representing the initial states. The dots are the successive forecast years as the hindcasts run — the ensemble averages of the 40 ensemble members — and going from blue to red they go from year 1 to year 10. You can see right away
U
the drifts are very large and very rapid away from this observed initial state, but they do kind of converge down here on this green line with the shading around it — this is the CESM1 large ensemble, the free-running, uninitialized large ensemble. So maybe after about year three or four you may be close enough to the systematic error state of the model, represented by the large ensemble, that you could use that as your climatology to form your anomalies.
U
So how do you form these anomalies? There are a number of ways that have been tried. The usual way, used for decadal as well as seasonal and S2S prediction, is to take forecast-year differences from a model climatology. In other words, if you're going to look at lead years three to five for your predictions, you take your whole hindcast dataset, look at all the drifted three-to-five-year
U
hindcasts, average those all together to form your drifted model climatology for that time frame, and then use that as your reference to calculate the anomalies of the hindcasts you're going to evaluate. The problem with that, and you can see it in this plot here, is that the trends over these long time periods spanned by your hindcast datasets may be different: the trend in the initial states, you can see, is different than the trend in the drifted hindcasts.
U
So you can introduce problems when you average over the drifted state for this whole time period, especially near the beginning or near the end of the hindcasts you're trying to evaluate. Another method is to calculate the mean drift, remove that mean drift from each hindcast, and compare that to the observations directly.
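The two drift-removal approaches just described can be sketched in a few lines. This is a minimal illustration, not DPLE code; the array shapes and function names are assumptions for the example.

```python
import numpy as np

def anomalies_vs_drifted_climatology(hindcast):
    """Method 1: remove the lead-dependent drifted climatology.

    hindcast: array (n_start_years, n_lead_years) of raw hindcast values.
    For each lead, average over all start years to form the drifted
    climatology, then subtract it. Anomalies are relative to the model's
    own drifted state, so any trend across the start years leaks in.
    """
    drifted_clim = hindcast.mean(axis=0)   # one value per lead year
    return hindcast - drifted_clim         # broadcast over start years

def bias_adjusted_hindcast(hindcast, obs_at_start):
    """Method 2: remove the mean drift and compare to observations directly.

    obs_at_start: observed value at each start year (the initial state).
    The mean drift at each lead is the average departure of the hindcasts
    from their initial states; removing it leaves a bias-adjusted forecast
    in absolute units, directly comparable to observations.
    """
    departures = hindcast - obs_at_start[:, None]  # drift of each hindcast
    mean_drift = departures.mean(axis=0)           # mean drift per lead year
    return hindcast - mean_drift
```

Note that with Method 1 the climatology inherits whatever trend the set of start years contains, which is exactly the problem described above.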
U
That removes the problem of introducing trend artifacts into your drifted hindcasts, and it puts you closer to a previous period you may want to compare against: what's different in the predicted period relative to a recent period. The last one here is one I'm not going to talk about today. So what do these look like? This is an example from the DPLE.
U
Here we took the state initialized in 2013, looking at lead years three to seven, in other words the average of years 2015 to 2019, the prediction over that time period, and these are the three methods I was just talking about for forming the climatologies and removing the drift. This one uses the entire drifted hindcast time period to form the drifted climatology and calculate the anomalies, and this is the observed verification using that same method.
U
This one uses the previous 15 years for the drifted climatologies, with the verification data calculated in the same way, and this one uses the bias-adjusted drifts compared to observations, again verified the same way. You can see these all
U
give you somewhat different results, though the sense of the results is the same. We wanted to see if the DPLE could predict a transition from negative IPO to positive IPO, and all three show some evidence of a positive phase of the IPO in this prediction, compared to the observations, which show a similar kind of thing with some element of a positive IPO, and compared to persistence, which is quite different.
U
Persistence just carries the previous negative phase of the IPO into the prediction period. But in terms of the trends, comparing these two especially, you can see that almost all of these anomalies are positive, and this is because of the problem of the different trends: you have a trend in your drifted hindcast data, and you're looking, in this case, at the latter part of the period, when warming has occurred, so part of that warming trend is introduced into your anomaly field.
U
If you take the previous 15 years, you don't have that big trend, and you do get some of the areas of negative anomalies that show up in the observations. So those are some of the issues with how you calculate the anomalies. But we can also see if the initialization scheme makes a difference.
U
These are the one-degree, standard-resolution versions of these models, and the main difference between the two models is in the ocean component: CESM1 has a version of POP, and E3SMv1 has the MPAS ocean. We're going to use two different initialization schemes. One is the so-called forced ocean sea ice, or FOSI, method, which we've been using at NCAR for quite a long time now: you run the model, for example, through five cycles of 20th and 21st century climate.
U
You use observed forcing from the atmosphere, and as you run through each cycle of the 20th and 21st century you imprint more of the observed atmospheric forcing on the ocean, so by the time you get to the fifth cycle you say, okay, that's going to be our ocean initial states for our hindcasts.
U
The other method is the so-called brute-force method that Ben Kirtman has been using, which is pretty simple: you just take the observational reanalysis products for the ocean, atmosphere, and land, interpolate them to the model grid, and those are your initial states. We've got a smaller number of start years, every five years starting in 1985 and going out to 2015, plus 2016, 2017, and 2018, with mostly five ensemble members for each, except for three ensemble members in a couple of the E3SM cases.
U
So what do these drifts look like, in terms of drifting away from the initial state? These are anomalies now, taken relative to the observed initial state, for the two models with the two initialization methods, and this is the first month: you start the models on November 1st and take the average of November, so that's the first month the model is run.
U
You can see that the brute-force runs have already drifted a little bit away from the initial state, because by definition their initial state was the observations, but these errors of maybe a tenth or two are smaller than in the models using the FOSI method, because FOSI doesn't guarantee the ocean initial state is exactly what the observations are.
U
There's going to be some distance from the observations, because you're only using the atmospheric forcing to give you the ocean initial state; you're not nudging the SSTs toward observations. But by the end of year one (this is forecast year one) you can see patterns starting to develop in all four cases, the two models with the two initialization methods, and this pattern is pretty stable. So even by year one, the model has already approached its systematic error state.
U
With this pattern of anomalies, by year five you can't see much difference between the year-three and year-five anomaly patterns; they're a little bigger in amplitude but pretty much the same pattern. And if you compare that to the patterns of the historical large ensembles, you can see again that the patterns are very similar.
U
The amplitudes are a little bigger in the large ensembles, because that is the ultimate drifted state, but the pattern is pretty much set up early, and it is fairly consistent within each model and within each initialization method. Another way of looking at this is to ask how much, or what percent, of the drift occurs by year five. For the dark red colors here we're taking the ratio between, for example, the year-three and year-five drifts.
U
The dark red colors mean that by year three you've already drifted by at least 80 percent of what the drift is in year five. Looking at year one, there are not many dark red colors, so of course you haven't drifted very far; in year one you're part of the way there, and by the time you get to year three almost all areas of the oceans have drifted at least 80 percent of where they're going to end up in year five.
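The "percent of the final drift realized by a given lead year" diagnostic described above can be sketched as follows; the array layout and function name are illustrative assumptions, not the actual analysis code.

```python
import numpy as np

def drift_fraction(drift_by_lead, lead, final_lead):
    """Fraction of the final-lead drift amplitude realized at `lead`.

    drift_by_lead: array (n_leads, ...) of mean drift (hindcast minus
    initial state) at each lead year; trailing axes can be grid points.
    Returns |drift(lead)| / |drift(final_lead)|, so a value of 0.8 or
    more means at least 80% of the eventual drift has already occurred.
    """
    num = np.abs(drift_by_lead[lead - 1])
    den = np.abs(drift_by_lead[final_lead - 1])
    return num / den
```

Mapping this ratio over the ocean grid gives the kind of "80 percent by year three" picture discussed in the talk.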
U
Another way of looking at the differences between the initial states is to look, as a function of lead year, at the agreement or disagreement in sign between the brute-force and FOSI methods. The idea here is that the more areas that are blue, the more agreement you have in the sign of the drift between the two initialization methods, and you can see the models are different. In E3SMv1,
U
by the time you get to year three there's virtually no difference in the drifts between the two initialization methods. But in CESM1, even by year three and year five, there's still some evidence that the brute-force method was introducing slightly different drifts compared to what FOSI was doing. So there is a model dependence in these drifts, and effects from the different initialization schemes as well.
U
This is lead years three to five, so the average of the prediction over years three to five for the Pacific basin, compared to the observed anomalies over the Pacific basin over that time period. The top one uses the drifted hindcast climatology for this small number of start years, represented by these circles here, and you see it's very noisy; there's really not a whole lot of skill across this time period.
U
This is what you expect, because you've only got these few start years from which to form your drifted climatology, so you're going to get a much noisier picture. This is why you want many more samples for the model climatology used to form these differences.
U
But if you use the Large Ensemble now, you see it's much less noisy and you're getting higher values here, but you've introduced a trend: lower values at the beginning of the period and higher values at the end. This is what we showed and anticipated before as one of the issues with using the whole time period for the hindcast climatology. Now, this uses the previous 15 years for the hindcast climatology, again because we've got just a small number of start years.
U
You get drops in skill after the big volcanic eruptions, like after Pinatubo and the Nabro period, and interestingly enough you can see a little bit of evidence of dropping skill here at the end that is likely due to the Australian wildfire smoke, which I'm going to talk about a bit more in a minute. So now let's put this in context.
U
Now we can compare this limited number of start years, and the skill metric calculated from it, to the full DPLE with the full set of start years (every year) and the 40-member ensembles. That's the red lines here, and these are the dots from the previous plots, and you can see these are all very noisy; there's not a whole lot of agreement with the DPLE. This is using the drifted hindcast climatology; these are all using the 15 years prior to the initial year.
U
But if you use the Large Ensemble as the climatology, now you're getting better agreement with the DPLE; it's less noisy, and this seems to be an indication that you may be able to use the Large Ensemble climatology to calculate your anomalies and evaluate skill, at least by this measure. Looking at lead years three to seven you get a very similar picture, although of course that is beyond the time horizon of our three-to-five-year comparison.
U
So finally we can look at another way that's often used to portray skill in these hindcast datasets, which is taking the time series of correlations at each grid point across the whole hindcast dataset. This again uses the DPLE to give us the best estimate of the effects of using the drifted hindcast as the reference climatology versus the historical large ensemble.
U
This uses the full set of 64 start years for the drifted climatologies. For lead years three to five and three to seven, there's not much difference for the DPLE between reference climatologies built from all the drifted hindcasts.
U
There is a little bit of a drop-off in skill when you use the historical large ensembles compared to the 15-year prior climatology. So I think our conclusion is: if you're going to do this, it's probably best to use the 15 years prior to the initial year to compute the anomalies, and you can then use the Large Ensemble as your reference climatology and get results fairly similar to what you'd have with the full set of start years from the prediction large ensemble itself.
U
So, to summarize: we're looking at IPO predictions in the Pacific here, and I think we've shown some evidence that it's possible to run a small number of start years for hindcasts and use the uninitialized, free-running historical simulations from the Large Ensemble to form the model climatologies, compute the anomalies, and get rid of the drifts. We can do this after about lead year three. It doesn't work as well in the early lead years, when the model is still drifting by large amounts, but by lead year three
U
it's probably okay. There's model dependence, as always in these things, regarding the relative agreement of the drift-error patterns between the two initialization schemes initially, but by the time you get past lead year two, within each model system the drifts are fairly similar. In that same vein, over 80 percent of the amplitude and pattern of the drift occurs by lead year three in most areas; in areas of the tropics with high-energy variability there are some cases, especially in CESM1, where the drifts are still converging to their final state even after lead year
U
three. In the DPLE, using that as an example, the highest and most consistent skill for the IPO metric we used came from using the previous 15 years, or the historical Large Ensemble, to calculate anomalies when assessing prediction skill for lead years three to five and three to seven. And the caveat for all of these things going on in the Pacific, which has been fairly well documented now by at least two papers:
U
Xian Wu's most recent one (she's talking next): there's a loss of skill for IPO predictions after these major volcanic eruptions. And interestingly enough, if you saw my talk yesterday, I presented evidence that the massive Australian wildfire smoke event in 2019 and 2020 could have contributed to externally forcing the current quasi-negative IPO phase we're in, or at least the multi-year La Niña we're in. Of course that wildfire smoke is not in the DPLE, and therefore we would see a loss of skill without that smoke in the DPLE verifications.
G
I had one, Jerry. So we used the large ensemble for the E3SMv1 hindcasts to compute anomalies, but what large ensemble did you use? I wasn't aware that there was one.
V
Thanks. I'll be talking about the predictability of tropical Pacific decadal variability and the associated oceanic mechanisms. This is work in progress, and I'd welcome any feedback or comments you may have. Tropical Pacific decadal variability, TPDV, is usually defined as the leading EOF mode of 10-year low-pass filtered and detrended SST variability in the tropical Pacific.
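That definition (leading EOF of 10-year low-pass filtered, detrended SST) can be sketched as below. A centered running mean stands in for the low-pass filter, and the array shapes and function name are simplifying assumptions, not the speaker's exact processing.

```python
import numpy as np

def leading_eof(sst, window=10):
    """Leading EOF and PC of low-pass filtered, detrended anomalies.

    sst: array (n_years, n_points). The linear trend is removed at each
    point, a centered running mean approximates the 10-year low-pass
    filter, and the SVD of the filtered anomalies gives EOF1 and PC1.
    """
    n_t, _ = sst.shape
    t = np.arange(n_t)
    # remove the linear trend at each grid point
    coeffs = np.polyfit(t, sst, 1)
    detrended = sst - (np.outer(t, coeffs[0]) + coeffs[1])
    # crude low-pass: running mean (keep only the fully overlapping part)
    kernel = np.ones(window) / window
    lp = np.apply_along_axis(
        lambda x: np.convolve(x, kernel, mode="valid"), 0, detrended)
    lp -= lp.mean(axis=0)
    u, s, vt = np.linalg.svd(lp, full_matrices=False)
    pc1 = u[:, 0] * s[0]   # leading principal component (time series)
    eof1 = vt[0]           # leading spatial pattern
    return pc1, eof1
```

Regressing the full SST field onto `pc1` would then give the familiar TPDV/horseshoe-type spatial pattern.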
V
It exhibits an El Niño-like SST mode in the tropical Pacific, and it also has connections with global SST variability, especially in the North and South Pacific. The time series of the leading principal component, PC1, shows the tropical Pacific swinging between warm and cold phases on decadal time scales.
V
TPDV has substantial global climate impacts, including the decadal modulation of global-warming surface temperatures (interactions with internal variability, for example) and influences on global hydroclimate and marine ecosystems. Therefore, predictability of TPDV would have important implications for decadal predictability of these climate impacts.
V
First, I'm just going to show how the observed TPDV is reproduced by the forced ocean simulation, FOSI, which provides the ocean and sea ice conditions used to initialize the decadal forecasts. In observations, we can see that in a positive phase of TPDV we have a relaxed thermocline in the equatorial Pacific.
V
How is TPDV predicted by the initialized decadal forecasts and the uninitialized simulations? Here I'm showing the same analysis, again based on FOSI, DPLE, DPLE-NoVolc, and the Large Ensemble. For the DPLE we take the average across forecast years one to ten for each initialization, and both DPLE and DPLE-NoVolc can reproduce the pattern of global SST, as well as the ocean structure associated with TPDV, to some extent.
V
The correlation skill is 0.70/0.65 in DPLE-NoVolc, which is much higher than in DPLE. As also mentioned by Jerry, we have shown that volcanic forcing degrades the prediction skill of TPDV, because the model tends to produce decadal cooling in the tropical Pacific in response to the large volcanic eruptions, for example El Chichón and Pinatubo in the 1980s and 90s, whereas in observations these two large volcanic eruptions coincided with
V
tropical Pacific warming, which is instead well predicted by DPLE-NoVolc and is likely related to internal climate variability. The rest of the talk will focus on understanding which oceanic processes contribute to the high predictability and skill of TPDV in DPLE-NoVolc. The bottom row shows the results from the uninitialized Large Ensemble: the ensemble-mean signal of the Large Ensemble cannot reproduce it.
V
However, we find that the TPDV SST warming tends to be preceded by a subsurface ocean warming in the western-central Pacific, and these subsurface warming anomalies are likely to propagate into the eastern Pacific and flip the sign of SST from negative to positive during the 10-year forecasts. I then further correlate the predicted index with the isopycnal depth at which the potential density equals 25.5 kilograms per cubic meter, and we find high correlations with this isopycnal depth, which captures the temperature variability at 200 meters well.
V
Next, we decompose the initial conditions into high-pass filtered and low-pass filtered variability before calculating the correlations, and we find that the TPDV predictability arises mostly from the low-frequency variability in the initial condition. In the global SST maps we also identify significant correlations in other ocean basins, including negative correlations over the North Atlantic and positive correlations over the Southern Hemisphere oceans. So we wondered whether such correlations indicate causality, and we conducted regional initialization experiments. I'm showing a case study here for 1999 to 2018.
V
The right two panels show the initial SST conditions in both observations and FOSI. To test which ocean basin's initialization is more important for generating this decadal cooling in the 10-year forecasts, we conducted a set of regional initialization experiments. First, we initialized the model with the FOSI climatology as the control, so in this experiment we give the model no information about the observed conditions.
V
Second, in the tropical Pacific initialization experiment, we initialized the model with climatology everywhere but added the full-depth ocean temperature and salinity anomalies only in the tropical Pacific. Similarly, we conducted North Atlantic and Southern Hemisphere ocean initialization experiments.
V
The North Atlantic initialization is key to getting the North Atlantic decadal warming here, but it doesn't generate any significant SST anomalies in the tropical Pacific. In contrast, the Southern Hemisphere ocean initialization generates tropical surface warming, though it should be noted that there is an error in the Southern Ocean warming in the predictions, which has the opposite sign to the observations.
V
In summary, we find that the predictability of TPDV SST anomalies arises from low-frequency subsurface ocean temperature anomalies, particularly the isopycnal depth anomalies in the western tropical Pacific and the associated Rossby wave reflection. We also explored other oceanic mechanisms for TPDV that I haven't shown here, including the subtropical-tropical cells and spiciness advection.
F
Yeah, thank you, Xian, great talk. I have a question, I guess for you and also kind of for Steve. I was interested by your regional initializations there at the end. I'm wondering if there's a role for the same sort of protocol that you all are putting together with the trans-basin interactions group, but for decadal time scales. I know you all are just spinning up now, but is that somewhere you expect to go next?
W
Yeah, thanks. Let's try to do this. Oh, sorry, I have to give Chrome permissions to look at stuff. My screen might disappear for a second, because Google Chrome might have to restart. This is a new laptop, just bear with me.
W
Okay, can you now see my slides? Yes? Okay, sorry, Steve, new-laptop woes. Anyway,
W
let's get straight into the action, then. I'm going to talk about some work done with colleagues whose names are listed here, including Isla Simpson in particular, who is in the audience, on the decadal predictability of the North Atlantic jet, and why we think this is driven by the subpolar North Atlantic.
W
Hopefully there's going to be a preprint available in Weather and Climate Dynamics any day now. Okay, so what's the context of this? I guess most of you are familiar with this past work, three papers in particular, by Isla, by Doug Smith, and also by Panos Athanasiadis, and together these three papers strongly suggest that decadal variations in the winter NAO are predictable, at least over the period 1954 to 2015,
W
and
both
ilr
and
panels
kind
of
suggest
that
you
know
to
varying
degrees
of
of
strength.
They
suggest
that
the
AMV
might
be
the
possible
source
of
this
skill,
but
but
nevertheless,
we're
kind
of
left
from
these
papers
with
some
some
outstanding
questions.
W
Firstly, is it really the AMV which is doing this? Can we gain some more confidence in that idea? Secondly, what are the pathways and mechanisms involved? And thirdly, what exactly is being predicted here? To explain what I mean by that, that's what this schematic picture is about.
W
This schematic is telling you that you can think of the NAO as representing two very different modes of variability, one of them coming from the latitude of the jet and the other from the speed of the jet.
W
These are very different; they're orthogonal modes of variability, and they vary very differently on both seasonal and decadal time scales. Crucially, studies have shown that they respond differently to thermal forcing. So if you try to do the analysis with a single index like the NAO index, that index is necessarily going to mix these things up, and you might have a hard time understanding
W
what's going on. So the key idea in this work was to decompose into these two modes of variability and see if we could get a better sense of what's going on.
W
If you're not familiar with the jet latitude and speed and how you compute them, the gist is this: given the zonal wind, say at 850 hPa on a given day, you declare that the jet sits at the location where the zonal winds peak. Once you've decided where the jet is, you can easily compute the latitude, and also the magnitude of those peak winds, and that magnitude is just defined to be the speed of the jet.
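The jet diagnostic just described (find where the low-level westerlies peak, read off the latitude and the wind value there) is easy to sketch. The array layout and function name are illustrative assumptions; real analyses typically interpolate around the maximum rather than taking the raw grid point.

```python
import numpy as np

def jet_latitude_speed(u850, lats):
    """Jet latitude and speed from zonal-mean zonal wind at 850 hPa.

    u850: array (n_time, n_lat); lats: array (n_lat,).
    The jet latitude is where the profile peaks; the jet speed is the
    wind value at that peak.
    """
    idx = np.argmax(u850, axis=1)                  # index of peak, per time
    lat = lats[idx]                                # jet latitude
    speed = u850[np.arange(u850.shape[0]), idx]    # wind at the peak
    return lat, speed
```

Feeding daily profiles through this and then averaging the two series separately is what lets the latitude and speed be treated as independent indices.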
W
Okay, so I'm going to show you some results using three data sets. I'm going to use the DPLE ensemble, which has already been mentioned several times, of course using the CESM model, and then, crucially, I'm going to make use of a seasonal hindcast ensemble based on the IFS model which covers 1900 to 2010. That's really nice: it covers the full 20th century, and crucially it uses prescribed SSTs, so it is not coupled.
W
Okay, just a quick point about this seasonal hindcast: of course, taking decadal averages of a seasonal forecast is not the same as a genuine decadal forecast.
W
So it's crucial that these earlier studies have established actual decadal forecast skill. But what these seasonal hindcasts are going to be helpful for is understanding the mechanisms, and you can think of them as being like a nudged decadal forecast: nudged in the sense that every November 1st you bring the forecast back to the observed atmospheric state, and in addition, of course, here we're giving it perfect SSTs.
W
So what does this helpful tool tell us? Well, the top panel is showing 10-year averages of the NAO; ERA-20C is in black and the IFS ensemble mean is in blue, and what you can see is that you get a correlation of about 0.45.
W
So this hindcast can skillfully reproduce the NAO variability over the entire period. This correlation of 0.45, by the way, is basically exactly the same correlation that Doug Smith and Panos find using actual decadal forecasts in the second half of the century, which suggests we capture the variability all the way back to 1900 to the same extent.
W
So that's nice, but then look at the two bottom panels: in the middle is the jet latitude, where you have basically zero correlation, and in the bottom panel is the jet speed, where you again have a correlation of 0.44. So what this is telling you is that all of the decadal predictability of the NAO is coming from predictions of the speed of the jet, and in fact the jet latitude appears to be unpredictable, at least in the IFS.
W
Is the same true for the decadal prediction large ensemble? Yes. The top panel shows the DPLE jet-latitude predictions, 10-year predictions, and you see there's nothing going on there, really.
W
It can't predict the latitude at all, but it can predict the jet speed very well, and I want to emphasize: if you compare this to the previous case, the seasonal hindcast, those forecasts were given the perfect SSTs and the correct November 1st state every single year, but nevertheless, even with all that extra information, they don't actually have more skill than the DPLE.
W
They have completely comparable levels of skill, and we took this to be strong evidence that the signals in the DPLE and the seasonal hindcast are the same signals, and consequently these signals must already be present in their entirety within a single winter season.
W
Okay, so what is driving the jet speed? Well, what you can simply do now, since we've got our three data sets (two forecast models and a reanalysis) is to look for any possible SST signals which are common to all these data sets, statistically significant, and visible on both seasonal and decadal time scales.
W
What this picture is telling you is that if the skill is indeed coming from SSTs, then basically it could only be the subpolar North Atlantic. If you then compute a time series of the subpolar North Atlantic SSTs (that's the purple line; I flipped the sign there for visual clarity, since it's really negatively correlated) and plot it together with the jet speed from reanalysis in black, you get this amazing correlation of 0.97, which is high
W
even though these are 30-year means; it's still very high. And similarly, you get very high correlations in the forecast models as well.
W
What's the pathway here? Well, we argue that it seems to be essentially tropospheric, or at least you can explain it purely through a tropospheric pathway. How so? There are two key points. Firstly, if you regress the SST anomalies against air temperature in the vertical (that's what the plot on the left shows, in DJF), this heating from the SSTs doesn't stay confined to the surface.
W
It goes up quite high, all the way up to about 300 hPa. The filled contours in red are the heating anomaly, and the line contours are the climatological jet in the background; this is for ERA-20C. And what you can see is that this heating aloft is beautifully situated in front of the jet core to change the meridional temperature gradient, and if you just make a crude estimate:
W
if we assume geostrophic balance, and that the only thing changing is this meridional temperature gradient due to the subpolar North Atlantic SSTs, then it turns out you can actually reproduce all the decadal variability in the jet speed that you see in these data sets. That's what the figure on the right shows: in black the actual jet speed, and in blue this geostrophic estimate.
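The "crude estimate" described here follows from the thermal-wind relation: with geostrophic balance and a constant layer-mean gradient, integrating over the layer gives du = -(R/f) (dT/dy) ln(p_bot/p_top). The layer bounds, reference latitude, and function name below are assumptions for illustration, not the paper's exact calculation.

```python
import numpy as np

R_DRY = 287.0      # gas constant for dry air, J kg^-1 K^-1
OMEGA = 7.292e-5   # Earth's rotation rate, s^-1

def thermal_wind_speed_anomaly(dTdy_anom, lat_deg=50.0,
                               p_bot=850e2, p_top=300e2):
    """Geostrophic (thermal-wind) jet-speed anomaly (m s^-1) implied by a
    layer-mean meridional temperature-gradient anomaly (K m^-1)."""
    f = 2.0 * OMEGA * np.sin(np.deg2rad(lat_deg))  # Coriolis parameter
    return -(R_DRY / f) * dTdy_anom * np.log(p_bot / p_top)
```

A gradient anomaly of order -1e-5 K/m (colder poleward) over the 850-300 hPa layer gives a westerly jet-speed increase of a few tens of m/s in this crude scaling, illustrating why a modest SST-driven heating anomaly on the poleward flank can project strongly onto jet speed.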
W
Okay, so the obvious question and controversy around all of this is causality. Is it really the ocean forcing the atmosphere, and not the other way around? Here's where the seasonal hindcast we use really comes into play, because these are prescribed-SST hindcasts. If you find correlations between SSTs and the atmosphere in such a hindcast, it can't possibly be because the atmosphere is forcing the SSTs, because there is no coupling here.
W
So at least in that hindcast, the correlations indicate causation into the atmosphere, and pragmatically speaking that suggests you should have causality in the other data sets as well, because they all show the same signals.
W
You know, if the SPNA is really all driven by the atmosphere, then where the heck is the skill coming from? As I showed you earlier, the only real place it could be coming from is the SPNA, and, as Isla showed, stochastic short-term forcing is unpredictable, so it essentially has to be something coming from the SSTs. I'll skip over the last point, because I'm already at 12 minutes here, but
W
the conclusion seems to be that the SPNA forces the jet, and I'm going to end with what this result suggests, morally speaking. We think it suggests the following picture: on decadal time scales the eddies are just varying around their mean latitude in an unpredictable way, essentially because latitudinal variations are probably the result of very complicated interactions between multiple teleconnections and therefore aren't predictable.
W
The intensity of the eddies, though, varies in a slow and predictable way in response to the subpolar North Atlantic SSTs, and that's why you get decadal predictability of the jet speed in particular. A key point here, obviously, is that, as Stephen has shown, the subpolar North Atlantic is predictable. So I will stop there, thank you very much.
L
Yeah, very interesting talk, thanks. I'm a bit confused comparing this with the literature I've read previously on the DPLE about, for example, the signal-to-noise paradox. Maybe this is in the preprint, but maybe you could expand on it here. I thought this signal, in particular for the NAO, was too weak in terms of the atmospheric response to the SSTs. So are you?
W
It doesn't get around it; I mean, there's kind of no getting around it, so to speak. It's just there, and a thing I glossed over in all my plots is that I'm always normalizing the standard deviation in all the time series to make everything comparable. Of course, in reality, the forecast standard deviation is always tiny in comparison to the reanalysis one. So it's not that you magically get the highest standard deviations in this way.
W
G
Just to follow up: is the signal, or the magnitude of the jet speed changes, in the AFS larger than in the DPLE?
W
Yes, I'm pretty sure, yeah, I'm pretty sure all of that, and it's just because their forecasts only last the season, so they don't have nearly as much drift and things like that. So the signal isn't lost to the same extent.
G
W
Yeah, so this is something we ended up deciding not to discuss in this preprint, namely things about trends and drift and stuff like that, because it ended up getting a bit complicated and we didn't know what to make of it. I do think there are unanswered and kind of subtle questions about the role of trends and drift in generating the skill.
W
Yes, so yeah, I'm familiar with that paper; I haven't really thought much about it in this context. I didn't know that that was doing anything on decadal time scales. My guess was that it was more relevant for shorter time scales, but maybe I'm wrong about that. Yeah, thanks for the suggestion.
S
All right, I'm gonna share my screen.
C
S
All right, can everyone see the presentation and hear me?
C
S
All right, great, thanks. Thanks for the opportunity to speak today. I'm Glenn, a grad student in the MIT-WHOI Joint Program. Today I'm going to present work on trying to gain physical insights into the prediction of Atlantic multidecadal variability using explainable deep neural networks, and I want to acknowledge my collaborators. So, to start off, looking at Atlantic multidecadal variability, or AMV.
S
Essentially, for those who don't know what it is, it's the quasi-periodic oscillation of sea surface temperatures averaged over the North Atlantic. When you regress it back onto the anomalies, you'll see a characteristic horseshoe pattern with maximum warming in the subpolar gyre. This has a periodicity of about 60 or 70 years, and it has a lot of impacts across the Earth system, from Atlantic hurricane activity to extreme temperature and precipitation over the surrounding continents, and even regime shifts in local ecosystems.
S
We want to see if we can use machine learning to get at that question. This is also very preliminary work, so I'm really interested in your feedback on it. A quick note on definitions: in the most proper sense, calculating the AMV index involves detrending, or removing the external trend from the data, as well as applying some sort of low-pass filter. But for this presentation we'll be focusing on trying to predict the undetrended and unsmoothed AMV index.
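For reference, the full AMV index calculation just described (area-average the North Atlantic SSTs, remove the external trend, low-pass filter) can be sketched as follows. This is an illustrative stand-in, not the speaker's code: the linear detrending, the running-mean filter, and the window length are assumptions.

```python
import numpy as np

def amv_index(sst, lat_weights, window=121):
    """Toy AMV index: area-weighted North Atlantic SST mean,
    linearly detrended, then low-pass filtered with a running mean.
    sst is (time, lat, lon); all choices here are illustrative."""
    ntime = sst.shape[0]
    w2d = np.broadcast_to(lat_weights[:, None], sst.shape[1:])
    # Area-weighted spatial mean over the (assumed North Atlantic) domain
    ts = np.average(sst.reshape(ntime, -1), axis=1, weights=w2d.ravel())
    # Remove the linear trend, a crude stand-in for the forced signal
    t = np.arange(ntime)
    anom = ts - np.polyval(np.polyfit(t, ts, 1), t)
    # Low-pass filter via a simple running mean (window in months)
    kernel = np.ones(window) / window
    return np.convolve(anom, kernel, mode="same")
```

The talk's predictand skips the detrending and smoothing steps; this sketch shows what the "most proper sense" definition would add.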
S
So the question again we're trying to ask is: if we use oceanic versus atmospheric predictors to predict the AMV state, can we gain any insight into whether the ocean or the atmosphere is more important for predicting AMV? The reason we're taking this approach is that there are a lot of challenges associated with using dynamical forecast systems, like sensitivity to initial conditions or high computational costs, but recently neural networks have shown promise at interannual climate prediction.
S
So an earlier talk this morning referenced the Ham et al. paper on ENSO prediction, and the question here is: does that skill really extend to longer, multidecadal time scales? One of the challenges of understanding AMV is that there isn't a lot of data in observations; if you really trust data going back before World War II, maybe you have 150 years. So in this study we're really going to focus on understanding AMV in the CESM1 Large Ensemble, just the first 40 members.
S
This gives us about 3,440 years to play with. Finally, one of the main setbacks of using machine learning is that it's often seen as a black box, where it's really hard to understand where the predictability is coming from. But, as was described yesterday during the seminar, there are a lot of different methods you can use to understand what a neural network has learned: given a sample, you can recover.
S
You
can
recover
a
heat
map
or
relevant
snap
to
understand
which
regions
were
important
for
prediction,
so
we're
going
to
use
one
such
approach,
layer-wise
relevance,
propagation
in
the
study.
So
what
are
we
trying
to
do
here?
So
we
have
a
neural
network
and
we
give
it
some
sort
of
predictor.
It
can
be
sea,
surface
temperature
anomalies
which
are
used
to
calculate
the
AMD
index,
some
sort
of
oceanic
variable,
like
sea,
surface
height
or
sea,
surface
salinity
or
some
sort
of
atmospheric
variable
like
sea
level
pressure.
We
then
ask
the
network
three
zero.
S
three, all the way up to 24 years later: what is the AMV state going to be? Positive, neutral, or negative? We determine these classes using a one-standard-deviation threshold. In terms of the architecture of the neural network, we've tried a few architectures, but for the results we'll be sharing, we use the convolutional neural network that was used in Ham et al. 2019.
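The three-class labelling just described (positive, neutral, or negative via a one-standard-deviation threshold) might look like the following sketch; the exact thresholding details in the talk's setup are assumed.

```python
import numpy as np

def classify_amv(index):
    """Label each time step as positive (+1), neutral (0), or
    negative (-1) using a one-standard-deviation threshold,
    as described in the talk; details are illustrative."""
    index = np.asarray(index, dtype=float)
    sigma = index.std()
    labels = np.zeros(index.shape, dtype=int)
    labels[index > sigma] = 1
    labels[index < -sigma] = -1
    return labels
```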
S
That was pretty successful at ENSO prediction. We also used a simple fully connected neural network with four layers and 128 neurons each. So essentially, we train a network to predict AMV at a given lead time, given a specific predictor, and we train 50 networks to account for variation due to the initialization of the network weights. This leads to a total of 1,800 networks which we have trained for this project.
S
So first we're going to compare: is there a difference between using convolutional neural networks versus fully connected neural networks? The results I'm going to show here use SST as a predictor, and the results are pretty much the same for the other predictors as well. As a baseline, we use a persistence baseline: essentially, whatever the AMV state is at lead time zero, that's what we forecast it to be.
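Scoring that persistence baseline is a one-liner; a minimal sketch (assuming the same categorical labels as above):

```python
import numpy as np

def persistence_accuracy(labels, lead):
    """Accuracy of the persistence baseline: the class observed at
    time t is forecast to still hold at time t + lead."""
    labels = np.asarray(labels)
    if lead == 0:
        return 1.0                      # trivially correct at lead zero
    pred, truth = labels[:-lead], labels[lead:]
    return float((pred == truth).mean())
```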
S
Whatever the state is currently will be the predicted state at later times. So what I'm showing here is the skill from 0 to 24 years, with accuracy on the y-axis. We see that both the CNN and the fully connected network outperform the persistence baseline and exhibit comparable accuracy. You can see the yellow and the blue lines are pretty much comparable, and they fall within the 95% range of the 50 differently initialized networks that we've trained. And why might this be the case?
S
Well, you can think of the idea that CNNs are really specialized for something called translation invariance. As an example, in image recognition, CNNs have to learn how to identify features no matter where they appear in the image. In this example, you can see that this interpretability method reveals that, in order to identify a barbell, you have to find it no matter where it's located in the picture.
S
However,
perhaps
for
the
case
of
large-scale
prediction,
we're
thinking
that
small
scale
features
are
not
as
important
for
predicting
the
AMD
State,
though
we're
still
thinking
of
different
ways
to
investigate
this
hypothesis
and
welcome
any
suggestions
so
because
they
perform
so.
Similarly
from
this
point
on
we're
going
to
focus
on
the
results
of
the
simpler,
fully
connected
neural
network.
S
So what are the differences in skill by predictor? Let's start by focusing on the positive AMV state. If we use sea surface temperature as a predictor, you see that it initially performs pretty well, but after about six years the skill plateaus at about 50%, and it outperforms the persistence baseline. If we compare it to an atmospheric predictor, we see that generally the atmospheric predictor does worse compared to SST. But then we look at oceanic predictors: here you see sea surface height in blue and sea surface salinity in pink.
S
They seem to outperform both sea surface temperature and sea level pressure for predicting positive AMV states. You'll also notice that the spread in the models, which is the shading, is slightly narrower for these oceanic predictors, suggesting that the skill for these predictors is much more consistent compared to the other two variables.
S
So this is the case for a positive AMV state. What about negative AMV states? Here I show the same plot, but for the negative AMV. The main difference is that the differences in skill between sea surface temperature and the oceanic predictors are a little bit smaller, but overall it seems like the mean of the oceanic predictors is slightly higher than when using sea surface temperature as a predictor.
S
So now that we've looked at some of the differences in skill by predictor, what are the sources of predictability? We're using this layer-wise relevance propagation method. Essentially, for a given prediction, like a positive AMV, you can back-propagate the relevance through the network and recover which regions of the input are most important for that prediction, using a series of propagation rules which are outlined in this excellent review paper by Montavon et al. in 2019.
S
Here I'm showing the relevance map for sea surface temperature for the lead-zero prediction. Again, darker red indicates that these regions are of higher relevance, or more important, for making that prediction.
S
What about longer lead times? As we go back in time, you'll see that the region of maximum relevance contracts to an area around the Grand Banks, along the Gulf Stream and North Atlantic Current.
S
So this suggests that the important region of predictability contracts to this more focused region at longer time scales. But if you remember from the previous slides, sea surface temperature wasn't as good as an oceanic predictor at longer lead times. So what do these relevance maps look like for sea surface height, for example?
S
That's what I'm showing on the bottom row right now, and you'll see that one consistent feature that pops out throughout all these different lead times is that this Gulf Stream / North Atlantic Current region seems to be pretty important for predicting on these much longer time scales. So, very quickly, some takeaways: we see that complex architectures, like using a CNN, do not necessarily lead to more accurate AMV prediction.
S
Much of the difference between these predictors, when the external trend is removed, is lost after six years. Finally, some next steps are to examine how these relevance maps are sensitive to other explainability methods, and whether this is applicable, in terms of transfer learning, to observations, reanalyses, and other models. So that's about all I have for today. Thank you for listening; I'm looking forward to your feedback. You can also reach out to me via email. Thank you.
L
Well, there's also a question of mine. If you want to take that first, since I'm talking already: nice, thanks for the clear talk. I saw that you mentioned this in your future directions, and I think it's an important thing to look into more, but I'm curious about the choice not to remove the external trend. I think, if you're predicting something 20 years out, it's likely you've predicted the trend, and, from the layer-wise relevance propagation,
L
you do seem to be capturing features that are getting the internal variability, but somehow there seems to be some combination of the forcing and internal variability in your results. I'm interested in whether you've done any more to look into that yet, or what you have planned.
S
Yeah, so I've done a little bit of investigation, but I still need to do a lot of work in that direction. For example, I can share my screen again, but I've done a simple linear regression of the ensemble-mean, or external-trend, AMV index on these variables, and the same regions do pop up. You can see over here, I essentially just regressed.
S
I regressed the ensemble-mean AMV index onto each of the different predictors, and you can see, for example, for sea surface height and sea surface salinity, this Gulf Stream / North Atlantic Current region pops up with some of the largest signals. But there are also other regions that have large signals but don't quite show up on the relevance map. So in terms of dynamical understanding, I still need to think about different ways to investigate this, but we have also looked at the distribution of predictions.
S
So is the model, for example, simply predicting more positive AMV states towards the end of the period? Actually, it doesn't appear to be that way. In terms of the distributions of the predictions, the neural networks have managed to learn that sort of multidecadal oscillation, where they make more negative predictions towards the middle and more positive predictions towards the beginning and the end of the period.
S
So it seems like there might be something more going on here than simply the effect of the linear trend, but clarifying that would require a bit more investigation, and I'd be curious to hear if you have any suggestions.
G
Online, Jacob Schluer is saying: nice talk. Why do you train different networks for different lag times and different variables? Would you not expect to get the best model by stacking the variables in the channels of the CNN and predicting all lag times? This way you could also use the relevance propagation to identify which variables are the most important.
S
Yeah, thank you for the suggestion. I think part of it was a practical consideration: by stacking all the different variables and all the different lag times, the input size of the neural network is much larger, so it takes much longer to train. So we separated this task into different lead times and different variables.
S
I guess our goal is to try to see if we can get some sort of physical understanding, maybe by seeing if certain predictors lead to more predictability, and to train more networks that are more specialized for a specific task. But that is a possibility. Though I also imagine, if you have a single model that you're training to make predictions across multiple lead times, thinking of it in an optimization sense, you might have some losses at different lead times as it tries to be optimal for all these different lead times, which maybe different processes contribute to, if that sort of makes sense.
H
Yeah, just a quick question. When you have the fully connected neural network, you need to use every grid point as an input feature, right? So the number of parameters would be much larger with a fully connected network than it would be with a CNN, which stacks variables, right?
C
H
I mean, I don't know if you saw my talk, but you should look into FiLM, feature-wise linear modulation, where you could inform the network which lag time to predict. If you do this, then it is kind of learning different networks, but they share their weights and they are scaled for the different lag times. This might give you the same effect and would actually be way more data-efficient.
H
Because this way you would not have to train all these different networks, and you should actually be getting much better performance, and you could then look, in the relevance map, at joint variable highlights, basically, and you could get kind of more information from it, I guess.
S
H
G
F
I guess I'd need to pull up the talk. There we go. As Claire mentioned yesterday: now for something entirely different. We're going to return to the seasonal forecasting world, and we're going to be looking at much smaller scales, thinking about dynamical forecasts of coastal upwelling in the California Current system.
F
So after this talk, when we get into the discussion, I'm excited to hear more about opportunities for integrating ocean forecasts into what we do, because we have a lot of stakeholders who are interested in forecasting capabilities for both physical and ecological variables. For those that are unfamiliar, the California Current system is very important.
F
There are a lot of important marine resources that need to be managed, and those management decisions take place on a variety of different time scales in a variety of different ways. One of the most important components of the California Current system that influences many of these marine resources is upwelling, and dynamical ocean forecasts can help inform management decisions related to variations
F
in things like upwelling, or other variations in the California Current system, which is something that we're trying to develop. So, thinking about upwelling in the California Current system, it can be broken up into two components. This is encompassed by the Coastal Upwelling Transport Index (CUTI), which was developed by Jacox et al. in 2018. It takes into account the wind-driven Ekman component of upwelling along the U.S. West Coast, which is driven by northerly winds along the coastline, but there's also an important geostrophic component in the horizontal direction.
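The wind-driven component mentioned above is classical Ekman transport: alongshore wind stress divided by the product of water density and the Coriolis parameter. A minimal sketch follows; the sign convention and constants are textbook assumptions, not the operational Jacox et al. CUTI definition, which also handles the mixed-layer and geostrophic terms.

```python
import numpy as np

def ekman_transport(tau_alongshore, lat_deg, rho=1025.0):
    """Cross-shore Ekman transport (m^2 s^-1 per meter of coastline)
    from alongshore wind stress (N m^-2): U_ek = tau / (rho * f).
    On the U.S. West Coast, equatorward (negative) alongshore stress
    drives offshore transport and hence coastal upwelling."""
    omega = 7.2921e-5                              # Earth's rotation, rad/s
    f = 2.0 * omega * np.sin(np.radians(lat_deg))  # Coriolis parameter
    return tau_alongshore / (rho * f)
```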
F
That's set up by a north-south gradient in sea surface height that typically drives an onshore geostrophic flow, which can also influence the vertical transport of colder waters from below, as well as nutrients. This is mostly important for Southern California, but it is an important part of the equation. The issue is that if we want to start looking at forecasts of ocean parameters that people care about, we typically don't have all of the data available to us to evaluate their forecast skill.
F
For
example,
if
I
wanted
to
evaluate
the
forecast
skill
of
cutie,
this
Coastal
upling
transport
index,
I
would
need
often
daily
mean
wind
stress,
daily,
mean
sea,
surface
height
and
daily
mean
mixed
layer
depth,
and
these
are
not
often
things
that
are
saved
on
seasonal
time
scales
for
a
large
subset,
at
least
of
seasonal
forecasting
systems.
But
we
can.
We
can
start
to
address
this
question,
at
least
for
the
Ekman
transport
component
of
it,
because
most
most
models
have
saved
daily,
mean
wind,
stress
and
it'll
become
apparent
here
in
a
second.
F
why we need daily mean forecasts for seasonal forecast skill assessments, at least for something like upwelling. To assess our ability to forecast this thing, we need some measure of truth, and, as you can imagine, we don't have fantastic measurements of upwelling in the historical record. So our measure of truth is actually coming from a regional data-assimilated ROMS simulation from 1988 to 2018, developed at UC Santa Cruz, and, because it's a simulation,
F
They
have
all
the
data
you
could
want,
but
I'm
going
to
be
specifically
looking
at
daily
mean
Eckman
transport
in
the
vertical
velocity
associated
with
that
Ekman
transport.
In
these
one
degree
boxes
which
are
outlined
in
Black
here
up
and
down
the
U.S
West
Coast
and
I'll
be
evaluating
the
forecast
skill
of
this
Ekman
transport
term
in
the
ecmwf
seasonal
forecasting
model,
which
is
one
of
the
models
that
is
able
to
save
daily
mean
reforests
for
seven
months,
initialized
every
month
from
1988
to
287,
2017.
F
and
I'm
embarrassed
to
come
to
a
CSM
working
group
meeting
and
not
show
CSM
model
validation.
But
that
is
something
that
we
have
and
we're
working
on
and
something
I
can
show
at
a
later
date.
But
you
can
at
least
get
a
sense
for
the
method
with
the
cmwf
framework.
For
now
so
ecmwf
has
25
Ensemble
members
on
a
one
degree
grid
and
we're
basically
going
to
be
looking
at
upwelling
in
one
degree
boxes,
basically
the
nearest
grid
cell
to
the
coast.
F
So this is not what the model is intended for, but it is possible that it could be doing something useful, and that's kind of what we're after. I'll just reiterate that it only saved daily mean wind stress, so we can only look at Ekman-transport-driven vertical velocity; we can't look at the geostrophic term. Just to simplify things,
F
We're
going
to
look
at
forecast
scale
broken
up
into
three
regions:
the
Northern
California
current
system,
central
California,
current
system
in
the
Southern
California
current
system
separated
at
Point
conception
in
Cape,
Mendocino,
okay.
So
let's
just
look
at
the
skill
evaluation
very
broadly.
The
top
is
the
forecast
skill
so
as
measured
by
the
anomaly
correlation
coefficient
for
the
Southern
California
current
region,
and
the
bottom
is
the
RMS
error.
The
stippling
is
telling
you
that
the
forecast
skill
is
significantly
better
than
damped
persistence
at
90
confidence.
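The anomaly correlation coefficient used for the top panels is, in its simplest form, the Pearson correlation of the forecast and verifying anomaly series; a sketch follows (the operational computation may pool leads or members differently).

```python
import numpy as np

def anomaly_correlation(forecast, verification):
    """Pearson correlation of forecast and verifying anomalies."""
    f = np.asarray(forecast, dtype=float)
    v = np.asarray(verification, dtype=float)
    f = f - f.mean()
    v = v - v.mean()
    return float((f * v).sum() / np.sqrt((f**2).sum() * (v**2).sum()))
```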
F
Damped persistence is often your best first guess, so we have to beat that. I will make the argument here that there is some hope that the model is doing something reasonable, even if there aren't too many stipples here. There is, I think, something physically consistent going on, and that's this diagonal orange stripe, which is a very common feature in seasonal forecasts of the ocean, in the California Current system at least; it's what's referred to as the ENSO stripe.
F
It's
telling
you
that
you're
getting
an
a
skill
bump
in
boreal
winter
because
of
the
influence
of
El
Nino,
regardless
of
when
you
initialize
the
model,
so
you
can
initialize
it
in
spring
summer
fall
as
long
as
it
encompasses
the
winter.
You
could
you
get
that
skill
bump,
and
this
is
generally
true
for
all
of
the
regions.
F
Although RMS error does increase as you go further north, because the wind fields associated with Ekman transport and Ekman pumping, or suction, are noisier at higher latitudes, so it just becomes harder to forecast. And in case you needed more convincing that ENSO is the most important driver of seasonal forecast skill for a variety of variables, here's another piece of evidence up here.
F
These
plots
are
showing
the
skill
above
persistence
in
the
dynamical
forecast
system
for
all
years
years,
just
for
enso
events
and
neutral
years,
and
this
is
just
highlighting
that
the
seasonal
forecast,
skill
that
is
present
is
being
primarily
driven
by
enso
again.
This
is
for
the
Southern
California
current
system,
but
this
is
generally
true
once
again
for
the
Central
and
North,
although
it
does
start
to
get
a
little
bit
noisy
and
you
get
less
of
that
coherent
sort
of
enso
stripe.
F
So
what
this
is
actually
what
I'm,
trying
to
forecast
and
I
realize
I,
probably
should
have
led
with
this
but
upwelling
is,
is
actually
a
pretty
noisy
time
series.
This
is
a
the
Cutie
index
for
one
Latitude
40
and
a
half
degrees
north
for
one
year,
and
these
are
the
daily
mean
values.
So
you
can
see
anything
above.
Zero
is
upwelling,
anything
below
zero
is
downwelling
and
it's
a
fairly
noisy
time
series.
But
there's
a
lot
of
features
here
when
you
integrate
this.
F
This
sort
of
Time
series
over
the
course
of
a
year
that
a
lot
of
marine
resource
managers
look
to
for
signals
for
what
the
physical
system
is
doing
and
how
the
ecological
system
May
respond
to
those
physical
changes,
and
we
we
quantify
that
using
what's
called
cumulative,
upwelling
and
accumulative
upwelling
index,
where
you
basically
just
integrate.
F
Every
year,
you
integrate
the
QD
index
from
January
1st
onwards,
and
when
you
do
that,
you
get
something
that
looks
like
this
blue
line
and
you
can
start
to
Define
different
what
we
call
upwelling
phonology
terms
to
indicate
where
you
are
in
the
upwelling
season.
For
example,
the
spring
transition
is
the
minimum
of
the
cumulative
upwelling
index
over
the
course
of
a
year,
and
that
is
the
signal
that
you
are
transitioning
from
a
time
of
mostly
downwelling
to
a
time
of
mostly
upwelling.
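The phenology calculation just described reduces to a running sum and an argmin; a minimal sketch, assuming daily CUTI values starting January 1:

```python
import numpy as np

def spring_transition(daily_cuti):
    """Cumulative upwelling index (running sum of daily CUTI from
    Jan 1) and the spring transition, taken as the day (0-based)
    of the CUI minimum: the switch from mostly downwelling to
    mostly upwelling."""
    cui = np.cumsum(np.asarray(daily_cuti, dtype=float))
    return cui, int(np.argmin(cui))
```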
F
So these are different things that would be useful to forecast on seasonal time scales, so that folks can get a sense of whether the upwelling season will be later or earlier than normal and then plan accordingly for when and where they want to open fishing grounds, for example. And again, when I try to do this, I have to use the Ekman transport term, because I just don't have access to any daily mean sea surface height or mixed-layer depth data. Okay.
F
So
we're
going
to
look
at
the
forecast
skill
of
the
spring
transition.
Just
to
give
you
a
sense
of
what
this
looks
like
this
is
the
cumulative
upwelling
for
39
degrees
north
in
the
year
1998-
and
this
is
from
observations
by
observations.
I
mean
this.
This
data
assimilated
ROM
simulation,
but
this
is
the
observed
spring
transition
for
that
year.
F
If
you
average
them
all
together,
you
get
an
ensemble
mean
guess
of
when
the
spring
transition
may
happen,
and
in
this
case,
at
this
latitude
in
this
year,
you
can
see
that
the
model
does
really
well
there's
only
a
three
day
error
and
when
the
this,
the
transition
to
upwelling
is
forecasted
to
happen.
Obviously,
that's
not
going
to
happen
every
year.
F
The
black
line
is
the
ensemble
mean
estimate
of
the
spring
transition
in
the
model
for
a
January
initialization,
and
then
the
horizontal
gray
lines
are
the
individual
Ensemble
member
guesses
of
when
the
spring
transition
is
occurring.
The
take-home
message
here
is
that
the
model
is
doing
something
sensible.
It's
getting
particularly
these
Peaks,
a
lot
of
the
peaks
in
sort
of
late
spring
transition
times
and
there's
an
overall
significant
correlation
of
0.62
and
a
relatively
low
RMS
era
of
when
you're,
when
the
spring
transition
is
forecasted
to
occur.
F
This
is
just
for
one
latitude,
though,
so
we
can
repeat
this
process
for
all
the
latitudes
and
see
how
we
do-
and
it
looks
something
like
this.
So
this
is
showing
you,
the
anomaly
correlation
coefficient
in
the
RMS
error
for
the
inner
annual
relationship
or
the
annual
forecast
of
the
spring
transition
relative
to
observations
as
a
function
of
latitude
and
for
three
different
initializations,
either
November,
December
or
January,
and
in
any
of
those
cases
you
were
integrating
from
January
1.
F
as far as the cumulative upwelling index goes. The take-home message here is that the model does seem to skillfully predict the spring transition, but mostly in the central California Current system. Once you get too far north, there's just too much noise, too much wind variability, and it's not something that's easily predictable. If you get too far south, it's upwelling all the time, and it's not something
F
I
have
to
show
you
here,
but
it's
it's
trivial,
to
forecast
the
spring
transition
to
upwelling
season
in
the
Southern
California
current
system,
because
it's
always
upwelling,
but
the
central
California
current
system
seems
to
be
that
nice
sweet
spot
where
we
might
have
some
useful
skill
for
folks
to
to
make
decisions
based
off
of
so
I'll
summarize
there
there,
the
ecmwf
model
does
seem
to
have
some
prediction
capabilities:
getting
upwelling
intensity,
something
like
five
to
seven
months
out
and
that
seems
to
be
primarily
linked
to
inso.
F
like most things are on seasonal time scales, particularly in the California Current system. The model does seem to have some skill at predicting the spring transition in the central California Current system when initialized in November through January. So with that, I'm happy to take any questions, and I'm looking forward to the discussion afterwards. Thanks.
J
J
F
Good question. I guess I don't have a clean answer for you here; we haven't looked too deeply into the model spread just yet, but I agree that there are huge swings in what the model is guessing for the spring transition.
F
But
it's
only
when
you
take
the
Ensemble
meme
that
you
end
up
getting
some
nice
correlation
here,
but
it
is
something
to
explore
for
sure.
I
can't
decide
if
Jacob's
talking
to
me
or
the
previous
previous
one.
Okay.
G
G
F
It also kind of depends on where you're at. This is 39 north, which is something like the central California Current system, and a lot of the ENSO teleconnections have historically been shown to happen mostly in the northern California Current system. I honestly can't tell you, because I can't remember what the time series looks like for 45 degrees north, for example, but it's possible that the ENSO signal will be more apparent the further north you go. That said, I agree: the spread is huge.
F
There's
not
a
high
signal
to
noise
ratio
going
on
here,
so
it
I
guess
both
to
your
and
Eyeless
points,
it'll
be
important
to
understand.
What's
driving
that
spread,
and
if
there's
you
know
something
real
going
on
here
or
if
we're
just
getting
lucky,
but
one
way
to
get
to
that
is
by
also
comparing
across
models
and
we're
folding
in
more
and
more
models
to
sort
of
see.
If
there's
something
useful
amongst
them,
you
had
your
hand
half
raised
right.
K
F
Yeah, so you could do that, but what ends up happening is that you end up getting a very smooth, artificially smoothed, answer for the cumulative upwelling. That would be the average of all these gray lines. A rule of thumb is that if you're trying to forecast something, you try to get that something in all of the ensemble members and then take the ensemble average.
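That rule of thumb (compute the diagnostic in each member, then average, rather than averaging the raw series first) can be sketched as:

```python
import numpy as np

def ensemble_mean_diagnostic(member_series, diagnostic):
    """Apply a scalar diagnostic (e.g. the spring-transition day) to
    each ensemble member's time series, then average the results,
    instead of computing it once on the smoothed ensemble mean."""
    return float(np.mean([diagnostic(m) for m in member_series]))
```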
C
F
Have you looked at sub-seasonal variability within the seasonal forecast skill? Is there more skill in the early season versus the late season? So, not directly in that manner. We have actually looked at sub-seasonal forecast skill of upwelling in a subseasonal forecasting system, but it's just a noisier thing to look at, so I don't have a very satisfying answer for you, but it is a good suggestion.
R
F
R
Better, yeah, go ahead. Okay, another question about the resolution requirement for simulating coastal upwelling, because this is also an ongoing DOE effort. Do you have any comments on what would be the minimum resolution required, if we want to simulate coastal upwelling?
F
Yeah, that's a good question, and not something I'm ultimately prepared to answer right now, because I don't have a very satisfying answer for you. This model's simulation resolution is a quarter degree, and that seems to be good. You want something that's eddy-permitting, because you want to make sure you get the SSH gradients correctly, and there's a lot of eddy activity in the California Current system.
F
If
you
really
want
to
the
I
would
say,
the
most
important
thing
is
be
able
to
resolve.
What's
going
on
on
the
continental
shelf
and
like
an
impact's
perspective
and
the
Shelf
along
the
U.S
West
Coast
is
only
like
25
kilometers
wide,
it's
extremely
narrow
and
a
lot
of
times.
F
We
care
about
these
really
narrow,
upwelling
features
that
are
not
definitely
not
resolved
in
this
model,
but
it
seems
it
seems
that
at
25
kilometers
we're
getting
something
reasonable,
at
least
in
ecmwf,
but
if
you
want
to
get
I,
don't
honestly,
don't
know
what
the
biases
are
in
this
model,
either
I
I
realize
I'm
having
a
lot
of
no
answers
here
but
I.
This
is
something
I
put
together
this
past
summer.
So
it's
been
a
while
since
I
thought
about
it,
but
it's
a
good
question.
F
And if you come to an answer, please let me know so I can think about it more. Thanks.
G
G
For
the
co-chairs,
update,
I
think
my
fellow
co-chairs
are
online.
I
know
Yaga
has
to
leave
at
three,
but
this
can
be
quick
and
then
we
can
get
into
discussion
so
I
just
wanted
to
start
with
just
an
overview
of
sort
of
core
assets
of
the
working
group
and
just
remind
people,
particularly
people.
Maybe
who
haven't
tuned
in
before
of
the
data
sets
that
this
working
group
is
disseminating.
We've
got
reforests
or
hindcast
sets
from
cesm1
subseasonal
to
seasonal
csm1,
also
contributed
seasonal
hindcast
to
the
nmme
project.
G
We've heard a lot about the DPLE, which used the CESM1 system with a 40-member large ensemble. More recently, we've augmented these collections with CESM2 versions. We heard in Yaga's talk about the CESM2(CAM6) S2S set; there's also a high-top WACCM6 S2S reforecast set, and both of these are documented in papers by Yaga. We now have the Seasonal-to-Multiyear Large Ensemble (SMYLE), focusing on this intermediate seasonal-to-interannual time scale using CESM2: a 20-member ensemble, initialized quarterly over quite a large historical window.
G
That publication came out last year. Less well advertised is the fact that we extended the November 1st initializations of SMYLE out to decadal time scales, but we only did that for 10 members, so it's kind of a mini decadal prediction ensemble using CESM2 — but it does span this longer decadal time span.
G
And then you heard in Yaga's talk today about the S2S perturbed-initialization experiments; there's a paper in prep, and those datasets will become available shortly — certainly this year. And then in my talk I discussed the pacemaker experiments built on top of the SMYLE control.
G
We completed a preliminary Atlantic pacemaker experiment, initialized in February. I plan to increase the ensemble size of that, so this is another asset that will become available this year. And then, as I mentioned, we do plan to expand the ensemble size of the CESM2 decadal prediction set from 10 to 15. I don't think we're going to aim for a large ensemble with CESM2, at least with our current allocation, but 15 should be better than 10.
G
All right. We went through the exercise in the fall of submitting the CESM compute allocation proposal, and that was successful: we got the core hours that we requested for this two-year time span.
G
This working group got 22 million core hours in year one and a little bit more in year two, and that year-two allocation can be used on the new Derecho supercomputer. We solicited feedback from working group members back in June as to what we should put into this ESPWG proposal, and we did our best to reflect the feedback that we got. Let me just walk through this briefly. We have a development allocation and a production allocation.
G
For development, we decided to allocate some compute just for S2D sensitivity studies — a deliberately generic idea of building on these control hindcast sets that we've created and exploring sensitivity, for example, to ocean initial conditions or ocean initial-condition spread, which we currently lack.
G
How much is that impacting our predictions on sub-seasonal to decadal time scales? Land initialization is something that we have not focused a great deal on, certainly not on the longer time scales, and we don't really have a good sense of what the sensitivities are. So this is very open-ended, and we're very much looking for people to come to us with ideas, take the lead, and get free compute hours to pursue those ideas.
G
Also in development, we allocated time for testing the new CESM3 model in initialized prediction mode, both S2S and S2D — likely in collaboration with the Ocean Model Working Group, because they're the ones who are going to produce ocean and sea ice initial conditions using the FOSI methodology that has worked well in the past.
G
On the production side, there are plans to extend our existing hindcast sets to near-real time, both for S2S and S2D. In year one we will extend the S2S hindcast set through the October 2023 initialization, and then forward another year in year two. We'll try to do the same with the CESM2 decadal prediction set: this year, update initializations through November 2022, and then up through 2023.
G
The caveat here is that we don't have a working group liaison, so a lot of this is aspirational — we need people to actually do this, so it depends on people's availability.
G
We only allocated for 12-month simulations, but that's probably too short; I think we want to get into year two, and we want to get back to 1980, so we may be tapping into other parts of the allocation to complete these SMYLE pacemakers. We've done the initial cut at the Atlantic, but experiments can leverage the technology that's been developed and get underway this year. And then, as Dylan asked earlier, another idea is to extend this to the decadal time scale.
G
Using the new CESM2-DP as a control, we can think about decadal hindcast pacemaker experiments. What those would look like is up for discussion: they could follow the TBI protocol, or they could be something completely different. Again, it really depends on people showing some entrepreneurial spirit and saying, "I've got a great idea."
G
If you think it would help the working group, we can give you the compute to do that. One idea that has already been shelved is contributing to the WCRP SPARC/DCPP volcano-readiness exercise. This is something Bill Merryfield is trying to get all decadal-prediction-producing centers to participate in: rerun your real-time decadal forecast with a synthetic volcanic eruption, basically to get the infrastructure in place in case a volcano does go off and invalidates your decadal forecast.
G
Would you be prepared to rerun those on a short timeline? We shelved that because they're using this EasyVolc — or something, I kind of forget — aerosol forcing that is not the way we implement aerosol forcing; it was too much work to contribute to that. So that frees up half a million (0.5 million) core hours.
G
Oh — the only other thing I wanted to mention was that we have put together this Python package we're calling ESP-Lab. The goal is a community-developed toolkit for efficient, interactive analysis of initialized prediction ensembles. I used this for the SMYLE paper, and I'm finding it pretty useful in my own work.
G
But maybe it's just me — I don't know if other people are using it, whether this is a good idea, or whether people have their own software tools that they use. I just wanted to advertise it again: it's on GitHub, and we certainly welcome contributions and collaborative effort on analysis. And with that, I'll open it up to discussion, questions, and comments.
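ESP-Lab itself is on GitHub; nothing below uses its actual API. As a rough, self-contained sketch of the kind of analysis such a toolkit supports — anomaly correlation coefficient (ACC) skill of an initialized hindcast ensemble's mean against verifying observations, with entirely synthetic numbers — something like this works:

```python
# Hypothetical illustration (not ESP-Lab's API): ACC skill of an
# initialized hindcast ensemble mean, for one lead time.

def ensemble_mean(members):
    """Average a list of member time series (one value per init date)."""
    n = len(members)
    return [sum(vals) / n for vals in zip(*members)]

def anomalies(series):
    """Remove the mean over init dates (a per-lead climatology)."""
    clim = sum(series) / len(series)
    return [v - clim for v in series]

def acc(forecast, obs):
    """Pearson correlation of forecast and observed anomalies."""
    f, o = anomalies(forecast), anomalies(obs)
    num = sum(a * b for a, b in zip(f, o))
    den = (sum(a * a for a in f) * sum(b * b for b in o)) ** 0.5
    return num / den

# Toy hindcast set: 3 members verified against 5 initialization dates.
members = [
    [0.9, -0.4, 1.2, -1.0, 0.3],
    [1.1, -0.6, 0.8, -1.2, 0.5],
    [1.0, -0.5, 1.0, -0.8, 0.4],
]
obs = [1.0, -0.5, 1.1, -1.1, 0.4]

skill = acc(ensemble_mean(members), obs)
print(round(skill, 3))
```

In practice one would loop this over lead times and grid points (e.g., with xarray), which is exactly the bookkeeping a shared toolkit can standardize.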
X
I have a bit of a remark to give, if you're listening to me — this is Enola. Yes.
X
I just wonder how I can be more involved or useful in this group from where I am here in Nigeria, because I can see that most of the people in the group are far away in the United States. So how can I be more involved in the activities of the group?
G
Great question. I think you've made an important step by participating today, and you can look forward to our next working group meeting in June, which will be at the CESM annual meeting — hopefully you can participate in that as well. You can keep up to date on the happenings by looking at the ESPWG website and finding the datasets that are available.
G
And the upcoming events. Certainly, I think if you were to start to utilize some of the CESM prediction datasets in your work, that would help tie your work to that of the working group.
F
I guess I just want to reiterate something I mentioned in my talk, which was the gap in ocean forecasting relative to atmospheric or climate forecasting. So is there an opportunity, before these runs are made, to inform what variables might be useful to save, and to help guide that decision-making?

G
Absolutely.
G
Okay — we're certainly interested in hearing if there are any glaring gaps in the data that we're saving, and if there are runs coming up that you're particularly interested in, feel free to make your voice heard to the co-chairs. When we put together SMYLE, we did go through a round of soliciting feedback on output.
G
I don't recall that we had a lot of external input; in the end it was kind of an internal CGD decision. But we always want to hear if there are some key fields or some key frequencies that are missing.
F
That's a problem — the NMME, for example, doesn't save most ocean variables that aren't at the surface. I'm not advocating for 3D ocean variables, but simple things like mixed layer depth or sea surface height at daily time scales would be a huge help, especially since there's an effort, with the Climate, Ecosystems, and Fisheries Initiative, to develop regional ocean forecasting systems that are forced by global forecasting models — and we can't force our ocean models without some of these global outputs.
F
So anyway, that's a discussion we can have down the road, but I just want to make sure that door is open.
F
The large ensembles typically have all the data I need for the ocean. It's more the initialized forecasts: because they're producing so much data, especially in real time, it's just hard to store it all, and I don't expect people to store it all. But if we're going to prioritize — I guess my point is that if you want to broaden your user base, this might be one easy way to do it, because I think these runs are extremely useful for more than just climate problems.
F
I think they could be used in an operational sense if we can get a sense for how accurate they are for some of these other things. And on that note, if you don't mind me following up, I'm curious what the resolution of the S2S runs you're developing is for the ocean — what is the simulation resolution, and what is the output resolution? What are you saving?
N
Hey — sorry, I was typing back to Enola. Yeah, we just used the default one-degree resolution.
F
Okay. Just continuing to advocate for the oceans here a little bit: if it's a priority for S2S forecasts to be useful for ocean decisions — and I will admit that a lot of ocean decisions happen on these S2S time scales, even though the ocean is dominated by longer, low-frequency variability — then note that when you look at S2S forecasts of the ocean, your spatial scale necessarily decreases, because the ocean is so sluggish.
F
So it's really not useful to look at global S2S forecasts of oceanic variables; you really need to zoom in on the coastline and look at the "high-frequency," quote-unquote, variability that is important on those time scales. We can't do that if the resolution of the ocean model isn't capturing what's going on along the coastline. I'm not advocating for a twelfth-degree
F
you know, CESM S2S product; it's just a general comment that if you want to broaden your user base to include ocean forecasting on S2S time scales, you necessarily have to start resolving some of those coastal processes where the forecasts will actually be used. We simply won't use S2S forecasts out in the middle of the North Pacific.
F
I have, a little bit, yeah. I had to put it down because it didn't have the variables I needed. I think wind stress was saved — I was looking at it for upwelling purposes, and wind stress was saved, but sea surface height wasn't, and mixed layer depth wasn't. It was also one degree, which limited its use for our purposes as well.
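Wind stress does go a reasonable way for upwelling work on its own. As a hedged, illustrative sketch (the stress values and sign convention below are hypothetical, not from any CESM output), daily alongshore wind stress can be turned into an Ekman-transport upwelling index, M = τ / (ρ f):

```python
import math

RHO_SEAWATER = 1025.0  # reference density, kg m^-3

def coriolis(lat_deg):
    """Coriolis parameter f = 2 * Omega * sin(latitude)."""
    omega = 7.2921e-5  # Earth's rotation rate, rad s^-1
    return 2.0 * omega * math.sin(math.radians(lat_deg))

def ekman_transport(tau_alongshore, lat_deg):
    """Ekman transport per meter of coastline, M = tau / (rho * f), m^2 s^-1."""
    return tau_alongshore / (RHO_SEAWATER * coriolis(lat_deg))

# Hypothetical daily alongshore stress (N m^-2) at ~37N on a west coast;
# negative = equatorward, and equatorward stress drives offshore Ekman
# transport there, so flip the sign so a positive index is
# upwelling-favorable.
daily_tau = [-0.08, -0.12, -0.05, 0.02, -0.10]
index = [-ekman_transport(t, 37.0) for t in daily_tau]
upwelling_days = sum(1 for m in index if m > 0)
print(upwelling_days)  # count of upwelling-favorable days
```

This is exactly the kind of diagnostic that daily wind-stress output enables even when subsurface ocean fields aren't saved.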
F
On Cheyenne or something, or Casper — yeah, gotcha. Okay, anyway, I don't mean to dominate the discussion; it's just something that's worth thinking about if you want to expand your user base for ocean-oriented forecasts, which I would say are about two decades behind atmosphere-oriented forecasts in an operational sense.
G
You know, I think the next critical juncture where we'll reevaluate output is when we start doing CESM3 initialized hindcasts; that's when we're really going to make an effort to get feedback from people like you on what we should include moving forward, because we're not going to rerun all of our existing hindcasts just to save extra output. And I saw this comment from Sanjeev about the missing soil moisture variable from SMYLE — I've noticed that myself, and I really don't know how we missed H2OSOI.
G
Online, we're starting to dwindle in participation. Judith?
K
Not online — could you put up the allocation for the next two years? Okay, I'd be interested in the plans for the S2D online bias correction. You don't need to put it up, but it was number five on the left side. I was wondering what the plans are for this, and who the contact person is, because we are working on online bias-correction methods using nudging and machine learning, and it would be nice to link with that effort.
G
Thank you. Jerry, I think this is sort of a Catalyst placeholder here — I don't know if you're online.
K
To state it plainly: there's a very big collaboration using machine learning to detect and correct coupled-model biases, which would align with your work there — it should be collaboration rather than repetition. The question that Antonietta asked me was how big the climatological biases in the tropical Atlantic are, and they're huge. I would think that if we could come up with some online SST bias-correction methods — whether something sophisticated, or just flux correction — that would be an interesting thing to explore.
K
On higher resolution: I was just wondering whether anything has come out that motivates the working group to look at that more. — Yeah, I was hoping Gokhan would say something about his new high-resolution prediction project, but he's at the CAB meeting in DC. That work is continuing, though.
K
I think the preliminary set of initialized decadal hindcasts from what was formerly known as iHESP will be made available this year, and that would be a really awesome asset for this working group. I don't know that we're going to distribute it, though — I think data distribution for that is problematic and hasn't been solved.
K
You can't just download all of this high-resolution data, because it's too much; you kind of need to bring the compute to where the data is, and at this point it's still very much a matter of getting the data through back channels. We do have a set of it here at NCAR, and I would like to see that made public and advertised, maybe by the June workshop. — I'd be interested in looking at it.
K
In this context, I think it would be nice to do some of these.
K
At least — I was wondering whether it makes sense, for CESM3 or not, to use variable resolution with a refinement over CONUS, for example; and we should make sure it's big enough.
K
I think some of the things that have been done on some of the grids are too localized, but maybe we should then also extend into the ocean over the coastal regions, and talk to users and so on. Someone would need to do some work on how far down you can actually go in terms of variable resolution, and on what makes sense, but I would think the greater CONUS area for typical CESM use.
K
We're just running some coupled simulations at the moment, but I don't think much has been done coupled. Actually, I just talked to Brian Dobbins, and I think the MPAS variable-resolution configuration runs coupled as part of one of the SIMA demonstration projects — and that's the one that is too localized to make sense for more than one application, let's say; the variable resolution does not extend toward or beyond the West Coast. But depending on which SIMA projects get funded and go forward,
K
there will be a coupled version of that, and it'll be based on CESM2 with MPAS variable-resolution CAM physics. — Yeah, I think that's a great idea, and it reminded me that I forgot to mention that we have this collaboration with Scripps, redoing SMYLE with high-top CAM. So it's along the same lines of first establishing your foundational hindcast set and then doing sensitivity studies of resolution,
K
initialization, etc. So we've kind of covered the atmospheric vertical-resolution front, I think, and refined horizontal resolution is something that could build off of the current S2S or SMYLE sets — and I think there are plans to do this. Depending on the science project, it might or might not make sense to refine over CONUS.
K
The other thing is to refine in the tropical belt — I think there's a Catalyst effort going on, though maybe more free-running, I don't know — refining over the ocean in the tropics. And then the question is: do you get MJO teleconnections better, which would have applications for predictability?
K
I think Brian is doing that, but I don't know the details, and I don't think it's initialized — maybe Yaga knows, but she might be gone by now. — She's gone by now, yeah. — I think, Brad, you mean the regionally refined tropics? I think he's just running an F2000 case at the moment.
K
I think I've seen some results by Lucas Harris, initialized with the MJO, and the MJO gets better in their case. I don't know if they looked into tropical-extratropical teleconnections, but that would be a potential avenue for improved S2S forecasts.
K
I think Ian's had her hand raised for a little while — go ahead. — Oh, sorry, I already lowered my hand, but since this started the conversation about regionally refined resolution: CASCADE and HyperFACETS, the two DOE projects I'm aware of, are brainstorming right now about initializing the E3SM regionally refined model to produce some hindcasts for extreme events. This is an area where I think CASCADE can potentially collaborate with Catalyst in the future, too.
K
One more thing, real quick — sort of changing topics. DPLE has BGC variables, is that right? Yeah? Okay. Again, this goes to variables, and we can have this conversation later, but something to consider for future efforts is maintaining a BGC presence. We don't have a ton of observations to validate any of these forecasts, but we would have a better hope of doing so in the S2S space than in the S2D space, because we don't have decades of BGC observations.
K
We have maybe 10 years of BGC observations. So anyway, just something to consider as well. — Did you see Sam Logan's talk? — Yeah, that's what I'm thinking of. — I didn't actually see his talk, but I have spoken to him about his work.
K
Yeah, it's true that the ocean has been sort of an afterthought in the S2S prediction experiments up to now; they certainly don't include BGC.
K
And I know that Shuyi Chen's group in Washington is also interested in more high-frequency output in the upper ocean for air-sea interaction research.
K
Daily, yes — they are interested in salinity. From the observations, they're thinking that in MJO-ENSO interactions there's a large component where the precipitation freshens the water, and that then interacts with a Kelvin wave; so they want daily upper-ocean output. I have a list of the variables they'd like, so for CESM3 we should involve them too.
K
I realize that this does nothing but present you with problems — it just means more storage space and more compute time. Yeah, I guess I'm just trying to advocate for where we can't do research because we simply don't have the data. That's all.
K
All right, I think things are winding down, unless anyone has any final comments.
K
If not, thanks everyone for participating, and I hope to see everyone in June here at the annual workshop. Thanks, everyone online.