From YouTube: Dan Amrhein Coupled Data Assimilation
Dan Amrhein: Okay, all right, thanks everyone for coming to DA week. I'm happy to kick it off. I'll talk about coupled data assimilation in CESM, but I've also dedicated a fair amount of the presentation to providing some theory background for data assimilation in the ocean and coupled models. So I'll talk a little bit about why we care, why it's important and interesting to think about doing data assimilation in this context, provide some background on the machinery, talk specifically about data assimilation efforts in CESM, and then talk about some of the big open questions that I have, that I think we have to look forward to in the coming years.
A first point of motivation for doing data assimilation is that the ocean is only sparsely observed. Even in the last 10 years, in the Argo and altimeter era, we don't have a complete picture of the ocean from observations, and so it behooves us to introduce dynamical information that comes from models.
This is a figure from a recent paper by Laura Jackson and co-authors that compared ten different ocean reanalyses, pictured here, to examine properties of the North Atlantic. I'm showing the barotropic stream function strength, in Sverdrups, as a function of time from 1993 to 2010. The point I want to make is that these models are actually all very different. They had varying degrees of resolution, from one degree to a twelfth of a degree, so the barotropic stream functions look quite different. But one of the takeaways from this paper was that as time moves forward and we accumulate more observations in the North Atlantic Ocean, the reanalyses come into closer agreement. I'll just be demonstrative: this points to the potential for data constraints to deliver similar performance in models that are themselves very different.
A second motivation for ocean state estimation, or data assimilation, is initialized prediction. Here I'm showing a figure from work on the Decadal Prediction Large Ensemble, the DPLE, which shows the difference in forecast skill for upper ocean heat content between the CESM Large Ensemble, which is just forced by boundary conditions but not initialized to a particular state, and the DPLE, which is both forced and initialized. The takeaway is that when it comes to decadal prediction, at least in CESM, and certainly in some regions, where you start matters: the initial condition seems to be important for predictability.
Here is another example: the ensemble sensitivity of the longitude of a tropical cyclone at a particular time, say two days in advance, to wind strengths. That's the picture shown here. Basically, this is where you want to blow the wind if you want to change the course of Hurricane Katrina, and I think this kind of approach has promise in the climate world.
A
You
know
if
we
could
use
tools
like
this
to
study
longer
term
climate
variations,
the
Nao,
you
know
how
mechanisms
leading
up
to
that
that's
a
potentially
powerful
tool
and
finally,
and
something
that
I'm
really
interested
in
as
we
move
to
longer
and
longer
climate
time,
scales
of
the
data
only
become
more
and
more
sparse,
which
means
that
you
know
if
you
can
hack
it.
A
Data
simulation
really
becomes
more
and
more
important
as
we're
going
out
to
these
longer
time
scales,
so
I'm,
showing
here
two
reconstructions
of
sea,
surface
temperatures
from
Deep
time,
paleo
climate
to
pliocene,
and
then
the
lgm
slightly
shallower
time,
and
so
these
are
data
simulative
products,
I'm
trying
to
get
ssts
they're
being
used
now
in
the
study
of
equilibrium,
climate
sensitivity.
So that's, I think, some of the interesting motivation for why we're doing this. Now I'll provide a little bit of background on the machinery, which will hopefully inform the next two talks as well. Data assimilation techniques can be roughly broken down into two categories: sequential methods and variational methods. I'll give each of them a brief summary here.
Sequential methods, and I'll talk specifically about ensemble sequential data assimilation, are called sequential because as you move forward in time you make changes to the climate state in order to fit observations. By the state here I mean a list of all the prognostic variables that you have in your model; for the ocean, the temperatures, velocities, that kind of thing.
A
So
the
procedure
is
starting
from
a
particular
time
to
integrate
an
ensemble
of
models
forward
in
time
until
you
encounter
an
observation
of
some
quantity
and
to
assimilate
that
observation,
what
is
done
is
to
then
sample
the
models.
Like
the
data,
so
if
you
have
an
observation
from
the
North
Atlantic
you'd,
look
at
North,
Atlantic
temperatures,
for
instance,
and
then
to
compare
the
models
and
observation
and
then
add
increments
to
the
model
states
that
improve
the
fits
to
the
data
and
then
having
added
these
increments.
A
You
have
a
new
set
of
initial
conditions
and
you
can
continue
your
integration
and
integrated
over
some
period
of
time.
What
you
end
up
with
is
a
re-analysis
or
a
trajectory
of
an
ensemble
of
model
States
that
reflects
the
model
physics
and
also
these
adjustments
that
you
get
from
fitting
observations.
A
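To make the increment step concrete, here is a minimal sketch of a stochastic (perturbed-observation) ensemble Kalman filter update for one scalar observation. This is my own illustration, not DART's implementation; all names are illustrative.

```python
import numpy as np

def enkf_update(ensemble, obs_value, obs_error_var, H):
    """Nudge an ensemble (n_state x n_members) toward one scalar observation.

    H is a function that samples a model state like the data, e.g. picking
    out the North Atlantic temperature an instrument measured.
    """
    n_members = ensemble.shape[1]
    # Sample every member like the data.
    y = np.array([H(ensemble[:, m]) for m in range(n_members)])
    x_mean = ensemble.mean(axis=1)
    # Ensemble covariance between each state element and the simulated obs.
    cov_xy = (ensemble - x_mean[:, None]) @ (y - y.mean()) / (n_members - 1)
    # Kalman gain: a regression of the state on the observed quantity,
    # weighted by ensemble spread versus observational uncertainty.
    K = cov_xy / (y.var(ddof=1) + obs_error_var)
    # Add an increment to each member; perturbing the observation keeps
    # the posterior ensemble spread statistically consistent.
    for m in range(n_members):
        perturbed = obs_value + np.random.randn() * np.sqrt(obs_error_var)
        ensemble[:, m] += K * (perturbed - y[m])
    return ensemble
```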
I want to point out that there are some challenges and shortcomings in procedures like this. The first is that when we come across an observation, we're really just changing the model state kind of by fiat, and this can lead to some funny things. If I have an observation of a temperature in the deep ocean that's incorrect, I'm just introducing my temperature change anyway, and so what can happen is that you break the model physics. Other issues arise by virtue of these ensemble approaches.
Inflation and localization are necessary because in ensemble data assimilation we're often limited by the number of ensemble members we can run, because it's expensive to run these models. Why this matters is that the regression patterns we use to make our increments become less accurate as the ensemble gets smaller. This is a sort of classic example: a correlation with a pressure field anomaly in the atmosphere. In the top panel you'll see this correlation computed with 32 ensemble members, and in the bottom panel with 128. These regression patterns are important because they tell us, if we have an observation, for instance off the coast of North America, where we should change the model state. What we see is that as we reduce the number of ensemble members, we get spurious correlations that come just because we don't have a high enough N.
A
And
so
a
typical
procedure
to
do
this,
noting
the
fact
that
these
smaller
covariances
are
often
at
greater
distances
is
to
use
what's
called
localization,
which
establishes
typically
an
isotropic
radius
outside
of
which
inside
of
which
these
correlations
are
attenuated
and
outside,
of
which
they're
set
to
zero.
So
I.
Think
of
this
as
kind
of
a
pruning
exercise,
where
you're
kind
of
getting
rid
of
the
things
you
trust
least
from
The
Ensemble.
A
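In practice the attenuation is often done with the Gaspari-Cohn taper. A sketch of how the pruning works (my own minimal version, not DART's):

```python
import numpy as np

def gaspari_cohn(dist, c):
    """Gaspari-Cohn compactly supported taper.

    dist: distance(s) between the observation and state points.
    c: localization half-width; weights fall to zero beyond 2c.
    """
    r = np.atleast_1d(np.abs(np.asarray(dist, dtype=float)) / c)
    w = np.zeros_like(r)
    near = r <= 1.0
    far = (r > 1.0) & (r < 2.0)
    w[near] = (-0.25 * r[near]**5 + 0.5 * r[near]**4 + 0.625 * r[near]**3
               - (5.0 / 3.0) * r[near]**2 + 1.0)
    w[far] = (r[far]**5 / 12.0 - 0.5 * r[far]**4 + 0.625 * r[far]**3
              + (5.0 / 3.0) * r[far]**2 - 5.0 * r[far] + 4.0
              - 2.0 / (3.0 * r[far]))
    return w

# The localized gain is K_loc = gaspari_cohn(dist, c) * K: increments are
# attenuated with distance from the observation and cut off entirely at 2c,
# removing the spurious long-range correlations of a small ensemble.
```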
The second approach that's often necessary is called inflation. This arises because when we make these increments, how much the state is changed, the size of the increment, depends on the spread of the ensemble members compared to the observational uncertainty. This makes some intuitive sense. If you had, say, three ensemble members that were all doing very different things in the ocean, and along comes an observation that you really trust a lot, you would hope to be able to correct each of your ensemble members in kind of a big way, with a larger increment. By contrast, if your ensemble members are telling you that the model really knows what's going on, and you have an observation that's very uncertain, then the changes to the model trajectory are going to be relatively small. But this is a problem, because ensemble sampling often systematically under-samples some kinds of model spread.
A
This
sort
of
the
model
on
some
levels
can
get
closer
and
closer
together,
and
this
leads
to
filter
Divergence,
which
is
basically
that
you're
totally
ignoring
the
observations,
which
is
not
something
that
you
want,
and
so
inflation
is
sort
of
a
commonly
prescribed
treatment.
It's
not
necessarily
a
solution
where
you're,
basically
just
spreading
out
the
variability
of
The
Ensemble
members,
and
this
can
either
be
additive
or
multiplicative.
A
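As a sketch (illustrative defaults, not tuned values), both flavors act on the deviations of the members from their ensemble mean:

```python
import numpy as np

def inflate(ensemble, factor=1.02, additive_std=0.0):
    """Spread an ensemble (n_state x n_members) about its mean.

    Multiplicative inflation scales deviations from the ensemble mean by
    `factor`; additive inflation adds random noise of a chosen amplitude.
    """
    mean = ensemble.mean(axis=1, keepdims=True)
    inflated = mean + factor * (ensemble - mean)
    if additive_std > 0.0:
        inflated = inflated + np.random.randn(*ensemble.shape) * additive_std
    return inflated
```

Keeping the spread from collapsing this way keeps the filter listening to the observations.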
So that's a quick background on sequential methods. For completeness, and hopefully to provide some background especially for Travis, I'll talk briefly about variational approaches. In sequential methods we advance the model forward and make corrections at all these times to get closer to observations. Variational methods take a different approach mathematically: they solve an optimization problem to minimize an objective function. I'll illustrate this with an example.
The ECCO state estimate is an example of a variational method. In the ECCO state estimate, instead of adding changes to the state vector at each time, they define instead a set of control variables.
A
So
this
is
atmospheric
boundary
conditions
and
the
initial
conditions
in
the
ocean,
and
also
on
some
of
the
parameterized
physics
and
with
Echo
does
is
say:
well
we're
not
going
to
change
the
state,
the
temperature
and
salinity
of
the
model,
but
instead
we're
going
to
change
these
control
variables
and,
specifically,
you
change
the
control
variables
to
solve
an
optimization
problem.
The
solution
of
control
variables
is
given
by
the
set
of
time
varying
controls,
but
minimizes
inject
the
function
with
two
terms.
A
A
A
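In standard notation (my gloss of the generic 4D-Var form, not the exact ECCO configuration), those two terms are a weighted model-data misfit and a penalty on the control adjustments:

```latex
J(\mathbf{u}) \;=\; \sum_{t}
  \big(\mathbf{y}_t - H_t[\mathbf{x}_t(\mathbf{u})]\big)^{\top}
  \mathbf{R}_t^{-1}
  \big(\mathbf{y}_t - H_t[\mathbf{x}_t(\mathbf{u})]\big)
  \;+\; \mathbf{u}^{\top}\mathbf{Q}^{-1}\mathbf{u}
```

Here y_t are the observations, H_t samples the model state like the data, R_t is the observation error covariance, and Q weights how far the controls u may stray from their prior. Because the state x_t(u) is always obtained by running the model, the solution satisfies the model physics exactly, with no increments imposed by fiat.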
Moreover, 4D-Var is a smoother, not a filter, which means that rather than an observation just making a single increment at the time it occurs, that observation is also going to affect everything that happened before it came along. So information is really flowing both forwards and backwards in time.
A
The
disadvantages
are
that
you
can
generate
spurious
fluxes,
but
you
can
fit
the
observations
for
the
wrong
reasons,
and
so,
for
instance,
in
that
discussion
of
North
Atlantic
reanalyzes,
they
point
to
the
fact
that
Echo
has
some
spurious
fluxes
in
the
labrador
C
Ensemble
methods
have
the
advantage
of
having
these
covariances
that
are
State
dependent,
so
that
how
an
observation
affects
the
state
depends
on
the
state
itself
and
for
Devar
methods
use
these
climatological
covariances.
A
So
that's
kind
of
a
downside,
but
Travis
might
talk
about
some
approaches
that
kind
of
have
a
hybrid
between
the
two.
Finally,
there's
no
uncertainty,
quantification
and
then
to
realize
the
reason.
I
think
that
a
lot
of
people
don't
do
this
is
it's
expensive
to
run
and
to
maintain
and
the
adjoint
of
the
model,
which
is
necessary
to
run
for
Evar.
A
A
Now, turning to data assimilation efforts in CESM: one of the earliest applications was assimilation into an ocean-only model, POP2 at one degree, by Alicia Karspeck and co-authors in 2013.
A
So
here
they
were
assimilating
subsurface
temperature
in
salinity
from
the
world
ocean
database
in
an
eight-year
simulation
using
48
Ensemble
members,
in
this
case
forced
by
individual
members
of
a
Cam4
atmospheric
reanalysis.
So
a
lot
of
the
app
of
the
spread
among
the
ocean
members
in
this
case
was
actually
attributed
to
these.
The
forcing
you
got
from
different
atmospheric
Ensemble
members-
and
you
know
some
of
the
things
that
they
point
out
are
you
know,
big
challenges
that
persist.
They'll
persist
to
the
rest
of
this
talk.
A
One
of
them
is
these
biases
in
western
boundary
currents
in
the
model-
and
this
is
kind
of
illustrated
by
this
panel
on
the
right
on
the
top.
You
see
the
model
data
SST
Misfit
before
data
simulation,
so
this
is
just
kind
of
running
the
model
and
letting
it
do
its.
The changes that are wrought by the data assimilation are insufficient to really get rid of this model-data misfit, and this is attributed in part to the fact that it's just hard to tug this boundary current to where you would like it to go. That's probably in part because it's a lower-resolution model that has a low-resolution representation of boundary currents. A second thing that was noted was a very small ensemble spread in the deep ocean. As we discussed earlier, a small ensemble spread is a problem, because then the data assimilation is just going to say: all of your models are saying that that's basically the truth, so it makes it difficult to assimilate observations. This could be attributed in part to how the ensemble spread was generated: mostly from the top of the ocean column, by these different atmospheric reanalyses.
One of the things that Karspeck et al. pointed to was the possibility of using inflation to address both of these problems, the boundary currents and the deep ocean, but it wasn't clear that that was necessarily going to do it alone.
A second application, published a couple of years ago, was in a weakly coupled framework. By weakly coupled I mean CESM with active ocean, atmosphere, land, and sea ice, 30-member ensembles, and the data assimilation is happening now not just in the ocean but also in the atmosphere.
This is illustrated in a prototype 12-year simulation. It's weakly coupled in the sense that when observations are assimilated, ocean observations affect the ocean and atmosphere observations affect the atmosphere, but there's no interplay between the two; we'll talk about this a little more later on. In moving now to coupled data assimilation, the authors identified some good things and, again, some challenges.
The first was just the cost of running a large ensemble: running 30 members of fully coupled CESM1 was very expensive. There were also issues at the time with the flavor of data assimilation. This was using DART, the Data Assimilation Research Testbed, which at the time had issues with memory redistribution, and there are problems that persist to this day originating from the cost of restarting CESM after every assimilation step.
A
Looking
at
the
sort
of
output
of
the
simulation
again,
we
have
a
kind
of
comparison
of
the
model
SST
Misfit
before
and
after
data
simulation,
and
so
one
of
the
things
these
authors
pointed
to
was
that
the
data
simulation
can
reduce
the
model
data,
the
bias
where
their
data,
for
instance
in
the
western
boundary
here
in
Northern,
North
Atlantic
regions.
A
A
I'll also talk briefly about some work that Fred Castruccio has been leading, submitted now, addressing both the difficulties with the boundary currents in the low-resolution model and the quite high cost of running these ensemble data assimilations. Fred has run high-resolution POP2, I think it's an 84-member ensemble, assimilating satellite and in situ ocean observations over a six-year period.
This work is using what's called ensemble optimal interpolation, which reduces the number of simulations to one. Then, when an observation comes along, instead of having a range of different ensemble model realizations to look at, you make perturbations that approximate the forecast errors.
The big upside of this is that you have a factor of, in this case, 84 reduction in cost, which is allowing data assimilation with 0.1-degree POP. The downside is that, again, you don't have these "errors of the day": the covariance error statistics you have are stationary.
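Schematically (a minimal sketch assuming a static library of model anomalies, e.g. from a long free run; names are illustrative):

```python
import numpy as np

def enoi_update(state, anoms, obs_value, obs_error_var, H, alpha=0.5):
    """Ensemble optimal interpolation: update a single model state.

    anoms: n_state x n_samples matrix of precomputed model anomalies that
    stand in for flow-dependent forecast errors.
    alpha: scaling of the static covariance (a tunable assumption).
    Assumes H is approximately linear so it can act on anomalies.
    """
    n = anoms.shape[1]
    y_anoms = np.array([H(anoms[:, k]) for k in range(n)])
    cov_xy = alpha * anoms @ y_anoms / (n - 1)
    var_y = alpha * (y_anoms @ y_anoms) / (n - 1)
    K = cov_xy / (var_y + obs_error_var)
    # One model run, one increment: no ensemble of forecasts is integrated.
    return state + K * (obs_value - H(state))
```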
So now I'll get to work that's ongoing and more forward-looking, regarding specifically coupled CESM2 and DART. Work has been going on, and credit especially to Ben Johnson in the DART group, updating scripts and running some of these test cases.
Work has been going on to couple CESM2 with the new version of the Data Assimilation Research Testbed, Manhattan. This will hopefully deal with some of the memory issues we were having before, and it gives us new tools for strongly coupled data assimilation, which I'll talk about.
The goals moving forward with this CESM2-DART configuration: one of them is to investigate, as mentioned before, DA ensemble initialization for some case studies of particular interest; for instance, this cold blob in 2015 that's been studied by Liz Maroon and Steve Yeager and others, to start addressing some of these questions of predictability and data assimilation.
Another avenue I'm showing here is whether you can predict data assimilation increments. That is to say, if the model has a persistent bias or deviation from observations, it's possible to incorporate these into the model tendencies, or to use them to improve existing parameterizations and develop new ones. This is part of a project with the Schmidt Futures foundation that I know some folks at GFDL are involved with too. The development goals in the near term are to move from ocean-only DA to weakly coupled DA with CESM2 and DART Manhattan, and then, looking ahead, as MOM6 comes online in CESM, we're certainly going to incorporate that.
A
In
the
meantime,
we
expect
a
lot
of
challenges
to
be
in
common
between
pop
and
Mom,
just
regarding
how
we're
going
to
do
couples
d,
a
okay.
So
in
the
remaining
minutes,
I'll
talk
briefly
about
some
sort
of
bigger
picture
challenges
and
opportunities
for
couples.
A
Climate
D.A
that
hopefully
we
could
include
in
the
conversation
later
so
one
question
I
have
is
you
know,
I
showed
these
results
earlier,
where
there
might
be
a
sort
of
convergence
of
data
simulative
models
when
we
have
lots
of
observations
during
the
Argo
era
of
some
properties,
for
instance,
you
know
in
the
labrador
c.
The question, though, is how long do we have to wait in order to accurately describe decadal and longer-period climate variability in assimilative models? Do we have to wait for 50 years of Argo in order to look at 50-year variability in the ocean and the coupled climate system?
My hope is not, but I think it's an interesting question, and there are some insights into this provided by idealized data assimilation studies. I'm showing here a couple of results from a paper by Tardif and Hakim, where they coupled the Lorenz model to a Stommel box ocean model and then reported the correlation of what they called the AMOC in this very simple model to a series of simplified atmosphere properties resulting from this chaotic atmospheric model.
Another point is that the ocean integrates the time history of atmospheric forcing, and so when we make these incremental adjustments to the ocean state at the moment when we have an observation, we're kind of leaving some information on the table. This is illustrated by this figure of adjoint sensitivities of AMOC strength from a study by Helen Pillar and co-authors. Here they asked: how can we change the freshwater flux in the Atlantic in order to change the AMOC at 26 North? When you look at a lag of one month, there's a very clear signal around the "array", quote unquote, at 26 North, but there's actually a strong signal that persists for 15 years back.
I think this begs the question of whether, for instance, when Alicia Karspeck and co-authors found that the spread in the deep ocean was too small, this might reflect the fact that the low-frequency atmospheric variability they were forcing their model with was too weak. It also begs the question of whether it could be possible to implement a smoother, a Kalman smoother, to incorporate some of these longer-term influences backwards in time from observations.
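The machinery behind such sensitivity maps can be written compactly. For model steps x_{k+1} = M(x_k, f_k) with forcing f_k and a scalar quantity of interest J (here, AMOC strength at 26 North), the adjoint propagates sensitivities backwards in time:

```latex
\boldsymbol{\lambda}_k
  = \Big(\frac{\partial \mathcal{M}}{\partial \mathbf{x}_k}\Big)^{\top}
    \boldsymbol{\lambda}_{k+1},
\qquad
\frac{\partial J}{\partial \mathbf{f}_k}
  = \Big(\frac{\partial \mathcal{M}}{\partial \mathbf{f}_k}\Big)^{\top}
    \boldsymbol{\lambda}_{k+1}
```

so a single adjoint integration yields the sensitivity of J to the forcing at every earlier time, which is exactly the 15-year memory the Pillar et al. figure displays and what a smoother could exploit.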
Another major avenue moving forward is the possibility of doing strongly coupled data assimilation. I mentioned briefly before the definition of weakly coupled data assimilation, where effectively you have an ocean model coupled to an atmosphere model, you perform data assimilation separately in those two domains, and the only cross-talk you get between ocean observations and the atmosphere comes from the model coupling. Strongly coupled data assimilation, by contrast, uses covariances across the air-sea interface, so that an observation in one fluid can directly increment the state of the other.
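One way to see the difference is in which parts of the coupled covariance an observation is allowed to use. A schematic (my illustration, not how DART organizes its state):

```python
import numpy as np

def assimilate_ocean_obs(ocean_ens, atmos_ens, obs, obs_var, H, strong=False):
    """Assimilate one scalar ocean observation into a coupled ensemble.

    Weakly coupled: only ocean variables are incremented; the atmosphere
    feels the observation later, through model coupling. Strongly coupled:
    cross-fluid covariances let the observation increment the atmosphere too.
    (Deterministic sketch; perturbed-observation terms omitted for brevity.)
    """
    n = ocean_ens.shape[1]
    y = np.array([H(ocean_ens[:, m]) for m in range(n)])
    dy = y - y.mean()
    var_y = dy @ dy / (n - 1)
    innov = obs - y
    targets = [ocean_ens, atmos_ens] if strong else [ocean_ens]
    for ens in targets:
        anoms = ens - ens.mean(axis=1, keepdims=True)
        K = (anoms @ dy / (n - 1)) / (var_y + obs_var)
        ens += K[:, None] * innov[None, :]
    return ocean_ens, atmos_ens
```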
The blue regions in this map on the right-hand side show places where strong coupling improves a perfect-model analysis for upper-ocean temperature, and the red shows places where it degrades it. I guess the questions moving forward are how important this strong coupling is for initialization and state estimation. It's an important question in part because a lot of the mechanics of how to perform strongly coupled state estimation remain unclear; for instance, this question of localization.
Audience member: Thanks, Dan, that was a very nice overview of data assimilation. One of the challenges that I foresee going from, say, POP to MOM6 is that in some ways the MOM6 algorithms are formally less linear than the POP algorithms. In other words, tracer transport, for instance, involves a correlation with thicknesses and other things. In the context of data assimilation, are there formal assumptions of linearity that we are going to need to be aware of as we move to using a more non-linear model? Or, put slightly differently, how do you deal with physical limits?
Dan Amrhein: Yeah, those are great questions. The first thing I would say is that these models are already non-linear, and there's nothing that says we're assuming linearity in order to make these methods work. Specifically regarding the tracers, I would be interested to see; there are approaches, I think, that you could take, maybe changing the frequency with which you're assimilating observations, that might get at some of those issues. But yeah, that's a good question; I can't say whether there could be problems regarding the way the tracers are handled. Regarding the other part: there are other physical quantities that have these hard limits, that don't obey Gaussian statistics, and there are tools for dealing with those as well.
Audience member: Okay, just a quick question, Dan, and maybe it's not a quick question. Maybe in the context of this comparison of reanalyses you've seen: what are the most serious gaps in the current observing system that are precluding additional convergence between the reanalyses, or whose loss would lead to their degradation? If we were to invest more in the observing system, what would be the most advantageous investment from a data assimilation and reanalysis perspective?
Dan Amrhein: Well, that is a question that data assimilation can answer, and there is a literature on using data assimilation to do these sorts of formal network designs. It does depend on the question you're trying to address. I guess I'd say it's problem dependent, but that is one of the things you can study: doing these sorts of network designs. Okay.