From YouTube: 2023 Ocean Model WWG - Day 2 AM Session
A: Welcome back, we're finishing off this.
E: Sorry, Rob, are...
K: I'm here, and it seems like I've never shared my screen on Google Chrome before, so it's asking me for permissions. Let's see if I can try this again: Microsoft PowerPoint, share system.

K: I'm going to have to log out and log back in again.
K: That's pretty quick. Let's see if this works now; I've never had it set up like this before. Can you see this now?
K: Thanks for letting me join this meeting; it's been interesting to me so far. I guess it's kind of a long-time-listener, first-time-caller situation. I know some of you, but not others, and for the people who don't know me, let me give you a little background.
K: I'm an earth scientist at PNNL. Before that I was at Texas A&M for 20 years, where I focused on sort of small-scale coastal ocean modeling: river plumes, estuaries, stuff much smaller than the things that have been discussed here.
K: You know, domains easily fitting within the bigger grids themselves. Since joining PNNL I've been interested in thinking about, instead of how to go down in scale, how to go back up in scale, and so that's why I'm interested in talking to you today.
K: The topic of today's talk is high-resolution regional ocean modeling, coupling ROMS to E3SM. It's part of a project called SEAHORÇE that I'll explain in a few slides, but first I wanted to give a shout-out to two other regional projects that I'm involved with in kind of the DOE coastal ecosystem.
K: In case you don't know about these: one is the Integrated Coastal Modeling project. This has been focused on extreme events and climate-change attribution along the U.S. East Coast. One of the aspects that's been pretty interesting is coupling land runoff models to an estuary model to look at coastal inundation.
K: The other project is the COMPASS Great Lakes modeling project, and that is focused on building sort of a regional coupled Earth system model, where we have lake, atmosphere, and land models all running at the regional scale at very high resolution; that might be interesting to some of you as well. So, back to the project at hand.
K: SEAHORÇE stands for the Study of Exascale Advances in a High-Resolution Ocean using ROMS Coupled to E3SM. It's kind of a forced acronym, and the C has a little hook on it, a cedilla, to make sure it's pronounced like an S. It's one of these Scientific Discovery through Advanced Computing (SciDAC) projects that DOE funds, and part of the deal is that part of it is science-focused and part is computationally focused, so we have objectives in both directions.
K: In the science direction, we focus on improved representation of small-scale coastal and open-ocean processes in larger-scale models. We're thinking of river plumes, coastal fronts, and mesoscale and submesoscale eddy processes in the open ocean, and we want to think about these in the context of global Earth system models.
K: As part of doing that, we want to design a scientific and technical framework for two-way coupling between ROMS and MPAS-O, for optional, flexible, efficient, robust dynamical up- and downscaling. I think the key word here is optional: we see this as a component of E3SM that you can turn off. You don't have to run E3SM with ROMS, but you can.
K
If
you
choose,
you
can
do
it
for
a
day
a
week
a
month,
but
you
can
also
turn
it
off
again
and
not
sort
of
mess
with
your
long-term
Thousand-Year
simulations
at
courser,
Scales
and
another
interesting
aspect
of
this
I'll
talk
about
at
the
very
end
is
we
want
to
create
and
evaluate
ROMs
X,
which
is
a
GPU
enabled
Port
of
ROMs
that
exploits
the
latest
HPC
architecture,
so
rewrite
ROMs
and
C,
plus
plus,
and
use
the
AR
Max
protocol
to
be
able
to
run
this
on
gpus
and
CPUs,
or
you
know,
with
a
number
of
MPI
options
as
well,
so
arms
kind
of
takes
care
of
the
the
the
the
the
portable
the
portability.
K: As a little motivation, here are some slides from a collaborator, Darren Engwirda at Los Alamos; he's an MPAS-O developer. We want to develop a hierarchy of regionally refined MPAS-O configurations.
K: He works mainly on the U.S. Gulf Coast and East Coast, and they've been working on these regionally refined MPAS-O grids. MPAS-O stands for Model for Prediction Across Scales - Ocean; it's the ocean model in the DOE E3SM Earth system model, and it's unstructured, so you can have some regional refinement. They've done experiments using 30-, 10-, and 2-kilometer grids in the Gulf of Mexico to explore the resolution dependence of the processes represented in the model.
K: The 30-kilometer configuration is consistent with the low-resolution E3SM runs, the 10-kilometer with the high-resolution E3SM runs, and the 2-kilometer one is like an ultra-high-resolution run.
K: Really, you can only do these ultra-high resolutions regionally refined, and only with one component of the Earth system alone; these are ocean-only runs. So let's see what they look like. The 30-kilometer one isn't even shown here, because it's quite coarse, but here we can see a comparison. Excuse me. The 30-kilometer run is sort of missing some basic dynamics, like the evolution of the Loop Current and the eddies.
K: So we're not even going to show that here. Let's compare the two more interesting runs: the 10-kilometer, equivalent to the E3SM high-resolution configuration, and the 2-kilometer. Can you see my mouse, or do I need to turn the laser pointer on? You don't see it? Okay, let me make a bigger pointer then. How about now? Yeah: 10 kilometers, 2 kilometers. Can you see my pointer now? So let's compare these two simulations. The 10-kilometer run has, you know, a Loop Current, Gulf Stream, Loop Current eddies, and some mesoscale activity in the Gulf of Mexico; it looks pretty good, honestly, with strong fronts. This is showing the sea surface temperature and the relative vorticity, and the relative vorticity is fairly strong, getting close to f over here. But when they extend this to a 2-kilometer simulation...
K
You
see
that
there
is
a
there's,
a
lot
more
detail
in
these
flows
here
along
the
shelf
break
in
the
front
of
the
loop
current
and
in
in
these
sort
of
features
that
are
riding
along
the
the
kernel
slope
here-
and
you
can
see
this
really
well
in
the
the
relative
vorticity,
where
the
fronts
are
no
longer
smooth,
but
there's
a
lot
of
structure
within
the
fronts
and
so
that
and
the
and
and
the
relative
vorticity
is
increased
so
that
you
know
the
recipe
number
is
a
lot
higher
now.
K: So it's pretty neat to be able to achieve these submesoscale-permitting simulations, but they are very expensive: the current throughput is about 96 simulated days per wall-clock day using 12,000 cores.
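For a sense of scale, the quoted throughput (96 simulated days per wall-clock day on 12,000 cores) can be converted into the usual SYPD and core-hour metrics. This is a minimal bookkeeping sketch; only the two numbers from the talk go in, the conversion itself is just arithmetic.

```python
# Throughput arithmetic for the 2 km run quoted above:
# 96 simulated days per wall-clock day on 12,000 cores.

def sypd(sim_days_per_day: float) -> float:
    """Simulated years per wall-clock day."""
    return sim_days_per_day / 365.0

def core_hours_per_sim_year(cores: int, sim_days_per_day: float) -> float:
    """Core-hours burned per simulated year."""
    wall_hours_per_sim_year = 24.0 / sypd(sim_days_per_day)
    return cores * wall_hours_per_sim_year

print(round(sypd(96), 3))                          # ~0.263 SYPD
print(round(core_hours_per_sim_year(12_000, 96)))  # ~1,095,000 core-hours/yr
```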
K
It's
that's
hard
one.
So
one
of
the
questions
we
got
this
is
this
this
this
explanation.
One
of
the
questions
we
got
in
the
review
of
our
project
and
subsequently
is
empath
is
unstructured.
Why
not
just
run
and
pass
out
with
high
resolutions?
Why
not?
K
Just
you
know
if
you
want
to
do
an
edge
to
write
simulation
on
the
East
Coast,
why
not
just
increase
the
resolution
of
M
Paso
until
you
can
resolve
that
and
there's
some
problems
with
that
impact,
so
it
doesn't
have
they're
working
on
having
different
time
stepping
in
different
regions
of
resolution,
but
that's
not
working
entirely
yet,
and
you
know
it's
just
the
these.
These
high
resolution
simulations
even
just
the
regionally
refined
simulations,
are
slow
and
expensive.
K
Every
time
you
double
the
resolution,
you
increase
the
computational
cost
by
roughly
order
of
magnitude,
so
you
know
you
could
expect
a
four
eight
time
slow
down
to
achieve
one
kilometer
resolution
and
one
kilometer
resolution
is
pretty
typical
for
modern
Coastal
ocean
simulations
and
you
know
insufficient
for
estuarine
simulations
even
of
like
a
big
history
like
Chesapeake,
so
you
know
a
lot
of
ROM's
applications
nowadays,
modern
Realms
have
patients
employed
resolutions
that
are
on
the
order
of
100
meters
to
capture
some
of
the
skill
effects
or
finer.
K
As
Baylor
mentioned,
they
went
down
at
five
meters
in
Narragansett.
Bay
El
Paso
is
just
not
going
to
get
there
so
they're
in
practice.
There
are
limits
to
how
far
we
can
refine
resolution
without
pastel,
so
wouldn't
it
be
great
to
take
an
existing
model,
that's
that
has
been
used
for
decades,
two
decades
to
Run
Regional
applications
and
and
use
that
to
pop
in
the
middle
of
an
impassive
simulation
only
when
you
want
it.
K
So
that's
the
the
kind
of
the
motivation
for
this
study
and
I
want
to
also
point
out
here
that
that
batch
transition
in
the
front's
becoming
more
having
more
structure
and
becoming
more
more
non-linear,
isn't
really
something
that
depends
on
you
know
the
the
resolution
of
the
model
in
terms
of
kilometers.
This
is
really
sort
of
you
know.
You
have
to
think
about
the
the
parameter
space.
K
The
part
parameter
space
you're
in
so
this
is
an
example
from
a
course
simulation
in
the
northern
Gulf
of
Mexico,
so
the
achaflaya-
the
Mississippi-
are
right
over
here.
So
this
is
strongly
forced
by
fresh
water
from
the
chaff
line
in
the
city.
So
it's
very
strongly
very
Clinic,
and
so
you
can
see
that
on
the
continental
shelf.
Here
we
get
a
bunch
of
eddies
as
the
as
the
Mississippi
River
poem
undergoes
Bear,
Clinic,
instability
and-
and
these
are
you
know,
pretty
energetic
features
with.
K
You
know
recipe
numbers
of
two
or
three
along
the
fronts,
but
as
we
go
from
one
and
a
half
kilometer
simulation
in
this
region
to
300
meters
simulation
of
this
region,
you
can
see
that
the
fronts
all
of
a
sudden
get
become
more
energetic
and
get
a
lot
more
structure.
So
the
same
process
we
saw
going
from
10
to
2
kilometers
in
the
whole
Gulf
of
Mexico
also
happens
at
you
know
at
these
finer
scales
under
stronger
buoyancy
forcing
so
we
think
this
is
a
pretty.
K
This
is
a
pretty
common
problem
or
a
pretty
common
thing.
That
happens
when
you
go
increase
in
resolution
and
one
of
the
one
of
the
reasons
why
we
think
it's
important
to
think
about
France
is
here's.
An
example
of
I've
got
a
student
Dylan
schlifting
at
Texas,
A
M
who's
still
there
after
I
left
a
p
l
and
he's
looking
at
numerical
mixing
in
the
coast.
Lotion
there's
been
a
lot
of
interesting
papers
on
numerical
mixing
and
estuaries
by
Parker
McCready
Hans
bershard
Guyer,
and
we
saw
it
we've
been
thinking
about.
K
How
do
we
take
these
studies
onto
the
Cardinal
shelf?
So
we've
been
looking
at
numerical
mixing
over
these
regions
right
here,
and
this
is
an
example
of
relative
motivity.
The
same
kind
of
figure
I
just
showed
you
before,
and
this
is
an
estimate,
an
estimate
of
the
numerical
mixing
locally
based
on
a
method
by
bershard
right
now.
K
This
is
not
the
best
method,
it's
a
little
noisy,
and
this
is
the
absolute
value
of
the
numerical
mixing
because
there
can
be
you
know
you
can
have
that
we're
looking
at
soft
fluxes
here,
because
salt
is
the
primary
determinant
of
the
density.
You
can
have
positive
and
negative
values
of
numerical
mixing.
K
It
doesn't
it's
not
it's
not
necessarily
always
positive,
but
we're
looking
at
the
absolute
value
just
to
get
an
idea
of
the
order
of
magnitude
in
this
figure,
and
you
can
see
that
the
numerical
mixing
is
associated
roughly
with
the
fronts
you
know
you
can
see.
The
features
in
here
and
and
the
numerical
mixing
is
a
significant
fraction
of
of
the
total
mixing.
So
here's
an
example
of
of
the
vertical
mixing
turbulence
closure
scheme.
You
can
see
that
this
is,
you
know
darker
red
than
this
is
on
average.
K
You
know
the
numerical
mixing
is
about
a
third
of
the
total
mixing
and
the
total
mixing
is
the
prescribed
mixing
this
through
the
turbulence
closure.
The
horizontal
harmonic
harmonic,
mixing
the
the
the
camp
scheme
in
the
vertical
and
plus
the
spurious
mixing
mixing
this
mainly
due
to
the
errors
in
the
invection
scheme
and
and
that
and
that
spurious
mixing
is
about
a
third
of
the
total.
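The idea that advection errors alone mix tracers can be demonstrated in a toy setting. The sketch below is not the Burchard-style diagnostic used in the talk; it is a minimal 1-D illustration, assuming first-order upwind advection of a salinity front with zero explicit diffusivity, where any decay of tracer variance is purely numerical (spurious) mixing.

```python
import numpy as np

# Toy demo of spurious numerical mixing: advect a salinity step around
# a periodic 1-D domain with first-order upwind and NO explicit
# diffusion. The mean is conserved, but variance decays: that loss is
# numerical mixing from the advection scheme's truncation error.

n, c = 200, 0.5                                   # points, Courant number
s = np.where(np.arange(n) < n // 2, 35.0, 20.0)   # salinity "front"

var0 = np.var(s)
for _ in range(400):
    s = s - c * (s - np.roll(s, 1))               # upwind update, u > 0
var1 = np.var(s)

print(var1 < var0)                  # True: variance destroyed numerically
print(np.isclose(s.mean(), 27.5))   # True: mean salinity still conserved
```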
K
So
it's
significant
and
when
we
increase
the
resolution,
we
in
we
decrease
the
numerical
mixing
by
about
a
third,
and
so
that's
good.
So
we
want
to
think
about
that,
and
these
these
features
aren't
exactly
lined
up.
So
here's
a
front
right
here.
You
can
see
that
the
numerical
mixing
isn't
exactly
lined
up
with
the
fronts.
It
looks
like
it's
on
either
side
of
the
fronts,
and
so
it's
hard
to
do
exact
property
property
comparisons
here.
K
So
that's
a
difficulty
in
the
in
the
analysis,
so
we
want
to
focus
on
front.
So
let
me
finish
this
here.
So
there's
a
paper
by
a
former
student
and
postdoc
they
send
shoe
is
now
a
professor
at
Shanghai
University.
K
He
wrote
this
a
nice
paper
of
nature
Communications,
describing
this
very
strong
down
dwelling
and
then
at
the
nose
of
the
front
and
very
strong
upwelling,
the
league
of
the
front
that
was
modulated
inertially
by
the
the
inertial
Motions
that
happen
in
the
northern
Gulf.
It's
driven
by
a
nearly
resonant
Sea
Breeze,
with
the
with
with
the
local
coriolis
frequency
and
yeah.
K
So
so
this
little
cartoon
schematic
is
supported
by
both
observations,
high
resolution,
observations
and
high
resolution
numerical
modeling,
and
it
shows
that
there
can
be
you
know,
really
active
vertical
motions
associated
with
the
front
that
are
that
are
kind
of
periodically
driven
by
by
the
inertial
Motions.
K
So
and
so
the
mixing
can
be
intense.
You
can
generate
strong
vertical
motions.
Of
course
we
know
fronts.
Aggregate
Plankton
under
biological
processes
happen
at
fronts
and
but
I
think.
The
main
thing
is
that
you
know,
as
as
the
Dynamics
form
fronts,
these
fronts
are
always
driven
toward
the
limits
of
what
the
model
can
resolve
and
so
they're
really.
You
know
these
are
really
challenging
features
to
simulate
in
in
the
models
and
sort
of
deserve
focus
when
we're
thinking
about
different
resolutions.
K
So
to
that
end
we
are
going
one
of
the
one
of
the
aspects
of
the
project
is
to
compare
and
contrast,
ROMs
and
El
Paso
and
to
for
example,
think
about
what
would
be
the
equivalent
resolution
between
the
two
models.
You
know
one
is
an
unstructured
they're,
both
Secret
models,
but
one
is
unstructured.
Hexagons
and
one
is
a
structured.
You
know
the
structured
curvilinear
grid,
and
so
it's
not
clear
that
one
kilometer
resolution
means
the
same
thing
in
each
model.
K
So
in
you
know,
one
of
the
first
things
we
want
to
do
is
figure
that
out,
let's
see
if
it's
all
right,
okay,
so
we're
going
to
make
it
a
nice
simple
model
of
Bear
Clinic
instability
on
a
channel.
You've-
probably
seen
this.
You
know
10
billion
times
in
one
form
or
another.
The
unique
aspect
to
this
particular
simulation
well,
first
of
all,
I'm
driving
it
very
hard
with
really
strong,
like
unrealistically
strong
salinity
gradients.
You
know
over
the
domain.
This
is
meant
to
be
similar
to
the
northern
golf
experiments.
K
I
showed
you
that
would
really
really
vomiting
over
over
100
kilometers
or
less
so
so
don't
worry
that
the
salinity
goes
to
55
it's
just
well.
We
can
just
non-dimensionalize
all
that
stuff
to
get
the
right
part
parameter
space,
but
you'll
see
here
that
the
that
the
everything's
jiggling
a
little
bit.
That's
because
I'm
forcing
this
with
a
nearly
inertial
wind
a
week,
nearly
inertial
wind,
to
start
to
generate
some
inertial
currents
in
the
upper
ocean
and
those
those
a
nurse
occurrence.
This
in
intent,
intensified
France.
K
We
can
see
that
here
so
I'm
I'm,
taking
it
I'm,
looking
at
a
number
of
different
simulations,
forced
with
different
magnitudes
of
the
narrow
inertial,
like
it's
0.9,
roughly
f,
as
as
as
the
frequency
so
the
first
moment,
I'm
looking
at
the
first
moment
of
the
the
gradient
of
salt
squared
over
the
whole
domain
at
the
surface,
and
you
can
see
that
as
we
increase
the
winds
from
nothing.
This
is
this
red
line
here.
K
The
increased
lines
there's
a
maximum
in
the
in
the
mean
salt
gradients
and
as
the
winds
increase,
more
they
decrease
and
the
way
I
take.
This
is
that
for
no
wind,
everything
is
behaving
pretty
well,
there's
not
a
lot
of
fronts
and,
as
you
start
to
increase
the
wind,
the
the
Eddies
don't
necessarily
respond
uniformly
to
the
wind,
and
so
when,
when
they,
when
they're
moving,
they
start
to
kind
of
walk
into
each
other
a
little
bit
and
so
and
that
bulking
creates
stronger
fronts
to
a
point.
K
And
then
it's
going
to
get
stronger.
You
just
start
to
mix
things
through.
You
know
through
you
know,
1D
boundary,
light
processes
and
so
there's
a
sweet
spot.
Where
you're
you're
you're
driving
the
flow
but
not
generating
mixing,
so
we
want
to
look
at
that
as
a
as
a
key
part
of
parameter
space
to
focus
on
the
fronts.
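The frontogenesis metric described above, the domain-mean squared surface salinity gradient, is easy to diagnose from any model output. This is a minimal sketch on synthetic stand-in fields, not the channel-model output: a sharper front with the same overall salinity range gives a larger value of the metric.

```python
import numpy as np

# Domain-mean squared surface salinity gradient, <|grad S|^2>,
# a simple frontal-sharpness metric. Fields here are synthetic tanh
# fronts standing in for model surface salinity.

def mean_grad_s_squared(s: np.ndarray, dx: float) -> float:
    dsdy, dsdx = np.gradient(s, dx)
    return float(np.mean(dsdx**2 + dsdy**2))

x = np.linspace(-50e3, 50e3, 256)        # 100 km domain, meters
xx, _ = np.meshgrid(x, x)
broad = 30 + 5 * np.tanh(xx / 20e3)      # diffuse front
sharp = 30 + 5 * np.tanh(xx / 2e3)       # sharp front, same salinity range
dx = x[1] - x[0]

print(mean_grad_s_squared(sharp, dx) > mean_grad_s_squared(broad, dx))  # True
```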
K
Another
aspect
of
the
project
I
should
say
that
we've
been
doing
this,
for
you
know,
maybe
four
months
now,
so
we're
still
in
our
infancy,
I
would
love
to
show
you
more
complete
results,
I'm,
mainly
to
show
you
what
we're
doing
the
direction
we're
heading
is
one
of
the
other
things
we're
looking
at
is
a
realistic
domain.
We
picked
this
one
as
our
first
test
case
here.
It's
a
small
domain
in
the
North
Atlantic.
This
is
a
a
ROM
simulation.
K
This
was
originally
done
to
look
at
the
interaction
between
fronts
and
internal
waves
and
to
look
at
how
fronts
can
drive
internal
waves
into
the
bottom
of
the
ocean,
and
so
we
decided
that
we
would
like
to
pick
this
domain.
As
the
first
domain
we
Nest
within
El
Paso.
K
We
picked
it
because
you
know
there's
no
steeper
Symmetry
and
there's
no
coastlines,
and
so
we
think
this
will
be
an
easy
first
one
to
try
and
then
later
on,
we'll
try
to
move
to
the
U.S
east
coast,
all
right
and-
and
you
know
further
in
this
effort
to
couple
Brahms
and
E3
this
is
we
have
try
to
put
this
ROM's
domain,
that
I
just
showed
you
within
the
the
e37
cup
or
Moab,
and
so
this
is
an
example
that
this
is
kind
of
an
extreme
resolution
change
here
from
these
coarse
hexagons
to
these
fine
grids.
K
But
this
is
just
an
example
of
what
we
might
want
to
do.
We
probably
won't
want
to
do
quite
such
an
extreme,
a
difference
in
resolution
when
we,
when
we
do
this
dynamically.
This
is
just
a
proof
of
concept.
We
can
take
the
ROMs
grid
and
interpolate
the
mpaso
results
to
the
ROMs
grid
and
get
fairly
good
results.
One
of
the
things
we
found
out
is
we
had
to
do
something
that
was
globally
conservative.
If
we
did
something
that
was
a
locally
conservative,
the
hex,
the
hexagons
imprinted
too
strongly
onto
the
wrong
script.
K
You
can
basically
see
the
hexagons
in
the
grid
and
that's
that's
not
good,
so
we
have
to
think
about
global
conservation
using
a
bicubic
interpolation
scheme,
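The "globally conservative" idea can be sketched simply: interpolate with any smooth scheme, then apply one global multiplicative correction so that the integral on the target grid matches the source integral, instead of enforcing conservation cell by cell (which is what lets the source hexagons imprint on the fine grid). The 1-D linear interpolation below is a stand-in for the bicubic scheme actually used; the grids and field are illustrative.

```python
import numpy as np

# Smooth interpolation plus ONE global rescaling so the target-grid
# integral matches the source-grid integral (global, not local,
# conservation). np.interp stands in for the bicubic used in practice.

def integral(f: np.ndarray, x: np.ndarray) -> float:
    return float(f.sum() * (x[1] - x[0]))   # uniform-grid Riemann sum

def interp_globally_conservative(xs, fs, xt):
    ft = np.interp(xt, xs, fs)               # smooth interpolation
    return ft * (integral(fs, xs) / integral(ft, xt))

xs = np.linspace(0.0, 1.0, 20)               # coarse "MPAS-O" axis
xt = np.linspace(0.0, 1.0, 400)              # fine "ROMS" axis
fs = 2.0 + np.sin(2 * np.pi * xs)            # positive source field
ft = interp_globally_conservative(xs, fs, xt)

print(np.isclose(integral(ft, xt), integral(fs, xs)))  # True
```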
K: And the last thing I'd like to tell you about is ROMSX, which is this new port of ROMS built on the AMReX software. This is being done by Ann Almgren at LBNL, and Jean Sexton is a programmer there who's the main person working on it.
K
It's
going
to
be
Ron's.
X
is
going
to
be
the
same
functionality
as
ROMs,
but
it
will
be
built
translated
into
C
plus
and
it
will
run
you
know
various
different
kinds
of
MPI.
You
can
run
it
on
various
different
kinds
of
architectures,
so
you
can
use
different
sorts
of
GPU
chips
and
you
get
all
this
for
free
when
you,
when
you
link
to
the
BAM
RX
package,
one
of
the
other
things
that
will
be
an
advantage
over
just
Fortran
ROMs
is
the
we'll
be
able
to
exploit
fast
parallel.
K
I
o
That's
provided
by
amrx
as
well
right
now,
ROM
sends
everything
to
a
single
it
doesn't
use
parallel
net
CF
uses.
You
know
single
CPU,
regular
cereal
net
CDF,
so
that's
pretty
slow.
I
o
can
be
a
real
bottleneck
and
right
now
we
are
at
the
point
where
we're
able
to
initialize.
K
You
know
basically
able
to
to
propagate
forward
initial
condition
and
they're
working
on
the
sort
of
the
the
the
dynamical
core
as
we
speak,
so
we've
got
framework
and
they're
going
to
translate
the
demo
core,
and
so
progress
is
pretty
good
on
that
too,
and
that
is
my
last
slide.
So
I
will
I'll
be
happy
to
take
questions.
D: ...next month, to talk more about it. Have you thought more about the feedback to the large domain?
K
Yeah,
so
that's
that's
a
great
question.
I
think
I
think
that's
an
open
question.
K
So
I
can
imagine
that
there
are
many
things
in
our
simulations
that,
like
I,
can
imagine
that
I
don't
know
ninety
percent
of
the
time
or
more
than
half
of
the
time.
You
would
want
to
use
this
in
one
way.
You
know
in
a
one-way
nested
configuration
not
a
two-way
coupled
situation
and
I
and
I.
Just
don't
unders
I,
don't
know
what
processes
are
going
to
scale
up.
K
I
think
you
know
one
of
one
of
the
ways
in
which
we
could
you
know
one
of
the
ways
I
was
hoping
to
move
forward
with
this
is
to
identify
which
processes
are
poorly
represented
and
matter
and
think
about
parameterizations,
because
you
know
if
a
process
is
important,
you
know
think
about
that
North
Atlantic!
K
K
So
that's
a
problem
right
so
I
think
probably
the
most
effective
upscaling
will
be
in
identifying
processes
that
need
that
are
impactful
in
need.
Parameterization.
H: All right, Deepak, are you ready?
C: So I've started sharing my screen. My name is Deepak Cherian; I work here at NCAR. This is very recent work I've been doing along with Gustavo Marques and Frank Bryan of NCAR, Dan Whitt at NASA Ames, and Jim Moum at Oregon State University. Like I said, this talk is very much work in progress; it's very rough, and feedback is welcome.
C: I would love to hear what you think. The rough idea here is to evaluate vertical mixing in models using observational turbulence estimates in the tropical Pacific. The model here is CESM, so I'll be showing you a two-thirds-degree CESM-MOM6 configuration and comparing that to instruments that record at 50 or 100 hertz.
C
It's
a
lot
of
fun,
don't
be
at
the
creators
to
kind
of
just
a
quick
overview
of
why
there
are
more
turbines
have
been
created.
The
Creator
is
super
special
because
here's
some
this
is
two
weeks
of
data
average
to
show
you
vertical
profiles,
so
the
y-axis
is
depth.
The
first
panel
is
the
zonal
velocity,
so
the
equatorial
current
system-
you
have
the
South
equatorial
current,
exploring
last
words
at
the
surface
EUC,
which
is
more
than
a
meter
per
second
adapted
about
100
meters
going
eastwards.
C
These
are
measurements
at
0
to
140
West
in
like
rhetorical
term
oops
sorry.
So
this
is
a
very
highly
shared
system,
just
how
shared
is
shown
in
the
next
panel.
So
that's
stratification,
N,
squared
and
blue
and
shear
and
brownish,
and
the
point
here
is
that
the
shear
and
static
equation
are
pretty
high,
so
n
square
is
sitting
at
you
know
two
or
three
times
ten
minus
four,
it's
definitely
stratified,
but
the
shear
is
so
high
that
the
Richardson
number
sits
at
0.25,
which
is
special
in
theoretical
case.
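The quantities being compared here combine into the gradient Richardson number, Ri = N² / S², with S² = u_z² + v_z². A minimal sketch with illustrative, EUC-like numbers (the values are round numbers consistent with the talk's figures, not the mooring data): with N² near 2.5e-4 s⁻² and shear of roughly 1 m/s over ~30 m, Ri lands near the 0.25 stability boundary.

```python
# Gradient Richardson number, Ri = N^2 / S^2, S^2 = u_z^2 + v_z^2.
# Illustrative EUC-like numbers: N^2 ~ 2.5e-4 s^-2 and ~1 m/s of
# velocity change over ~30 m of depth.

def richardson(n2: float, du_dz: float, dv_dz: float = 0.0) -> float:
    s2 = du_dz**2 + dv_dz**2
    return n2 / s2

ri = richardson(n2=2.5e-4, du_dz=1.0 / 31.6)
print(round(ri, 2))  # 0.25: marginal stability
```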
C
Once
you
go
less
than
0.25,
the
system
is
unstable
to
share
and
stability,
and
so
this
is
a
cool
result.
They
went
out
with
their
developments,
instruments
they're
stuck
in
an
order,
and
you
find
a
lot
of
turbulence
and
since
then
people,
if
that
was
in
the
80s
since
then,
people
have
figured
out
that
the
flow
above
the
EUC
is
sitting
in
the
state
of
what's
called
marginal
stability.
Where,
which
is
the
number
of
0.25
is
kind
of
special?
It's
a
stability
boundary
and
it
looks
like
in
the
long-term
means.
C
The
flow
is
sitting
right
at
0.25.
C: There's a very cool paper by Smyth and Moum, which I encourage you to read if you're interested. It argues that 0.25 happens because the forcing tends to increase the shear, which reduces the Richardson number, while the turbulence acts to destroy the shear, which increases the Richardson number. Once you have this competition between forcing and turbulence, you keep hitting the stability boundary.
C: The key point of this figure is that there's a daily cycle: that's one day, second day, third day, fourth day, and so on. There's a daily cycle, and it penetrates below the base of the mixed layer, which is that top black line. This is the daily cycle of mixing, it's below the mixed layer, and it's tied to the fact that the flow is pretty close to the shear instability boundary. The lower black line there is the core of the EUC, so you can see it's kind of bounded.
C: This happens nearly all the time. This example is particularly intense, observed during a tropical instability wave, but this is characteristic of what's happening in the observations. You care about it because this is a major component of how the ocean takes up heat: all of that turbulent mixing acts to pull heat from the atmosphere down through the cold tongue and dump it below, around the EUC, from where it gets moved off the equator to the subtropics.
C
One
way
to
see
that
is
shown
in
this
nice
image,
I
like
from
Ryan
homes's
paper,
but
it
looks
at
diatomal,
heat,
transport
and
21.5
Celsius
surface.
This
is
kind
of
sitting
just
above
the
UC
whoops,
so
that
light
blue,
that's
a
lot
of
Heat
going
into
the
ocean,
and
you
can
see
that
the
cooldown
stands
out
in
this
particular
metric.
C
One
thing
it
shows
here,
if
you
look
in
the
borderline
panel,
is
how
much
is
attributed
to
Shields
doubled
in
that
model
configuration
that
was
used
in
kpp,
and
it's
a
lot
right.
So
most
of
that
journal
in
the
tropical
Pacific
is
coming
from
the
sheer
scheme
which
ties
back
to
the
shield
instability.
Observations
I've
shown
so
data
set
Zoom
moves
and
his
group
have
been
really
creative.
They
figure
out
how
to
hook.
Elements
are
really
fast.
Temperature
sensors
on
Moorings
and
I've
been
deploying
them
on
Tau
and
the
Parada
array.
C
So
these
are
morons
that
sit
on
the
equator
in
the
term
of
Civic
belongs.
Deployment
of
these
instruments
is
at
140
us.
They
started
in
2005.,
so
it's
20
23
now
so
they're
more
than
a
decade
of
data,
however,
they
observe
is
either
you
get
an
estimate
of
chi,
which
is
the
rate
of
dissipation
of
temperature
Radiance.
So
it's
units
of
Celsius
Square
per
second,
you
can
rescale
that
CHI
with
the
temperature
gradient
to
get
something
like
turbulent
diffusivity.
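The rescaling step just described is usually written as the Osborn-Cox relation, K_T = χ / (2 (dT/dz)²), and the diffusivity then gives a turbulent heat flux F = ρ c_p K_T dT/dz. A minimal sketch with illustrative numbers (the χ and gradient values are plausible equatorial magnitudes, not χ-pod data):

```python
# chi -> diffusivity -> heat flux, via the Osborn-Cox relation
# K_T = chi / (2 (dT/dz)^2) and F = rho * cp * K_T * dT/dz.
# Numbers are illustrative, not from the chi-pod dataset.

RHO, CP = 1025.0, 3990.0          # seawater density (kg/m^3), cp (J/kg/K)

def k_from_chi(chi: float, dtdz: float) -> float:
    """Osborn-Cox turbulent diffusivity, m^2/s."""
    return chi / (2.0 * dtdz**2)

def heat_flux(chi: float, dtdz: float) -> float:
    """Magnitude of the turbulent heat flux, W/m^2."""
    return RHO * CP * k_from_chi(chi, dtdz) * dtdz

k = k_from_chi(1e-6, 0.05)        # ~2e-4 m^2/s
f = heat_flux(1e-6, 0.05)         # ~41 W/m^2, tens of W/m^2 as observed
print(round(k, 6), round(f, 1))
```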
C: You can make some assumptions to go from χ to ε, and then get from there to a turbulent viscosity, or you can take your χ and, again going through the diffusivity, get to a heat flux. In the modeling world, if you are using KPP, for example, the K is an output of the scheme; it's used to get the heat flux, and the heat-flux divergence is then fed back to the model in your evolution equation for temperature, and that's how it all closes. This is what it looks like; so they've moored these instruments.
C: There are lots of cool results that come out of this. One paper I particularly like is by Warner and Moum, which has this very nice line in the abstract saying that the ocean's microstructure is meaningfully organized in macroscale patterns. In that paper they make the point that once you take this multi-year dataset and start parsing it out by ENSO phase, you start to see consistent patterns. So if you look on the left, the x-axis is dSST/dt, and on the y-axis...
C: ...you're cooling into a La Niña and then warming out of the La Niña, and so if you look at the dSST/dt phases you can show, at least in these observations, that there is some meaningful correlation. The other interesting one, actually, I'll get to later. Let me focus on the bottom-right panel, where the y-axis is ε (epsilon); again, you could think of that as related to a viscosity.
C
The
x-axis
is
the
Richardson
number,
and
this
shows
a
clear
difference
between
the
La
Nina
cooling
phase
and
the
El
Nino
warming
phase
right,
that's
half
a
factor
of
three
to
five
in
epsilon
and
how
much
difference
there
is
even
on
the
same
Richardson.
It
was
just
super
interesting,
so
something
about
the
means
that
is
influencing
returns.
C
So
those
who
have
very
small
scale
observations
that
think
they're
organized
in
large
scale
patterns.
We
are
climate
models
that
represent.
We
are
hoping
get
those
large-scale
patterns
right,
and
so
this
is
that,
what's
this
to
do
well,
we
think
that
climate
models
must
represent.
These
last
gave
these
macro
scale
patterns
of
microscale
turbulence
with
some
Fidelity.
My
question
is
due
date.
C
Where
they,
that
is
one
of
the
kind
of
tunes
the
kpp
scheme
to
do
really
well
in
that
Global
simulation,
but
they
make
this
blind
about
clean
and
direct
comparisons
and
the
way
I
understand
it
is
when
you
create
the
new
parameterization,
you
tune
it
to
an
Les
or
DNS.
That
conversion
is
clean
because
you
can
force
your
1D
mixing
model.
You
can
close
the
Les
with
the
same,
forcing
both
this.
Both
the
things
beta
can
be
made
to
spit
out
the
same
quantity.
So
you
can
do
a
very
direct
comparison
of
tilts
quantities.
C: This comparison is not clean, because you can't ever get the forcing exactly right; there are always errors. It's also not direct, because mean-state properties, like SST or stratification, are not the direct output of a mixing scheme. It's an indirect comparison.
C: The baseline simulation here is something Gustavo has been helping with: CESM with MOM6 at two-thirds of a degree, the workhorse at the moment. That's 65 vertical levels, and I think the vertical spacing is getting to about 10 meters up above the EUC, so it's kind of coarse. It uses the KPP default settings in CVMix and a repeat forcing cycle with JRA-55 forcing. Oh yeah, and the cool thing is I stuck in virtual moorings.
C: So here we save output every tracer time step at the mooring locations I want to use, and I output a bunch of stuff: state variables, a bunch of turbulence things, heat-budget terms. It's something like a million samples, and we can see whether we can do a direct comparison to the mooring dataset.
C
So
that
would
be
the
Baseline
I
do
only
one
thing:
we
change
three
kpb
parameters.
These
parameters
were
chosen
by
to
admit
so
now,
ai
it'll
be
natural
intelligence.
The
ran
1D
KBB
tried
to
tune
into
this
Les
that
he's
been
publishing
at
a
high
level.
What
it
does
is
lowers
the
maximum
viscosity.
We
think
the
scheme
was
too
diffusive,
so
we
reduced
it
by
a
factor
of
two.
C
The
real
thing
is
also
the
return
share
everything
on
at
higher
share.
So
usually
it
comes
on
pretty
early
once
the
universal
number
of
0.7.
If
we
start
mixing,
we
allowed
this
year
to
get
more
intense
down
to
0.5
we
make
you
know
these
are
somewhere
other
trade,
but
they
worked
well
in
compared
to
oh,
yes,
so.
C: So, first comparison: the mean state at 0°, 140°W. In greenish, this one is the TAO mooring, and that's the baseline. That's temperature, a vertical profile, with depth on the y-axis; the baseline is a little warm. If you look on the right, that's the zonal velocity; again, the baseline is a little weak, the shear is weaker, and the maximum is weaker.
C: Boosting the vertical resolution doesn't actually make too much of a difference; it looks pretty much the same in the mean state. But this looks pretty awesome, right? The EUC is a lot sharper in the new simulation compared to what it used to be. Here's another nice comparison: the left panel is shear squared, u_z² + v_z². Again, good-looking profiles, similar story: you can see that the baseline simulation is a little low, and with the three changed KPP parameters...
C
You
do
a
lot
better,
so
that
was
a
factor
of
two
different
earlier
now.
They're
doing
really
well.
Similarly,
with
N
squared,
we
were
pretty
low
earlier.
We
went
all
the
way
up
right
on
top
of
what
the
child
observations
say,
we
should
be
this
last
panel
is
the
median
Richardson
number
remember.
It
should
be
sitting
at
0.25
and
all
the
simulations
are
pretty
much
there.
So
that's
kind
of
interesting
parameters,
and
we
can
do
better
on
these
nuts.
C
On these indirect comparisons — remember, I'm showing mean state — part of the reason those parameters were chosen was by looking at these stability diagrams. Remember, one of the key quantities here is the Richardson number, which is N squared over shear squared, and the idea is that it sits near a quarter. The left panel is similar to what was published by Warner and Moum.
C
The x-axis is shear squared, the y-axis is N squared, and these curves enclose 50% of the observations. So you look at which part of the space the observations are sitting in, and if our model is doing fine it should be doing the same thing. The different colors represent El Niño phases, and part of the paper is making the point that the La Niña phase tends to be in the lower part of this diagram.
C
C
In our case, the first thing that jumped out was that there were a lot of points in this lower corner — our shear squared was too low and our N squared was too low — and then at high shear squared and N squared we were moving off the diagonal, which is the Richardson-number-equals-a-quarter line. That's where the observations like to pile up, and we were moving away from it: a systematic error.
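As a toy illustration of that diagram: if turbulence holds the flow near marginal stability, samples of N² and S² scatter along the Ri = 1/4 diagonal on log–log axes. The distributions below are invented for illustration only — they are not fit to the χpod data.

```python
import numpy as np

# Synthetic "observations" whose Richardson number Ri = N^2 / S^2 clusters
# near marginal stability (1/4), as in the N^2 vs S^2 stability diagram.
rng = np.random.default_rng(0)
S2 = 10 ** rng.uniform(-5, -3, 10000)               # shear squared, s^-2
Ri = 10 ** rng.normal(np.log10(0.25), 0.3, 10000)   # Ri scattered about 1/4
N2 = Ri * S2                                        # implied stratification

median_Ri = np.median(N2 / S2)                      # ~0.25 by construction
# fraction of samples within a factor of two of Ri = 1/4
frac_near = np.mean(np.abs(np.log10((N2 / S2) / 0.25)) < np.log10(2))
```

A model whose points fall well below this diagonal (low N², low S²) has the systematic error described above.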
B
B
C
During La Niña — the cooling phase — you get a lot of strong turbulence. We've also moved rightwards, towards that marginal-stability boundary, and we do a lot better at higher shear. So this is a big improvement. Going to 150 vertical levels is very confusing — something's off in that simulation, so I won't dwell on it — but here's another indirect comparison, more informed by the observations, and it's helping us make choices and decisions about what to do.
H
C
Now we're trying direct comparisons, and what's really interesting is the time-mean vertical profiles of various quantities. The second one is epsilon; chi can be converted to a diffusivity, which is the last panel. In all of these the green curve is the observation. I would ignore the last point — I think something's wrong with it at that depth — so just focus on the upper 100 to 200 meters.
C
These are time means. This was very surprising to me, but once you get to the medians the whole story changes: all the simulations are, for example, an order of magnitude high in epsilon. So it's like you hit the right mean, but the observations get to that mean by having a low, fairly quiet background state punctuated by some very high-magnitude events. KPP is giving you very steady, constant mixing, so the median is kind of high, but it still gets you to the same mean.
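That mean-versus-median behavior is exactly what intermittent, roughly lognormal dissipation produces. A short sketch with synthetic numbers (the magnitudes are illustrative, not the observed values): a steady scheme tuned to match the observed mean will sit far above the observed median.

```python
import numpy as np

rng = np.random.default_rng(1)
# Intermittent "observed" epsilon: lognormal, so the mean is carried by rare
# strong events while the median reflects the quiet background state.
mu, sigma = np.log(1e-8), 1.5
eps_obs = rng.lognormal(mu, sigma, 200_000)
mean_obs = eps_obs.mean()        # ~ exp(mu + sigma**2 / 2)
median_obs = np.median(eps_obs)  # ~ exp(mu), much smaller than the mean

# A steady scheme matching the observed mean (constant mixing, KPP-like)
eps_model = np.full_like(eps_obs, mean_obs)
# Its median exceeds the observed median by roughly exp(sigma**2 / 2)
median_ratio = np.median(eps_model) / median_obs
```

With sigma = 1.5 the model median is about a factor of three high even though the means agree — the same signature described in the talk.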
C
C
Here's the annual cycle of heat flux — the annual cycle of how much heat the cold tongue is drawing down at around 50 meters depth. The green curve is the observation, and here are the models. They're not too bad, but they're always offset high by about a factor of two. And look at the phase: the model starts taking up a lot of heat in April, May and June, whereas the observations start about a month later. That doesn't seem right to me.
C
The peak in July, August and September is usually associated with tropical instability waves; in April, May and June I don't know what's happening, so there's something to dig into. What I'm taking away so far is that it's the details that are important — they're giving us clues as to what to look at and what's happening in the models.
C
I'm running out of time, so the one last point I'll make is again about these details. Just focus on this last panel: this is the distribution of chi, and you need to look at the distribution of epsilon as well. Those are the χpod observations. If you look at epsilon, the distributions are very different — these are the hourly averages directly produced by the model, compared against the observations.
C
C
Epsilon is doing a lot better — this makes our scheme look quite a bit better. And here's another detail I didn't really appreciate when I started this work: how we choose to make these comparisons, and in particular the averaging-frequency choices, really matters. If you looked at hourly histograms you would say, oops, this thing is just mixing too much all the time; but on daily numbers, well, maybe you're not so bad. So what is the method that matters? What is the metric?
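A quick numerical sketch of why the averaging interval changes the verdict. With intermittent hourly "observations" (two illustrative levels, mostly quiet with occasional bursts), daily averaging pulls the median up by nearly two orders of magnitude, so a steady scheme looks far less biased against daily statistics than against hourly ones.

```python
import numpy as np

rng = np.random.default_rng(2)
# Intermittent hourly epsilon: 10% strong bursts, otherwise quiet background.
# The two magnitudes are illustrative stand-ins, not measured values.
eps_hourly = np.where(rng.random(24 * 365) < 0.1, 1e-6, 1e-9)
# The same record averaged to daily values (365 days x 24 hours)
eps_daily = eps_hourly.reshape(365, 24).mean(axis=1)

hourly_median = np.median(eps_hourly)   # dominated by the quiet background
daily_median = np.median(eps_daily)     # bursts leak into almost every day
```

Comparing a constant-mixing scheme to `hourly_median` makes it look badly biased high; comparing to `daily_median` makes it look reasonable — the same data, different metric.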
C
C
Sorry, it's updating really slowly. So basically the changes have made the EUC a lot better — well, they make it stronger — but they hurt the mixed layer, which ends up being too shallow. Sorry, this isn't displaying well, but the mixed-layer depth ends up shallow. Gustavo is very polite about it, but yeah — it's not that good.
B
C
Parameters make a difference, and how do we choose the ones that matter? The idea behind this work is to see whether turbulence data sets give us ideas for how to do better on those parameters.
C
C
It's a climatology by yearday on the x-axis; the panels are different depths, and it's a time history of epsilon, with the colors marking the depths. I think this one is interesting: the top panel is wind stress, which is an input to the turbulence scheme, and below it is epsilon, which is an output of the turbulence scheme — but here from actual observations. So this is the situation: I was running the complicated model
C
and looking at the outputs, whereas the scheme is tuned by running simple models and looking at their outputs. Here's a case where I think we can do both: you can look at the wind stress, which is in the data and in the climate model, and we can look at epsilon directly and see whether you're representing those patterns.
A
C
A
J
Yeah, thanks for that — I'm not sure if this is quick or too long. On the spurious mixing question, which came up before in a couple of contexts: in Ryan Holmes's paper, a lot of the mixing he diagnoses is what you would call spurious — it's the numerical term. When you are diagnosing epsilon and all the connections to the turbulence that you're measuring in the ocean, when you're calculating that from the model —
J
are you somehow including the numerical term, which I know is hard to do? How are you thinking about that?
C
C
Think of what I showed as an underestimate of what the model is dissipating — so if the simulations are wrong, they're definitely more wrong than what I've shown you. But yes, I would like to do that diagnosis; I haven't done it yet.
H
G
L
G
M
C
L
And is it showing up in slideshow mode? Yes — and you have my mouse. Excellent, thank you. So first, thanks Deepak for that great talk; it really complements what I'm going to be talking about here.
L
I'm going to be talking about some work I've been doing to improve the representation of the tropical Pacific Ocean in the configurations of MOM6 that we run here at GFDL. Hopefully I'll convince you that my title is true: that we have improved the upper-ocean vertical mixing parameterization.
L
This is work I've done in very close collaboration with Andrew Wittenberg over the last few years, and I've also had a lot of discussions with Alistair and Steve, so I want to thank them for their contributions as well.
L
I want to jump right in. I've been particularly motivated by a specific bias that we noticed emerged in basically all of the configurations of our last suite of climate models, a bias associated with the tropical thermocline. What I'm showing here is dT/dz from the Argo climatology produced from the Roemmich and Gilson data in the upper panel, and from our model forced with JRA-55 in the lower panel. Basically, the thing I want to point out here
L
is this very sharp, steppy feature that emerges in our simulations in both the eastern Pacific and Atlantic basins and that is not there in the observations. This occurs in JRA-forced models, CORE-forced models and coupled models, so it seems almost certainly related to the role of the ocean and not the forcing. The goal of this talk is basically to try to diagnose the cause of this shallow, strong stratification bias in the eastern tropical basins.
L
We're going to do this by testing the fidelity of the OM4 vertical mixing parameterization — very similar to what Deepak was just talking about — and then applying a variety of changes to these schemes and testing the sensitivity of the tropical thermocline to those changes. I'm going to use the two strategies that Deepak referenced: one is the large-eddy simulation versus 1D models approach, and in the second part of the talk I'll present results in an ocean general circulation model.
L
L
The main point to understand here is that the vertical mixing is characterized by a large diurnal swing in the stability and the turbulence, which is basically associated with the daily cycle of solar heating. It's also a little bit special because there is no rotation and the wind forcing is roughly constant, and the way everything develops
L
is that you get this very pronounced shoaling of the boundary layer during the day — which is seen in the observations of epsilon here, where the values become very small below about 10–20 meters during the day — and then a very rapid deepening of the turbulent region at night. This is the deep-cycle turbulence that Deepak just described for us in much more detail.
L
The main point I want to make, though, is that it has been shown in at least one previous study that these processes should be captured by the mixing parameterizations we are using in our ocean models. So we're going to do some tests here to make sure we're capturing these processes in the mixing parameterizations we use in OM4.
L
The first approach I'm going to take is to test this by comparing directly to some large-eddy simulation models. The LES output I'll be using comes from a recent study last year from Dan Whitt — and I think other folks who are present today were involved in that.
L
L
They made it very easy to use. One of the very nice things about this LES is that Dan imposed large-scale horizontal forcing from a regional model, which allows the evolution of the LES over the roughly 35 days of the simulation to be realistic and representative of the conditions you would observe in the field. What I've done here, which I think is a nice complement
L
to everything Deepak just showed, is to implement the capability in MOM6 so that we can now run MOM6 in 1D mode with the exact same fluxes and forcing as the LES — basically taking the other avenue to compare the mixing schemes. So today I'm going to talk about how we're using this configuration to test the OM4 base mixing, which includes the ePBL boundary-layer scheme
L
that's described in a couple of papers, as well as the Jackson–Hallberg shear-mixing parameterization, which is in the model to parameterize vertical turbulent mixing in stratified shear conditions. The model also has options to use GOTM — something I want to get back into the main MOM6 soon — and, of course, CVMix KPP. I'm not going to talk about any of those today, but I have started looking at some of that.
L
So, to answer the question "can the OM4 mixing parameterization reproduce the LES heat flux?", I'm going to start by comparing this time-series snapshot of the vertical turbulent temperature flux from the LES against that from the 1D model. Here's the result, with the LES above and the 1D model below. You can see
L
the 1D model does get the basic pattern: enhanced mixing at night, reaching down to about the right depth, and reduced mixing during the day. To look at this in a bit more detail, and a bit more critically, I've taken diurnal composites over the entire time series, so now you can see how this looks in the large-eddy simulation.
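For readers unfamiliar with the diagnostic: a diurnal composite simply averages the time series by hour of day. A minimal sketch, with an invented stand-in flux series rather than the LES output:

```python
import numpy as np

# Hedged sketch of diurnal compositing: average a series by hour of day.
# t is hours since the start; wq is an invented turbulent heat-flux series
# that is active only at "night" (before 06:00 and after 18:00).
t = np.arange(0, 24 * 35, 0.5)                     # 35 days, half-hourly
hour = t % 24
wq = -100.0 * (hour > 18) - 100.0 * (hour < 6)     # W/m^2, downward negative

# Composite: mean of all samples falling in each hour-of-day bin
composite = np.array([wq[np.floor(hour).astype(int) == h].mean()
                      for h in range(24)])
```

Applied to the real LES and 1D output, this is what collapses 35 days into the single averaged day shown on the slides.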
L
You see this fairly persistent deepening of the boundary layer at night, and then a rapid shoaling of the boundary layer during the day when the solar heating reaches its maximum.
L
What I'm showing is the same diurnal composite repeated twice, just for clarity. When I do the same thing with the OM4 result, we can see much more clearly what the biases in the parameterization are: there's what I would call a quite significant bias in the heat flux, in both the phase and the magnitude.
L
In particular, we have too much downward heat flux during the day — we're not quite re-stratifying the boundary layer — and at night the deepening is a bit too rapid. Particularly right at the onset of dawn, we get this large coherent pulse of turbulence in the diurnal composite. I'm not going to get into the details, but basically the assumptions behind the stable-forcing constraint turn out to fail here because of the large variability associated with the deep cycle.
L
Basically, I would need to modify ePBL to better account for situations where the mixing depends on pre-existing turbulence. But the other thing to recall is that this is a very good example of stratified shear mixing, and we already have another parameterization for that in MOM6: the Jackson–Hallberg scheme.
L
So the question is whether it can capture what's simulated in the large-eddy simulation, and very quickly, if I direct your attention to the diurnal composites, you can see the answer is quite clearly yes — it's a significant improvement over OM4, which I've now shown on the right.
L
Basically, we are definitely getting that rapid shoaling of the boundary layer during the day, and we're getting a pretty good approximation of what the LES does at night. I will point out that there's a very rapid downward propagation of the temperature flux in the evening, which is basically related to neglecting the time tendency of TKE in the Jackson scheme. That's something we're going to address in future work.
L
So what I've done is revise ePBL to relax this equilibrium assumption, which then lets the Jackson scheme take over for parameterizing the mixing in this deep-cycle environment. I've done this in a way that still skillfully predicts the mixing at higher latitudes, where this issue is not so prevalent. So now we have a full model that calibrates much better to the deep-cycle turbulent mixing in the tropics.
L
Now I want to finish up by implementing this in our global ocean model — the OM4 configuration — and asking how the improved parameterization helps the model.
L
This particular configuration is a quarter-degree model, again forced with the JRA-55 reanalysis. I'm running it from 1999 to 2008 and presenting results averaged from 2001 to 2008. I've made sure these approximately converge to the climatology you would see if you ran all the way out to 2020; by keeping the runs shorter, I can run a lot more experiments.
L
I've done a lot of experiments to see which parameterizations affect the results and in what ways. The relevant model factors I'll discuss are the boundary-layer and shear-mixing schemes, as well as the background mixing. I'm not going to talk about the role of the restratification and submesoscale parameterizations — they don't turn out to be quite as important for what we're discussing, although they are relevant — and I'm not going to talk about resolution or vertical coordinates.
L
There are some concerns related to the hybrid coordinate and its performance in this region, so everything I'm going to present is actually in a z-star coordinate to avoid that. There are other things I won't cover, but they're worth keeping in mind going forward. Okay — the first result I want to present is what happens when we implement this improved mixing parameterization, and what I'm showing you here is three panels.
L
In the upper left is the result of the original OM4 model, shown as the bias from Argo — dT/dz of the model minus dT/dz of Argo — so where it's red we have too much stratification. Again, this is the signature of the steppy thermocline that I introduced at the beginning; you can see it in the Pacific and Atlantic basins, and there's also a shallow-thermocline bias in the Indian.
L
When I introduce the revisions I presented earlier, you can see we get some changes to the signal, but nothing too substantial. I've compared these directly in the bottom panel: it does map onto the original bias, and we are reducing that surface stratification a little, but only by maybe 10 or 20 percent. So after all that work to test the scheme, this was a bit of a frustrating result.
L
So we started to ask questions and test the sensitivity to some of the other model factors I mentioned. It turns out the most impactful setting was — if I were there in person I would let you guess, but I'll just give it to you — the background viscosity.
L
When I was looking through the configuration settings, I noticed that the background viscosity we use is 10⁻⁴ m²/s, which is probably a bit higher than it should be, and so I ran some tests where I reduced it to a few values.
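For orientation, these knobs correspond to background-mixing parameters in a MOM6 `MOM_input` file. The parameter names and comments below are assumptions based on standard MOM6 configurations, not a copy of the speaker's setup; the values reflect the settings discussed in the talk.

```
! Sketch of the background-mixing settings discussed (assumed names/values)
KV = 1.0E-05      ! [m2 s-1] background vertical viscosity
                  !   (OM4 value discussed in the talk: 1.0E-04)
KD = 1.2E-06      ! [m2 s-1] background diapycnal diffusivity
                  !   (an equatorial increase toward 1.0E-05 is tested
                  !    later in the talk)
```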
L
When I reduce it to 10⁻⁵, you can see we get a very strong change in the thermocline in all the basins — I'm particularly focusing on the response in the Pacific — and when I take the difference between panel C and panel B, you can see it maps very well onto the original bias. Basically, this eliminates everything we saw with the steppy thermocline. We do now have a shallow thermocline in the west.
L
I'll come back to that in a bit, but just to answer why this works — maybe it's not too surprising, but it's because of the effect it has on the EUC. The EUC stays a bit shallower, it stays a bit stronger, and it persists further as we go east. This gives us more shear as we approach the position where we had that steppy thermocline, which activates the shear-mixing parameterization and helps reduce the bias.
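The chain of reasoning — a shallower, stronger undercurrent implies larger vertical shear, which can trip a Richardson-number-based scheme — can be sketched with idealized Gaussian jets. These profiles are invented stand-ins, not EUC output from the model.

```python
import numpy as np

# Hedged sketch: a shallower/stronger EUC core raises the maximum vertical
# shear, which is what activates Ri-based shear-mixing schemes.
z = np.linspace(0.0, 300.0, 601)   # depth (m)

def euc_profile(z, umax, zcore, width=40.0):
    """Idealized Gaussian undercurrent jet (m/s)."""
    return umax * np.exp(-((z - zcore) / width) ** 2)

weak = euc_profile(z, umax=0.8, zcore=120.0)    # deeper, weaker core
strong = euc_profile(z, umax=1.1, zcore=90.0)   # shallower, stronger core

shear_weak = np.abs(np.gradient(weak, z)).max()      # max |du/dz|, s^-1
shear_strong = np.abs(np.gradient(strong, z)).max()
```

For a fixed jet width, the maximum shear scales with the core speed, so the stronger jet's shear exceeds the weaker one's in proportion to the speed ratio.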
L
The final thing I'm going to show: I spent a lot of time asking how we might remove, or at least improve, the now-shallow thermocline we would see if we went with the reduced viscosity, and the one answer I found that tended to work best was simply to increase the background diffusivity. We currently use a background diffusivity of about 1.2 × 10⁻⁶ m²/s — that's the result you see in the upper left — and I've increased it.
L
L
Sorry, I should have mentioned this before: the traces here are the position of the maximum dT/dz, with the solid line being the observation and the dash-dot line the model. You can see it comes down to about the right value when we change the diffusivity in this way. It helps a lot in the Pacific, for sure, and in the Indian; in the Atlantic it maybe gets a little worse.
L
So it's telling us that just increasing the diffusivity everywhere isn't the right answer, but at least it helps guide us to the amount of mixing that we're perhaps missing for one reason or another. Really quickly, I did also look at this in salinity. Going from the original model to the final model — these dT/dz maps are the ones I was walking you through the whole time — we see these improvements in all three basins.
L
L
It is nice that by reducing the viscosity we have less momentum being pushed down into the lower branch of the EUC. Here the original model is in red and the observation in black.
L
The currents were too strong below the peak of the EUC before; they're much better now with the reduced viscosity, but we do maintain a bit of a strong EUC, and it is perhaps a bit too persistent going east. So maybe going all the way down to 10⁻⁵ was a bit too much. But anyway, just to wrap up everything:
L
we basically used the LES to come up with a slight modification to the OM4 mixing so that it is able to accurately capture the tropical mixing processes and their characteristics.
L
And then, when we implemented this in the global model, we found that the strongest levers ended up being the background viscosity and background diffusivity. So hopefully this at least helps clarify the role of the vertical mixing parameterizations and guide future improvement efforts. I want to really emphasize that I'm not proposing we actually use these alternative constant background mixing coefficients everywhere — this is a proxy, meant to guide better process representation going forward,
L
since we can see it doesn't give the same response in all basins, and it won't give the same responses when we go off the equator. The other important point is that we now need to evaluate the effect of these changes — especially the change in the diurnal cycle — in coupled models, to better understand how this might project onto coupled atmosphere–ocean processes like the MJO and so on. So thank you; at this point I'm happy to take questions if there's time, and I look forward to the discussion.
H
F
Hey, thanks Brandon for this awesome talk. I'm wondering about the fact that you get this big effect from changing the background diffusivity versus changing things in the shear scheme. Does that implicate that in the shear scheme we're missing a bunch of processes that would then be subsumed in the background mixing, and that maybe we should either extend the scheme or rely on the background diffusivity more to capture the processes we currently can't parameterize?
L
Yeah, thanks — that's a really great question, and something I've been thinking about. I did of course go to our shear scheme and try various parameters —
L
the critical Richardson number and so on — to see if I could get a better representation of the mixing that way, and it turned out that that didn't seem to be the answer. So I think exactly what you're saying is right: we're somehow missing something relevant to the mixing. Whether that has to do with the high frequency of the processes — which I think is very complementary to what Deepak was just showing and saying —
L
if it turns out this is something we have to account for with the background mixing, maybe we can justify that in the short term, but hopefully we can come up with more process- and physics-based solutions for the long term, especially because you want this to be something that generalizes.
I
N
Yeah, I wanted to raise this: a long time ago I asked Eric Lucero what the turbulent Prandtl number should be, and he said, well, it's the last number you'll ever know. So I wanted to know if Deepak had any update on that. In particular, he did say it should be greater than one and less than 10 — that was off the top of his head — but you've set it to one, have you not?
N
So that's just within his range, maybe, but I guess there's no more progress on that, Deepak. But you know.
M
— from other vertical discretization settings that might help inform this kind of thinking?
L
M
L
For the division at GFDL: I think that my increase of the background diffusivity at the equator to 10⁻⁵ really helps in a very narrow band, from about minus three to three degrees. Once you get outside that band, I think it can potentially degrade the stratification simulated in the model, so we have to be very careful about how we might consider layering this into the existing parameterizations.
L
So no, I don't have any great evidence for why it should be 10⁻⁵. I think this just falls into the category of something that better matches the Argo observations, so it hints at something missing process-wise.
C
All right, thanks. Can you stop sharing so we can get —
A
F
All right, I'm assuming you can see my screen. Hi everyone, I'm Anna. We've heard a lot about mixing just now, and the main takeaway from this talk is going to be that mixing really matters — not only at one location on the equator (though I'm still going to focus on the equator) but also for its impacts on the large-scale system. I want to zoom out a little bit and start
F
by having a look at the tropical Pacific — the region indicated here. We've already heard a lot about it, and you all know the tropical Pacific is cold in the east and warm in the west. This is a snapshot of five-daily sea surface temperatures from the simulation I'll be looking at, which is mainly a tenth-of-a-degree POP run.
F
So it's a bit of a departure from MOM — I'll get back to MOM at the very end, and I'll also make another connection to Brandon's talk at the end. For now: we've been thinking about the mean, or about diurnal, short time scales, and I want to focus on a few different time scales throughout. The things you can see here are, obviously, the waves coming through —
F
there are the tropical instability waves passing from east to west, which will play a role, and we'll also look at the seasonal cycle of the system and how that impacts mixing and the quantities influenced by mixing. Okay, we don't need to spend a lot of time on this — you know the equatorial circulation.
F
There's this really strong Equatorial Undercurrent that induces strong shear — cold in the east, warm in the west — and then we have the SEC, again really increasing the shear in the water column, with eastward currents in the subsurface and westward currents at the surface. People usually look at the equatorial circulation and then draw this upwelling in the east, which connects to the picture of the circulation that we have. The way people draw upwelling is as a vertical velocity, obviously, but it crosses temperatures.
F
I've indicated the temperature gradient here, with colder temperatures in the subsurface and warmer at the surface, and so we're wondering: how does that happen? Again, part of the message is that mixing matters — I'll keep saying that. What we want to do is decompose this upwelling into two components, and one of them is
F
the adiabatic component, in which a parcel doesn't exchange any heat with its environment. This really just results from the upward slope of the Equatorial Undercurrent, which roughly follows the tilt of the thermocline: there's an eastward velocity component but also an upward velocity component, and this upwelling cannot contribute to any energy or heat uptake by a parcel.
F
But when people draw this arrow, they usually don't think about the adiabatic component — they think about the diabatic component, where heat is being taken up into the ocean. Deepak showed results from Ryan Holmes earlier, where you saw this really strong heat uptake on the equator that supplies the global circulation with buoyancy. So this is really the important reason to understand it, and we're trying to understand the time scales on which this heat uptake by the upwelling varies, and which time scales play a role.
F
I have an equation to diagnose the diabatic upwelling, which basically consists of projecting the motion of a parcel onto the direction of the temperature gradient. You also have to take into account the motion of the isotherms — that's what's shown here on the left — because if the isotherms move, then you can move towards the gradient as much as you want and it's still escaping from you.
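The projection-plus-isotherm-motion idea can be sketched in a few lines. This is a hedged, zero-dimensional illustration of the diagnostic described above — the diabatic (cross-isotherm) velocity is the material temperature change divided by the gradient magnitude — with invented numbers, not the simulation's fields or the speaker's exact notation.

```python
# Cross-isotherm ("diabatic") velocity sketch:
#   w_dia = (dT/dt + w * dT/dz) / |dT/dz|
# Adiabatic advection cancels against the isotherm motion it induces,
# so only diabatic warming moves a parcel across isotherms.
dTdz = 0.05    # stratification (K/m), stands in for |grad T|
w = 1e-5       # vertical velocity of the parcel (m/s)
Q = 2e-7       # diabatic warming rate (K/s), e.g. mixing-flux divergence

# Temperature tendency seen at a fixed point: advection plus diabatic warming
dTdt = -w * dTdz + Q
# Projected cross-isotherm velocity: the adiabatic part cancels exactly
w_dia = (dTdt + w * dTdz) / dTdz   # equals Q / dTdz
```

With these numbers the full vertical velocity is 10⁻⁵ m/s, but only 4 × 10⁻⁶ m/s of it is diabatic — the rest is the parcel sliding along (moving) isotherms.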
F
This is one way to diagnose, from the output we have, a diabatic upwelling that doesn't know about any parameterizations or physical processes — it's a standalone diagnostic. But we can also connect this equation to the heat budget equation, which is really nice because it lets us attribute the physical processes behind this heat uptake, behind this diabatic upwelling. One of them is the vertical divergence of solar penetration, and I
F
really liked that Deepak already said it: it's the vertical divergence of the heat flux that matters, because if less comes in than goes out, then you have a change of heat in the parcel. So it's always the divergence — the divergence of the solar penetration, and of the vertical mixing. For completeness, horizontal mixing obviously occurs in the heat budget,
F
but I'm not showing any results for it because it's really small in this tenth-of-a-degree simulation. And then there's a covariance term, which arises because we have five-daily averaged outputs. There is some covariance that happens — it might be a front passing through in those five days, with changing temperature gradients changing the dynamics of the system — and that will be captured in the covariance term.
F
F
So what does this diabatic upwelling look like in the long term? Again, you might recognize Ryan Holmes's heat uptake into the ocean. I'm showing you a very narrow region along the equator at 22 degrees Celsius, which for me represents somewhere in the thermocline. If you choose 21 degrees Celsius instead,
F
the picture looks very similar. Something I was interested in — related, because we want to understand the spatial and temporal variability of this system — is what it looks like at different locations and in depth within the water column. So now look at cross sections along the lines drawn here at 155 W, 125 W and 110 W; these are chosen because there are TAO moorings there, so hopefully at some point we'll be able to do some observational comparison.
F
You see that this diabatic upwelling — the line here is the 22 degrees Celsius isotherm — happens strongly localized on the equator, and it has this sort of V-shape. The region we care about for diabatic upwelling is in and around the thermocline, because that's where it matters.
F
One more thing to point out: when we are in the thermocline, we really can think of this as upwelling. But remember, I've been projecting the motion onto the gradient of the temperature, which means we're not always strictly talking about vertical motion — especially very near the surface, because there we are actually in the mixed layer and the isotherms become vertical.
F
I promised that we can decompose this. The pictures before were calculated according to the left side of my equation — the projection of the motion onto the gradient, with the movement of the isotherms — and now I can decompose, according to the heat budget, what this diabatic upwelling is due to. You can see a very strong signal, especially in the thermocline; the signal I'm most interested in is driven by the vertical mixing component, the divergence of the vertical mixing-induced flux. Then, close to the surface,
F
we get the solar penetration. Basically, because you have more heat coming in than going out — because the solar irradiance decays with depth — you get this warming of the parcels, which leads to diabatic upwelling, but only really close to the surface, where it's really more of an off-equatorial than a
F
vertical motion. So this is what it looks like in the mean. We are also interested in looking at how this varies on different time scales. I'm not going to talk about ENSO — there's a paper, I think from two years ago, about how it varies with that. So this is how it varies with the seasonal cycle: again I've chosen a measure of the thermocline, and I'm showing you the diabatic upwelling in a map. You can see there's some variability here: it looks like, for example, in April we have very little upwelling happening; in July
F
we have a whole lot. Which indicates that this heat uptake — which is so important, again, for providing our circulation with buoyancy — is happening mostly in these months and not so much in April. And we can now look again at the water column: where does this happen, and what, again, are the physical processes that make it happen on a seasonal cycle? Is it the same balance that we also see in the mean?
F
So this is the diabatic upwelling with temperature. Also, I forgot to mention this: I look in temperature space, because I look at a transformation of temperature, so it doesn't really make a whole lot of sense to look at 50 meters — at 50 meters it might be at different temperatures; the isotherms move up and down across the section, and we wouldn't get a clear picture of where the action is happening.
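The point about analyzing in temperature space can be sketched as follows (entirely synthetic profiles, and the binning here is an illustration, not the talk's transformation framework): when isotherms heave up and down, averaging a co-located quantity in temperature bins follows the water-mass feature, whereas a fixed depth would sample different temperatures at different times.

```python
import numpy as np

# Synthetic water column: isotherm depths heave seasonally, so a fixed depth
# samples different temperatures, while a temperature bin tracks the feature.
z = np.linspace(0, -200, 101)                       # depth (m)
days = np.arange(360)
heave = 20.0 * np.sin(2 * np.pi * days / 360.0)     # seasonal isotherm heaving (m)
# Temperature: warm at the surface, decaying with depth, displaced by the heave.
T = 20.0 + 8.0 * np.exp((z[None, :] + heave[:, None]) / 50.0)   # shape (day, depth)

# Some co-located, surface-intensified quantity to average (e.g. a warming rate).
Q = np.exp(z[None, :] / 30.0) * np.ones_like(T)

# Average Q in temperature bins instead of depth bins.
bins = np.arange(20.0, 28.5, 0.5)
idx = np.digitize(T.ravel(), bins)
Q_in_T_space = np.array([Q.ravel()[idx == i].mean() if np.any(idx == i) else np.nan
                         for i in range(1, len(bins))])

valid = ~np.isnan(Q_in_T_space)
# Warmer bins sit shallower on average, so the surface-intensified Q is larger there.
print(Q_in_T_space[valid])
```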
F
So I'm showing temperature space on the y-axis, and this is the seasonal cycle on the x-axis. If you just sort of summarize these three different locations, you can see the pronounced seasonal cycle of this diabatic upwelling, and there are two maxima — I don't know if you can see that on this screen; maybe on your own screen you can. There's a maximum early in the year that's higher in the water column. Another thing to mention: this hatching indicates wherever an isotherm is present less than 50 percent of the time.
F
That's a very steep function — it doesn't really matter whether I choose 50, 70, or 80. It just means that where the isotherm has been present, you're, you know, near the surface: the closer you get to the hatching, the nearer you are to the surface. So you have this maximum of diabatic upwelling here near the surface, and then you have a sort of — what I call an internal maximum — later in the year. And what I've
F
also overlaid on this picture is some stippling here and some hatching — patches and stripes. The stippling indicates where solar penetration dominates the diabatic upwelling, which again is typically associated with more of an off-equatorial motion, and the dashes are where vertical mixing dominates the diabatic upwelling. You can see that all these maxima later in the season are really driven by the vertical mixing, and that can be explained by the shear — I'm showing you here the vertical shear in temperature space.
F
So you can see that these maxima of mixing coincide with maxima in the shear. Again, this really is how shear-driven mixing drives this diabatic upwelling — and this is, you know, the topic of this talk, as motivation for why we really want to be getting this right, because it's an important mechanism that is connected to the large-scale circulation.
F
That's not only important for the turbulence at this one location. I'm showing you here the Hovmöller of temperature, basically to show you that the thermocline is roughly steady and that we're looking at temperatures between 22 and 20 degrees.
F
Now, if we move to sub-seasonal, shorter time scales: you've seen these tropical instability waves, and they've also been mentioned before. Here I'm showing you a Hovmöller of a year that I have chosen as representative — you can find similar things in other years — of surface temperature. You can see sort of these streaks here, which coincide with the passage of tropical instability waves, also indicated by these black dots, which are tropical instability wave kinetic energy.
F
So, when these dots are further to the right, you can see the coincidence with these tropical instability waves. So we have them — the question is whether they do something to the diabatic upwelling. And if you look at the diabatic upwelling across an isotherm in the thermocline, you see two things. You see the seasonal modulation that we've looked at before — very little diabatic upwelling in the earlier part of the year, more
F
upwelling later in the year — and you also see these streaks. They're not as clear, but you can see there's some modulation: there must be some rectification of the tropical instability waves into the diabatic upwelling. And if we now again decompose this into vertical mixing, or a covariance term that captures covariance on these sub-five-day time scales — which you can associate with tropical instability waves, not because that's their time scale, but because they force a front to pass a region on very fast time scales — you can see
F
the signal is mostly subsumed in the vertical mixing term. So there must be an enhancement in the vertical mixing that is seen not just on the seasonal cycle but also from these tropical instability waves. And an interesting thing is to look off the equator, where we are now a little bit away from this really strongly shear-driven regime, so we don't have this constant background shear driving the system — basically constantly sitting at Richardson number 0.25.
F
You see again the tropical instability waves, because they extend to the north; you see the signal in the diabatic upwelling, and now you see some of the signal explained by the vertical mixing, but you see a bigger part of the signal in this covariance of the front.
F
This is an overview to show you that the variance of the diabatic upwelling — the magnitude by which it varies — really is the same, or similar, whether you look across the seasonal cycle or at tropical instability waves.
F
So a theory could be that maybe they rectify onto the variance that we see on these other time scales, and that's something that's really hard to tease out — so if anyone has any great ideas, please contact me. Yeah, so, just a short wrap-up: vertical mixing really drives the diabatic upwelling, and additionally we can really see the signal of the tropical instability waves.
F
All time scales matter. And I just want to quickly tie this back to the mixing talk that we heard before, because obviously these results depend on the mixing scheme I use, right — I'm using POP's output from its mixing scheme, and we know that that is biased. So if we look here at the seasonal cycle of surface temperature, published some ten years ago by now, and we look at the seasonal cycle of SST in POP in the simulation used, they're really similar.
F
But if you look at the balance of what drives the seasonal cycle — the surface heat flux compared to the turbulent cooling — you can see that that's very different, or different for some parts of the year. POP gets some characteristics right, but then some parts of the year it doesn't, and this is something we want to improve on, because we want to get the correct answer — the correct SST, the correct representation of the processes — because of the right physics that we put in. And we can do —
F
in this case, the surface heat flux heating of the system — and then this diagram looks very different. So we're really trying to figure out the best — ideally, you know, clean — ways to compare these things to each other, to optimize our mixing schemes.
N
And this is really going to be important if we're going to focus on the Pacific, to know what we should do — so I think that's really critical. But that last thing about the solar radiation surprises me, because I thought MOM had overdone solar radiation: it took this ocean-color stuff, and it took observations, and it really tried hard to do that — much better than it needed to be, I thought. Yet you're showing that what MOM was using is completely different, yeah.
N
So there's a disconnect there somewhere, because the blue line was based on observations, I believe, so that would be something to try and sort out. Now, the solar penetration does matter, and we might as well get it right — and we know how to get it right, if we know what right is. So if you could chase that down a bit, I think —
F
E
F
Yeah, so — yeah, I agree, there are things to be looked at.
G
I
Yes, I just wanted to say that the attenuation of the shortwave penetration is also dependent on the biogeochemistry — the biological production — so it acts as a damping. So that's another part of it that has to be included.
F
We have a rudimentary implementation of that. For all of this analysis that I've shown, we had a seasonal cycle of chlorophyll — and again, I probably should have put this function here. So basically the function sits in between, and then if you activate the chlorophyll, it varies the function, but the variance is very small compared to the big gap that we see. So basically — I don't know if you can see me — the gap is like this, POP sits here, and then the chlorophyll makes it
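The solar penetration being discussed can be sketched with a standard two-band exponential decay; note that the chlorophyll dependence of the long e-folding depth below is an illustrative assumption for this sketch, not the formula used in POP or MOM.

```python
import numpy as np

# Two-band shortwave penetration (Paulson & Simpson style): a fraction R decays
# with a short e-folding depth zeta1 (red/near-IR), the rest with a longer
# depth zeta2 (blue-green). More chlorophyll is assumed to shorten zeta2.
def shortwave_fraction(z, chl=0.1, R=0.58, zeta1=0.35):
    zeta2 = 23.0 / (1.0 + 2.0 * chl)   # assumed chlorophyll dependence (illustrative)
    return R * np.exp(-z / zeta1) + (1.0 - R) * np.exp(-z / zeta2)

z = np.linspace(0.0, 100.0, 201)       # depth below the surface (m)
clear = shortwave_fraction(z, chl=0.05)
bloom = shortwave_fraction(z, chl=1.0)

# With more chlorophyll, less shortwave reaches depth, so more heat is
# deposited near the surface -- the "damping" mentioned in the discussion.
print(clear[-1] > bloom[-1])
```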
F
vary between this region compared to this one. But we are currently running — or people are running, not me — this high-resolution ocean-forced simulation with BGC, and I think it would be very interesting to look at what the variance of the chlorophyll is there and how that impacts the solar penetration. So I agree that that's an important question.
A
G
We're gonna do a full 20-minute break — come back at 10:50, so 10 minutes before the hour.
C
So we're going to move from the thermocline up into the boundary layer, I guess, for the next couple of talks — or maybe we'll find out. So, first up is — let me say, we're about 10 minutes behind; let's try not to get any further behind, so we don't dig into lunch too much.
A
O
So
yeah.
O
Which is — it's a closure for vertical fluxes, especially in the boundary layer, that's based on a convective adjustment plus a prognostic turbulent kinetic energy, so we call it CATKE. I'm a research scientist at MIT and I work with the Climate Modeling Alliance.
O
So the Climate Modeling Alliance is an MIT–Caltech–JPL collaboration to develop a new climate model — a new, trainable climate model. What that means — I think the main innovation that we're trying to achieve — is to automate the calibration of Earth system models. Included in that automatic calibration process would be uncertainty quantification of the model predictions, and we'd also like to leverage a diverse range of observational data.
O
So those are the main two objectives. And I think for all the scientists working on CliMA — and it's especially my goal — I'm hoping that if we can achieve that second goal and develop an Earth system model whose calibration can be automated, which means we can tune many parameters at the same time rather than focusing on just a few as we do when hand-tuning, that can eventually lead to improved parameterizations of unresolved processes — accelerate the process of learning what we need to do.
O
So, some challenges of the project: to automate calibration requires a lot of forward evaluations of whatever model we're calibrating — that could be single-column models or global simulations — and that's very expensive.
O
And so this is Oceananigans. Basically, what we've done with Oceananigans, in order to focus on parameterizations, is use simple, established numerics: C-grid finite volume, partial-cell bathymetry, a simple vertical coordinate. But it's written in the Julia programming language, which means it's easy to develop parameterizations and plug them into Oceananigans, and in addition to that, it can run on a variety of architectures, including GPUs.
O
So — and that's crucial for performance — on the left I'm showing you a quarter-degree simulation of the global ocean, and with that setup we achieved three simulated years per day on a single V100. That's the kind of performance that you get if you use something like a thousand cores in other models.
O
So in terms of computational cost, it's very cheap — it's possibly up to 50 times cheaper — and that's a crucial aspect for doing this automated calibration, where we need to do large ensembles of global simulations.
O
In addition to that, Oceananigans can do large-eddy simulations, in either periodic boxes or closed boxes, and it can also do large-eddy simulations with complex bathymetry — that's something that was relatively recently developed. And it's efficient, because we use a pressure solver based on Fourier transforms; it has a non-hydrostatic mode, just like MITgcm, but it's much faster at non-hydrostatic simulation than MITgcm. The bottom right is just showing you an example of how you can —
O
you can set up and run a simulation at the Julia REPL, which is like a prompt interface — that's showing what that looks like.
O
Oceananigans is also fun to use. So, okay — that's all I want to say about that. So, CATKE. We have two boundary-layer, or microscale, turbulence schemes in Oceananigans. One is a simple Richardson-number scheme, sort of like Pacanowski and Philander, but the flagship, more sophisticated work is CATKE, and that's what I'm talking about today.
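For context, the simpler Richardson-number scheme mentioned here can be sketched in the spirit of Pacanowski and Philander (1981), where mixing coefficients collapse as the gradient Richardson number grows; the constants below are the commonly quoted textbook values, used here only as an illustration.

```python
# Richardson-number-dependent vertical mixing, Pacanowski & Philander style.
# nu0, alpha, n and the backgrounds are illustrative, not Oceananigans' values.
def pp_coefficients(Ri, nu0=0.01, alpha=5.0, n=2, nu_b=1e-4, kappa_b=1e-5):
    Ri = max(Ri, 0.0)                        # scheme applies to stable stratification
    nu_shear = nu0 / (1.0 + alpha * Ri) ** n # shear-driven viscosity, decays with Ri
    nu = nu_shear + nu_b                     # plus a small background viscosity
    kappa = nu_shear / (1.0 + alpha * Ri) + kappa_b  # tracers mix less at high Ri
    return nu, kappa

# Mixing shuts down as the gradient Richardson number grows.
print(pp_coefficients(0.0), pp_coefficients(1.0))
```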
O
The reason why we chose CATKE as basically our first parameterization to develop is that it was a relatively conservative choice. CATKE extends a widely used parameterization — I think the simple version was proposed by Blanke and Delecluse in 1993 — that's used by the NEMO ocean model, so it's standard, it's portable.
O
You could implement it in almost all GCMs today, I think, and we also believe that it has a wide range of applications. So it's not just a closure that's applicable for global or coarse simulations; we think it has the potential to be used also at very high resolutions, down to submesoscale, or even crossing over the gray zone into near-large-eddy-simulation resolutions.
O
So CATKE is sort of a standard model, and we improve on the state of the art in two ways. One is that we leverage the same calibration methods that we'd like to use for the global model to develop this parameterization in a single-column context: we use some sophisticated calibration methods to estimate CATKE's free parameters and discover its biases.
O
The second is that we do make some small, hopefully measured improvements to the closure formulation, which I'll try to describe. They're conservative improvements, but they prove to be crucial, especially for reducing the sensitivity of CATKE's predictions to resolution — so I'll get to that. Okay. So, just briefly, since earlier talks discussed calibration, I have one slide about how we calibrate CATKE.
O
So, the way that we calibrate CATKE: we're going to consider formulations of CATKE that have between 7 and 18 free parameters, so we use a calibration technique that can estimate all of those parameters at the same time. This technique is an ensemble-based method called Ensemble Kalman Inversion, in which we utilize between 100 and 1,000 individual single-column models, each with a different parameter set, in order to estimate the parameters.
O
So the way it works is that we have, say, 100 guesses for what the parameters might be, and then we run the single-column models forward, and then we use the discrepancy between the predictions of the single-column model and our source of truth — which in this case is our 18 large-eddy simulations — to update the parameter sets in all of those 100 models. And then we do it again.
O
We iteratively improve on the parameters, and what happens is that, through the Ensemble Kalman Inversion algorithm, the parameters converge to a consensus optimal parameter set. And I think one of the crucial aspects of this calibration technique is that it's what's called a-posteriori, or "online": when we're finding the optimal parameters, it involves running the single-column model forward, and so, because we use the predictions of the single-column model to find the optimal parameters,
O
we're incorporating things like numerical error — you know, the details of the numerical implementation of CATKE — in the calibration process.
O
The other crucial thing that this allows us to do, in addition to incorporating numerical errors, is to calibrate parts of the parameterization that we don't have data to directly estimate: we can use indirect data, as long as it's somewhat informative, to calibrate parts of the parameterization that are not directly observed.
O
So an example of that is CATKE's prognostic turbulent kinetic energy. You might say: well, you actually can calculate that in the LES — and that's true. But the thing is that we regard the turbulent kinetic energy that CATKE predicts as something that's approximately similar to the turbulent kinetic energy in the LES, without having to be exactly equal to it. Instead, what we want is for CATKE to accurately predict temperature profiles and velocity profiles. So you can see in the bottom —
O
the panel in the bottom left — that this calibrated version of CATKE gives an accurate prediction of the temperature profile and the velocity profiles from the LES code. It doesn't exactly predict the turbulent kinetic energy — it's a little bit off — and that's by design: it's because we're not using that information to estimate the parameters.
O
Okay. So that's how we do the calibration. Now I'm going to walk you through the development of CATKE by considering two different formulations of it. The first I call "minimalist". CATKE is an eddy-diffusivity model; it's fully local, which is helpful — that makes it efficient — and it has a prognostic turbulent kinetic energy variable, which I call capital E. The equation for that is in yellow at the bottom, and I'm showing, for example, how CATKE predicts the vertical turbulent flux.
O
CATKE uses a simple formulation for the mixing length — that's in blue — and it has a constant turbulent Prandtl number everywhere, and seven free parameters. So minimalist CATKE is a simple model. But when we calibrate it to the large-eddy simulations — and sorry, I should have mentioned: when we do the calibration, we run the single-column model forward using 20-minute time steps, and we evaluate it at two different resolutions, eight meters and four meters, which helps us find optimal parameters that are relatively insensitive to resolution —
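The eddy-diffusivity-plus-TKE idea can be sketched as kappa = C * ell * sqrt(e), with a mixing length limited by distance to the boundary and by stratification; the coefficients and the specific limiter below are placeholders for illustration, not the calibrated CATKE formulation.

```python
import numpy as np

# Sketch of a TKE-based diffusivity: kappa = C * ell * sqrt(e), with a mixing
# length capped by distance to the surface d and by stratification via
# ell ~ sqrt(e)/N. Coefficients are placeholders, not CATKE's calibrated values.
def diffusivity(e, d, N2, C_kappa=0.1, C_b=0.5):
    sqrt_e = np.sqrt(max(e, 0.0))
    ell_strat = C_b * sqrt_e / np.sqrt(N2) if N2 > 0 else np.inf
    ell = min(d, ell_strat)        # stable stratification shortens the mixing length
    return C_kappa * ell * sqrt_e

# Stronger stratification (larger N^2) suppresses mixing at the same TKE.
weak = diffusivity(e=1e-4, d=50.0, N2=1e-6)
strong = diffusivity(e=1e-4, d=50.0, N2=1e-4)
print(weak, strong)
```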
O
But when we calibrate minimalist CATKE, we find that its predictions are still sensitive to resolution, even though we're using two resolutions to calibrate it. So on the right —
O
we have 18 different LES cases; I'm showing you one particular one that has weak winds and strong cooling, and you see in this case three different single-column results — one with two-meter, one with four-meter, and one with eight-meter vertical resolution — and you can see that at higher resolutions it's deepening faster. So minimalist CATKE is simple, but its predictions are sensitive to resolution. In addition to that, it has no particular model for convection.
O
So when there's cooling — for example, in free convection — and there's no other source of mixing, it has a very unstable, unrealistically unstable buoyancy profile. So, to improve on minimalist CATKE, I have another formulation that we — I — call "favorite CATKE"; maybe just "CATKE" is a better name. This one has 18 free parameters. The way that we've improved on CATKE, we've done two things. One is that we've replaced the constant turbulent Prandtl number with a variable turbulent Prandtl number.
O
So now it's a stepwise function of the gradient Richardson number. And then the second improvement we've made is that we've added another contribution to the mixing length that is sort of like a convective adjustment scheme: if the buoyancy gradient is unstable, it cranks up the diffusivity, and it does so in a way that estimates this convective mixing length. I think it's nice to show this.
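The variable turbulent Prandtl number described here can be sketched as a step between a lower and a higher value across a critical gradient Richardson number. The 0.7–2.0 range echoes the calibrated values quoted later in the talk, but the transition shape and critical value below are assumptions for illustration.

```python
# Sketch of a stepwise turbulent Prandtl number: low in unstable or weakly
# stratified conditions, high at large gradient Richardson number. Ri_c and
# the ramp width are guesses, not CATKE's calibrated form.
def prandtl_number(Ri, Pr_lo=0.7, Pr_hi=2.0, Ri_c=0.25, width=0.1):
    if Ri <= Ri_c:
        return Pr_lo
    if Ri >= Ri_c + width:
        return Pr_hi
    # linear ramp across the transition
    return Pr_lo + (Pr_hi - Pr_lo) * (Ri - Ri_c) / width

# With kappa = nu / Pr, heat mixes relatively less than momentum once the
# water column is strongly stratified.
print(prandtl_number(0.0), prandtl_number(1.0))
```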
O
I can't show my mouse, but if you look at the yellow at the bottom left, you'll see the scaling for the convective mixing length, where Qb is the buoyancy flux at the surface. Some of you might recognize this as the boundary-layer depth scaling that was developed by Deardorff many years ago.
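The dimensional logic of that convective mixing length can be sketched directly: with TKE e (m²/s²) and surface buoyancy flux Qb (m²/s³), e^(3/2)/Qb has units of meters, and for convective scaling e ~ w*², with w* = (Qb·h)^(1/3), it recovers the boundary-layer depth h. The prefactor and sign convention here are illustrative assumptions.

```python
# Dimensional sketch of a Deardorff-style convective mixing length:
# ell_conv ~ e^(3/2) / Qb, where e is TKE and Qb the destabilizing surface
# buoyancy flux. C_conv is a free parameter (placeholder value).
def convective_mixing_length(e, Qb, C_conv=1.0):
    if Qb <= 0.0:          # no destabilizing surface buoyancy flux: inactive
        return 0.0
    return C_conv * e ** 1.5 / Qb

# With w* = (Qb * h)^(1/3) and e = w*^2, the scaling recovers h.
h = 100.0
Qb = 1e-7
w_star = (Qb * h) ** (1.0 / 3.0)
ell = convective_mixing_length(w_star ** 2, Qb)
print(ell)
```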
O
So we use the information in the turbulent kinetic energy and the buoyancy flux at the surface to estimate the boundary-layer depth, and then use that as a mixing length that acts during convection. All those embellishments add quite a few free parameters to minimalist CATKE — so now favorite CATKE has 18 free parameters — but we find that its predictions are much less sensitive to resolution.
O
So on the right I'm showing you three results from a case that has strong wind and no cooling. On the far right is the minimalist result, which you can see is sensitive to resolution — the three lines are different. In the middle I'm showing you a version of CATKE that has no convective adjustment model but does have this variable turbulent Prandtl number, and you can see that adding the variable turbulent Prandtl number improves things a little bit.
O
The results are still sensitive to resolution, but much less so — much better than minimalist CATKE. But then when we add this convective adjustment parameterization — and the interesting thing is that we haven't changed anything that's explicitly associated with a strong-wind case — it allows the model to represent convection more accurately, thereby freeing up other parts of the model, so there are fewer compensating errors,
O
I guess, is one way to put it. When we have favorite CATKE, that allows its results to be even less sensitive to resolution in a strong-wind case. So this is showing how minimalist CATKE has a lot of compensating errors, which means it can't produce predictions that are insensitive to resolution, whereas with favorite CATKE, by reducing those compensating errors, we end up with a model that's more accurate even in a case we didn't consider explicitly when we made the improvement. Okay.
O
So that's most of my talk. Here's the last slide: on the left, I'm showing you an implementation of CATKE in a three-dimensional simulation — an eddying channel simulation that has some winds and buoyancy forcing at the surface.
O
We had to change the formulation, and then that required us to recalibrate all of CATKE's free parameters, and they all change with the new formulation — but the results actually were quite similar, even though the parameters were different with the new numerics. So I think that was kind of a nice illustration of how we can use this automated calibration: if we have an automated calibration scheme, we can always go back and recalibrate when we obtain new information or change the numerics in the future.
O
L
O
The suite I had — it has had surface waves in it, but with a simple model with a constant Langmuir number. So what I'd like to do in the future is have more realistic surface waves, which will help motivate adding a surface-wave-state dependence to CATKE's predictions.
O
Also, maybe we implement it in — I actually meant to write "CVMix"; maybe GOTM too, probably not — but hopefully we can implement CATKE in CVMix. And of course we would like to go to the global context eventually and recalibrate the parameters, maybe adjust them slightly, and see what happens in that case — see if we need any more improvements, or whether it works well out of the box. Okay, thanks.
C
So when you're doing the calibration, what's in the objective function? Is it all state variables, or are you also using other diagnostics and things from the LES?
O
B
J
Tom, Greg — if you go to the penultimate slide, that's the one right before the end. Yeah, right — no, no, right there. In your convective piece, in yellow, you have N squared positive. Do you mean N squared negative?
J
Okay, so that helps clarify that confusion. The more speculative, or more, you know, discussion question is: the numbers you're getting from the learned parameters — the parameters you're getting, that you're learning — are you connecting them to some physical insight, or physics, at all? Are you allowing the scheme to give you numbers that are outside of what you might think is physically relevant? How do you get physics into this scheme, basically?
O
That's a really interesting question. So I think there's a fear that if you do the calibration blindly, you end up with unphysical stuff going on. I think that when that happens, it indicates there's a deficiency in the parameterization. So far, we do have to put bounds on the parameters when we do the calibration — we have to initialize our guesses; basically, we have a prior — but I try to make the prior quite wide.
O
I actually meant to mention that, out of the calibration, minimalist CATKE has a turbulent Prandtl number of about one, but favorite CATKE has a turbulent Prandtl number that varies between 0.7 and 2. So in a lot of respects, the parameters that come out seem to reflect what we think should be the case physically — but that's definitely something to watch.
N
Do you think that if you have a realistic large-eddy simulation that has a lot of variability in it, you could substitute that for many simple, steady-forcing kinds of cases — things like a diurnal cycle? — Yes, I think you could do that. So maybe fewer, but more realistic and complicated, LES would fill the parameter sweep. Maybe, I believe.
N
O
As we add more data, it actually makes the calibration much more efficient, because the parameter values become more certain. I think, with variable — okay, time-varying — LES, that will help a lot too.
M
When we ran a variety of 1D models in diagnostic mode, we often found things like KPP being both the deepest and the shallowest in different numerical implementations — so the numerics matter, yeah. So that's critical: it's not just the equations on paper, it's the equations the computer sees.
M
Would it be possible to predict a kind of tracer turbulent kinetic energy? I mean, because when you're going to compare to observations and things, having some of the diagnostics that are similar to what's actually being measured — like TKE or epsilon — is actually a really helpful step in connecting to the world outside of the computer. So now, we've talked about inside the computer; outside the computer, it seems like you could carry one along which is not CATKE's, but is CATKE's vision of it.
E
L
O
Even though I'm saying it's hidden — yeah, so I think that could definitely be an area of future research: how can we improve CATKE so that its predictions are still accurate, but it also accurately predicts TKE, or somehow produce a mapping between the CATKE TKE and the physical TKE. Yeah, I think that would be useful.
O
N
Yeah, that's it — okay. Okay, thank you, Frank, for driving here, because I don't have a license anymore. So this is basically a development of the ocean boundary layer work that I've been doing with Gustavo and Alper, and I would like to point out this is the same problem you've been hearing about: given an ocean model, and given the forcing, and given the ocean state, what do you do with it? Well, the ocean model we're going to use here is the two-thirds degree.
N
It's a Boussinesq MOM6, but we also have the wave-averaged Boussinesq variety that was developed by Brandon Reichl, I believe. The only thing I noticed turned off was the Stokes–Coriolis term, and so that's off — I think that's because you're not interacting with the waves; Brandon, you can correct me.
N
So of course, what we want to do is compute a boundary-layer depth, H — we've heard about that and how important it can be in some schemes — and the turbulent vertical fluxes of buoyancy and momentum, and then you have the vertical mixing tendencies. And that's all standard stock; that's just definitions and notation.
N
Yeah, so that's what we're going to do, because at the bottom, I'm going to restrict myself to the boundary layer, in a normalized coordinate between zero and one — depth over H is sigma — and this is going to be consistent with some LES results, okay, as well as CVMix; I'm not ready to go and redo CVMix.
N
So, therefore, the vertical fluxes have to be written with a diffusivity and a viscosity. What I've done, though, is turn the coordinate system into an along-shear coordinate and a cross-shear coordinate.
N
So it rotates with depth if it needs to — it stays with the shear, okay — and in that way you can see that, like the buoyancy flux, you add this extra non-local term, and you add it to the U equation; and then, in the cross-stream direction, there's a momentum flux locally — that's this gamma-V — but because there's no gradient in that direction, there's no down-gradient term.
N
So it's this cross-shear term that I'm going to play with and vary, and that's what I talked about a year ago, okay — and since that year, I've taken a lot of time off and moved to other things. Okay, for now, I'm going to keep the non-local along-shear down-gradient term zero, because I'm just not sure what to do with it yet, and it's techniques like we just saw that might be able to eke out what that should be — because doing what you just saw in my head
N
took me two years to do; I'm a lot slower than these machine-learning things. So, okay, then you write it as a diffusivity and viscosity — I'm just putting momentum and scalar together so I can cut down the number of equations — and there's a turbulent velocity scale, the boundary-layer depth, and the shape function; you heard about the shape function a bit yesterday. Now, the CVMix assumption is that these diffusivities are the same.
N
That's an assumption, as is the actual functional form, because there's no real reason they should necessarily be written as diffusivities, but I'm staying in the CVMix world for a while, so we're going to have that one. The big thing I'm going to play with is this turbulent velocity scale. That's all standard stuff — it's just a velocity times an energy times a length; you've seen that just in the last talk — and these are the two similarity parameter functions.
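The construction being described — diffusivity as boundary-layer depth times a turbulent velocity scale times a shape function — can be sketched as below. This is a minimal illustration assuming the standard KPP cubic shape function G(sigma) = sigma(1 − sigma)² and a constant velocity scale; the real CVMix code computes w_s from the similarity functions discussed here.

```python
import numpy as np

def kpp_diffusivity(sigma, h, w_s):
    """KPP-style boundary-layer diffusivity K(sigma) = h * w_s * G(sigma),
    with the standard cubic shape function G(sigma) = sigma * (1 - sigma)**2."""
    G = sigma * (1.0 - sigma) ** 2   # cubic shape function, zero at both ends
    return h * w_s * G

# Example: 40 m deep boundary layer, constant velocity scale 0.01 m/s
sigma = np.linspace(0.0, 1.0, 5)
K = kpp_diffusivity(sigma, h=40.0, w_s=0.01)
# K vanishes at the surface (sigma=0) and at the base (sigma=1)
```

The shape of K, and whether the shape function really should be this cubic, is exactly one of the knobs the speaker varies against LES.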
N
But what you have to do when you have a boundary layer problem with another forcing — i.e., Stokes — is add another similarity function, and that's this chi function of a wave variable, to do with the effect of the waves on the turbulence. Okay, and those are the things I'm going to be changing, as well as this shape function. In addition, the limit on this velocity scale can vary, and I'll change that as well; basically this wave similarity function is less than one.
N
Okay, that's the only difference, and it's bigger then. As far as the boundary layer depth is concerned, I can change the shear in the Richardson number calculation — the resolved shear — and I'm also changing the unresolved, fictitious shear due to turbulence that we don't know anything about. Now, where that came from: we know from convection that convection actually has no mean velocity.
N
So this is zero, so you need something, and when you work it out it has to be proportional to the buoyancy velocity scale — the convective velocity scale, which is minus the surface flux times h, to the one-third — for pure convection. That's all you really know.
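A minimal sketch of the convective velocity scale just mentioned, assuming the usual definition w* = (−B₀h)^(1/3) with a destabilizing (negative) surface buoyancy flux B₀; the sign convention and the zero return for stable conditions are illustrative assumptions.

```python
def convective_velocity_scale(surface_buoyancy_flux, h):
    """w* = (-B0 * h)**(1/3) for a destabilizing (negative) surface
    buoyancy flux B0 [m^2/s^3] and boundary-layer depth h [m].
    Returns 0 under stable (non-negative flux) conditions."""
    forcing = -surface_buoyancy_flux * h
    return forcing ** (1.0 / 3.0) if forcing > 0 else 0.0

# Typical nighttime cooling: B0 = -5e-8 m^2/s^3, h = 50 m
w_star = convective_velocity_scale(-5e-8, 50.0)   # about 0.014 m/s
```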
N
Okay, so here are the cases. I take a MOM6 JRA run, and I change the equation set. So for MOM6 — and this is the two-thirds-degree workhorse, I learned yesterday — these are the parameters that are sort of the defaults; a lot of what you've seen today has run this way. There are no waves at all, so that's one.
N
This is limited to 0.1, or the surface layer only. It's the cubic that was mentioned yesterday — fundamentally a cubic. VT squared is proportional to the turbulent velocity scale, which then defaults to w* only in convection; the Eulerian velocity is used in the Delta B term; this non-local velocity in the cross-stream direction is zero; and there's no Stokes drift.
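The boundary-layer-depth piece being discussed — a bulk Richardson number with resolved shear plus the unresolved VT² term in the denominator — can be sketched as follows. The profile values and the critical Richardson number of 0.3 are illustrative assumptions, not the defaults of any particular CVMix configuration.

```python
import numpy as np

def bulk_richardson(d, delta_b, delta_u2, vt2):
    """KPP-style bulk Richardson number at depth d:
    Ri_b = (delta_b * d) / (|delta U|^2 + Vt^2),
    where Vt^2 is the unresolved ('fictitious') shear term."""
    return delta_b * d / (delta_u2 + vt2)

def boundary_layer_depth(depths, delta_b, delta_u2, vt2, ri_crit=0.3):
    """First depth where Ri_b exceeds the critical value."""
    for d, db, du2, v2 in zip(depths, delta_b, delta_u2, vt2):
        if bulk_richardson(d, db, du2, v2) > ri_crit:
            return d
    return depths[-1]

# Illustrative column: stratification strengthening with depth
depths = np.array([5.0, 15.0, 25.0, 35.0, 45.0])
delta_b = np.array([1e-5, 2e-5, 5e-5, 2e-4, 5e-4])  # buoyancy difference [m/s^2]
delta_u2 = np.full(5, 1e-3)                         # resolved shear^2 [m^2/s^2]
vt2 = np.full(5, 5e-4)                              # unresolved shear^2 [m^2/s^2]
h = boundary_layer_depth(depths, delta_b, delta_u2, vt2)   # -> 25.0 m
```

Swapping the Eulerian shear for the Lagrangian shear in `delta_u2`, or changing how `vt2` scales, are exactly the modifications described later in the talk.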
N
So that's the control. Then I added the wave-averaged Boussinesq equation set that Brandon sent over, and then I started to play with that. As I said, the first thing I did was play with this, but then, to make this consistent with the LES, it turns out you've got to change everything — if you change one thing, you're not being consistent. So I did that. What the LES tell me is what this function should be, so I use that. Okay, I don't have a pointer here. All right, can I get a —
N
If I go and change this the way the LES tell me, I don't need to restrict it — it's a better fit if I don't. This is my equivalent to the last talk; again, it took two years to do. So then I take that to one, and that's what the LES teams do with this G function. And we saw some evidence yesterday that the cubic doesn't work very well — well, that's absolutely true. We use a composite that's much less, and it's based on the LES; for oceans with waves it's the same function.
N
VT squared I'll come back to later. Then, what's been done before — what's called the legacy — is that in the Delta B term, which is down here in the Richardson number, you can make it the Lagrangian shear instead of just the Eulerian shear. And this non-local term is proportional to the component of the wind stress in the cross-stream direction, and that works.
N
That's what the LES tells me to do. And then this guy — instead of changing it and making it proportional to the turbulent velocity scale, I just do what I know: it's w*. That's the only thing I know; the rest, out to w_s, was an extrapolation and a hope and a prayer. Okay, so that's the other thing I change. So if I do all those things, I have a case with WAVEWATCH III on the two-thirds grid.
N
It's called the wavy case, and the other case I'm going to talk about and compare to is one where I just turn the waves off, which effectively turns off the wave-averaged Boussinesq terms and makes this equal to one. But I still keep the same G, the same everything else, so it is just the effect of the waves in this system, and that's what I'll talk about. So the first metric I use is the winter halocline at Ocean Station Papa, because I know what it is.
N
It should be between 120 and 180 meters — that's the thermocline — and this is an example of a whole JRA cycle, and you can see that that, indeed, is where the halocline stays. If I just put in that one deepening enhancement, I can take this to 350 meters in six years. Okay, so this is the thing, and without waves it's a little shallower.
N
Okay, it shows up very clearly, and it's a place I've been to — I like it, I know what it should be — so it deepens, and that's what the waves do, and that makes some sense. And if I look at how the deepening goes and what it does in a more global sense, this is what it does to sea level height. And — I'm already on, what, slide six? Yeah, I'm running out of time for this section.
N
Here you can see what it does: it takes sea-level volume from the high latitudes to the equator, in both seasons, March and September, in all basins. These are the Atlantic, the Pacific, "Australia" — that's just south of Australia in the Southern Ocean — and the Indian Ocean.
N
So that's what it does. Just putting in the non-local term used to change things in the east–west too, but when you put everything else together, I don't get that anymore. If I look at wave effects, which we've seen, as just a mixed layer depth — this is the wavy case minus the calm case — you can see how much it deepens in both March and September, winter–winter, summer–summer, and equator–equator; it does deepen the equator a little bit.
N
I'll leave the equator, because we've seen lots of it, but it's important. You can see what it does in the Labrador Sea: that's an 80-meter-deeper mixed layer in March relative to no waves. And you can see that there are changes at high latitudes; these increases here correspond — if you remember back to Justin Small's talk — to the deep mixing bands he talked about.
N
These are the things that get enhanced, both in the Pacific, south of Australia, and in the Indian Ocean. So my hope was to start looking at these more carefully, and instead of the equator I'll move to the Southern Ocean. This came from the paper that Justin referenced, by Alice DuVivier; she actually did a lot of Argo processing from the years 2004 to 2017, and that's what the JRA run is going through.
N
So I do the averages, in the wavy case, of mixed layer depth as a function of latitude in the Indian Ocean at 90 East, and again at 158 East, which is in the far-west Pacific, and what you can see is, in fact, that relative to the calm case the wavy case is deeper. But now what we have — these are Argo floats — and what you find is that, in fact, sometimes the calm case is better: just because you're deeper doesn't mean you're better all the time. Okay, that depends on the whole ocean state.
N
These deep mixing bands depend on the wind — the wind curl, actually, not the wind speed — so everything's got to work right. But you can see that at 158 East the calm case is really shallow, so the deepening helps. So these effects, whether they're positive or negative, depend on what the general circulation is doing.
N
What was found here is that these models we were using always had this fresh bias in the Indian Ocean. Justin thought it was not enough salt coming from the Indian Ocean, and he said part of that passes through as well. So that complicates things: instead of just looking at mixed layer depth, you really have to look at the profiles and the whole biases and lots of things.
N
So I look at the average annual cycle of salinity at 90 East, 40 degrees South — so now I'm going to a point, but I'm going to an annual cycle, monthly; I could make some sense out of that. There is this very strange thing in the Southern Ocean of a deep salinity maximum, and that's what stops the mixing in the summer.
N
Okay, and it's only when you start breaking through that — in September, or depending on when you break through — that you get the deep mixing bands. The model actually produces this darn thing, believe it or not; it seasonally reforms, and it reforms all across the Indian Ocean, south of Australia, and into the Pacific. And this is what the wavy case does. There is the Argo data at the mixed layer depth — it's too crowded if I put all the lines on.
N
So what the waves have done — you've got a fresh bias, but the waves at least have been able to help mix up a little and sweeten it a bit. Okay, actually it makes it saltier; I guess that sours it. So that's why it's useful to look at the whole column, and you can do it on a monthly basis.
N
The solar radiation was never big enough to make it go stable on certain days, but JRA doesn't have the clouds, so the solar radiation comes down and makes it stable every day. That means you've got a diurnal cycle that's complicated, but it's not comparable to the observations and everything else we've done with the LES at Ocean Station Papa. There's this case — it's beautiful in the data — and I was there, and it wasn't as hot as JRA says.
N
JRA says it's so hot that at night, with a 20-meter-per-second wind, you've still got enough sensible heat flux to heat the ocean. Well, that doesn't happen out there. So JRA isn't good enough to go to the daily scale, and I was trying to do that. However, I do have some Stokes drift from this Southern Ocean site — that's 47 South, just south of Tasmania — and we actually created, with Leonel Romero, a very nice WAVEWATCH III run forced with the SOFS winds themselves.
N
Okay, they're sort of five-minute, fifteen-minute winds, and this is the Stokes drift profile he got. This is the wind direction, then this is the surface Stokes drift, and this is 10 meters, 20 meters, 30 meters down. And what happened here is there was swell coming from the southwest — that's the southwest swell — and the wind started to blow, and blew at right angles to the swell, and started to drag the surface currents along with it, and in time these things just fan out very nicely.
N
Okay — and you can see here that the surface Stokes drift magnitude is about ten times u*; that corresponds to a Langmuir number to the minus two of ten — a Langmuir number a little bit less than, or a little bit more than, the equilibrium value; the waves aren't quite built up enough. Okay, this La to the minus two in equilibrium is about 11. But you can see an order-of-magnitude drop by 10 meters — that's the shear!
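The ratio being quoted — surface Stokes drift over friction velocity — is the inverse square of the turbulent Langmuir number, La_t = sqrt(u*/u_s(0)). A small sketch with illustrative numbers (the values below are made up to reproduce the ratio of ten, not the SOFS data):

```python
import math

def turbulent_langmuir_number(u_star, u_stokes_surface):
    """La_t = sqrt(u* / u_s(0)); La_t**-2 = u_s(0)/u* is the ratio
    quoted in the talk (~10 here, ~11 at wind-wave equilibrium)."""
    return math.sqrt(u_star / u_stokes_surface)

u_star = 0.01   # friction velocity in water [m/s], illustrative
u_s0 = 0.10     # surface Stokes drift [m/s], illustrative
la = turbulent_langmuir_number(u_star, u_s0)
la_inv_sq = la ** -2   # -> 10.0
```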
N
If I look at what WAVEWATCH does — the WAVEWATCH that we've got now, that Brandon put in — it breaks it into three components: you get the Stokes velocity in three components at the surface, and then you have an exponential decay of each one, and that gives you the Stokes drift. So I've taken those numbers, and I can now use them anywhere in the code, because I want to play with them, and that's what you get. And that surprised me, because you can see how well — there's the swell; it's there — now it's WAVEWATCH, WAVEWATCH with sort of big scale.
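The three-component reconstruction described here — a surface Stokes velocity per partition, each decaying exponentially with depth — can be sketched as below. The component amplitudes and decay scales are made-up illustrative values, not output from the WAVEWATCH run discussed; for a monochromatic deep-water wave the decay scale would be 1/(2k).

```python
import numpy as np

def stokes_profile(z, surface_components, decay_scales):
    """Stokes drift from a few partitions: each component has a surface
    value u_i and decays exponentially with depth,
    u_s(z) = sum_i u_i * exp(z / d_i), with z <= 0 and decay scale d_i."""
    z = np.asarray(z)[:, None]
    u = np.asarray(surface_components)[None, :]
    d = np.asarray(decay_scales)[None, :]
    return (u * np.exp(z / d)).sum(axis=1)

# Illustrative: a wind sea plus two swell partitions
z = np.array([0.0, -10.0, -30.0])
u_s = stokes_profile(z, [0.08, 0.03, 0.02], [2.0, 8.0, 20.0])
# surface value is the sum of the three components (0.13 m/s here)
```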
N
What was surprising is that the Langmuir number is now too big — the La to the minus two is only six — and that's surprising; I thought that's the thing they could get. Unless it's using the three-hourly forcing of the model versus the 15- or 10-minute forcing of SOFS, I just don't know why. That's backwards to me: I thought the Langmuir number would be right and the direction would be wrong, but that's what we got. So basically, I want to make conclusions.
N
Okay, and it gives you these much bigger turbulent velocity scales, in both momentum and scalars, but different, and we've got that. The Stokes drift with three components captures the depth decay, but for some reason the Langmuir number is about 30 percent high, and I —
N
— don't know why. Waves deepen the mixed layer and move water volume to low latitudes, and there are all sorts of other little effects here and there, but I just won't go through them with you, because my guess is that it's not that different from the original control experiment: many of the things we had in there largely compensated for the missing waves, because we compared to data, and the data had waves. You didn't have wind without waves in the ocean, that I've ever seen. So —
N
In other words, that's why we had the bigger G(sigma), and it kind of worked. And that might also be why the VT squared — making it proportional to the velocity scale instead of just w*, which is all you knew — might have been part of that compensation of errors, which we just heard about, and that's a big thing. So I think that's why there are not so many differences, though there's still a little bit relative to the control: it still tends to be a deeper mixed layer, adding waves. So just adding a single wave dependency —
N
— you could double-count in some instances if you're not consistent all the way through, and you're going to have all sorts of compensating things: change the waves, change the equation set, change this — so I did everything. But wave impacts, positive or negative, depend on the general circulation and forcing biases, and that's a problem, and that's why it's really good to see these people doing these coupled runs now, because there are forcing biases and circulation biases all over the place. And okay, the more observations we have, from Deep pack or anybody —
N
— so the parameters can't vary much; that would be really useful, and the sooner we find that out, the better. But we need cheaper waves: the two-thirds MOM6 with legacy waves run that was talked about yesterday — if you use WAVEWATCH, it more than doubles the time. So that's probably not what we need for this problem, though the wave people might need something else. It was pretty expensive anyway; it took a long time.
N
Right, something cheaper — I don't know how, and I don't think we need it. And whether just more bands would give you that increase, change the Langmuir number for the better, or whether it's the frequency of the forcing, I just don't know. I was surprised; I thought they would get that right, because you just integrate the whole spectrum and it gives you those three terms. So that's the one thing I thought would be right, and in the other cases I have, it seems to be the same sign error.
N
O
Over to Greg, and then back to Taylor. So I guess — I'm not sure if you said it or not, but I think you identified at the end what I think is the right question, which is: it's not whether waves matter for mixing, but whether the independent variability of the surface wave field means that it needs to be explicitly modeled to make predictions about mixing in the boundary layer, right. So, like that point: missing the waves was compensated for in our original KPP.
N
What is your conclusion on that? — Well, the biggest problem: you have a global model, so you fix it in one place, and if you're doing it with compensating errors, that compensation may vary from place to place and, in particular, from time to time in a climate model. So the more physics you have, and the less compensation, the better, but basically —
O
N
— you're always going to be in that regime of trying to find it, but less is better. Yeah — I mean, look at what they were doing to get that Labrador Sea straightened out. You'd really like to have some things that are fixed and well in hand, but that's —
M
Yeah, yeah — the non-local aspect of the waves is totally missing if you're just parameterizing it as a wind function, so that's not available in that tuning space. But the question I have goes back to the different Stokes profiles: the surface Stokes value is really unstable, because it depends on how you're doing the tail off to infinity and on the way the subgrid scheme in the wave model works, and most of the closures actually don't use the traditional Langmuir number anymore.
M
N
But the first thing is, you'll notice that you have the same drop: the 10-meter value is about a tenth of the surface, just as in the other WAVEWATCH — see, that doesn't reach one either. So it's a factor-of-10 drop in 10 meters, regardless of where you started, so that's the shear. What I do, okay, is I need the shear, because I want to do the correlation with the turbulent stress over the surface layer, and that is going to be right at the surface.
N
The surface layer — now, the nice thing about the surface layer is that the momentum flux is simply G over sigma times u* squared, because the similarity theory works perfectly. You know, you have the profiles and you can actually do that integral. So it's to do that integral that I need it — that's how I use it; I don't use a Langmuir number for anything, but —
M
I guess my point is that the integral is not the way the model predicts that, and the surface value it uses doesn't mean much, because it's all tangled up in the breaking waves and other things, and it doesn't do well. So when you tie yourself to that surface value, you're tying yourself to something the wave models really don't know.
N
M
N
We should think about what to do about that — to make it more stable, or about what I'm actually being given here. I mean, Brandon could tell us; maybe he's not using the actual surface value either, I don't know, but that dictates everything. Yeah, yeah — so if that's not valid, then nothing is valid in this breakdown.
A
P
The noise and — okay, you see the screen now? Yes, thank you. Yes, yes, okay, good. Yeah, thanks for giving me the opportunity to talk here, and I think — I can't see you; that's okay — I think Bill just said it right.
P
At the last bullet point — we need cheaper waves — this is something I did with Baylor over the last year, and it's a new way to model surface waves, particularly for the purpose of coupled climate models. We call this Particle-in-Cell for Efficient Swell, or PiCLES, and you can guess who came up with that name; he's in the room right now — and he's not here, so, yeah. And I think the end of the discussion just pointed to one of the main reasons —
P
— why we should model waves explicitly in coupled models. We'll see the second point later — the spatial gradients of the wave field are not necessarily the same as the gradients of the wind field — but the first one I want to highlight here is that there's also the MIZ problem: waves affect the marginal ice zone, and as sea ice gets more seasonal and thinner, waves will more and more affect the evolution of sea ice.
P
And if you want to get that correct, you really have to propagate swell information to the sea ice, and for that you have to explicitly model waves. The other one was basically summarized well in the discussion before: winds make swell, but then, if winds change direction, there are still some waves there, and they will impact a lot of things — Langmuir might be one of them; others might be —
P
— also the surface drag, whitecapping, other kinds of interactions. And then there's the third component, which is the area of wave–current interaction. That's again at way smaller scales — we're talking scales probably smaller than 20 kilometers — and if models ever get there in a sufficient way, then we also have to deal with that, and then we also need to explicitly model the waves to understand the problem of wave modeling.
P
You should start with observations, and this is a point in the North Pacific, Ocean Station Papa, and there you have the situation that was also just described in the last talk: you have three swell systems superimposed — there's the wind sea and two swell systems nearly going across each other — and if you want to describe this sea state to first order, you'd probably draw three red dots and then give each of them a direction, a peak frequency, and an energy.
P
And then you end up with about nine variables to get a first estimate of the sea state. WAVEWATCH is able to reproduce this quite well, but it does it by spectral discretization: it discretizes frequency and direction, and that leads to a large state vector at each grid point.
P
And then at each of the grid points it solves the wave action equation, which you see in the lower part, and that is one reason why WAVEWATCH is so expensive. Essentially, what we face here is this efficiency problem — the ratio between the red dots and the black dots; that's what we try to solve here. So what we propose is actually a medium-complexity wave model. It sits right between spectral wave models — from Hasselmann, and WAM from the Wave Model project — and second-generation wave models like fetch laws.
P
It sits somewhere here, and we call this "Back to the Future," because we take another turn just before the innovation of the Wave Model project and instead propose something like a second-generation-plus model, which we call PiCLES, for weird reasons. And what we do here: we evolve the wave energy growth along Lagrangian trajectories and then add to this an iterative remeshing method, such that we have information on the Eulerian grid.
P
So, yes, the main goal of PiCLES is trading accuracy for speed and convenience. We want to find another way to reduce this high-dimensional state vector of the wave action equation, but still be sufficiently accurate to describe a kind of complex wave field. For doing that, we have to minimize particle interaction — minimize the interaction between these blue lines here — and we want to be very parallel, because we want to run this, hopefully, on GPUs.
P
For now we ignore wave–current and topography interaction, but that can be pretty easily added to the problem. Okay, how do we solve the system along a particle? Well, we solve the wave action equation — or, in the simple case, the wave energy equation — and then the problematic term is this divergence term here, and that one we write in the coordinate system of the particle.
P
Then we have the remeshing part, which is based on particle-in-cell. Particle-in-cell is another way to solve Navier–Stokes-like equations; it was first developed by people at Los Alamos in the 50s and 60s, pretty much behind the curtain, but now it's widely used in plasma physics and electromagnetics, as well as some other geophysical applications, and it's very good at simulating shocks — the Burgers equation and so on — and we have those situations too, when we have swell fronts propagating across the Pacific.
P
The core of this method is this remeshing exercise. You have a particle charge — in this case, this white dot — and you want to distribute this charge to the nodal points, and you just come up with geometric weighting factors to do that. There are fancier versions, but we keep this one for simplicity. And then we can actually assemble this model and create PiCLES.
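The geometric-weighting remeshing step can be sketched in one dimension as below: each particle's charge is split linearly between its two neighboring nodes, which conserves the total. This is a generic particle-in-cell sketch, not the PiCLES implementation; a uniform grid with particles strictly inside it is assumed.

```python
import numpy as np

def remesh_1d(x_particles, charges, grid):
    """Particle-in-cell remeshing with simple geometric (linear) weights:
    each particle's charge is split between its two neighboring grid
    nodes in proportion to proximity, conserving the total charge."""
    dx = grid[1] - grid[0]
    field = np.zeros_like(grid)
    for x, q in zip(x_particles, charges):
        i = int((x - grid[0]) // dx)      # left node index
        w = (x - grid[i]) / dx            # fractional distance to left node
        field[i] += (1.0 - w) * q
        field[i + 1] += w * q
    return field

grid = np.array([0.0, 1.0, 2.0, 3.0])
field = remesh_1d([0.25, 1.5], [1.0, 2.0], grid)
# -> [0.75, 1.25, 1.0, 0.0]; the total charge (3.0) is conserved
```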
P
So we start with an ocean grid — whatever ocean grid points, like the gray dots here — and then we initialize the particle equations, the stuff here on the lower right (I hope you can see my mouse), and then we advance each of these systems individually: we advance these blue lines to a deformed particle state, so that we end up at these blue dots. And then what we have to do is just remesh back to our grid, where we want to have it, and that's done using particle-in-cell.
P
That would be all the particles within this green area, such that the field state is then actually holding the energy and momentum, and then we just map back to the particle state, where we can re-advance the model — and that's how the thing advances. Right now we're in the phase of reproducing fetch laws, or re-checking them, using simple situations. So this is the classic situation of wind blowing offshore: there's the shore, there's a constant 10-meter-per-second wind, and then the horizontal axis here —
P
— is the fetch, and the y-axis is time, and in the upper right you see the static fetch relations, which are just parametric functions trained on observational data — that's basically the benchmark. WAVEWATCH reproduces that quite well, because it's a tuned model, and PiCLES produces it also sufficiently well, but it's not tuned yet. There are a few parameters — wave growth and dissipation — that we have to adjust, because the particle-in-cell introduces a numerical diffusion that we then have to recalibrate.
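As a concrete example of the static fetch relations used as the benchmark here, the classic JONSWAP-type fetch law relates nondimensional wave energy linearly to nondimensional fetch. The coefficient below is the standard JONSWAP fit, used purely as an illustration — it is not the tuning used in PiCLES or WAVEWATCH:

```python
import math

def jonswap_fetch_hs(wind_speed, fetch, g=9.81, alpha=1.6e-7):
    """Fetch-limited significant wave height from a JONSWAP-type law:
    nondimensional energy eps = alpha * chi, with chi = g*X/U^2,
    so E = eps * U^4 / g^2 and Hs = 4 * sqrt(E)."""
    chi = g * fetch / wind_speed ** 2          # nondimensional fetch
    energy = alpha * chi * wind_speed ** 4 / g ** 2
    return 4.0 * math.sqrt(energy)

# 10 m/s wind, 100 km fetch -> Hs of order 1.5 m
hs = jonswap_fetch_hs(10.0, 100e3)
```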
P
Then, what happens if you have dynamic winds, or non-static fetches? I have here an idealized storm, because that's the process I like to think of. This is x, and this is time — it's a Hovmöller kind of plot — and the storm moves from the left to the right, and in this process it grows swell waves, and you see what you expect to see.
P
The highest wind speeds are here — whatever, 20 meters per second — but you see that the highest wave energy, or the largest wave height, is displaced from that. You also see a second feature, which is this sharp frontier of the swell propagating further forward. And to illustrate what PiCLES is actually doing: it does steps — we initialize the particles at each of these black dots, then we just advance them, and then we remesh them, and that's how it iterates forward.
P
Well, the answer is we just plan to copy the model and do a second layer as well — and a third and a fourth layer — such that the upper layer will always be the growing wind sea, and then there are other layers, or other particles at the same node, that live there and just carry the swell. At the same time, of course, you have to come up with interaction terms between these different layers.
P
So one is: when is the wave energy a growing wind sea, and when does it become swell? But there we can just draw from the existing literature on wave physics. Okay, and then here's kind of the money plot for this: that's the expected scaling for PiCLES. Because of the small state vector and the high degree of parallelization, we expect this thing to scale way better with increasing model resolution.
P
So there are implementations of WAVEWATCH within CESM at about three degrees, basically living on an atmospheric grid with something like a 30-minute time step, and that's basically the limit of what the wave model is allowed to take up in processing time. But when you want to provide wave information on the scale of a MOM6-kind of model, like on the quarter degree, then you will have to face four orders of magnitude of increase in computational cost, and I'm not sure you want to do that.
P
Instead, PiCLES will be about an order of magnitude cheaper, at least — it depends again on how complex we make it. The lower limit is probably the black line here, which is just how the stuff would scale with n; but there's also no particle–particle interaction, no N-squared interaction — that's what we try to avoid. So we really think we will sit somewhere here. Then a few other limits about this model —
P
We think it will work well up to this gray area, essentially — up to, like, five or ten kilometers. After that, the dynamics you want to model really change, so we're actually not sure about that limit yet. And maybe one word on what happens next: we'll do the swell part next, I think, and deal with dissipation and dispersion and frequency, and then we'll do a lot of testing against WAVEWATCH III. And with that, I would take questions.
P
J
Yeah, thank you — I liked that talk. One thing I wonder: when we first started playing around with WAVEWATCH a few years ago, we realized that it was poorly written from a computer-science perspective and not very parallelizable, and I'm wondering if the motivation to go to something like PiCLES — which I think is based on lots of good motivation — would nonetheless be a bit less if somebody would take the time to rewrite WAVEWATCH in a better manner, one that is actually parallelizable and more efficient.
J
P
Well, if you're starting to write wave models from scratch, you'd better ask yourself if there's a better alternative, so it's the right question to ask. I don't know — I also know that the way WAVEWATCH is written, in essence, is pretty bad for parallelization and coupling; for the limits of that, you have to understand the numerics better, and likely Baylor has a better answer for that than I have.
E
M
Steve, yes — if you look at this graph: suppose you recode WAVEWATCH and make it a factor of two faster, or a factor of four. You could slide that WAVEWATCH line down by a factor of two or four, but you would never change the scaling, because the way that WAVEWATCH discretizes is the problem.
M
By a more efficient implementation, and by going after the wave phenomena we care about with a very low degree-of-freedom approach, we get onto the blue line or the black line — so it's a totally different scaling, even if you made the present implementation more efficient. I understand the DOE has actually made WAVEWATCH much more efficient and able to run on an unstructured grid now, for MPAS applications, but it's still WAVEWATCH, and so it still scales like the green line, not like the blue line.
P
H
To get — yeah, we basically started from the same thing: because of the spectral discretization, we can't go above, say, 300 cores. But, as Baylor mentioned, I think there are some implementations based on spatial domain decomposition; that's for unstructured grids, though, not for the structured ones that we use.
G
I really appreciate your trying to find this kind of sweet spot to capture the wave effects. One thing, though, that strikes me with WAVEWATCH: if you try to coarsen the resolution, you get things like this so-called garden sprinkler effect, where things tend to go in well-defined directions. Is there anything analogous that you're fighting here when you're trying to design an optimally coarse resolution, or is this something that, by design, you just don't have to worry about? Thank you.
P
So — the remeshing is a smoother, so I think I don't get these numerical artifacts of the sprinkler; we discretize direction, right. So — I glossed over the part where you do this layering aspect, and that's really the question: how well you understand wave–wave interaction — the four-wave interaction — how you discretize it, how you make these rules, and that will be kind of an artistic jump, to do that consistently.
A
O
I want to make some comments. One is that all those parallelization issues might be less of a concern — as Baylor brought up — if we're using GPUs, because you pack your nodes in those configurations; but that doesn't affect every model, obviously. I guess the other — yeah, I guess — we've talked about this before, but I just want to raise the question here: is there any way —
O
Is there any way — do you see it as possible — to simplify WAVEWATCH in a different way: reduce the spectral discretization, but somehow introduce some approximations that allow you to have a lot fewer frequency–wavenumber bands, which would help —
A
— solve part of the problem, but still keep an Eulerian representation?
O
Do you think there's also a second-generation-plus-type model — like what you're doing, but rather than discretizing in frequency space, you use some kind of bulk representation? You know, it's still —
P
Yeah — no, I don't think so, because waves are not a flow. It's a highly directional thing — the stuff goes only in one direction, basically — and that's what makes it very hard to do a pure Eulerian thing that also accounts for the wave physics sufficiently. I just cannot imagine it — maybe I'm just too stuck in this particle idea already, but I don't think there is another second-generation-plus version to do.
O
I think there is — there's work out there; I think there are people trying to work on that, and eventually to distill some kind of static scheme that we could call.
C
Thanks — okay, we did have some discussion time here, but time is a challenge. Any other pressing questions?
E
We'll break for lunch until 1 p.m. Mountain Time.