From YouTube: Alistair Adcroft MOM6 Webinar 3 13 20
Good, okay. So the title of my talk was "Using MOM6", which is rather ambiguous and could be a completely open-ended talk.
I decided to interpret it in a particular way, but first I just need to acknowledge that there are a lot of people who have been working on MOM6, on the GFDL side as well as outside, and in particular most of these people have actually contributed in some way to the contents of what we're going to talk about today.
So, "using MOM6": I think the right way of asking what that means is, where should somebody start if they're coming to MOM6? I'm going to give you some starting points as to where you should go if you're coming to MOM6 afresh. There's one challenge with answering questions like this: everyone's use of a model, every application, is unique.
It's always going to be a case-by-case basis, but there's some advice that other modelers have learned over the years which is extremely important: if you're ever going to start building your own configuration or using a model in a new way, it's always best to start from something that's actually already working. Start with a working example, then evolve it and tweak it until it's configured the way you want, so that you have a starting point that you know.
That way you don't start cold and end up digging yourself into a trench that you can't get out of. To that end, I'm going to give you a quick run-through of how a user might come at this, and I'm going to give it from the ocean-only, GFDL point of view, because using the model in the CESM context is something the CESM folks at NCAR are going to present later in the series. So, briefly: I'm going to talk about how to get the code; how to use the code in terms of controlling it and looking at its output; I'll give a few examples of good places to start, which are somewhat documented (better than no documentation, at least); and then just a few final comments on how things are set up and how they work from an open-source point of view.
Quickly, then: I'm not going to go into much detail here, because there is actually a MOM6 wiki for users who are just building the ocean-only code. The MOM6-examples wiki is a user-driven wiki page that has instructions for cloning (which means obtaining the code), compiling, and running.
These days that should be straightforward if you are using MPI, and if you just search for "MOM6 wiki" on Google it will be the first link, if you don't want to write down the URL. Because this is user-contributed documentation, in my experience, my observation, the documentation is pretty complete and up to date.
Basically, people have been able to follow through the steps as outlined in the wiki, and it generally works. We get very few questions about how to actually build the model on someone else's platform, and to that end I think it does suggest that the code is portable, which somewhat depends on FMS, the underlying infrastructure, being a portable system.
As I said, I'm not going to talk about how to use the model in the context of CESM, so I'm not going to talk about workflows or anything specific to workflows, including our own workflow at GFDL. I will also suggest that when you're starting out, it's much, much easier if you're working with an ocean-only configuration, because ocean-only means only using MOM6 code. The moment you start getting into coupled configurations (and even ice-ocean is, strictly speaking, coupled), it involves other codes and couplers.
That is especially true when you start talking global. The instructions are, for the most part, targeted at people who have some experience with modeling, meaning that they have some experience with Linux as well. It's been an interesting question whether more documentation is needed to get, say, a graduate student who has just arrived with no Linux experience up and running; that's something I'm hoping another community might feel is worth investing in, in terms of that level of documentation.
But for the time being, this user-driven installation documentation seems to be working very well for the group of users we have. So, controlling MOM6: the very first thing you need to know is how to get parameters in and out of the model. We have our own parameter syntax, which is a simple key-value pair system. For example, KH, which here is the horizontal Laplacian viscosity, is simply set to 25 meters squared per second.
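In the MOM parameter syntax, the example just described is a single key-value line (the comment text here is illustrative):

```
KH = 25.0        ! Horizontal Laplacian viscosity [m2 s-1]
```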
The input is via a set of files which are bootstrapped: you list the parameter files in a parameter called parameter_filename, which goes into a hard-coded, static file called input.nml. That is a bootstrapping mechanism, because somehow MOM has to be able to find out where to find its parameters.
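A minimal input.nml bootstrap might look like this (a sketch: the file names are the conventional defaults discussed below, and the namelist group name MOM_input_nml is an assumption):

```
&MOM_input_nml
  parameter_filename = 'MOM_input', 'MOM_override'
/
```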
In this parameter_filename, on the right-hand side, you can list any number of files of your own choosing. The fact that, by convention, we use the names MOM_input and MOM_override, as I'll explain in a moment, is purely a convention. You can call those files whatever you like; if you want to call them Fred and Joe, you can call them Fred and Joe.
Once you're bootstrapped, and you've got your parameters into these files and those files are listed, the model will read them as key-value pairs, and it will actually do some error checking on them. For example, we consider it an error if you specify the same parameter twice with different values.
Things like that are hard to trap in conventional Fortran namelists, which is one reason why we went down this path. The other value of the system we're using is that it is self-documenting. When the model has finished reading all the parameters, it will then log all the parameters that it read, and it logs them in several ways. It logs all parameters in a file called MOM_parameter_doc.all; absolutely everything is written down into that file.
It will also write a shortened version of that parameter file, containing only the parameters which were not set to their default value. If I can just draw your attention to the lines on the middle left here: it says KH = 25.0, and then there are some comment symbols, and it has the units and a default value, and the default value is zero.
Those two lines will appear in the MOM_parameter_doc.short file, because the value is non-default, and they will also appear in MOM_parameter_doc.all. But if we had set KH to zero, those lines would not be in the .short file. So it allows you to have a much shorter overview of what the configuration of the model is.
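Reconstructed from the description above, the two logged lines look roughly like this in MOM_parameter_doc.short:

```
KH = 25.0                       !   [m2 s-1] default = 0.0
                                ! Horizontal Laplacian viscosity.
```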
The fact that the units are reported is a very valuable aspect of our system, because it means you have units for your parameters, and that's something we're very keen on expanding in terms of our testing capabilities.
You'll see that the API is very simple: it has the specification of what the units are in a units string, but it also has a scale parameter, and perhaps Marshall's talk will explain what we're doing there; basically, the model is able to test every single term for dimensional consistency.
So, as I said, we have a convention of using the file names MOM_input and MOM_override, and it has become a very, very powerful convention.
We like to think of MOM_input as being the canonical configuration for an experiment. Then, if you want to run a perturbation experiment, where you are going to change something to check a sensitivity, for example, you would put that changed parameter in MOM_override. There is a syntax for changing a parameter, so that we don't run into the problem of specifying the same parameter twice: we have this "#override" syntax, and you throw these overrides into MOM_override.
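A MOM_override perturbation then looks like this (the value 50.0 is a made-up example):

```
#override KH = 50.0
```

Without the #override prefix, re-specifying KH with a different value would trigger the duplicate-parameter error described earlier.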
The model, of course, runs in parallel, and the parallelization can also be controlled from MOM_input, or whatever file you decide to put the parameters in, but those parameters are documented in a separate file, MOM_parameter_doc.layout, because these parameters don't change the numerical solution. They change the format of the data in memory, and perhaps in the files, but they do not change the actual result.
So we consider these to be auxiliary parameters which are there purely for computational reasons. The model can be decomposed, and there is an optimal tile size, tile size being how many points you have on each core or node, and there seems to be an optimum of somewhere between 12 by 12 and 30 by 30, depending on the chip that you're using.
There is a halo in order to get the parallelization to work, and the halo can be anywhere between one and four, but typically, for most of the advection schemes that people choose to use, the higher-order ones, you need a halo of three or four.
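As a sketch, the layout-only parameters described here might read as follows (the parameter names LAYOUT, NIHALO, and NJHALO follow the MOM_parameter_doc.layout conventions; the values are illustrative):

```
LAYOUT = 2, 3      ! Processor decomposition: 2 x 3 = 6 PEs
NIHALO = 4         ! Halo width in i, needed by higher-order advection schemes
NJHALO = 4         ! Halo width in j
```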
It is a requirement that the code bitwise-reproduces across any layout. If that weren't the case, then your results would depend on how many processes you chose to use, and we don't consider that to be useful science.
One feature that catches people off guard, and it is a great feature, is that there is no requirement that the tiles actually have the same size on each processor.
This means that even if you have a nominal global grid of, say, 128 by 64, you do not have to use a power-of-two number of processors to decompose the model. You could use, say, six processors, in which case you'd need a 2x3 layout, and the tile widths on the processes would be something like 43, 43 and 42, or whatever the numbers are that divide 128.
So it's possible for this model to run with different tile sizes; it sorts itself out, throwing away the redundant columns as needed on some processes.
(I can't see my slides because my own windows are in the way; sorry. Okay.) So, restarts. Just like parallel decomposition, models are required to be able to restart across a time boundary, and that's because many jobs can't fit into a single job submission. If you had non-bitwise reproducibility across a restart boundary, then of course your results would again depend on how you decided to run the model.
So bitwise reproducibility across restarts is a requirement that we have. It's worth noting here that there is a certain procedure, a protocol, for where stuff lives. This is actually controlled by the bootstrap file, input.nml, through its Fortran namelists.
The output directory, the restart and input directories, and so on are again a convention; it is configurable, and I don't actually know whether CESM is using our convention or something different. But for the most part, all of our inputs are read from the INPUT directory.
All diagnostics are generally written into the local working directory, and the restart files are written into a separate directory, RESTART, so that you have to copy the files from RESTART to INPUT if you want to start the next segment. That's the convention we use; again, it's configurable. It's a good convention, I think, because it means that you can repeat a segment without making a mistake: you simply don't move the restart files over into INPUT.
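The directory protocol just described can be pictured as:

```
run-directory/        # diagnostics are written here
├── input.nml         # hard-coded bootstrap file
├── MOM_input
├── MOM_override
├── INPUT/            # all model inputs are read from here
└── RESTART/          # restarts are written here; copy to INPUT/ for the next segment
```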
Diagnostics: we'll talk a bit more about this later, but the diagnostics use the FMS diag manager which, when I arrived at GFDL, I was actually very impressed with; it's a very powerful diagnostics utility. It's controlled by a file called diag_table, which seems to be a hard-coded file name, and there are two kinds of entries in the file.
First there are the file entries, which give the frequency for either a snapshot or a time average. Then, lower down, you list the variables, and you associate a variable with a file; a variable can be associated with many files. For example, I'm showing thetao, which is the ocean three-dimensional temperature using the CMIP name, the IPCC naming conventions, and here we're showing "ocean_model", which is taking T (sorry, temperature in 3D) and writing it out.
It writes an annual time average: "all" means all time steps, and "mean" means time average, and that's being written at single precision (I think that's what the ", 2" means) into the file defined by "ocean_annual" up at the top.
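Put together, the two kinds of diag_table entries being described look roughly like this (a sketch of the FMS diag_table format; the exact field order is from memory, and "2" is the packing/precision code):

```
"ocean_annual", 12, "months", 1, "days", "time"
"ocean_model", "thetao", "thetao", "ocean_annual", "all", "mean", "none", 2
```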
The diag manager is also capable of doing regional diagnostics, which I won't go into here, but there is documentation of this on the web page already. One thing that we do with MOM6 is that, rather than writing a static documentation file where you might say "these are the diagnostics available in this model", which requires maintenance of that document, whether it's a Word file or an HTML file,
what we actually do is have the model write a list of which diagnostics are actually available. One reason we chose this is that the diagnostics are a function of the configuration of the model (whether you are using general vertical coordinates or stacked shallow-water layer mode, for example), but also because you can turn on all of these arbitrary-coordinate diagnostics.
You can do diagnostics in density space or diagnostics in z coordinates, and so the diagnostics that are available are simply a function of the model configuration, and they differ between experiments. So every time you run the model, it generates a file called available_diags, and that is a list of which diagnostics are available. If you want to find out, the best thing to do is run the model quickly in your configuration and then see what that file has in it; that file is the documentation.
Okay, so I want to briefly talk about four actual configurations, and these configurations are part of the MOM6-examples repository, which we'll go into a bit more later.
Just quickly: the MOM6-examples repository, the one which has that wiki page, was a kind of catch-all for everything that was happening at GFDL. We're actually rethinking the design of this repository layout, which I'll explain a bit later, but for the time being it is, for the most part, where you can find examples of GFDL experiments.
You'll see these directories named by configuration type, and then there is a src directory, and inside there are actually submodule links, git submodules, to the source code. These are not the actual source code if you just check out this repository; they are effectively symbolic links, using this git submodule concept, which I will explain briefly later on.
But this repository is a starting point, and the four experiments I want to go through are the double-gyre experiment, the Phillips two-layer experiment, a flow-down-slope experiment, and then a global ocean model, OM4_05. So, ocean-only double gyre: I would say this is one of our simplest configurations. It's something that dates back to Bob's thesis, and it uses a very small part of the model; it's basically stacked shallow water.
It is the stacked shallow-water equations, and the reason we like people to try it is that if this doesn't work, then something's wrong with your computer, because it's that simple; the model doesn't use very much code.
It's just integrating forced momentum and continuity equations, and if this doesn't work, then you need to sort something out. I don't know what would be going wrong, but for the most part this does work, so it's a safety check: our first question is, does double gyre work?
Double gyre has a Jupyter notebook in it (I apologize for the rainbow color map; this is quite an old notebook), and this notebook is something that, hopefully, with a very small installation of Python, you would be able to run, to check that you can actually do stuff with the model.
We use netCDF for the files, and this notebook shows you how to read in some output from the model; in fact, in this case it animates it.
I am very much a fan of using Jupyter notebooks to illustrate and document procedures. However, I'm not in favor of advocating for one particular system of analysis, so this notebook is actually using scipy for reading the files, and just matplotlib. We will be adding other formats; we already have notebooks that use netCDF4 and xarray, and I think we should be adding some more of those, along with using seaborn and a few other packages, just because people have different ways of working.
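In the same spirit as the double-gyre notebook, a minimal scipy-plus-matplotlib sketch for reading one variable from a MOM6 netCDF output file might look like this (the file name "prog.nc" and variable name "e" are assumptions for illustration, not the notebook's actual contents):

```python
import numpy as np
from scipy.io import netcdf_file   # lightweight netCDF-3 reader, as in the notebook
import matplotlib
matplotlib.use("Agg")              # headless backend so this also runs in scripts
import matplotlib.pyplot as plt

def plot_snapshot(path, varname, record=0):
    """Read one time record of a (time, y, x) variable and plot it."""
    nc = netcdf_file(path, "r", mmap=False)
    data = nc.variables[varname][record, :, :].copy()
    nc.close()
    plt.pcolormesh(data)
    plt.colorbar(label=varname)
    plt.savefig("snapshot.png")
    return data
```

The notebook additionally loops over records to build an animation; netCDF4 or xarray would work just as well, as noted above.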
So double gyre, as I said, is a very simple model, and it's a kind of useful model because you can turn up the resolution and it starts doing interesting things: it starts eddying, and it's sort of textbook stuff in terms of what's going on in there. But effectively it's just, I think, a four-layer stacked shallow-water model.
The ocean-only flow-down-slope case, in contrast, is also a 2D model, but now in the x-z plane, and this is an interesting one because it exposes the various choices in the vertical structure of the model: whether you're using stacked shallow-water mode, an isopycnal mode, or enabling one of the ALE-driven general coordinates. There's a notebook in here also; I can't remember what plotting system I'm using.
It's a notebook trying to explain how the model actually stores data in memory, and therefore what it looks like in netCDF files, when you're looking both at what we call native format (the native storage, which is just functions of i, j, k) and at output written in one of these diagnostic coordinates, like z coordinates.
The fact that the model is able to have vanished layers is something that can confuse people when you write the model state out.
You basically always have data, even if the layer thickness is zero, and this can confuse people because they're used to seeing data masked, if they're z-coordinate modelers, for example. That is not the case for our output in native space, and so when people start plotting the native data, they can start worrying that they haven't got the topography in the model, or something like that. So this is a notebook trying to explain how the data is stored in the vertical and how to look at it.
Moving on to something more complex now: this is a three-dimensional model, the Phillips two-layer test case, and it's a good one because it's a relatively affordable but three-dimensional model, and it's been published a few times. The Phillips two-layer model is a famous model used for looking at baroclinic instability, and Bob Hallberg most recently used this particular configuration in 2013 for his resolution-function paper. I haven't got a notebook in there yet.
All of the plots that Bob used in his paper were done in Ferret, and I think it would be a good exercise for some young person to see if they could reproduce Bob's plots, perhaps not using his color scales, but at least getting the results and figures looking the same, and to do that in Python, hopefully.
So this is a useful one, because it's relatively recently documented, and the experiments should be exactly the same: the configuration that we have there should match one of the experiments in the paper exactly.
If you wanted to start looking at a global ocean model, there is really no choice but to consider an ice-ocean model, because the high latitudes need sea ice for the simulation to look at all realistic, and I would suggest that you look at the OM4_05 experiment in the ice_ocean_SIS2 directories.
This is a half-degree global ice-ocean model; it uses the GFDL SIS2 sea-ice model and the GFDL coupler. That's because we are at GFDL, and that's all we can support. That's not to say it should be too much work if you have a working version of the MCT or NUOPC caps, because those already exist as working couplers.
Working with another sea-ice model is something that NCAR is working on, and I think EMC as well, but it is out of our domain in terms of being able to support it at this point. This particular model uses the GFDL vertical physics, and I highlight that because that is not KPP via CVMix, but our own physics packages; ePBL is the package that Steve mentioned in an answer earlier.
It's worth highlighting here that when we decided to join forces with CESM and NCAR, it was decided that we would not be using the same physics, for the sake of the science, so that we had some diversity in the science, and so we're maintaining these packages in order to be able to have different physics. This configuration is not eddying (it's coarse resolution, in other words), but it's not as coarse as the one-degree models that we would have been using, say, eight years ago.
It's a choice we made, and we are not yet clear whether we'll be supporting and publishing the coarser-resolution versions of this model; we probably should, but we're not doing it yet. The model also uses the GM and neutral-diffusion parameterizations, which are necessary when you're not resolving or permitting the eddies. There's a curve down here that you'll recognize from Steve's talk, and it just shows that the model actually works; we're quite happy with this configuration.
The reason for highlighting these four experiments is that they are either fairly cheap or well documented, in the sense that there are papers about these configurations already, so they're a good starting point: you've got something you know has been vetted. I'm afraid to say that many of the other experiments in here are, in many ways, left over from our development exercises.
We would say, "well, let's see what we can do with this", and so we'd create an experiment that might use some feature, and then the experiment was added there as a way of testing the model, but not necessarily vetted as a good experiment or a good starting point. So when I say we want to redesign MOM6-examples,
it's partly for that reason: we need to break out what we consider good tutorials and starting points from these experiments, and put the other ones into a testing repository that is obviously just for testing. That's something we've just been discussing, and we'll announce and discuss it with a larger audience when we've figured out how to do it properly.
Okay, so, about the repositories. MOM6-examples is what we call a super-repository, and because it has all these experiments in there, it can get quite confusing if you were building your own experiment.
You should take a repository point of view and say, "I want to keep all of my stuff together". What we are advocating now is to create a repository just for that one experiment, or at least that one configuration. For example, we have one for OM4, the OM4_05 we just described and the quarter-degree version of it as well, and that repository contains everything you need for that configuration.
I'll highlight it here only because the reason we've come up with this is that it makes development really easy. We're currently in the midst of re-tuning this configuration, and the re-tuning process is exactly like developing code: you have a main branch, and you might decide you want to try some sensitivity tests out, so you can literally create a branch, change the parameters in that branch, and evolve it forward.
What git does, when you're applying these sorts of software-development tools to model configurations, is force you to do good things like record what you're doing. It also makes everything reproducible and recoverable, and it's been a huge boon to our ability to figure out what we did when we say, "oh, we saw this result before,
what did we do?" It's all there in the git logs. So this is a way of organizing your life.
We recommend it very highly, and we're going to try to break out, shall I say, the way that we're doing our development out of MOM6-examples, which is a big super-repository; we're going to start isolating some of the experiments that we're currently continuing to develop and make those independent repositories. So this is a view of how everything fits together right now: MOM6-examples is this big super-repository.
It points to particular versions of the code with these symbolic links, these git submodules, and the NCAR cases, the ESMG configs, and all these other super-repositories are very analogous: they point to their own versions of MOM6. Right now those are collections of experiments for the most part, and we're advocating that we should think about breaking them up into smaller parts.
This is a slide we've shown before about how everything is organized, for those who need to understand it. Okay, I'm just going to quickly delve into the structure of the MOM6 source code, because it's important for new users to have a clue about what's going on here. There are several directories which have source code in them.
The first is the config_src directory. In here you will find several subdirectories, some of which contain duplicates of code, or code that doesn't seem to match anything to do with what you are interested in. This is code that is selectively compiled; it depends on the context in which you're building the model.
For example, the NCAR coupled model needs to use the NUOPC driver, and then has to choose a memory mode, either dynamic or dynamic-symmetric, so those two parts would actually be compiled; but it would be incompatible if you were to add in, say, one of the other drivers. To build the standalone model, you'd have to use the solo driver, and then again choose dynamic or dynamic-symmetric. So this is the driver-level code.
What it does is this: this code basically maps an API that's compatible with one of the high-level drivers at, say, NCAR or EMC, onto the interface that we have in MOM6. The idea is that MOM6 only has one API, and this is the layer that maps between our API and the coupler. There are a few other directories there which I won't go into, one in particular being the ice-shelf driver, but the solo driver is the one most people would be looking at, and the instructions on the wiki for compiling use it.
The external packages are not compiled in place, because often these packages have main programs, or C code, or other code that just shouldn't be compiled with your model, and so we have to be able to selectively point to parts of those packages. We do that with symbolic links from the source directory. This is where a package that is going to become part of MOM6, as a sort of library, would live.
They have to be cloned. Generally you would do a recursive clone of your super-repository, and git would obtain them for you; but if these directories are empty, it means you need to populate them with a `git submodule update`. Oh, and just one last thing, which concerns versions.
The version of each package that we are currently compatible with is recorded by git as a submodule pointer there, and that's important because it means no one is requiring us to stay up to date, nor, vice versa, is the package required to evolve in lockstep in order to maintain compatibility with MOM6. And then one of the most important directories, of course, is the SRC directory, which has the code that we actually integrate forward in MOM6.
This is the code that solves the equations of motion; it has all the code that does the diagnostics, and every file in this directory is compiled. There are some rules, some code styles, to pay attention to: we don't use CPP macros, except for memory and grid macros. We don't like CPP directives for the most part; a few have managed to get in there, and we have been talking about trying to get rid of them.
We will continue to try to do that. There is a particular structure here, which I will go into in a bit more detail. So this is, again, SRC.
There is the ALE directory, which has all the code to do with the vertical remapping, and this has been broken out from the core because it's reused in diagnostics, in the dynamic core, and actually in the initialization; the remapping tools are invoked in many different ways, so ALE is almost a standalone piece of software. The core directory has the dynamical core of the model.
That is the code for integrating the momentum and continuity equations, but not the tracers. Tracers are in their own directory (you can see that at the bottom), and the tracer code is separated because it's generalized to handle any number of tracers; the tracer advection and the parameterizations that act on tracers are all in that directory.
The diagnostics directory is actually not all our diagnostics, but a special set of diagnostics where certain things, like budgets for momentum and so on, are diagnosed somewhat independently from the other equations. For example, there's an energy-budget diagnostic: we can work out all the terms in the energy budget, and so on. Those diagnostics live in the diagnostics directory.
A
There's an equation of state directory. We have five equations of state; you get to choose which one you want, and so they're grouped out separately. There's a single API that lives in there as well, which selects between the five. And then, perhaps an important one for people trying to build MOM6 into their own system, there is the framework directory. The framework directory is really the interface to the lowest layer of software, which is in charge of communications and I/O, and right now we're using FMS.
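The single-API idea described here, one entry point selecting among several interchangeable equations of state, can be sketched as follows. This is a Python illustration of the pattern only (MOM6 itself is Fortran), and all function names and the linear-coefficient values are hypothetical, not the model's actual API:

```python
# Several interchangeable equations of state behind one selecting API.
# Coefficients and names are illustrative, not MOM6's real interfaces.

def density_linear(T, S, p, rho0=1027.0, alpha=0.255, beta=0.76):
    # A simple linear equation of state: density falls with temperature,
    # rises with salinity; pressure is ignored in this toy form.
    return rho0 - alpha * T + beta * S

def density_full(T, S, p):
    # Placeholder for a fuller fit; reuses the linear form so the sketch runs.
    return density_linear(T, S, p)

# The "single API": one table, one entry point.
EOS_TABLE = {"LINEAR": density_linear, "FULL": density_full}

def calculate_density(T, S, p, form="LINEAR"):
    """Dispatch to whichever equation of state the configuration selected."""
    return EOS_TABLE[form](T, S, p)

print(calculate_density(10.0, 35.0, 0.0))  # roughly 1051.05 kg m-3
```

Callers only ever see `calculate_density`; the choice among the five implementations is a runtime parameter rather than something compiled in.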
A
We will, at GFDL, continue to use FMS; we like it very much. But the idea of the framework directory is that, if you needed to, in principle you could replace the calls to FMS with calls to your own infrastructure.
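The layering being described can be sketched like this, in Python with hypothetical names (the real code is Fortran and the real infrastructure is FMS): model code only calls the framework layer, and only the framework layer calls the infrastructure, so swapping infrastructures means rewriting one thin module.

```python
# Hypothetical sketch of the framework-directory pattern: the backend is
# only ever reached through the Framework layer.

class FMSBackend:
    """Stand-in for the FMS infrastructure (communications and I/O)."""
    def send_data(self, field_id, values):
        return f"FMS wrote {field_id}"

class MyBackend:
    """A hypothetical replacement infrastructure with the same interface."""
    def send_data(self, field_id, values):
        return f"custom wrote {field_id}"

class Framework:
    """The 'framework directory': the only layer that touches the backend."""
    def __init__(self, backend):
        self._backend = backend

    def post_data(self, field_id, values):
        # Model code calls this; it forwards to whichever backend is wired in.
        return self._backend.send_data(field_id, values)

# Swapping infrastructures changes one line of wiring, not the model code.
print(Framework(FMSBackend()).post_data("SSH", [0.0]))  # FMS wrote SSH
print(Framework(MyBackend()).post_data("SSH", [0.0]))   # custom wrote SSH
```

The design cost is discipline: every infrastructure call has to go through the wrapper, which is exactly the rule described as occasionally slipping.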
A
No one's actually done this yet, that I know of. We do let things slip occasionally, but for the most part every call to the infrastructure is made only from the framework directory. The idea here was looking forward: it might be that someday somebody has to actually use a different infrastructure, and the intent was to make that easier to happen.
A
Initialization has stuff for initializing state and grids, and it's very self-explanatory. The parameterizations are broken up into two directories, lateral and vertical, for convenience. And then there's one last one, user. This is a directory that contains code that's specific to one configuration. So, for example, the double gyre experiment has a double gyre wind which is specific to that configuration, and there will be some user code associated with that particular shape of wind forcing in that directory.
A
The idea here is: if you've got something of your own to add for a particular experiment, you would add it here, which is why we call it user rather than something else. Now, there are two other directories in MOM6 which are really important. One of them is, for the most part, something you don't normally realize is there, because it has a dot in front of it: .testing. This .testing directory contains the scripts and makefiles that are run by the Travis CI continuous integration system.
A
So if you've enabled it on your fork and you've pushed some code up to your GitHub fork, Travis CI will run these tests and you'll get a check mark or a red X, depending on whether you've passed all of our tests. These tests are incredibly powerful and we are expanding them. In fact, there's a big pull request coming soon from Marshall Ward that will expand them even further. One of the tests added most recently is a rotation test.
A
But before that, we added this dimensional testing capability. This is something that's really important, because it helps you know whether code you've added is passing tests, and it helps us because it means the code has at least been checked for sanity in that sense. It's also a directory that you can use for your own development. The experiments in here are not real experiments; they're tiny, tiny little tests. Some of them are unit tests, but they are something like eight by ten points.
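The dimensional testing just mentioned can be illustrated with a small sketch. The idea, as I understand it: rescale one unit (say, length) by a power of two in every input that carries that unit; a dimensionally consistent formula then produces answers that rescale exactly, with no roundoff, because a power-of-two factor only shifts the floating-point exponent. The formula below is an arbitrary example, not MOM6 code:

```python
# Sketch of dimensional-consistency testing by power-of-two unit rescaling.
import math

def wave_speed(g, depth):
    # Shallow-water gravity wave speed: c = sqrt(g * H)
    return math.sqrt(g * depth)

def scaling_is_exact(scale):
    g, depth = 9.8, 4000.0           # m s-2 and m
    c = wave_speed(g, depth)         # m s-1
    # Rescale the length unit: g and depth each carry one power of length,
    # and so does c, so c should pick up exactly one factor of `scale`.
    return wave_speed(g * scale, depth * scale) == c * scale

print(scaling_is_exact(2.0 ** 10))   # True: bit-for-bit under rescaling
```

A formula with a hidden dimensional bug (say, an unscaled hard-coded length) would fail this equality, which is what makes the test so sensitive.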
A
Perhaps the most important directory is the docs directory, and this is very much a work in progress. There is a Read the Docs site, which currently points to somewhat incomplete documentation. There is fuller documentation under development right now by Kate Hedstrom, which is actually the one I'm showing here. It's basically where we want to host all of the documentation for the model itself.
A
So, you know, the equations, how all the units of the model work, all of the APIs: that is what's going to be hosted on Read the Docs. NCAR has their own version of this already, and the idea is that in time this will become complete, so that we will be able to just point people here.
A
This is where you can get started. Okay, so hopefully that's enough pointers as to how to get started, and I've ended up at my last slide, just to say there is more to come.
A
You've already heard the Lagrangian remap discussion from Steve. Bob is going to talk about the equations and algorithms next, in a few weeks' time, and Marshall Ward is going to talk about some of the CI that I was alluding to. Then later on, we've got talks from Raf and Matthew on analysis and ocean data assimilation. Thank you very much.
C
Hi, this is Santa. Thanks. Just a quick question, Alistair, and, time permitting, if you could expand: could you please tell us about the MCT driver and the other drivers, like the dynamic and the dynamic symmetric, if we were to just bypass and use those?
A
That directory, the config directory. I could bring up the slides again, but I'll just assume you can remember it. The config_src directory has all these drivers in it, and a few other things. Dynamic and dynamic_symmetric are not drivers; they are simply memory models, and you have to choose between them. Perhaps I need a slide on this; that's what I should really do. There are two different ways of shaping arrays in MOM6. We refer to them as symmetric and non-symmetric.
A
They don't change anything from the point of view of the solutions; they're just ways of making sure that there is data on the outside edge of a domain, if you're working with open boundaries, for example. There is also a static memory mode, which requires you to know at compile time what size memory you want. For the most part people aren't using that, but it is available, and it gets you a few percent speedup, if you care about that sort of thing. Those are the memory models.
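The difference between the two memory models can be sketched in terms of array shapes. On a C-grid with ni by nj tracer cells, u-velocity points sit on cell faces in x: in the non-symmetric shape the velocity arrays match the tracer extent, so one outer edge carries no data, while the symmetric shape carries one extra point so both edges of the domain exist (which is what helps with open boundaries). This Python sketch is illustrative only; the exact index conventions are not MOM6's actual memory macros:

```python
# Illustrative shapes for the two memory models on an ni x nj tracer grid.

def u_shape(ni, nj, symmetric):
    """Shape of the u-velocity array (faces in x) for ni-by-nj tracer cells."""
    return (ni + 1, nj) if symmetric else (ni, nj)

def v_shape(ni, nj, symmetric):
    """Shape of the v-velocity array (faces in y) for ni-by-nj tracer cells."""
    return (ni, nj + 1) if symmetric else (ni, nj)

# The solutions are identical either way; only the declared extents differ.
print(u_shape(8, 10, symmetric=False))  # (8, 10)
print(u_shape(8, 10, symmetric=True))   # (9, 10)
```

The tracer (thickness) arrays are ni by nj in both modes; only the staggered velocity and vorticity-point arrays change extent.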
A
The drivers, MCT, NUOPC, the regular FMS coupler, and the solo driver, are the codes that call MOM6. The solo driver is obviously the one you need if you aren't using a coupler. MCT is an earlier NCAR coupler, and I think I should defer to somebody at NCAR to comment on whether that's even still working or checked. Anyone?
B
Okay, thanks, Johnny.
D
Thanks for sharing the examples; that's really helpful. I just have a really quick comment regarding what you mentioned at the very beginning on outreach. If you are trying to get a new grad student started on Unix, NCAR has a Unix tutorial that's part of the CESM starting pack, so I will post the link in the chat; I hope you will find it helpful and use it.
A
Thank you. I will say that one of the reasons we got interested in working with NCAR and CESM is that NCAR promised that CESM would actually reach out to that community, you know, to help you all out.