A: So I'd like to bring up Chuck Ward, who will be talking to us from the Air Force Research Laboratory. He is the lead for integrated computational materials science and engineering, directing efforts in integrated computational materials science and engineering for the Air Force. He is a co-chair of the Materials Genome Initiative Subcommittee under the National Science and Technology Council, an adjunct faculty member in materials science and engineering at the University of Dayton, and the editor of the TMS journal Integrating Materials and Manufacturing Innovation.
His professional career spans 29 years, serving in several roles in research, engineering, and management. His research has focused on microstructure-property relationships in titanium and titanium aluminide alloys. He has served as a manager of the Air Force basic research program in metals, then as an engineer on the F-35 propulsion program, as a staff officer to the Assistant Secretary of the Air Force for Acquisition, and as the Air Force liaison for materials research and development in Europe.
B: Thank you very much, Donna. So what I want to talk to you about a little bit, right before lunch here: I always love that saying, "I'm standing between you and lunch." That's usually the longest talk of the morning, so I'll try not to do that to you. I'll talk a little bit about a national materials data infrastructure. I'm going to focus on things that the federal government has been investing in for the past few years, and first I'll start by talking a little bit about policy drivers.
So there was a memo from OSTP back in 2013, from the President's science advisor, that really pushed toward making the products of federal research freely available to the public. One objective was to get the products of research out from behind paywalls and make them available to all taxpayers, something that has been going on very actively in the EU for several years as well. The other part of that, though, the interesting part, was that digitally formatted scientific data sets that support that research should also be made freely and publicly available.
So that's kind of a new twist, and it's driving a lot of behavior and a lot of policy in the agencies. You'll probably all be familiar with it: that's what has driven all these data management plan requirements in proposals coming out of NSF and DOE, and soon to be DoD. It's all in response to the OSTP memo, about how to make your supporting scientific data publicly accessible. So that's driving an awful lot of investment as well. And then there's the Materials Genome Initiative community.
There are a number of federal agencies involved in the Materials Genome Initiative. You can see the blue bubbles here: DMREF, ICMSE, CMD, ICME. These are all the different brand names, as it were, that the different agencies have chosen to describe their Materials Genome Initiative efforts, so you won't see an MGI program per se, but you will see, for example, a DMREF program out of NSF. We established four strategic goals that we want to achieve through MGI.
Down in the corner here you can see one of them: making digital data accessible to the community. This is a fundamental precept of MGI. We really want to promote this, and we feel it is important for enabling the discovery, development, and transition of materials to the industrial base for competitiveness. So what are some of the requirements for a materials data infrastructure?
First of all, "federated," in quotes. We're never going to see one giant, huge database that satisfies everyone's needs, but we do need to understand how repositories interoperate with each other and how we discover data; I'll talk more about that a little bit later. This goes across government, academia, and industry, with the key ideas being that the data need to be accessible and the repositories need to be affordable and sustainable.
We need standards for data exchange. Here I'm talking particularly about open formats: not necessarily common formats, but open formats that you can understand and build a translator for. Vocabularies: when we say Young's modulus, do we know what we mean by Young's modulus? Do we know how that Young's modulus was determined? Was it from a stress-strain curve, a vibrating reed, ultrasonics? That carries a lot of import for different folks, particularly in the engineering world, on how that value was acquired. And then there is a newer concept for materials science and engineering: ontologies.
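A minimal sketch of what a method-aware vocabulary entry might look like. The field names and the `MEASUREMENT_METHODS` list here are illustrative assumptions for this sketch, not any community-standard vocabulary:

```python
from dataclasses import dataclass

# Illustrative controlled list of how a modulus value can be determined
# (an assumption for this sketch, not a standardized vocabulary).
MEASUREMENT_METHODS = {"stress-strain curve", "vibrating reed", "ultrasonic"}

@dataclass(frozen=True)
class PropertyRecord:
    """A property value that carries its own pedigree."""
    property_name: str   # e.g. "Young's modulus"
    value: float
    units: str
    method: str          # how the value was determined

    def __post_init__(self):
        # Reject values whose provenance we cannot describe.
        if self.method not in MEASUREMENT_METHODS:
            raise ValueError(f"unrecognized method: {self.method!r}")

# Two records with the same nominal property are no longer ambiguous:
e1 = PropertyRecord("Young's modulus", 110.0, "GPa", "stress-strain curve")
e2 = PropertyRecord("Young's modulus", 114.0, "GPa", "ultrasonic")
```

The point of the sketch is that the measurement method travels with the number, so a downstream engineer can decide whether the two values are comparable.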
How do you start relating concepts together to make them much more machine accessible and readable, and therefore much more useful? We need to be concerned about data quality metrics: the pedigree, the provenance, verification, validation, uncertainty quantification. Incentives associated with making data available; Bryce talked quite a bit about that. Citation and attribution protocols, and clarity on intellectual property. Right now we see a lot of people just throwing up any words they want on how you can use the data.
So it's all over the map, and it's very difficult to reuse data when you don't have consistent application of common IP terms, for example the MIT license or Creative Commons, and of any potential liability that goes along with that. And last, but certainly not least, knowledgeable practitioners in materials data management. I know many of you in the audience, and we keep seeing each other, but we need to broaden that audience quite a bit. There's a concept that really came out of bioinformatics and is really being pushed by the EU.
It's a nice way to frame the discussion about data and the data infrastructure, and they call it the FAIR principles: how do you make data findable, accessible, interoperable, and reusable? If you look at those four attributes of data, they make a nice set of guideposts to follow. There's a nice paper in Scientific Data, published earlier this year, that walks through what it means to follow the FAIR principles. Let me say a bit about some of the government-funded, project-based repositories.
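As a rough illustration of the FAIR framing, here is a sketch that checks a dataset's metadata record for one proxy attribute per letter. The record fields and the pass/fail proxies are hypothetical; real FAIR assessments are far more detailed:

```python
def fair_report(record: dict) -> dict:
    """Return a crude pass/fail for each FAIR attribute.

    Proxies used (assumptions for this sketch only):
      Findable      -> has a persistent identifier
      Accessible    -> has a retrieval URL
      Interoperable -> uses an open, documented format
      Reusable      -> carries an explicit license
    """
    open_formats = {"csv", "json", "hdf5", "tiff"}
    return {
        "findable": bool(record.get("persistent_id")),
        "accessible": bool(record.get("url")),
        "interoperable": record.get("format", "").lower() in open_formats,
        "reusable": bool(record.get("license")),
    }

dataset = {
    "persistent_id": "doi:10.0000/example",   # hypothetical identifier
    "url": "https://repo.example.org/ds/42",  # hypothetical location
    "format": "CSV",
    "license": "CC-BY-4.0",
}
report = fair_report(dataset)
```

A record missing any of those fields fails the corresponding check, which is the spirit of the guideposts: you can ask the four questions of any repository entry.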
One example would be under the DOE Energy Efficiency and Renewable Energy projects geared towards automotive applications. As part of their project solicitations, they have actually required that these projects (these folks are working on magnesium alloys) deposit their data into an infrastructure, something that has a lot more longevity, is a lot more centralized, and has a lot of the attributes you would like to see in a data repository, like assigning persistent identifiers, which helps make the data much more discoverable.
So this is an example of how projects could potentially be managed in the future: more centralized databases, not project-specific ones. We've heard a lot about the large project-based repositories. These are, in my opinion, some of the gold-standard repositories because of their size and, importantly, their interfaces. I think each of these, except perhaps Harvard, has an API associated with it, so they publish how to interact with the database on a machine basis.
So I'm not left trying to find, through a lookup menu, the band gap energy for one given material; no, I can query the entire database to look for ranges of band gaps, for example. These repositories do also suffer from the problem of being programmatically funded. What's the long-term sustainability of these projects? They're excellent repositories, but what's their long-term future? We as a community, and the government, need to come to grips with what that sustainment model looks like. There are also a couple of agency data gateways.
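The difference between a lookup menu and a machine-queryable API can be sketched with a toy in-memory database. The entries and the query function below are invented for illustration and are not drawn from any real repository's API:

```python
# Toy stand-in for a repository of computed materials properties.
# Band gap values (eV) are textbook figures for a few familiar materials.
BAND_GAPS_EV = {
    "Si": 1.12,
    "GaAs": 1.42,
    "GaN": 3.40,
    "Cu": 0.0,   # metal: no gap
}

def lookup(formula: str) -> float:
    """Menu-style access: one material at a time, by name."""
    return BAND_GAPS_EV[formula]

def query_band_gap_range(lo_ev: float, hi_ev: float) -> list:
    """API-style access: sweep the whole database for a range of band gaps."""
    return sorted(f for f, gap in BAND_GAPS_EV.items() if lo_ev <= gap <= hi_ev)

# Screening question a menu can't answer in one shot:
semiconductors = query_band_gap_range(1.0, 2.0)
```

The second function is the kind of machine-actionable question a published API makes possible; a web form with one material per page does not.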
B
Do
e,
has
a
data
Explorer,
it's
pretty
rudimentary
again,
not
machine
actionable
and
then
nASA
has
one
called
mapped
asst,
which
is
very
extensive,
but
it's
highly
restricted.
Lata
export
control
data
in
it,
so
government
agencies
other
than
NIST,
haven't
been
very
proactive
on
standing
up
data
repositories.
B
Nist
has
been
very
good
about
standing
up
a
repository
based
on
D
space
that
provides
folks.
The
ability
to
load
in
specific
data
sets
along
our
number
of
different
themes.
For
example,
thermodynamic
data
assigns
the
data
set
with
a
persistent
identifiers
which
makes
it
uniquely
identifiable
and
and
discoverable
on
the
internet,
which
is
very
nice
feature,
but
there
is
no
api,
for
example,
to
interface
with
this.
With
this
repository.
It has just been up and running for the past few months, so it doesn't have a whole lot of data in there yet, but it's another open, free, and accessible resource for the materials community to repose data in and access it at a later time. And then you also heard about some of the activities ASM has been engaged in; as a professional society it has been in the materials data business for quite a long time, working with Granta and now Citrine Informatics. They had a structural data demonstration project over the past year where they tried to collect everything we know about alloy 6061 and make it available to everyone.
So that is out there to be used, and that's an example of a professional society playing a role as well. Let me switch gears to some of the enabling features, maybe first with a thought on standards. We often hear a lot in the community about standards, so maybe just a cautionary word that perhaps we shouldn't get too wrapped up in that.
B
For
example,
some
European
projects
that
have
gone
on
for
years
and
years
and
years
tried
to
define
a
standard
for
mechanical
property
data,
for
example,
and
it's
a
long
and
drawn-out
process
and
I'm
not
sure
it's.
If
it's
not
a
red
herring
in
the
conversation,
probably
more
useful
in
the
in
the
short
term
are
things
that
NIST
is
doing,
for
example,
materials
resource
registry,
which
is
writing
an
interface
with
some
very
rudimentary
interface
standards
on
people,
exposing
their
specific
resources
through
a
central
interface.
B
But
it
will
start
giving
you
some
very
basic
search
capability
on
hey
I
know
what
there's
a
database
out
there
that
contains
this
type
of
data
may
not
exactly
give
you
a
specific
pointer
to
a
specific
data
set
though,
but
it
leases
to
start
other
things.
The
community
starts
needs
to
think
about.
Are
the
discoverability
of
a
search
we've
been
doing
a
little
bit
of
work
on
exploring
semantics
and
an
ontology
development
and
developing
tools
that
try
to
link
data
and
make
it
much
more
searchable
and
starting
to
connect
concepts?
B
So
you
can
extract
not
just
quantitative
information
and
data
but
start
gathering
a
little
bit
more
information,
even
perhaps
knowledge
from
from
literature,
for
example.
So
this
is
a
tool
that
a
company
that
normally
deals
in
the
intelligence
community
is
now
started.
Working
in
materials
and
developed
a
tool
called
map
anto
it's
going
to
be
publicly
available
and
again
it'll
be
able
to
be
downloaded
installed
and
people
can
connect
and
start
interfacing.
That
way,
I
think
a
key
element
that
we
haven't
talked
a
lot
about
this
morning.
B
I
know
there's
some
believers
in
the
audience
here,
but
is
collaborative
networks
and
collaborative
environments.
We
talked
a
lot
about
getting
material
data
out
to
a
repository,
but
first
it
has
to
start
locally
right
and
then
perhaps
regionally,
and
then
you
move
up
to
a
repository
so
but
you've
got
to
make
that
a
seamless
process,
a
natural
process
at
the
local
level.
A
lot
of
these
you
see
six
of
these
collaborative
elements,
one
of
them
here
at
Georgia,
Tech,
with
matin,
with
Syria
standing
up
and
Dave,
are
trying
to
get
the
arms
around.
B
How
do
you
handle
the
day-to-day
business
materials,
research
and
collected
data?
So
it's
not
an
effort
when
you
go
to
publish
it,
it's
it's
lowering
that
barrier
and
it
is
providing
you
a
local
repository
that
you
can
reuse
and
keep
track
of
your
own
data
ears
on
out
so
that
a
year
or
two
from
now
you
can
reconstruct
the
research
process
for
yourself.
B
So
you
know
that
that
tensile
curve
went
with
this
sem
image
that
went
with
that
chemistry,
evaluation
and
they're,
not
all
sitting
in
different
shared
folders
or
on
a
hard
drive
or
on
a
DVD
or
your
graduate
students
laptop
who
just
left
last
year.
So
it's
it's
building
that
institutional,
local
knowledge
repository.
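Even a very simple local manifest goes a long way toward that goal. The sketch below (the specimen ID, field names, and paths are invented for illustration) groups every file belonging to one specimen at collection time, so the linkage survives personnel turnover:

```python
import json

# One entry per specimen: every artifact that belongs together, recorded
# when the data is collected rather than reconstructed from memory later.
manifest = {
    "specimen": "Ti-6Al-4V_heat42_bar3",   # hypothetical sample ID
    "artifacts": [
        {"kind": "tensile_curve", "path": "raw/tensile/bar3.csv"},
        {"kind": "sem_image", "path": "raw/sem/bar3_500x.tif"},
        {"kind": "chemistry", "path": "raw/chem/heat42_icp.json"},
    ],
}

def files_for(record: dict, kind: str) -> list:
    """Find every file of a given kind for this specimen."""
    return [a["path"] for a in record["artifacts"] if a["kind"] == kind]

# Serialize as plain JSON: an open format still readable in ten years.
serialized = json.dumps(manifest, indent=2)
```

With something this small kept next to the data, "which SEM image goes with which tensile curve" becomes a query instead of an archaeology project.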
B
I
had
a
group
heard
a
great
talk,
the
other
year
from
a
young
faculty
member
who
is
looking
at
to
get
stage
to
work,
hardening
rates
on
single-crystal
materials
and
so
well
all
this
all
these
tests
have
been
done
probably
decades
ago,
but
of
course,
people
only
publish
the
reduced
data,
perhaps
critical
resolved
shear
stress
as
a
function
of
orientation
and
temperature.
It
didn't
publish
the
whole
curve.
The
data
was
generated.
B
It
was
there
to
be
reanalyzed,
but
we,
as
material
scientists,
only
generally
tend
to
focus
in
on
one
part
of
a
curve
where
there's
a
lot
more
information
and
knowledge
to
be
gained
from
from
even
a
simple
tensile
test,
let
alone
high-energy
diffraction
microscopy.
So
there
are
a
series
of
journals.
Elsevier has something called Data in Brief. These are generally much shorter articles, intended to complement other journal articles: an expanded experimental or methods section, if you will, for a normal journal article. And then Integrating Materials and Manufacturing Innovation has also started a data descriptor article, similar to Scientific Data but specifically geared towards materials science and engineering, where we're looking for unique data sets that try to tie together processing-structure-property-performance relationships.
B
Just
a
note
about
this
one
that
I've
got
up
here
from
from
dave
macdonald
at
all.
This
is
the
first
date,
a
descriptor
article
that
was
published
and
the
nist
NSF
and
anna
ferrell
random
material
science
and
engineering
data
challenge.
Last
year,
where
we
challenged
the
community
to
use
publicly
available
data
to
solve
a
problem,
whether
it's
an
engineering
problem
or
a
scientific
problem,
but
it
was
to
demonstrate
the
value
in
reuse
of
material
science
and
engineering
data.
B
The
winner
of
the
challenge
used
this
article
actually
in
the
data
behind
this
article,
and
they
were
the
grand-prize
winner
and
the
winner
happened
to
be
Khaled
India
at
all,
also
here
at
Georgia
Tech,
but
it's
different
from
the
original
authors,
which
was
fantastic,
but
it
showed
the
value
that
these
articles
and
publishing
the
raw
data
itself
can
have
so
some
challenges,
specific,
the
material
science
and
engineering.
We're
really
saddled
with
no
single
government
agency
has
the
lead
for
materials
manufacturing.
B
If
you
look
at
Earth,
Sciences
is
generally
NASA
and
NOAA,
of
course,
the
big
dog
on
the
block
of
National
Institutes
of
Health.
So
they
can
do
tremendous
things
in
bio
and
biomedical
bioinformatics,
but
materials
are
spread
across
literally
just
about
every
agency
of
the
government.
So
no
one
has
the
lead
so
getting
someone
to
really
own
the
problem
and
take
the
lead.
It's
very
difficult.
B
We've
already
heard
today
a
lack
of
data
competency,
a
data
science
competency
in
the
field.
So
that's
that's
sorely
needed.
We
do
need
open
data
formats,
not
necessarily
common,
but
open,
do
I
understand
we
deal
with
JPEG
TIFF
TIFF.
Now
those
are
all
standard
open
formats.
We
understand
how
to
deal
with
them,
so
it's
really
a
matter
of.
Are
they
open?
Can
we
understand
them
that
not
only
goes
for
what
we
publish,
but
what
we're
finding
in
our
laboratory
with
something
like
750
pieces
of
experimental
equipment
is
trying
to
extract
the
data
off
the
Express.
B
Our
mental
equipment
into
a
painless
data.
Workflow
is
extraordinarily
difficult,
you're
dealing
with
unique
manufacturer
formats.
It
may
not
be
proprietary,
but
their
unique,
FBI
and
jeol,
for
example,
on
Sen
may
say:
kv
versus
KETV.
It's
simple
little
things
like
that.
But then there are the proprietary data formats, and those are a real pain to overcome if you're trying to develop a painless data workflow to get things out to publishing and into a repository. And then there are the collaborative platforms: can I keep track of it all?
B
Can
I
can
I
keep
track
of
the
different
files
so
that
I
can
create
a
provenance
for
the
data
set?
That's
where
the
collaborative
program
platforms,
I,
think
really
add
the
value.
Is
you
can
start
connecting
the
data
files
off
an
FCM
with
electrical
conductivity
measurement
and
now
I
now
I
established
a
profit
on
so
that
entire
data
set
Domain vocabularies, schemata, ontologies: NIST is doing a lot of work in this area, but can we come to common terms?
So when we have open data formats and someone says "yield strength," we can point to something and say: yep, that's the definition of yield strength I'm using. That helps make machine interpretation of the data much more viable. Data management how-to resources: we've got a dearth of those in materials science, and I'll talk about that in a little bit. But how do we get the word out to the community on how to do this?
B
How
do
we
sustain
the
repositories
and,
as
we've
heard
earlier,
community
buy-in,
so
a
number
of
other
communities
is
taking
this
on
with
some
discipline
based
best
practices?
You
see
the
social
sciences
here
out
of
the
University
of
Michigan
policy
guide
on
data
preparation
on
archiving.
We
don't
have
anything
like
that:
material
science
and
engineering
NASA's
done
the
same
thing.
The
earth
sciences
folks
have
been
tremendous
at
this
people
publish
different
formats
protocols
on
how
to
do
and
describe
data.
We
as
a
community
need
to
be
much
more
accepting.
B
If
you
will,
we've
got
to
get
out
of
our
science
based
only
publication
model
to
how
to
best
practices
as
other
communities
who
have
done,
and
then
at
the
next
scale
up
is
our
the
discipline
based
resources
that
are
out
on
the
web
for
other
communities.
There's
the
earth
sciences.
Federation
has
got
a
wonderful
platform
that
walks
through.
How
do
you
do
data
management,
nerve
sciences?
What
are
the
best
practices,
one
of
the
formats
USGS
as
a
government
agency,
that
this
is
a
gold
standard?
What's the best way to handle geospatial data? We have nothing like this in materials science and engineering right now, and we want to get to a point where we can just point people somewhere and say: if you want to know how to do this, go here. Not just figure it out on your own, you know, as a pickup game, like we do with a lot of other stuff; this is the best practice, and that's how we're going to get more toward standardization.
B
So,
in
summary,
we
are
creating
a
materials
data
infrastructure
over
the
past
five
years
since
materials
genome
initiative.
I
really
like
Bryce's
chart
that
showed
the
the
materials
informatics
taking
off
about
five
years
ago
when
we
started
this
five
years
ago.
The
question
of
those
why
why
materials
data?
Why
are
we
sharing
this?
How
what's
the
value
in
it
and
with
with
forums
like
this
we're
moving
towards?
Why
wow?
How
are
we
going
to
do
this?
What are the protocols, what are the formats available out there that folks can use? And then, finally, community-wide acceptance of the tenets of data stewardship; I think that's building. We're seeing more and more articles out there that are reusing data and applying informatics, showing the community what can be done and the value of data. So with that I'll conclude, and thank you very much.