So, let's start by talking about some customer trends that we've been seeing. Modern application development is essentially any time you create an application using modern techniques and services. This could include creating applications in containers or serverless, in order to be more agile or to scale up and down. This is something we're seeing quite a lot from customers who are building net-new applications, and they could be using CI/CD to deliver applications faster.

Any time customers are using these techniques to make their applications do more, move faster, become more agile, or meet the needs of their customers, that's what we call modernizing applications or building new applications. However, it's not just building new applications: we are also seeing customers modernizing their existing applications.
What we're seeing is a lot of customers taking existing applications and modernizing them: taking something that might have been running on premises, on virtual machines on premises, or on EC2 instances on AWS for a while. There are a lot of opportunities to modernize these applications, containerize them, and launch them on container services.
And with this, you have much less of an operational burden. The key reason we've heard from customers on why they are modernizing their applications and building and deploying applications using containers is significantly reducing cost, because customers running applications on premises typically have to pay for far more infrastructure than they use, primarily for two reasons.
First, they often dedicate servers to single applications for security or other isolation reasons, with most of those servers ending up underutilized. Second, they typically have to provision compute and storage up front, based on some forecast, which usually means it sits unutilized for a long time.
So modernization really reduces cost and allows teams to build and iterate on their applications much faster, thereby reducing time to market, getting customer feedback more quickly, and making their teams more agile.
Many of the applications that are being modernized, or that are net new, already rely on persistent storage as part of what they do. For example, applications may already be sharing data, or they require data to be shared between multiple resources or instances.
For example, a persistent storage layer might be exactly how they share data with other applications. So as you modernize applications or build net-new applications, you often want to modernize your storage as well, and that's where Amazon EFS, or Amazon Elastic File System, comes into play, and we'll talk about that. What we've really heard from builders and developers is that EFS has become the tool of choice for persistent storage in modern application development.
When we talk to customers about moving to and growing in the cloud with EFS, one thing that really excites them is how EFS helps minimize risk along the way. Rather than getting bogged down having to manage their own storage infrastructure, they can simplify operations, reduce cost, and so on.
It's elastic: it automatically scales up and down as you add or remove files, and you only pay for what you use. Your performance scales automatically with your capacity, and it can go up to hundreds of thousands of IOPS. It's highly available and designed to be highly durable: we offer a four-nines availability SLA, and EFS is designed for eleven nines of data durability. To achieve these levels of durability, we redundantly store data across multiple Availability Zones.
EFS is completely serverless: you don't need to provision or manage any underlying infrastructure or capacity from a storage perspective. As your workload scales up, so does your file system, automatically accommodating any additional storage or connection capacity that you need.
From a performance perspective, like I said, EFS provides very low, consistent latency, typically single-digit milliseconds. So let's talk about what application modernization really means from a containers perspective. Many customers are migrating existing applications into containers to simplify operations, lower costs, and scale more elastically.
The way you can think about application modernization with containers and EFS is that if you're migrating an application, you can have the same EFS file system backing that application irrespective of where it is or where it might eventually move to. So, irrespective of location, you can really rely on that underlying file system running on Amazon EFS.
So let's talk about what sorts of applications and use cases we're seeing customers use EFS and containers for. Typically, as customers containerize their applications, they often find that the applications require persistent storage. This is because applications are long-running and need to persist state for high availability, or because an application wants to scale out around a shared data set. For example, developer tools like Jira and content management systems like WordPress and Drupal use persistent storage to achieve high availability through an active-standby model, often across AWS Availability Zones. With machine learning, customers use EFS to store shared data sets and data scientists' home directories.
These are where your data scientists use home directories to store output files, scratch data, and so on, allowing them to train models in parallel across multiple containers and to access data from individual data science notebook containers. Summarizing the key applications that we see: an interesting one we've seen customers use EFS for in the Kubernetes world is Kubeflow, and there are several examples as well; I'll give you links where you can use Kubeflow with Kubernetes and EFS.
What you're really seeing is three key things: web serving and content management, using WordPress and Drupal, and learning management systems such as Moodle; data science and analytics; and DevOps, where your EFS file system is a single, consistent source of truth for various binary files, config files, and so on, for applications such as Jenkins, Jira, and Git, which is a very common use case.
So let's talk about some real customers who are actually using Kubernetes with EFS. The first is T-Mobile. T-Mobile's existing infrastructure really wasn't able to scale to meet peak user demand without over-provisioning storage ahead of time, so they decided to modernize their entire application stack to run on Kubernetes and Mesos, and they decided to use EFS as the persistent backing storage.
With this, they have been able to meet peak user demand and dynamically scale without any additional storage management overhead, because EFS automatically scales to the storage requirements they need. In terms of scale, they have 16,000 containers under management, they've significantly reduced their storage costs by using EFS, and finally, they've also been able to reduce their time to deploy.
Another example is Johnson & Johnson, which uses EFS as a backing store for their genomics, neuroscience R&D, and drug discovery analytics applications. Given the amount of data they have, which runs into several hundred terabytes and growing, they really needed a file system that would scale but was equally performant, and at the same time they didn't want to manage the underlying storage, so that they could focus on delivering value.
They run their data science platform on an Amazon Elastic Kubernetes Service, or EKS, cluster, and they use EFS as the backing store. By doing this, they've reduced their analytics time by 35 percent and reduced costs by at least 37 percent.
Before getting into how you can get started with EFS and Kubernetes, I'll talk about identity, and how you can bridge identity between Kubernetes pods and storage. From a goals perspective, what we like to say is that there are two goals you're looking for.
The first is that file systems should only be mountable by the applications that need them; you don't want just any application mounting a file system. At the same time, the applications that mount a file system should only have access to the data that they need. You don't want applications that share a common storage data source to be sharing data across those applications, because some of those applications might be more important than others, and so on.
If you keep these two goals in mind, let's talk about container identity. When managing application identities with EFS, such as when sharing a file system among multiple users, with containers the identity (and by identity I mean the user ID and group ID) is typically decided at build time and built into the container image.
This means it isn't uncommon for applications to run as root, which means they'd be doing all file system operations as root, or to be running as whatever user or group made sense to the developer who built the image, which may not make sense to the file system. For example, if somebody built an nginx container at some other company and you're just launching it, you may really not know what's going to come up. In many of these cases, when the container eventually spawns, it comes up as root for convenience, because that's how it was created, and root users can read and write whatever they want, which is not what you want. So typically what happens is that applications just mount the file system and tell it what user ID or group ID they are. In many cases the file system simply trusts the user ID and group ID coming from the application, which in many cases is valid.
For example, if you're running a virtual machine or an EC2 instance on Linux, somebody has to log into it with credentials in order to do anything, and so you can trust the group ID and the user ID.
But in the case of a container, the ID that the application runs as is often determined at build time. That's why we built a very specific feature called Amazon EFS Access Points. Access points give your application an entry point into the file system where all operations are rewritten to the user ID and group ID defined in the access point, so you know that it makes sense in the context of the data.
Say, user 1000 and group 1000. And even better, access points can root your application in a specific directory, so your applications don't need to worry too much about what directory to cd into.
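As a concrete sketch of what such an access point might look like (shown here as a CloudFormation fragment; the file system ID, POSIX IDs, and path are all placeholders):

```yaml
Resources:
  AppAccessPoint:
    Type: AWS::EFS::AccessPoint
    Properties:
      FileSystemId: fs-12345678   # placeholder file system ID
      PosixUser:                  # every file operation is rewritten to this identity
        Uid: "1000"
        Gid: "1000"
      RootDirectory:
        Path: /apps/my-app        # the application sees this directory as its root
        CreationInfo:             # create the directory with these owners if absent
          OwnerUid: "1000"
          OwnerGid: "1000"
          Permissions: "0755"
```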
So let's talk about how access points actually work. In this example, you have deployed a container with a task role called app-role, and in your file system resource policy you have stated that this role is only allowed to mount your file system via a specific access point, and that's the access point that is used to mount. Now, this application is built to run as root, so ordinarily all operations would be interpreted as root in the file system, which isn't always the best idea.
However, because you're using an access point, that access point has placed the app in a home directory called /apps/my-app, and it is rewriting all the file operations to the user and group defined in the access point (user and group 123 in this example), so you can be sure it has exactly the access to the data that it needs: no more and no less.
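As an illustration of such a file system resource policy (the account ID and ARNs below are placeholders), the statement restricting mounts to that role via a specific access point could look roughly like this:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowMountOnlyViaAccessPoint",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::111122223333:role/app-role"
      },
      "Action": [
        "elasticfilesystem:ClientMount",
        "elasticfilesystem:ClientWrite"
      ],
      "Condition": {
        "StringEquals": {
          "elasticfilesystem:AccessPointArn": "arn:aws:elasticfilesystem:us-east-1:111122223333:access-point/fsap-0123456789abcdef0"
        }
      }
    }
  ]
}
```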
So, to summarize, the key use cases for access points: first, they solve the container identity problem for sharing EFS data; and second, they really allow you to pack multiple applications into a single file system. What that means is you have just one single file system.
You have multiple applications accessing it, which reduces the overhead and the additional resources you need to manage, but at the same time gives you the security of knowing that each application is only accessing the data that it needs, which ties back to the security goals we started with.
So, best practices. I want to summarize suggestions on how you should configure your environment. First, use access points, even if you have only one app per file system; this takes the guesswork out of which user or group the app will run as and helps you avoid file system permission setup issues. Also enable encryption; it's a single click and comes without any performance penalty.
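For example, encryption at rest is a single property when creating the file system; a minimal CloudFormation sketch (resource name and performance mode are illustrative):

```yaml
Resources:
  AppFileSystem:
    Type: AWS::EFS::FileSystem
    Properties:
      Encrypted: true               # encryption at rest; no performance penalty
      PerformanceMode: generalPurpose
```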
So now let's get into Amazon EKS, the Elastic Kubernetes Service, and EFS. When you look at the overall benefits of Kubernetes and EKS with EFS, there are a few. The first is that it's really simple to use: the EFS configuration is done through Kubernetes-native objects and the EFS CSI driver, so you focus on the application and not on managing the underlying infrastructure. The next thing is elasticity.
That should be no surprise, given that EKS and EFS both have "elastic" in the name: as your application needs to scale out, this combination of services instantly provides additional compute and storage capacity. This means you only pay for what you use; you don't have to forecast usage, over-provision, or slow down when you run out of capacity. From an availability and durability perspective, both EKS and EFS are regional services that run across multiple Availability Zones, so you can build cross-AZ architectures: pods can be scheduled across multiple AZs and share data as if it were local to each of them.
Okay. Before we get into how to get started, let's summarize some of the concepts in play. When you're using Kubernetes, there are a lot of Kubernetes-specific concepts, starting with what you do when you first set up your Kubernetes cluster and want to set up storage.
There are a couple of objects that a storage administrator, or someone acting in that role, has to create. First, there is the storage class, which is the thing that developers see and are able to request. Examples of storage classes could be high-performance block storage, high-performance file storage, or cost-optimized file storage.
There's a CSI driver for EFS, which is what we're talking about today, as well as drivers for EBS and FSx. The whole point of the CSI driver is that once you have the driver installed, you have a consistent way of allocating and managing storage, no matter what service is providing it. After that, you have storage classes and persistent volumes, which are what Kubernetes administrators provision to provide storage to their users. So first you create a storage class to describe the particular type of storage.
A
So,
within
your
storage
classes,
the
administrator
provisions,
persistent
volumes,
which
map
to
actual
units
of
storage
on
a
storage
service
like
efs,
so
either
a
whole
file
system
or
a
subsidiary
of
file
system,
and
so
next,
when
a
developer
wants
to
deploy
an
application
that
needs
storage,
they
create
a
persistent
volume
claim
against
the
storage
class
and
are
allocated
a
persistent
volume,
and
so
the
user
facing
object
is
called
a
pvc
or
a
persistent
volume
claim,
since
the
user
doesn't
have
or
need
visible
visibility
to
how
exactly
the
request
was
satisfied.
With that said, let's talk about how you can attach EFS to an EKS pod using dynamic provisioning. So what's dynamic provisioning? Before that, I'd also like to point out that the EFS CSI driver GitHub repo has several examples of how to configure storage classes, persistent volumes, persistent volume claims, and pods to use EFS. I'd encourage you to go there for reference, but we'll walk through a simple example here.
When we originally released the CSI driver, we supported a mode called static provisioning, which meant that administrators would first create a storage class, then create persistent volumes backed by EFS file systems, and by creating those they'd build a pool of available storage volumes. Then, when a developer comes along and takes out a persistent volume claim, they get associated with one of those volumes.
That's where dynamic provisioning differs. With dynamic provisioning, you create a storage class that points to an EFS file system, and when users create persistent volume claims, or PVCs, instead of being allocated something you already had to create, the driver will go to the file system, create an access point in that file system, map that to a dynamically created persistent volume, and then give it to the user.
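A dynamic-provisioning storage class for the EFS CSI driver might look roughly like this (the file system ID and parameter values are placeholders, following the driver's documented examples):

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: efs-sc
provisioner: efs.csi.aws.com
parameters:
  provisioningMode: efs-ap          # driver creates one EFS access point per volume
  fileSystemId: fs-12345678         # placeholder: your file system ID
  directoryPerms: "700"             # permissions for the per-volume root directory
  basePath: "/dynamic_provisioning" # optional: where per-volume directories are created
```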
And so here we'll start by configuring a storage class. You'll see that there really isn't much we need to configure: it just tells Kubernetes that it's for EFS and whether you want TLS encryption to be used for mounts. Next, a persistent volume is created linking to that storage class, and this is where you specify the details of which file system it is and what mount path to use: you provide the file system ID and the mount path.
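A minimal sketch of that pair of objects (the file system ID is a placeholder):

```yaml
# The storage class just identifies the EFS CSI driver as the provisioner.
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: efs-sc
provisioner: efs.csi.aws.com
---
# The persistent volume names the actual file system and enables TLS for mounts.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: efs-pv
spec:
  capacity:
    storage: 5Gi            # required by Kubernetes, not enforced by EFS
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: efs-sc
  mountOptions:
    - tls                   # encrypt NFS traffic in transit
  csi:
    driver: efs.csi.aws.com
    volumeHandle: fs-12345678   # placeholder file system ID
```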
So what does this look like for a user or a developer? Next, the developer comes along and creates a persistent volume claim against the storage class. You'll see that they request a specific amount of storage, which matters for storage systems that provision an exact amount of storage per volume; with EFS, that really doesn't matter because, as I mentioned previously, EFS is completely elastic and automatically scales up and down as you need it.
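A claim against that storage class might look like this (the requested size is a required field but, as noted, not meaningful for EFS):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: efs-claim
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: efs-sc
  resources:
    requests:
      storage: 5Gi   # required field, but effectively ignored for EFS
```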
The requested size is pretty much meaningless from an EFS file system perspective. Lastly, they launch their pod referencing their volume claim and specify where to mount it into their container. That's it; it's as simple as that.
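A minimal pod sketch referencing the claim (the image, command, and mount path are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: efs-app
spec:
  containers:
    - name: app
      image: busybox
      command: ["sh", "-c", "while true; do date >> /data/out.txt; sleep 5; done"]
      volumeMounts:
        - name: persistent-storage
          mountPath: /data      # where the shared file system appears in the container
  volumes:
    - name: persistent-storage
      persistentVolumeClaim:
        claimName: efs-claim
```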
Hopefully this gave you a good sense of how you can get started with EFS and EKS, so I'll end by talking about how you can get started. Here's the link to the EFS CSI driver GitHub repo; as I mentioned, there are a lot of examples there on how to get started, so I really recommend that. We also have a couple of examples of using Kubeflow with EFS: there are some you can use with static provisioning.
There are also some that use dynamic provisioning, since we launched that earlier this year. So there are various ways of doing it, but this should give you a good sense of how to get started. That's all I had; I'm hoping to hear more about how you all start building on Kubernetes and EFS. Thank you.