From YouTube: Why You Need Data Protection for K8s
A
Hi everyone, welcome to today's session. It's called "Why do you need data protection for K8s?", that is, for containerized applications within Kubernetes. This session is not a product pitch. This is not going to be about how our solution is better than others, however you want to phrase it. What we're going to talk about today is really focused on you as a user, whether you're in the DevOps organization, or whether you are an IT manager, a VP of IT, or a director of IT who's been asked to look after and understand containerized applications within their organization.
So what we're going to do is look at best practices and move from there into a thought-leadership presentation around this. Before we move on, I'd like to introduce myself and my co-presenter. My name is Andy Fernandez; I'm a senior product marketing manager here at Zerto. I am joined by Egon van Dongen. He is a solutions architect, and he is here to help us really understand exactly what we should be looking for in a Kubernetes data protection solution.
What happens when I can't get back up and operational after running this? Then, after we have an understanding of the scenarios, and you can understand whether your organization requires this or not, we'll take a look at the key characteristics, or the key design elements, that your organization needs to look for when you're evaluating backup, disaster recovery, and data protection for your containerized applications.
So with that in mind, in order to really understand the evolution of persistency adoption, we have to understand persistent data within Kubernetes, and especially where the storage comes from. So let's talk about storage for persistent data in Kubernetes.
Well, Docker takes a simple approach of mapping a node path to a container path, so that a container can write to a portion of that volume.
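A minimal sketch of that Docker-style node-path-to-container-path mapping, as a Compose fragment (the image and paths here are illustrative, not from the session):

```yaml
# Bind-mount a node (host) path into the container: whatever the
# container writes under the mount point lands on the node's filesystem
# and outlives the container itself.
services:
  web:
    image: nginx:alpine
    volumes:
      # host path : container path
      - /srv/site-content:/usr/share/nginx/html
```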
Kubernetes, however, takes a much more elegant approach, in that we have the concept of persistent volumes, as well as persistent volume claims.
A claim is very similar to something like a coat-check claim, in that as long as you have the right ID, you can access the volume. So what Kubernetes will do is actually map whatever storage is available underneath to the various pods that have requested storage, and it does this via CSI, the Kubernetes storage drivers that are available from the various providers.
This could be from physical, from virtual, or from your favorite cloud provider as well. So what you as a developer have to do now, when deploying containerized workloads, is basically specify the class of storage that's needed, the size of storage that's needed, as well as that claim. Once that workload is deployed, the underlying storage resources are created and allocated to that claim. What this allows you to do is actually have the storage be completely separate from the container, so the container can have a very short lifetime.
It can come up and go down multiple times. It could even fail, but the storage will persist, and every single time the claim is used, that storage can be accessed.
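As a minimal sketch of the claim mechanism just described (the names, size, and image are illustrative, not from the session): a claim requesting a class and size of storage, and a pod that reaches the volume through that claim:

```yaml
# The developer specifies the storage class, the size, and the claim;
# Kubernetes allocates matching storage underneath via the CSI driver.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: standard   # class of storage needed
  resources:
    requests:
      storage: 5Gi             # size of storage needed
---
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
    - name: app
      image: postgres:15
      volumeMounts:
        - name: data
          mountPath: /var/lib/postgresql/data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: app-data    # the "claim check": any pod presenting it reaches the same data
```

The pod can fail and be rescheduled any number of times; as long as a pod references the claim `app-data`, it gets the same underlying volume back.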
So now that we have an understanding, or a description, of persistent data in Kubernetes, let's take a look at its evolution and its adoption.
So we're going to dig just a little bit deeper into each one of these. At a very basic level, you can have a completely stateless Kubernetes application. This is where a lot of it started. The source code, which is the crown jewel in this case, is stored in an external system. This could either be a SaaS, or it could be hosted on a VM, coupled with a CI/CD pipeline tool; think Jenkins or JFrog. This system is going to take the code and run it through a pipeline chain.
It makes sure the images are created and then passed along to Kubernetes, so that Kubernetes can deploy the application. Any time source code changes are made, the entire process is essentially repeated. The containers in this case don't really rely on persistent data underneath, and the source code and pipeline systems can either be provided as a SaaS or run as part of your virtualized infrastructure, which would already be protected by whatever solution you have from a VM perspective.
Now, what about database as a service for persistency? This is the next change in this evolution, and it's getting rid of those virtual machines that essentially provide that persistence layer. A good observation we've made is that our own customers are moving to database-as-a-service, or PaaS services, as they're commonly known. In this case the database is provided as a service: you're not responsible for the OS, or even for standing up the infrastructure. You're merely a user of that shared database environment, a tenant.
So if you want to scale one part of that database to serve your container needs more, you're pretty much bundled in with the general-purpose environment, and you don't have the flexibility. Now, as far as resilience is concerned, you're relying on the resilience the database vendor provides, right? You can't dictate the resilience; you're relying on somebody else. You may be able to extract that data from the database, or retain it, but the question becomes where you're storing that persistent data, so it's really not a complete solution.
The next step in the evolution is a complete solution that includes the persistent data. So not only do you have the components that do not require state running as containers, but also the stateful ones, with the use of persistent volumes, running as containers. That allows you to do two things. One, it obviously gives you freedom of choice, so that the containerized workload can be deployed anywhere: on-premises, in one cloud service provider, in another cloud provider, back to on-prem, on physical, with a hypervisor or without a hypervisor. You make it mobile. But it also means including the database components as part of your CI/CD application pipeline.
So, just as you would test and deploy application and front-end changes, you can also deploy the database changes in that manner. If you've truly achieved the separation of a DevOps and an IT role, this is the ideal state we're looking at. Now, we're all getting pretty excited about persistent workloads, but before we get excited, we have to start thinking about the protection of that data itself. We see the value in its deployment, but how about the protection? From an evolutionary standpoint, having that persistent data be part of the container gives you the most flexibility and scale, as opposed to being locked into a specific PaaS or SaaS service, or even locking the data in VMs. We now have to think about resilience, and by resilience we mean the ability to back that data up and to restore it.
It also means the ability to replicate it from a disaster recovery standpoint, to fail over and fail back, as well as to provide migration services. And as more containerized workloads move from test/dev into lower-tier production, and even into higher-tier, critical production apps, these aspects become much more important, because your data has to follow the same governance rules that existed for virtualized data and even for physical data. As much as we want to get away from the constraints of the physical and VM world, that data still requires the same protection.
However, what Kubernetes does not do a good job of today is ensuring that you have a native method, something purpose-built, to back up that persistent data, and also to back up the configurations of the deployed applications that use the data itself. That extends to not being able to provide disaster recovery across clusters as well. It's not just about being able to eventually restore that data.
There is also resilience around DR, being able to get back up and running, and mobility as well, whether it's a one-time move, an expansion, or bursting capability across clusters, or even across platforms, say from on-premises to public cloud, or even across public cloud providers. So there are some gaps that we, as an organization, have been working on, just as we have in the virtual infrastructure world. Now, what we're not here to do is talk about what Zerto delivers, you know, with data protection as code.
No, what we want to talk about now is understanding the threats out there right now to your Kubernetes environment, and what to really look at: a checklist of what's important. What are the data protection challenges you face without any solution? What are the data protection challenges you face with your existing solution, which maybe isn't purpose-built?
Well, we know there are the traditional threats of downtime and data loss, whether it's corruption, outages, you name it. But a threat we're really seeing is the growth of ransomware. We know about ransomware from a VM perspective; we know it's a fast-growing volume of attacks, ransom prices are increasing, and so are the consequences.
This is not just a container issue, but it is a container issue, and it's here, and we have to make sure that, as organizations, we're able to protect our applications from it. So with that in mind, we're going to talk about resilience: being able to recover and restore data at any point in time and minimizing data loss. With that, I'm going to transition over to Egon, and Egon is going to walk through exactly what to look for in a data protection solution.
B
There are already multiple solutions that claim to deliver data protection for Kubernetes, but as there are many scenarios in which your applications and data in Kubernetes can be compromised, it is wise to at least know what capabilities such a solution must have. So what should you look for in a solution when creating a protection strategy for your containerized environment?
First of all, yes, Kubernetes does have an HA mechanism natively, but its options for disaster recovery are actually very limited, and on top of that, there is no data protection option at all possible with many of the solutions at hand today; most of them are not purposely built for Kubernetes.
Furthermore, it is the same as in the virtualized world: it's not enough just to secure the data. Persistent data must be protected, but if you want to recover an application, all related resources, such as configs and services, must also be protected, not to mention when an application consists of more than one container.
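To make "all related resources" concrete, here is an illustrative sketch (the names and values are invented) of the kind of objects that live only inside the cluster and would be lost by a volume-only backup:

```yaml
# These resources define how the application runs and is reached;
# backing up the persistent volumes alone does not capture them.
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  DB_HOST: "db.internal"   # application configuration
---
apiVersion: v1
kind: Service
metadata:
  name: app
spec:
  selector:
    app: myapp             # how traffic finds the application's pods
  ports:
    - port: 80
      targetPort: 8080
```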
Persistent storage in a container is becoming more and more of a commodity. Nowadays, people are still not convinced that the persistent data in these containers needs to be backed up, but that persistent data is constantly evolving inside these containers. You need to be able to track these evolutions, not only from a dev point of view, but also from a compliance point of view.
With all that persistent storage there also comes a pitfall: locking the data into, for instance, VMs, or platform-as-a-service and software-as-a-service solutions. Once in, it's hard to move somewhere else with your data. So by being able to back up or replicate your persistent data anywhere, you are freeing your data to be used on any platform. Together with the persistent storage also come stateful and stateless components, as they are part of the application.
With CDP, you can enable the lowest RPOs possible, with checkpoints at a very granular level. Checkpoints, once created, should also be able to be tagged, so that management and search become much easier. By leveraging those checkpoints, one could also create an ordered way of recovering applications, as they can be organized in a smart way.
The DR solution should also be aware of the environment it's protecting. A storage-agnostic volume approach is very helpful here, not only for awareness of what platform the application is running on; it should also be able to assure you that the persistent volume is consistent at the destination it's being sent to. And, as mentioned before, the solution must be able to protect an entire Kubernetes application.
One, you should have the option to do data protection as code, for instance defining your protection strategy in YAML files, the same way as you are used to managing your Kubernetes environment. Two, the solution should be able to extend the kubectl commands with specific commands purposely made for the protection of these containers.
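As a purely hypothetical sketch of what data protection as code could look like (this API group, kind, and every field are invented for illustration; a real purpose-built solution would ship its own custom resources), a protection policy declared alongside the application's other manifests:

```yaml
# HYPOTHETICAL example only: "example.protection/v1" and
# "ProtectionPolicy" do not exist in any real product.
apiVersion: example.protection/v1
kind: ProtectionPolicy
metadata:
  name: protect-my-app
spec:
  namespace: my-app     # protect every resource in the application's namespace
  continuous: true      # CDP-style replication rather than periodic snapshots
  retention: 7d         # how long tagged checkpoints are kept
```

The point of the sketch is that the policy is versioned, reviewed, and applied exactly like the rest of the application's YAML.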
All in all, it's very important that the solution you choose to protect your applications and data in Kubernetes is one actually made especially for Kubernetes. It has to be native to the environment it's securing; the solution has to be designed, deployed, and used as a normal Kubernetes application.
That the DR solution should scale with the needs of your environment is almost obvious. Summarizing: there are a lot of factors and capabilities involved in choosing the right solution for your data protection strategy. I hope I have shed some light on what it takes to make the right choice. Not protecting your applications, or not doing it natively to Kubernetes, can create more difficulties than you can imagine. I'll leave you with this food for thought.
A
Thank you, Egon. Now you've seen how we're interpreting the evolution of stateful data within Kubernetes, what threats you're facing from a downtime and data loss perspective and to the health of your application, and how to qualify a solution today, with all the marketing that's out there: what to look for in the design elements of a truly Kubernetes-native backup and disaster recovery experience.
So with that in mind, as I promised, this is not a product pitch, but I do ask that you check out what Zerto can do, what Zerto's data protection as code and our Zerto for Kubernetes experience can do. And the one thing I ask: instead of just reading marketing documents, take a look at our labs to see exactly how Zerto works, and test-drive it for yourself in a self-paced environment.
Whenever you want, you see the link there; you're able to simply register for the labs and start using them. If you want to learn more, check out our website. There's a lot of key information there, both thought leadership and sponsored information that we've paid analysts to deliver, on how you should protect, what the configuration elements are, what this looks like today, and what it will look like in a couple of years. And you might also want a live, tailored demo.