From YouTube: K8s Data Import and PVC Cloning
Description
An end-to-end demonstration using Cinder/Ceph. This demo imports data into a persistent volume claim and then clones that data into a second claim.
Automation Scripts: https://github.com/screeley44/setupvm-dev/
Data Importer Repo (provided by auto scripts): https://github.com/kubevirt/containerized-data-importer
This is a multi-part demo that walks through the automated deployment of Cinder onto a new cluster. Following that, we have an all-in-one Kubernetes cluster, and we demo the KubeVirt data importer and Cinder cloning.
Provisioner automation here is going to be handled by custom scripts. We will provide links to those scripts in the description.
The first node we're going to set up here is the Cinder node. We're going to do this using scripts from the setupvm-dev repository. Before we get started, we're going to edit the setup config; I've done this ahead of time.
Now, this script is going to take about 15 to 20 minutes to run, so it's the perfect time to go get a cup of tea or coffee.
Now, in the interest of time here, we're going to accelerate the video, and in just a second the script will finish. And there you go: we now have an OpenStack Cinder and Kubernetes node. Next, we'll be deploying Ceph on our second AWS node.
So now we're going to be setting up Ceph, and it's going to look almost identical to configuring the OpenStack Cinder Kubernetes node that we did previously.
Okay, so at this point we now have a two-node Ceph cluster, and we're going to hop back over to our Cinder OpenStack Kubernetes cluster for the next step of the demo.
So we've completed our setup of Ceph on our second node, and now we're back on the OpenStack Cinder Kubernetes node. The first thing we're going to do is deploy an all-in-one Kubernetes cluster. I'm going to do that by going to the…
So, thanks to the automation script, we have a set of specs here to get our Cinder dynamic provisioner deployed, and that's going to be handling the cloning operations for our images. There's some configuration that needs to go on with these that I've done in the background; if you're doing this yourself, please refer to the documentation. Now I'll go ahead and spin up the provisioner here on screen.
Sorry, there we go. We'll make sure it's running, and make sure the storage class exists. Excellent. So now that we've done that, next is our data import service, which will fetch a file from a remote location and populate a PVC with it.
Thanks to the automation scripts, we've got the repository cloned down in the root directory, so we're going to hop in there and use the manifests provided for us.
Again, there's some configuration that needs to go on in the background here. I've added a couple of extra specs just for demonstration purposes: those would be the checker pod and the golden-clone PVC, which aren't included in the repository. Those are just sanity checks for our demonstration. Right now, you can ignore the namespace spec; we are going to be doing this in the default namespace.
So let's go ahead and create the importer pod config, the importer secret, and the importer PVC, and wait for this to start going. It's going to take a second for the dynamic provisioner to get going and bind the PVC.
While we're doing that, we're going to look at the config. Now, the config file is where we're going to tell the data import service where we want our data to come from.
The next thing we want to do is kick off the cloning process. So what we are doing is specifying a PVC, called the cinder clone in this case, and the provisioner is going to be looking for PVCs with this annotation here; sorry, that's hard to see. The annotation reads k8s.io/CloneRequest, followed by the name of the source PVC object. So our source PVC in this case is the golden PVC, and the clone will be using the same storage class.
OK, so now we are on the checker pod. We expect the data directory to contain our zero-byte file, and there it is: hello kube.
So what we've done is set up a two-node AWS cluster. We've deployed Cinder, OpenStack, and Kubernetes on one node and Ceph on the other, and we've plugged Cinder in to Ceph. We set up a dynamic provisioner that's been altered to handle cloning operations of PVCs, and we've deployed that into Kubernetes.
We created a PVC and populated that PVC with remote data using our data service, and then, using the clone annotation in our custom PVC, we've cloned the golden PVC into our golden clone. And that is how we have managed to get this data populated from a remote location into Kubernetes. That is all; thank you very much.