A: So today I'm going to be talking about re-hosting applications between Kubernetes clusters using Crane. My name is Marco Bechubay; I'm the product manager for migration tooling at Red Hat. Crane is a new project that is related to a current tool we have inside Red Hat called MTC, but Crane is a new upstream project based on everything we learned with MTC, and it's actually going to help us increase the scope of what we can do from a migration point of view.
A: From an agenda point of view, I'm just going to quickly go over what Crane actually is, some of the use cases, and an introduction to the Crane commands and how the tool works. Then Eric Nelson, who is the engineering manager for this tool, is going to go through a deeper dive demo of the tool and how it works, and we'll go from there.
A: And let us know if you have questions; we'll answer questions at the end with some Q&A. So, a quick update on Crane. Crane is a project in the Konveyor community, as you might expect, and it's there to help migrate applications from all kinds of Kubernetes flavors to Kubernetes or OpenShift. I just want to point out that MTC 1.x is actually not Crane; MTC was built separately, back when we started MTC.
A: If you're familiar with MTC, that was the downstream Red Hat project called Migration Toolkit for Containers, or MTC for short. That tool was built for OpenShift-to-OpenShift migration only, and mostly for the purpose of Red Hat customers migrating from OpenShift 3 to OpenShift 4.
A: But as we've been working on this for a significant amount of time and did a lot of migrations, we learned a lot, and now we're re-implementing everything we learned from that tool in this new upstream project, Crane. We hope it's going to be built on all the latest and greatest, and on everything we learned, to make this tool even better and with a bigger scope, which is any-Kubernetes-to-any-Kubernetes-flavor migration.
A: Crane doesn't have a downstream product name yet inside Red Hat; right now it's just an upstream project, but we expect that to happen later this year, so that we have something downstream as well. As everything we do upstream typically becomes a product downstream, that should happen around spring 2022, but stay tuned for what this will be named and how it will be delivered.
A: But for now, let's talk about the upstream Crane project. The expectation is that Crane will be a mix of command-line power tools, a way to do more advanced types of migrations, and that in the future it could also be leveraged by some kind of downstream OpenShift "easy button" type of migration. We want to make this part of the OpenShift product in the future, as a way to easily migrate applications between OpenShift clusters as well, using the same technology.
A: So let's talk a little bit about the lessons learned from all the work we've done so far with this engineering team, for two years now, and about some of the most requested features that we sometimes couldn't deliver in MTC because of the way we started that project and the architectural limitations in the way we built that tool. The first one is the admin requirement.
A: Another thing is: "I would like to provision this application from a pipeline, but only migrate my state." This is something we've actually solved in MTC, but it's going to be even easier to do using the Crane commands. If you want to reprovision from a pipeline, but only use this tool to migrate the state, or your PVs, then you will be able to do that pretty easily using the Crane tool.
A: Also, one other thing we found, and I'll touch on this in the next slides in more detail, is that technically, in many cases, you should not need a migration tool at all. If you had automated deployments and pipelines, then you could just reprovision the application to another cluster. But if you don't have that, how could Crane help you actually achieve that?
A: We made significant improvements over the years with MTC, but still, we believe that with Crane and the new architecture it's going to be even better and easier to troubleshoot anything that could go wrong during a migration process. We understand that you could have downtime, and this could be in a maintenance window, so any time we can save on troubleshooting and fixing issues is a very good thing when you are migrating applications.
A: So that's one of the other key things we were thinking about while building this project, or this new architecture, for Crane. Again, as I touched on in the previous slides: why do you actually even need a migration tool in the first place? If you have automated deployments, then obviously migrating from the pipeline is the best approach. But we found that many applications don't have that, and this is why you end up needing a migration tool. Also, even if you have automated deployments, you might have state.
A: If you want to provision your applications from one cluster to another, but you need to migrate your state, that could also be a reason why you would need a migration tool, to help you migrate the data from one cluster to another.
A: And this is what we've found so far over the years; it is the current situation with a lot of customers we're seeing using Kubernetes. Typically, over the years you've installed and deployed many applications, and a small subset of those applications will have some automated deployment.
A: Typically those would be your most important apps; those are the ones that would get automated deployments and be promoted from dev to QE to production. But there's a larger subset of those apps without that. We've done surveys, including in previous sessions, asking what percentage of your applications have automated deployment versus not, and this is a pretty good ballpark for many clusters and customers we're seeing today.
A: That makes it very difficult to embrace the hybrid cloud approach, as you are locked in with all those manually deployed apps, and this is one of the problems we want to solve. Right now you can, for example, reprovision from pipelines the applications that have automated deployment, and then use a migration tool to migrate the apps that have been manually deployed from one cluster to another.
A: The end state is still the same: you have the exact same configuration that you had before. Where we want to get you to is that, after the migration, we create some automated deployment for you, so that in the future you don't need a migration tool. You have automated deployments, you're potentially following a GitOps approach, and this will allow you to be more agile and to promote code from dev to production in a much faster way, as you will have the proper approach to deploying applications on top of Kubernetes.
A: So let's think about how a migration tool works. This is scenario one, the most simplistic migration pattern. First of all, you need to extract all the Kubernetes manifests and re-import them on the destination side, and in many cases you have to fix them, as they might have metadata or all kinds of things in those manifests that are proprietary to your source cluster. Then you would have to migrate the state, or the PVs, from one cluster to another, and then you have the images as well.
A: And if you are interested in actually improving your situation and having automated deployments in the future, then you can use Crane as well to reconstruct your manifests, but instead of provisioning them to your destination cluster, push them to Git and have your CD solution reprovision them on the destination side for you. In the end that will bring you to a much better state, as you will have automated deployment.

A: So Crane can extract your Kubernetes manifests, clean them up, push them to Git, and then leverage your CD to deploy them on the destination side. That's the most simplistic way; there are many ways to use Crane, and Eric will go into this in more detail. But if you used Crane in the most simplistic way, you would see something like crane export, crane transform, crane apply. Those are the commands you would use to actually do those steps and to provision your manifests into Git.
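As a rough sketch, that minimal flow looks like the shell session below. The directory names are illustrative, and the flag spellings follow my reading of the upstream crane CLI around its alpha releases, so check crane --help for your version.

```shell
# Export the raw resources from the "guestbook" namespace of the
# currently active cluster into a local directory.
crane export --namespace guestbook --export-dir export/

# Generate JSON patches for the exported resources, driven by the
# plugins that are installed.
crane transform --export-dir export/ --transform-dir transform/

# Render cluster-agnostic manifests by applying those patches.
crane apply --export-dir export/ --transform-dir transform/ \
  --output-dir output/
```

From there, output/ can be pushed to Git for a CD tool to pick up, or applied directly to the destination cluster.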
B: Cool, all right. So I'm sharing my screen right now, and I'm looking at a project within the Konveyor org called crane-runner. I can get into more detail about what exactly this is housing, but what's interesting and relevant to this call is that we have a set of examples inside of it that we'll also be publishing to our Crane documentation.
B: If I drag this over here, this is our documentation site, so I'll provide links to that, and we're going to add some scenarios for folks to be able to follow along with what I'm going to demonstrate today. I'm only going to go through one of the most basic examples and show each of the steps that Marco was just describing, but we have this sequence of scenarios that folks can run through, which gradually ratchet up the complexity, each demonstrating one particular piece or use case of the tooling. Because, as was mentioned, this is a toolbox of different utilities that can be combined in order to do more sophisticated things, rather than one prescribed path.
B: Although downstream with Red Hat we're going to have an opinionated solution that will help you with, as Marco described, the "easy button". So we're going to try to go through this stateless application mirror, and we're going to be using an example application called Guestbook; that's kind of the classic Kubernetes example.
B: It's not a total toy application: it has a Redis back end, it's also got a front end, and it's got a bit of state it's designed to store in some of its iterations as a guestbook application. So it's kind of your hello-world application.
B: Next, the scenarios include a section on how you integrate this with Kustomize. Once you've stripped everything out of it that's cluster-specific, so that it's no longer this pet application, how do you layer back in details that are relevant? Because sometimes you actually need to layer things back in, such as resource quota information or node selectors that are specific to your destination cluster, where you want to deploy your application.
B: Number three has the GitOps integration demonstration. In that example, you actually integrate with Argo CD in order to on-ramp yourself into a CD situation, so that Argo itself is actually deploying rather than you directly. And then, finally, there are a couple of stateful application examples: a migration, and then a stage-and-migrate model where you can continuously stage your data over time while keeping your source applications up, and then finally do a final cutover migration.
B: But we're going to be using some of the commands that are in here, and we'll also be adding a Crane 101 scenario to this shortly; I think there's actually even a PR out there to add that. So, the first step we have: in my environment, I'm going to be running a couple of Minikube clusters. This is a really convenient way to get started. You can bring a VM, like a RHEL or a Fedora VM, maybe one you launched in EC2.
B: We've actually tested that and have some documentation with recommendations around it, but I'm just going to be running on a box that I have at my house. What this script does is actually launch two different Minikube clusters on the same machine. I'm going to be using Podman as my provider, and the script sets up some networking rules that are necessary for routing, as well as DNS, in order for both of those clusters to be able to see one another.
B: And I can point folks to this, so if you want to take a look at exactly what it's doing: there are some guards in place here, but for the most part it's doing what I mentioned, bringing up the source and destination clusters and setting up the networking.
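The core of that setup can be sketched in a couple of commands. The profile names here are illustrative, not the exact ones the crane-runner script uses, and the extra routing and DNS wiring the script performs is omitted.

```shell
# Launch two Minikube clusters on one machine, using Podman as the driver.
minikube start --profile src  --driver=podman
minikube start --profile dest --driver=podman

# Each profile registers its own kubeconfig context, so you can
# switch between the clusters with kubectl.
kubectl config use-context src
```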
B: So I actually already have that set up in this environment. I'm on my machine, a Fedora machine that I freshly set up and installed Podman on, and I have a couple of aliases: mk is my Minikube. I use a couple of tools: kx is a tool called kubectx, which is really convenient for switching between Kubernetes contexts, and kn is my kubens tool, which is related. These are bash scripts that get loaded, and they'll help me set my active namespace.
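For readers following along, those shortcuts might look like this in a shell profile. The alias names are just what the presenter uses; kubectx and kubens are separate small utilities installed alongside kubectl.

```shell
# Shorthand used throughout the demo.
alias mk=minikube
alias kx=kubectx   # switch between kubeconfig contexts
alias kn=kubens    # switch the active namespace in the current context
```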
B: This is kind of a function that is nice and part of OpenShift that we don't actually have natively in a similar manner in Kubernetes, so this helps with vanilla Kube environments.
B: So right now I'm going to set my context to my source cluster and go ahead and get started. I'm going to skip over installing Tekton, because we're not going to use it in this example, and I'm also not going to install the crane-runner manifests, because those are ClusterTasks related to Tekton.
B: On to the example application workload. I'm going to run a command to create my guestbook namespace on the source side, and then, secondly, I'm going to run a Kustomize command that pulls in the Guestbook application and installs it on my source cluster here. The Kustomize command is just layering in some details; I think it's bundled as a Kustomize build.
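A sketch of those two steps, assuming a Kustomize directory for the Guestbook app (the path is a placeholder; the crane-runner scenario provides the real one):

```shell
# Create the namespace on the source cluster.
kubectl create namespace guestbook

# Build and apply the Kustomize overlay for the Guestbook app;
# `kubectl apply -k` uses the Kustomize support bundled into kubectl.
kubectl apply -k ./guestbook-overlay/ -n guestbook
```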
B: So now you can see that my active namespace is the guestbook, and if I get pods here, they're all actually running already. Under normal circumstances you have to pull the images, so it takes longer; I've actually run through this once already, so I've got all the images on my machine, and fortunately we can benefit from that stuff coming up quickly. So I've got a front end, a Redis master, and a couple of Redis slaves.
So
this
last
command
will
just
like
block
until
everything
comes
up
as
ready.
This
doesn't
happen
to
be
ready.
B
B
B: So at that point we should be ready to actually get started using Crane. You can find Crane in konveyor/crane, and we have a sequence of releases there. Our current release is alpha 2, and we're expecting an alpha 3 release pretty soon. With each release we'll have release notes, and we're adding new features as we go along. So you can download this, the binary, directly; it's a Go binary.
B: It doesn't need any dependencies. You can download it, make it executable, and then it should be available for you to use. I've added it to my path on this particular machine, so if I run a crane version here, you'll see that I'm using crane version 0.0.3, and then I've got crane-lib, which is where a lot of the logic is implemented.
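Installation is just a download. The release URL and asset name below are illustrative; grab the right one from the konveyor/crane releases page.

```shell
# Download the crane binary, make it executable, and put it on PATH.
curl -Lo crane https://github.com/konveyor/crane/releases/download/v0.0.3/crane
chmod +x crane
sudo mv crane /usr/local/bin/

# Verify the install.
crane version
```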
B: So crane-lib can be utilized by other projects; it's at 0.0.5. Crane itself, the command, has several commands that you can run. The ones that we're interested in today are the primary migration commands. That's going to be export, which exports the raw application resources that it finds within the namespace that you specify to disk, and then, secondly, we're going to run transform. Transform is the piece that generates a set of JSON patches, as dictated by the plugins that you have installed.
B: We use a plugin model; I'll describe a little bit of background around this. It's pretty frequent, when you're approaching different environments, that everybody has one-off problems, and so in an effort to build a generic tool, but one that also serves people with specific needs, we've decided to build this around the plugin model. The cool thing about that is that folks may end up discovering particular issues for themselves, or they may have really specific needs.
B: The plugin API is very simple, so folks can go out and build plugins for themselves to solve their own issues, and then, secondly, they can go and pull down plugins that have already been written. That way the community can codify the solutions to those problems within plugins and share them, so they're easily accessible to the rest of the community. The crane command itself actually ships with one plugin that's built into it.
B: Secondly, it has a default repository where we are publishing our official plugins. The plugin that's built directly into crane is the Kubernetes plugin. There's a whole set of cleaning operations that we know already are going to be necessary for Kubernetes, things like stripping derivative resources based on owner references. An example of that would be a ReplicaSet and a Pod.
B: What you actually want to do is restore the ReplicaSet on the target side and allow the target cluster's controllers to recreate those derivative resources, such as the Pods. So we want to strip those from your manifests, so you're not recreating them. That's kind of an example of a Kubernetes transform. A second example is that once you get into OpenShift, you start to talk about Routes, or other OpenShift-specific resources, and so there are operations that you want to include there.
B: However, it's an optional plugin, so you can decide to use it or not, depending on whether or not your target environment is an OpenShift cluster. And then, of course, you can get more and more specific, so you can build plugins for things like node selectors or your own specific environments. All right, so I'm going to get started here. Let's go into my working directory.
B: I think I have a demo directory, and I've run this one time previously, so I'm going to rm -rf that whole directory just to start clean. So we're starting clean, and I'm going to set my context to the source cluster and set my namespace to the guestbook. I think that was already done, but I'll just make sure of it.
B: So crane export itself exports the raw resources. You can see it's going in and actually using the Kubernetes API server's discovery API to understand what all the available API resources are, and then from there it'll go through and make sure to export them. The cool thing there is that it'll actually support CRDs as well, because it'll be able to find those based on the discovery API.
B: I'm going to run a tree here, and you can see these are all of the raw resources that it just found within my guestbook namespace. So I've got things like underlying Pods that have been generated as a result of ReplicaSets, and I've got Endpoints and EndpointSlices that have been generated as a result of the Services. And if I go and look at one of those, let's take a look at the redis-master here.
B: Okay, so now I've got my raw resources, and that can be useful in itself, just exporting your applications and the resources that make them up for your own purposes. But now I'm actually going to run a crane transform. Before I do that, though, I want to take a look at the plugins that it's going to run. I'm just using crane, the default binary that I downloaded.
B: I haven't installed any custom plugins, but what I do have is a plugin management command here, so crane can discover plugins and you can easily install them. And then, lastly, you can run a list-plugins command when you do the transform command, so that the tooling will tell you: "hey, these are the plugins that I'm going to use when you run your transform command."
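The plugin management commands look roughly like this. The subcommand and plugin names follow my reading of the upstream CLI at the time of the talk and may differ between releases, so check crane --help:

```shell
# Discover plugins available from the default plugin repository.
crane plugin-manager list

# Install the OpenShift plugin into the local plugin directory.
crane plugin-manager add OpenShiftPlugin
```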
B: The default arguments are acceptable to me in this scenario; there's a lot of configuration that I don't really want to get into right now. So you can see here that a lot of work was done. One of the examples here is that the underlying Endpoints, because they are derivative resources of a Service, are going to get whiteout files, and what that means is that they're going to get blocked when I do the application of these transforms, so that they're effectively stripped from my output set of manifests, because I'm not going to need them. Similarly, the same thing happens with Pods.
B: We can run a tree command on the transform directory, and so here are the JSON patch files that I've got. Let's take a quick look at one of those.
B: So it's going to do things like strip my metadata. The status is no longer relevant once I version it, so it's really just cleaning up these exported resources. And then another item here, sorry, I'm looking at a Deployment: we're going to want to strip the cluster-specific IPs from the Services.
B: So here you go: you can see here that we remove a clusterIP, because we know that's not something we want to get created once we instantiate these on the target side. So I've got this set of whiteout files, plus my JSON patches, in order to manipulate my raw exported resources and turn them into cluster-agnostic manifests.
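For illustration, a transform file for a Service might carry RFC 6902 JSON Patch operations along these lines. This is a hand-written example of the idea, not crane's exact output:

```json
[
  { "op": "remove", "path": "/spec/clusterIP" },
  { "op": "remove", "path": "/metadata/uid" },
  { "op": "remove", "path": "/metadata/resourceVersion" },
  { "op": "remove", "path": "/status" }
]
```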
B: So the next step is to run a crane apply command, which is going to do that. You can think of apply as an idempotent function, meaning that I can rerun it over and over and over again, and as long as my inputs are the same, my outputs are the same. It also doesn't have any side effects, so it's not going to impact the applications that are running inside of my clusters.
B: It's all happening on the command line in my local environment, which, in our experience, is kind of paramount. When you're doing these migrations, you really want to be careful about altering the state of your clusters, so that if things go sideways, you understand what happened and you're able to recover from it in an easier manner.
B: Put this side by side and we can compare the raw resources to the exported resources. So this is the raw output, and then on the left-hand side I've got my stripped set of manifests. There's quite a bit less on the left-hand side, and that's because, again, a lot of these were generated as a result of the parent objects over here that make up my workload.
B: So now what I'm going to do is go ahead and apply these to my target cluster. I'm going to set my context to my destination cluster, and if I list the namespaces here... actually, oh, we did create the guestbook namespace, because I need a destination namespace to put those in. If I get the pods, there's nothing in there, so I actually don't have anything in this namespace.
B: It's a fresh namespace. I'm just going to pause for a second; I see a question here: "What plugins do I need when I move an app from GKE to OCP?" I think that's what it says, it's blocked. "Is there an OpenShift plugin?" Yeah, so the answer is yes: because you're going to OpenShift from GKE, you'll want to use the OpenShift plugin for that. Once our product goes live with it within OpenShift, it will be designed in order to do that.
B: So you can just take the defaults, and the OpenShift plugin is already installed. If you would rather use the CLI for doing it manually, and there are a lot of reasons to do that, you may have some more advanced use cases, or you want finer-grained control over what you're doing, you'll want to install the OpenShift plugin, and there is an OpenShift plugin. So I guess I can...
B: Okay, so we left off trying to recreate, in my destination cluster, the cluster-agnostic manifests that my apply command had output. So really we're doing a basic mirroring of an application workload from the source to the destination, and that's going to be as simple as a kubectl apply.
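That final step is plain kubectl. The context name and output directory here are assumptions carried over from the earlier sketches, not anything fixed by crane itself:

```shell
# Point kubectl at the destination cluster and namespace.
kubectl config use-context dest
kubectl create namespace guestbook

# Recreate the cluster-agnostic manifests produced by `crane apply`.
kubectl apply -f output/ --recursive -n guestbook
```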
B: So you can see that all of these got created. Some of these default service accounts already existed there, so it's just complaining that they weren't created by apply; those are safe to ignore. So I'll just make sure that I'm on my destination context and in my guestbook namespace, and if I do a get pods, you can see that this has launched my application in my destination namespace.
B: So that's kind of the most basic of examples, but as Marco described, there are definitely much more complex use cases for this. It's really flexible, and one of our favorite parts about it is that it's all been designed to be very transparent.
B: Migrations can be ugly things, and so the ability to diagnose problems when they arise is really important. It's been designed from the ground up to make sure of that. I see another question in here: "Do plugins only apply to the destination environment, then?" No. Plugins will do arbitrary mutations on your exported resources. The way that we've been thinking about this is that the plugins are often pulling things out that are cluster-specific, whereas you can marry this with Kustomize in order to overlay cluster-specific details of your destination clusters back into your resources. An example of that might be node selectors: the nodes, and the way that they're labeled, are often cluster-specific.
B: So if you would like to set up a node selector for your application as it gets layered in, or as it gets deployed to another destination, you can use Kustomize in order to overlay those details in. And Kustomize is natively supported by things like Argo CD, so you can even combine Kustomize with CD systems like Argo in order to overlay those details depending on which cluster you're going to. So yeah, I hope that answers your question; feel free to ask for clarification if it didn't.
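As a sketch of that pattern, a destination-specific Kustomize overlay could re-add a node selector on top of the cluster-agnostic manifests. The directory layout, resource name, and label are made up for illustration:

```yaml
# kustomization.yaml for the destination cluster's overlay.
resources:
  - ../base            # the manifests produced by `crane apply`
patches:
  - target:
      kind: Deployment
      name: frontend
    patch: |-
      - op: add
        path: /spec/template/spec/nodeSelector
        value:
          disktype: ssd
```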
C: Thank you, Eric. So, everyone, that's it. I did put up a link to a form that will help us understand how helpful this content was to you. If you do have any questions, there's still time to put them in the chat; I think another one just came in for you, Eric.

B: Sure.
B: "How would one do a migration if the source cluster has a database, to the destination database with all previous data?" So I'll answer that assuming that you don't already have a database on your target side. I'm not sure if you mean: can you merge data on the target side?
B: I'm assuming that you don't mean that. We didn't get into the stateful piece of this on this call, because we really didn't have time to, and it probably deserves several of these meetups in itself, because it's complex enough of a use case. The stateful component of this is really the difficult part when it comes to migrations.
B: So if you're curious, you can actually run through some scenarios that demonstrate that in this crane-runner repository. Let's see here: we have stateful application migration, and then stage and migrate. It depends on whether or not you're already in some kind of a CD system, or you want to go to a CD system.
B: Let's assume you already have kind of a pet application on your source cluster, because that's the most common scenario, and it's got a database, and that database is on the cluster, because often people will have external databases that are off-cluster.
B: So the interesting one is the database that has its state also within that cluster. As a sequence of commands, and the order is relevant here, you'll be able to do something like transfer your PVCs using a crane command that we didn't get into here. There is actually a transfer-pvc command that will help you map PVCs from your source cluster to your target cluster.
B: So what you'd be able to do, in the case that you have a database, is basically launch your namespace on the target side and then use transfer-pvc to get most of your data onto the target side while the application is still up on the source. Then you can export your application details, quiesce the application, so you no longer have new data being written to that database, and then run through the export, transform, apply to get your workload resources over to the target side.
B: Then do a final data transfer with transfer-pvc, which again can be rerun over and over and over again in order to pick up the delta. So in theory that one should run much faster than the initial one, because it's only picking up any of the new data, and then you can bring up your application on the target side. So we didn't get into those kinds of advanced use cases, but we're thinking about that a lot, and that's one approach off the top of my head.
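Pulling that stage-and-migrate sequence together as a sketch. All flag names, context names, and the PVC name here are illustrative; check crane transfer-pvc --help for the real interface:

```shell
# 1. Pre-stage the data while the source application is still running.
crane transfer-pvc --source-context src --destination-context dest \
  --pvc-name redis-data

# 2. Quiesce the application on the source so no new data is written.
kubectl --context src -n guestbook scale deployment --all --replicas=0

# 3. Move the workload definitions over (export, transform, apply),
#    then create them on the destination.
crane export --namespace guestbook --export-dir export/
crane transform --export-dir export/ --transform-dir transform/
crane apply --export-dir export/ --transform-dir transform/ --output-dir output/
kubectl --context dest -n guestbook apply -f output/ --recursive

# 4. Final delta sync, then bring the application up on the target.
crane transfer-pvc --source-context src --destination-context dest \
  --pvc-name redis-data
```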
B: Correct, so there are some gotchas around state transfer, because state transfer depends upon the clusters being able to see one another. However, we also have some tickets to explore what it would be like to actually export your data off of the source cluster and then use a sneakernet-style mechanism to get it into your disconnected cluster on the target side; maybe you want it on your workstation or whatever. But crane itself, as you could see, is something I was actually operating from my workstation. crane export exports all of those resources onto your Mac, onto your desktop, or wherever you're running those commands, and then you can use that, as long as you, you know, move your laptop into your disconnected network so that you can see that cluster. Then you can actually create everything there, so there's a way that you can operate with disconnected environments.
C: And then the next question here is: how does Crane handle scenarios where the K8s applications are being run as root, and we need to migrate to OCP?
B: That's a great question; we've been thinking a lot about that as well. On some level, some of these applications are themselves written fundamentally to expect root, and tackling that problem is a little bit outside of our own domain, although we can make a best effort. If an application fundamentally expects root, there's not a ton that we can do about that; that's an application detail that has to be addressed.
B: However, there are also a lot of other things. One that comes to mind is Pod Security Policies, which is a Kubernetes resource, and the analogous resource in the OpenShift world is an SCC.
B: So we've been thinking a lot about this. It's a complex issue, as you can imagine, but there are transforms and permission-control features that we can implement in crane in order to address it. So yeah, it's recognized that OpenShift often is kind of a stricter environment, and so we're adding the toolsets to allow you to integrate with it.
A: Yeah, so right now the only downstream product is still MTC; Crane, for now, is upstream only. As I was saying in some of the slides, for Crane we expect to have something downstream in the spring time frame, and the idea so far is that it would be part of an OpenShift feature more than a product by itself.
A: So we would like to bring this in a way that, eventually over time, you would have some kind of tool inside OpenShift that can help you migrate. But there are a lot of things that need to be figured out before we can talk about exactly what this will look like downstream; more to come in the next couple of weeks on that.