From YouTube: OCB: Migration Tools and the Konveyor Community
Description
Join OpenShift's Developer Experience experts for our regularly scheduled program filled with cloud native, Kubernetes, and OpenShift tips and tricks for developers.
Today we'll be kicking off the OCP4 Console Customization Competition.
A
Marco will touch on both the container and the virtualization toolkits today. He'll explain what that roadmap looks like, and the purpose of the Migration Toolkit for Virtualization as well. And finally, joining us is Ashok from IBM Research, who is going to present on Move2Kube. This is actually a pretty exciting new project that was open sourced by the IBM Research team. While IBM Research was supporting a customer moving from Docker Swarm to Kubernetes, they developed this tool, which has since grown to support migrations from Cloud Foundry and other platforms to Kubernetes as well. So Ashok will cover that too. The idea is that for each of these areas we'll have a dive into the toolkit and its technologies, and then a quick demo of each of them. So with that, let me hand it over to Miguel, and he can take you through the Migration Toolkit for Applications.
B
You probably know the Migration Toolkit for Applications by its previous name, Application Migration Toolkit (not a tongue twister), and also by its upstream project, Windup. It's fully open source and freely available. So what does it do? It can help you review Java applications, whether in source code or in binary form.

We review them and check for compliance against rules. What kind of rules do we have? Well, we have rules for checking for Java EE classes, to see whether you are using them, so your application is more portable and you can bring it from one application platform to another. We check for the JDK: if you're using Oracle JDK and any of the proprietary classes that come with it, we can find them and let you know, "hey, we found this." We also check for things related to Windows.
You can write your own rules and point them at your own recipes, so whenever a developer finds an issue, he or she can make the modifications much faster, reducing the time to onboard applications onto your cloud platform.
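Custom rules in Windup, the upstream of the Migration Toolkit for Applications, are written as XML rulesets. A minimal sketch of what such a rule could look like follows; the ruleset id, class name, and message here are illustrative examples, not taken from the demo:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<ruleset id="my-company-logging"
         xmlns="http://windup.jboss.org/schema/jboss-ruleset">
    <metadata>
        <description>Flag a proprietary logger and point developers at our recipe.</description>
    </metadata>
    <rules>
        <rule id="my-company-logging-00001">
            <when>
                <!-- Fire whenever the analyzed code references this class -->
                <javaclass references="com.example.legacy.ProprietaryLogger"/>
            </when>
            <perform>
                <hint title="Proprietary logger found" effort="1">
                    <message>Replace with java.util.logging or JBoss Logging; see the team recipe.</message>
                </hint>
            </perform>
        </rule>
    </rules>
</ruleset>
```

A hint like this is what surfaces as an issue in the report, with the effort value feeding into the story-point estimate.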
How do we provide this? There are four deliverables. One of them is the web console, which I'm going to show you in a minute, and it is the easiest way to get started.

You just download a zip file, unpack it, and run a script to start it. All you need on your laptop is Java and eight gigabytes of RAM. As long as you have that, you can run it, go through it, and use it. We also have a CLI, so you can embed it in pipelines or use it in an automated fashion.
So whenever you have to run analyses in a very repeatable way, or you have to analyze a huge number of applications, you can use the CLI to do so. We also have a Maven plugin, in case you want to embed the analysis in your builds, and IDE plugins for CodeReady Workspaces (our version of Eclipse Che that runs on OpenShift), for Eclipse Che, for Eclipse, for CodeReady Studio (a curated version of Eclipse that Red Hat provides, the Red Hat spin of Eclipse), and of course for Visual Studio Code. So you can get the plugin, install it in your IDE, and start analyzing your applications to see how they work.
So now let me show you: this is the Migration Toolkit for Applications running on my laptop. Right now we have one project; we could start a new one, so this would be a test project. Okay, let me log in again.

You can go to the Windup project and download everything from there: we have the rulesets, we have sample applications, and from the sample binaries in that repository you can download some binaries just to test your Migration Toolkit for Applications. So I choose those files: I go to home, code, migration, sample applications (this is the repository I just pointed out), then to the sample binaries, choose these five binaries, and click open.
They upload; I mean, they're on my laptop, so it's fast. Next, and then you can configure: okay, let's check for rules that are intended for JBoss EAP. These are Java EE rules, so your application is Java EE compliant; these are containerization rules; these are rules for Linux; these are rules for OpenJDK. And of course there are more under the advanced options that you could add here.

Among them, we have recently added some Camel rules that can help you move from Camel 2 to Camel 3. Camel 3 is a lot more container oriented and has a lot of Kubernetes integration; it's very interesting. So if you're using Apache Camel 2 and you want to move to Apache Camel 3, you can also use the Migration Toolkit for Applications.
So we have this analysis set up now, and I can click save and run; the analysis will be queued and then started. Let me do like in the cooking shows and switch to another project that has already finished its analysis, so we don't have to wait. This is what you get when the analysis finishes: you can go to the analysis part and check. This is the report for the set of applications that I analyzed, and I can see the five applications in it.

Here you can check whether each component is fully supported: this component is not going to be supported if you run it on the web server, but everything will be supported on JBoss EAP. The same goes for all the applications I analyzed; you can check which of them are going to be supported. Then, on to the important part: the issues. We found 35 instances of this issue, where we're using a WebLogic proprietary logger, and these are the places where it was found.
So here we have links to the JDK logging documentation; you could use JDK logging or JBoss Logging. That way you can get into the application, change the logger to make it fully Java EE compliant, and then move your application. You have these migration mandatory rules: these are the findings that you will need to change, and it shows more examples. Then we have some optional rules, some potential rules, and then some cloud mandatory rules.

Here, for example, we are using an embedded cache library, and we'll need to change it to make the application more suitable for containers. Then there are some informational rules. Likewise, there's a report for each application: you can go into each one, and it will give you some guidance on the story points, the effort it will take to migrate that application. The same goes for the issues, but scoped to just that application, so I can take a look at the application and be able to change it.
Some application details: in this case this is an easy one, seven story points, because there are only minor issues we need to change. Of course, this is an application that is fully suitable for migration, but we can take a look at this other application, which is much more complex and has more incidents we need to check; as you can see, there are more items to be changed.

There are 125 story points, and it tells you which packages to check, which technologies are being used in these applications, and the dependencies graph, so we can see if we are using any component that is used elsewhere; also what was not reviewed because there were issues parsing it, plus the dependencies and Enterprise Java Beans. A lot of information. Okay, I think that was it with this.
C
So the next tool is the Migration Toolkit for Containers, or MTC for short. MTC is available in OpenShift in the OperatorHub; we just released MTC 1.3, which is now available. The main use case of MTC right now: MTC is used to migrate an application from one OpenShift cluster to another OpenShift cluster.

We support migration from 3 to 4, as long as you're at least at 3.7, or from 4 to 4, so it's possible as well to use the same tool for migrating applications between OpenShift 4 clusters. But right now we're seeing a lot of demand from customers migrating applications from OpenShift 3 to OpenShift 4.
On the right side here I have the list of the things that were changed or added in the latest release, 1.3. We actually rebranded the tool: maybe you knew it before as the Cluster Application Migration tool, or CAM. As we are rebranding everything under the Migration Toolkit name, the tool is now named Migration Toolkit for Containers, and it's available in OperatorHub under that name.

The second thing is that we have added to the tool a more detailed view of what's getting migrated. One piece of feedback we got from customers is that when you are migrating your application, it would be good to know from the tool itself exactly what is about to get migrated inside those namespaces.
The tool migrates an application from a namespace point of view, along with all the resources that are attached to that namespace, and sometimes it can be difficult to understand exactly what all of those resources are. So now we provide that information directly in the tool.

Before you click the migrate button, you can review exactly what is going to be migrated, and why it's fast or slow, depending on the number of resources and the amount of data that is actually going to get copied during the migration process. The other thing we spent a lot of engineering effort on is improving debugging directly from the tool itself. We're trying to reduce the amount of time you would have to spend digging into the logs or figuring out what's happening behind the scenes. We're trying to make this as easy as possible from the tool itself: even if something goes wrong, you can find the appropriate information directly in the tool, and if for any reason you need to go dig behind the scenes, we try to give you the right command, or point you at exactly where to go, to provide some guidance on doing deeper analysis of what's happening behind the scenes.

Finally, we have also created a best practices guide that is available on GitHub. This is still a work in progress, and it mostly covers things outside of how to use the tool itself, which is properly documented inside the OpenShift migration section. Best practices are about more than just the tool itself.
They cover the most common steps that are usually followed to have successful migrations: beyond just how to use the tool, how to make sure you're following the right steps to migrate your applications at scale from one cluster to another.

And by the way, the reason you see Konveyor at the top right here is that this is a recording from the upstream community version, as with everything we do at Red Hat.
We work in the open, upstream first, and when everything has been fully merged and fully tested, we release it as a version inside OpenShift, inside OperatorHub. This tool is based on other community projects like Velero and Restic, which all get consolidated and tested upstream under the Konveyor GitHub repo, and once everything is solid, we release it; this tool was freshly released as 1.3.
Then you need a replication repository. This object storage repository is what we copy the data to and from while we're doing the migration. I'll pause here: when you're doing a migration, what's happening behind the scenes is actually a backup and a restore of your application. We back up all the data to this object repository, and then we restore it on your destination.
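Behind the console, the MTC operator models this setup with custom resources: a MigStorage pointing at the replication repository, and a MigPlan tying source cluster, destination cluster, storage, and namespaces together. A rough sketch of what those resources can look like; the names and storage details are illustrative, and field specifics may vary by MTC version:

```yaml
apiVersion: migration.openshift.io/v1alpha1
kind: MigStorage
metadata:
  name: my-replication-repo          # illustrative name
  namespace: openshift-migration
spec:
  backupStorageProvider: aws         # S3-compatible object storage
  backupStorageConfig:
    awsBucketName: mtc-backups
    awsRegion: us-east-1
    credsSecretRef:
      name: my-storage-creds
      namespace: openshift-migration
---
apiVersion: migration.openshift.io/v1alpha1
kind: MigPlan
metadata:
  name: my-migration-plan
  namespace: openshift-migration
spec:
  srcMigClusterRef:                  # the OpenShift 3 source
    name: ocp3-source
    namespace: openshift-migration
  destMigClusterRef:                 # the OpenShift 4 destination
    name: ocp4-destination
    namespace: openshift-migration
  migStorageRef:
    name: my-replication-repo
    namespace: openshift-migration
  namespaces:
    - my-app-namespace               # what gets backed up and restored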
For the persistent volumes you choose between copy and move. A copy is when all the data gets copied using Restic, which is a backup service that makes a copy of all your data. A move is typically used when you can share the same storage: NFS would be a good use case, where if you have the same NFS storage on both sides, we can just detach the PV on the source and reattach it on the destination. It's faster, as we don't have to copy any data; we just reattach the same PV.

On the next page you have a couple of different options. If you're doing a copy, you have snapshot capabilities that can be used if you're on cloud storage, and you also have a verify option that does a checksum at the end, to make sure everything has been fully migrated and nothing has changed data-wise.
Then you have the migration hooks. Migration hooks are a way to add automation during your migration, during the execution of a migration plan. Pre and post migration hooks are available. This can be as simple as uploading an Ansible playbook that we run before or after the migration, or you can customize your own image, make it available from a registry, and we'll launch that image as a pod before or after the migration. This way you can plug in any kind of automation.
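As an example of the Ansible playbook style of hook, a pre-migration hook might put the application in a safe state before data gets copied. This is a hypothetical sketch; the deployment name and namespace are made up, and it assumes the `kubernetes.core` Ansible collection is available in the hook image:

```yaml
# Hypothetical pre-migration hook: scale a consumer down to zero
# before MTC starts copying data, so the backup is consistent.
- hosts: localhost
  gather_facts: false
  tasks:
    - name: Scale the queue consumer down to zero replicas
      kubernetes.core.k8s_scale:
        api_version: apps/v1
        kind: Deployment
        name: queue-consumer          # illustrative workload name
        namespace: my-app-namespace   # illustrative namespace
        replicas: 0
```

A matching post-migration hook could scale the workload back up on the destination once the restore completes.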
Then, finally, now that I have my migration plan, I'm just going to execute it. First I can review what is actually going to get migrated by looking at the namespace. This is the new functionality I was talking about: here I can look at the details of all the resources inside that namespace that are going to get migrated.

That gives me a good indication of exactly what's going to happen during the migration process, and then I can either stage or migrate. I'm going to pause for a second, because here I have a couple of options. When I do a stage, staging is actually copying as much of the data as possible from source to destination, but without flipping the switch at the end.
So at the end we're not shutting down the applications, we're not doing the final migration; this is a way to accelerate your final migration later on. As an example, this could be done during business hours: you stage your application so you are ready to do the final migration during a maintenance window that could happen overnight.

And then I could do the migrate, and I would have access to the logs, or to the debugging information as well if the logs are not sufficient and something goes wrong. So here I'm running the staging process, and it takes a few minutes: it will copy all the PVs, all the resources, all the definitions of what's inside that namespace, and get things as ready as possible on the destination side. I'm going to fast forward to accelerate this.
There's an option here that says don't halt the application. By default, if I click migrate, then at the end of the migration process the source side will get scaled down to zero: your application gets quiesced, and we scale the application down to zero before we actually launch it on the destination side. If my application allows it, because it's fully stateless, or if I'm doing testing, then I might want to keep my source side up and running even during the final migration.

So here, as this is a lab environment, I'm going to check that option, which skips the final step of shutting down my source application. It's still going to do the final migration, where I can flip my DNS or my load balancer to point to my destination, but it's not going to shut anything down on my source cluster, as right now I'm doing this for testing purposes inside a lab. This option lets you test the final migration without affecting what is actually running on your source cluster.
That's pretty much it. This will run for a few minutes until my migration has succeeded, and after that I have access to all the debugging information: if something went wrong, I could use the background information to troubleshoot the migration process. That's it for MTC.
Let me switch slides and get to the Migration Toolkit for Virtualization. The Migration Toolkit for Virtualization, or MTV, is a tool that will also be available for free inside OperatorHub, like MTC, and it's going to be available as a tech preview release in December 2020. So right now this tool is not available yet; it's something we are currently finalizing. But we've done a lot of engineering work before on similar tooling, which was called IMS at the time.
IMS was used to migrate virtual machines, for example from VMware to RHV or OpenStack, and in this case we're doing the exact same thing, but for OpenShift Virtualization. So all the tooling in the back end, pretty much the Linux-based tooling that is used to convert the disks and import the data from VMware, for example, is already largely built in.

What we're actually finalizing now is leveraging the OpenShift API to import the VMs inside OpenShift, as well as building a new UI experience that is available as an operator inside OpenShift. The main use case for this tool is mass migration of VMs from VMware to OpenShift Virtualization first; in the future we'll also support RHV and OpenStack, as well as some new migration analytics capabilities.
We used to have migration analytics capabilities in the cloud, where customers could download information about their environment and upload it to the cloud to get a better indication of whether their VMs were compatible with the destination target. Now we will have migration analytics available on premise, inside the tool itself.

That will help you analyze those VMs and let you know whether they are fully compatible, or whether there are known issues when migrating them to OpenShift Virtualization. Sometimes they're not really issues, just additional things that need to happen manually outside of the automated migration flow, or things that need to be architected differently, since the OpenShift technology is obviously sometimes a little different in how certain features work compared to the legacy hypervisor. Having this in the tool gives you a much quicker way to access that data.
Before launching your migration, we still have the cloud-based migration planner, which is going to be available in the cloud as well for even deeper analysis of your overall VM footprint, to help you understand, for example, how many of those VMs would be compatible, or actually good candidates, for migrating to OpenShift Virtualization. And again, all of this is going to be available as tech preview in December, and we're hoping to have it fully GA in April 2021.
D
Thanks, Marco. So let me quickly share my screen.
E
A quick one first: hi everyone, this is Amit Singh from IBM Research, really glad to be talking to the community here. Move2Kube is a recently open sourced project, as James mentioned, part of the overall Konveyor initiative. It really started off with requirements that we saw within IBM, with our large clients in multiple industries.

Since then, the team has done a lot of work to expand it to other sources, Cloud Foundry being another big, interesting one, as a lot of clients and businesses out there are looking to move from Cloud Foundry to a more scalable Kubernetes architecture.
D
Thank you so much. As Amit explained, Move2Kube helps you move towards Kubernetes platforms, any flavor of Kubernetes: plain Kubernetes, OpenShift, or any other flavor. Move2Kube tries to translate from many source platforms, like Docker Swarm or Cloud Foundry, and any of the language stacks, and in the future even ECS and similar platforms, in a uniform way to Kubernetes.

So it runs a process of discovery, discovering all the artifacts, then translating them, then optimizing the configurations for the best deployment onto Kubernetes, and architecting them in such a way that they have good operational characteristics. It can do the same for all the source platforms while moving towards Kubernetes, and we'll look at how it does this in a minute in a quick demo.
At the end of all these translations, what we get is artifacts that we can deploy to Kubernetes or OpenShift, and in cases where the source platform is a non-containerized platform, it can also containerize your application. Move2Kube supports three kinds of containerization techniques: one is a Dockerfile, another is a cloud native buildpack, and the third is source-to-image (S2I). By default it supports most of the language platforms, but if there is a unique language platform, or you have a unique requirement to use a particular base image, Move2Kube gives you a completely extensible framework with which you can add your own Dockerfile-based images without having to edit a single line of Move2Kube code.
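For the Dockerfile route, the generated artifact is an ordinary Dockerfile, so a custom one for a particular base image looks like any hand-written container build. A hypothetical example for a Java application; the base image, paths, and port are made up for illustration:

```dockerfile
# Hypothetical Dockerfile of the kind a custom Move2Kube
# containerization template could emit for a Java build.
FROM registry.example.com/base/openjdk-11:latest
WORKDIR /app
# Copy the jar produced by the Gradle/Maven build
COPY build/libs/app.jar /app/app.jar
EXPOSE 8080
CMD ["java", "-jar", "/app/app.jar"]
```

Registering a template like this in the extensible framework lets every service of that type be containerized the same way, without touching Move2Kube itself.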
In addition to that, Move2Kube can also create Helm charts for you, and stubs for an operator if you choose to have them; you can also create Knative artifacts for your application.

We are also enhancing Move2Kube to create Tekton pipelines for you, so that the moment all the transformation is done, you have a CI/CD pipeline up and running and you are ready to go. In summary, Move2Kube does the discovery, then the containerization, then the translation, optimizing the artifacts and customizing them for the specific cluster that you are going to target.
With that summary, let me take you through a quick demo of Move2Kube. What we are going to look at are the different stacks you see here: we'll see a flavor of each of these applications, and then use the same process to translate all of them to Kubernetes artifacts. Move2Kube is an open source project.

It is open sourced in the Konveyor community; you can go to github.com/konveyor/move2kube to access the source code, and the demo you're going to see is in the Move2Kube demos, the unified flow one, which looks at all the source artifacts and does a one-step translation to Kubernetes. Okay, so Move2Kube uses a three-stage process.
To encapsulate all these complex things, the three stages we will be looking at are collect, then a plan phase, and then translate. In the collect phase, Move2Kube can collect artifacts from the runtime environments. Say you have a Kubernetes cluster that you want to deploy to: Move2Kube collect will get information from the runtime instance that is in your terminal context. And if you have a source platform like Cloud Foundry, and you have access to a Cloud Foundry instance, all you need to do is run the Move2Kube collect command. As soon as it finishes, it creates the m2k_collect folder, a sample of which is on the left-hand side; you can see I've done a tree of the folder I'm looking at. It has two files in m2k_collect.
One is the CF apps file, and the other is the cluster file. The cluster.yaml is essentially a description of your actual running cluster: it has information like the storage classes and all the kinds and versions that are supported by the cluster. This helps us know the characteristics of the cluster.

Without getting into specifics like which Kubernetes or OpenShift version it is, the cluster is defined in terms of the actual resources it supports. And when collect has run against a Cloud Foundry instance, it will create a YAML file with all the running apps it could find in that running instance of Cloud Foundry, which might have information like the environment variables, the name of the app, and so on.
Once the collect phase is done, you have this collect folder, and you just put it alongside all the other source code that you want to translate. Let's look at the different artifacts we are going to translate in this case. The base folder has a CF Node.js folder, which contains a Node.js application with the manifest.yml for deploying to Cloud Foundry; so it's a Cloud Foundry Node.js application. Then there's a Docker Compose file.

If you look into its contents, which are all available in the GitHub repository, you will find three services that need to be translated; so it is a containerized application. The third one is a Java Gradle application: it has the source code and the Gradle-related files. And then there are also some Kubernetes examples for a particular Node.js application: a deployment, an ingress, and a service.
So what we're going to do is just take this base folder, the samples/unified-flow one, and give it to Move2Kube in its second phase, which we just described as plan. All you need to do is run Move2Kube plan against that samples folder, and it does the job.

What it does is go through each and every file in each and every folder it can find, trying to figure out whether there is a Dockerfile, a Compose file, a Cloud Foundry manifest, Knative files, Kubernetes files, anything, plus any metadata that it can use. It goes through every file to understand them and correlate them with each other. Once it has done the correlation, it creates a plan for you, and your plan looks like this.
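The exact plan schema has changed across Move2Kube releases, so treat this as an illustrative sketch of the shape of a generated plan, not the real file from the demo; the service names, option values, and field names are approximations:

```yaml
apiVersion: move2kube.konveyor.io/v1alpha1
kind: Plan
metadata:
  name: myproject                    # illustrative project name
spec:
  services:
    api:                             # found in the Docker Compose file
      - translationType: Compose2Kube
        containerBuildType: ReuseExistingImage
    cfnodejs:                        # Cloud Foundry manifest + runtime metadata
      - translationType: CfManifest2Kube
        containerBuildType: NewDockerfile
  targetCluster:                     # from the collected cluster.yaml
    type: Kubernetes
```

The plan is just a file, so you can edit it by hand before the translate phase to drop services or change the proposed options.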
It's a YAML file that has a list of all the artifacts it has found. It says: I found your three Kubernetes YAMLs. It says: okay, I have found multiple services, each of a different type. For example, there's an api service which was found in the Docker Compose file, and it says: I can convert from Compose to Kubernetes, and I will reuse the Docker image, so you need not update your containerization pipeline, but you do have to change your CD pipeline to deploy to your Kubernetes cluster.

Then it says: okay, I found a Java Gradle application, and I can either create a Dockerfile for you or do cloud native buildpack based containerization. Also, for the Cloud Foundry app, it says: I found a Cloud Foundry application, I found some source artifacts like the manifest YAML, and I also found some runtime metadata; I can combine all of that for you, containerize the application, and get it onto Kubernetes.
Similarly, it is proposing multiple containerization techniques for that same Node.js Cloud Foundry application; for the Node.js Kubernetes one it is also listing that, plus the other two services it found in the Docker Compose file. Then it says: I found cluster information, do you want to target that particular cluster? By default it understands Kubernetes and OpenShift, but any flavor of cluster can be targeted, as long as you've done a collect.

So you see that it is automatically targeting that, and it tells you which artifact types it can produce, in this case by default YAML. So let's take this plan and give it to Move2Kube's next phase, which is called translate, and let it run. Since the plan file is in the same folder, it's going to pick it up and execute the translation.
Here it interacts with me to fill in the missing information. First it asks: okay, I found all these services, do we want to translate all of them? Then it says: I know these kinds of containerization techniques, do you want to support all of them, or only a selected few? Then, for each of the services, it asks what kind of containerization technique to use, based on the types it knows. Then it asks whether to create a Helm chart or plain YAML, and which cluster to target; you can choose your custom cluster. In this case there is a cluster.yaml that I'm going to choose, though I could also choose the default OpenShift. And then it asks which services need to be exposed externally.
This is the folder it has created for us. At the base level it has some scripts that help you orchestrate everything: they let you build the images, push the images, and install with Helm or with the plain YAMLs. First, it has created containerization scripts: for the cloud native buildpack route, a script that builds the container using a buildpack, and for the Dockerfile route, the Dockerfile itself plus the script to build it.

Similarly, if you had chosen an application with S2I, it would have done that too. In addition, it created a sample Docker Compose file so that you can run the container images you are creating locally, to test locally. And then there is a cache of all the answers you have given, so that you can put this into a pipeline: it will not interact with you, it will choose the default answers.
It has created a Helm chart for you, and, if you notice, even though the source artifacts were a deployment, an ingress, and a service, it was able to convert all of them for the OpenShift cluster we were targeting. So it has created DeploymentConfigs, ImageStreams, and routes for you, depending on what you want to deploy; it has also parameterized everything and created a values.yaml; it has created operator stubs for you; and it has created all the other scripts required to install it.
So this is all a manual command line process; you can do it one application at a time, or all of them together, and it supports all of that. In addition to the command line, we also have a UI. The UI is right here: it has the assets, plan, and target artifacts tabs.

So that's a very brief demo of how you take the different artifacts from the various source platforms and translate them to Kubernetes artifacts that you can then deploy to your cluster. With that, I'd like to hand it back over to James to talk about the Konveyor community. Thanks, James.
F
Thanks to Ashok and Amit, and to Marco and Miguel as well. I'll try to keep it brief and bring us home. Hopefully you can see my shared screen now. Yeah.
A
So,
as
shook
mentioned,
and
one
of
the
things
that
we're
working
on
is
as
as
marco
and
miguel
have
mentioned
as
well,
you
know
we
have
an
upstream
development
upstream
first
development
process
at
red
hat,
and
so
one
of
the
things
that
we've
recognized
is
that
there
is
a
large
and
growing
community
and
interest
in
tools
and
best
practices
around
how
to
modernize
applications,
whether
it's
breaking
down
monoliths
adopting
containers,
you
know
how
to
take
your
existing
virtual
machines
and
move
them
forward
into
a
kubernetes,
orchestrated
manner.
A
We certainly see a lot more interest and need for this, and so we want to do this the open source way. What we're working on today is something called the Konveyor community. Ashok had mentioned how the IBM Research team had open sourced Move2Kube into that community. We're at the point where we're starting out; we have a bunch of source code and projects in there. The source code for all the projects that you saw today, except for the migration toolkit for applications, is currently in the Konveyor community.
A
Our goal is to continue to catalyze this community and bring more source code here, but also to bring together, over time, more of the practitioners who are modernizing these applications and working in these areas, so that we can create a really vibrant, diverse community of people who can share best practices and inform the tooling that you just saw, to help improve it and make all the practitioners more efficient and more effective at modernizing these applications faster.
A
So we invite you to join us on the Konveyor Slack channel. If you're on the Kubernetes Slack, I believe it's actually slack.k8s.io, not kubernetes.io, but if you go to slack.k8s.io and join the konveyor channel, we'd be happy to continue the conversation there, whether you're interested in learning more about the community, learning more about the tooling, getting more involved, or have comments or questions. We welcome them.
G
All right, well, I'm really amazed at how much has actually been done, and I haven't seen any questions from the audience yet.
G
So I think there's been this whole conversation with end users over the past couple of years about helping them migrate from wherever they are, whether it's on an earlier version of OpenShift, three to four, and it's pretty amazing to see how much work everybody's been doing to make that happen and to really facilitate people being able to deploy wherever they want and move their workloads successfully.
G
So I'm really a little overwhelmed at how much you guys have done in the past year, year and a half, and I'm thrilled to see the work that IBM has been doing in the research group from Amith and Ashok.
G
So really, kudos to all of you for making this happen, and we're really grateful. And definitely, if you're listening to this anywhere and you have questions, check out the Konveyor GitHub repo, join us on the Kubernetes Slack, and find us there and we'll answer your questions. If there's a platform we haven't touched on yet, I think I saw an Amazon EKS lift-and-shift to OpenShift sort of grayed out; maybe that's in the future roadmap.
A
Yeah, as far as the future roadmap goes, on the migration toolkit for containers side, and I'll let Marco, Miguel, and Ashok keep me honest here, we're really working on the robustness of the tool itself, to make sure that it's entirely bulletproof when you're moving applications from one Kubernetes cluster to another.
A
That's, I think, the current area there. On the migration toolkit for applications, there's a lot of focus right now on Spring Boot to Quarkus and helping with those specific rules, as we've seen quite a big demand there, as well as building out something called Pathfinder, which was developed in our services team, and integrating that with the migration toolkit for applications. So it'll allow you to really get
A
Advice around whether or not an application is suitable for containerization, based on a series of questions that you would answer. And then on the migration toolkit for virtualization, right there we're just trying to get that out with VMware support. In the future, we're going to want to add RHV support to help our existing RHV customers move over to OpenShift Virtualization as well. And Ashok, I can't speak for the Move2Kube roadmap, that's all upstream right now, so do you have any thoughts on that?
D
As far as Move2Kube is concerned, we are trying to build it as native to Kubernetes as possible, so that we can scale it across any flavor of Kubernetes. And since we open sourced it, we have seen a lot of traction, and a lot of people have shown interest in actually contributing back plugins, for adopting it for Amazon's container service and things like that, and there is a lot of community action around it.
D
So hopefully, in the near future, we can see more platforms being supported and extended.
G
Perfect. It's an interesting conversation to have, because it may initially appear like it's a competitive thing where we're trying to get people off of our customers' platforms, but this has probably been one of the most requested services from the solution architects and other folks across Red Hat and at our end user organizations, because this really reflects the hybrid nature of cloud, where people are lifting and shifting their workloads from one platform to another. And that this is all out in the open source too is pretty amazing.
G
So I am, as I said, thrilled to see all of this work going on, and looking forward to helping you guys build out the community around it and seeing where it all takes off and goes. So with that.
G
Yeah, I have a few in my head. I think Tanzu to OpenShift, a few other things; there's a couple of other things out there that could be fun. But I really think it's what the end user organizations have been asking of us. So this is not out of the blue, and really it's not about competing with other folks; it's about facilitating how organizations actually do their work and need to do their work.
G
Hopefully a little bit later today, depending on how good the internet is for me, but we're rocking and rolling. And I look forward, I think we're going to have you guys back for the OpenShift Commons Gathering on November 17th at KubeCon; I'm going to try and sneak in more of the demo on Move2Kube, and any more highlights that we want to show off.
G
So you can look forward to James and Ashok, and maybe Amith as well, and a talk on that, and rocking and rolling to the next one there. And I'm looking forward to December, when we get MTV out there, because I want my MTV, and I'm sure you've heard that joke before; if not, you're not of a certain age. I'm looking forward to December as well.
G
So thanks, everybody, and really, wherever you are, stay safe, keep healthy and happy, and we'll talk to you all soon. Thanks, everybody, for coming together today and making this happen.