A: Hello everyone, welcome to the app modernization demonstration. My name is Hank Scorpio, and I work for the Globex Corporation. I'm glad you could join me today. We are a global conglomerate, and our retail business is one of the key businesses in our overall strategy as a company. I want to tell you a little bit about the history of one of the applications inside our retail business.
A: There's been a group of folks from the Konveyor community knocking on my door and emailing me who wanted to hear about the history of our retail application and where we are today. So I'm excited to invite them into the meeting, tell them that history, and see what they can do to help me modernize this application.
A: This retail application is a typical n-tier application that started in the mid-2000s. It's a monolith, and today it runs on VMs on VMware vSphere. We've got a number of challenges with it: our code commits take a very long time; we had a code deployment that brought down the entire system once; and when things fail, it often takes us hours to fix them, leading to long downtime. And during peak times we have trouble handling transaction volume.
A: If you're familiar with the DevOps metrics you want to measure, we're failing at all of them with this application. So we decided to begin to strangle this monolith. There are four services.
A: We identified the gateway service, the customer service, the order service, and the inventory service inside this application, and we began to strangle them out. At the time we heard of a platform called Cloud Foundry, so we decided to strangle out the gateway and order services and develop a new, modern front end that would make our application more customer friendly as well. We did this all on Cloud Foundry, using Spring Boot.
A
Unfortunately,
we
realized
that
cloud.
Foundry
is
not
the
best
choice
of
platform,
given
the
moment
of
momentum
that
kubernetes
has
in
the
ecosystem.
So
we
now
re.
You
know
we
started
to
realize
that
we
didn't
want
to
develop
any
more
applications
on
cloud
foundry
and
we
also
started
to
discover
new
runtimes
like
orcas,
that
were
even
more
efficient
than
spring
boot
for
cloud
native
application
development.
So
what
we
did
next
was
we
took
this
inventory
service
and
we
split
it
out
into
a
kubernetes
environment.
A: The nice thing here is that we could actually bring our database onto Kubernetes, because Kubernetes could handle persistence. We developed the inventory service using Quarkus, and we thought we were moving in the right direction. Unfortunately, the big challenge we have with the Kubernetes environment is that we've manually deployed this inventory service.
A
Our
development
teams
kind
of
have
just
developed
it
and
and
have
a
manual
way
of
deploying
it,
and
in
order
for
us
to
promote
this
into
our
production,
kubernetes
clusters
we're
going
to
have
to
re-platform
it
we're
going
to
have
to
automate
the
way
we
do.
This
they're
really
trying
to
embrace
a
get
ops
method
for
our
development,
and
so
they
don't
want
to
just
have
manual
deployments.
They
want
to
be
able
to
redeploy
in
an
automated
fashion,
so
this
is
where
we
are
today.
A
It's
really
challenging
our
customer
service
still
remains
on
vms,
it's
slowing
our
deployment
frequency,
so
we
still
can't
deploy
any
faster
than
we
used
to,
because
our
customer
service
is
really
lagging
behind
in
the
ability
to
deploy
faster.
A: Worst of all, perhaps, is that we're maintaining three platforms, and this becomes really difficult. You can see one of our employees there having a tough time managing all three platforms at once. So I've invited the Konveyor team here, because ultimately what I want to get to is this: I want all of my services running on Kubernetes so that I get the benefits of it, horizontal scaling, automated rollout and rollback, bin packing, all those great things that Kubernetes gives me.
A
I
want
to
leverage
a
git
ops
model
to
decrease
my
lead
time,
for
change,
mean
time
to
recover
and
change
value
rate
and
increase.
My
deployment
frequency
and
I
want
to
simplify
my
operations
by
putting
all
of
this
on
a
single
platform,
that's
easier
to
manage
for
my
ops
teams,
and
then
I
can
start
to
plug
in
cloud
services
and
all
these
other
things
and
and
do
even
you
know
more
fancy
cloud
native
things
in
the
future
with
this
application
to
increase
its
value,
so
I've
invited
the
conveyor
team
here.
A
This
is
my
current
retail
application
ramon.
You
are
telling
me
how
you
guys
can
help
me
start
to
modernize
this
app.
Can
you
can
you
tell
me
more.
C: Yeah, of course. First of all, I think we need to run an assessment on all the different services within your application, and it looks like this legacy customers component will be a bit problematic. So I think we should run an analysis on that one to try to find out what could prevent it from running on containers, and once we have found that out, start refactoring the application to adapt it to a more cloud friendly architecture.
A
Nice,
nice,
that's
great,
so
once
we
do
that,
what
am
I
going
to
do
with
this
database,
though
miguel
you
were
telling.
D: Right, the database. Yeah, there are some workloads that are not intended to be migrated straight away, where the modernization could take longer, but you could bring them to Kubernetes either way by moving them as virtual machines. You just take the database in the virtual machine as it is, deploy it as a virtual machine on the target, and then you can leverage all the features that Kubernetes provides for virtual machines, because they are just another Kubernetes object.
A
Thanks
thanks
miguel
yeah,
that's
good,
because
I'm
scared
to
do
anything
with
that
database.
It
scares
me
to
death
to
touch
it.
It's
very
old!
What
about
my
cloud!
Foundry,
apps.
E: For that we'll use the Crane project, which will help you remove everything that was hard coded in your deployments. We'll clean this up, push it to Git, and get this fully automated, so that you can really deploy this app and promote it from dev to QE to production in an automated fashion.
C: Okay, definitely. So allow me to start with the assessment of your portfolio with Tackle. Let me share my screen and pray to the fedora gods that this works.
C: It looks like your architects were proactive enough to load your portfolio into the application inventory, so we can get started here. One of the cool things about the inventory is that it allows you to classify and manage your portfolio in any way you might want. First of all, we have the notion of business services.
C
So
right
now
we
have
all
your
different
business
services
on
screen,
but
we
wanted
to
focus
on
on
your
retail
application,
so
we
can
filter
the
retail
applications
just
like
this
and
talking
about
managing
the
portfolio
and
classifying
the
the
the
applications.
C
One
of
the
most
exciting
features
in
the
application
inventory
is
an
extensible
tagging
model
that
allows
you
to
classify
your
portfolio
in
as
many
dimensions
as
you
might
want.
So,
for
example,
focusing
on
this
legacy
customer
management
application
we
were
discussing
before.
If
we
expand,
we
can
see
a
series
of
tags
in
here.
For
the
moment
we
we
have
used
the
taxi
with
the
concept
of
technologies
that
each
application
has.
So
we
have
java
tomcat
oracle.
C
As
I
said,
the
tagging
model
is
extensible,
so
we
added
another
tag,
type
related
to
the
different
custom
frameworks
you
might
be
using
within
your
application.
So
it
seems
like
this
legacy.
Customer
management
application
is
using
a
custom,
configuration
library,
and
I
get
the
feeling
that
this
might
be
the
problem
that
we
need
to
solve
to
to
make
this
application
suitable
for
containers.
C: Okay, moving on. The way we see the assessment is as a questionnaire-driven assessment. We are presented with a series of questions on different aspects of the application landscape, and by that I mean technology, application lifecycle management, architecture, all the different concerns that might have an impact on the application. The idea behind the assessment is to find out how suitable the application is for containers.
C
Yeah
absolutely
happy
to
be
here
to
help
so
yeah
what
what
the
tool
does
is
based
on
your
answers.
It
detects
a
series
of
potential
risks
that
might
prevent
the
application
or
present
some
sort
of
threat
for
the
application
to
to
run
on
on
containers.
So,
let's,
let's
just
keep
the
the
assessment
and
go
straight
into
the
review
to
find
out
what
what
these
risks
are.
C
So
I
can
save
and
start
with
the
review
process,
and
here
we
have
a
high
level
diagram
of
the
different
risks
that
have
been
identified
out
of
my
of
my
answers,
but
we
can
get
down
to
the
detail.
C
Absolutely
so
we
have
the
list
of
risk
identified.
If
I
reorder
this
thing,
I
get
the
hike
and
mediums
presented
first,
so
there
seems
to
be
some
problem
with
the
way
your
application
handles
service
discovery,
and
that
makes
sense
because
it
comes
from
a
legacy
platform
in
which
static
ips
and
things
like
that
are
used,
and
that
is
not
very
cloud
friendly.
C
So
that's
that's
one
thing.
The
other
risk
that
has
been
identified
is
the
maturity
level
in
your
organization
relates
to
containerization
process.
But
I
guess
that's
why
we're
here?
Yeah
absolutely
and
finally,
we
have
detected
that
you
have
some
some
some
trouble
with
how
the
application
is
is
being
configured.
So
there
seems
to
be
multiple
configuration
files
in
multiple
file
system,
location,
and
that
is
an
anti-pattern
when
you're
talking
about
cloud
cloud
native
and
cloud
friendly
applications,
yeah.
C: Yeah, absolutely. It seems like this custom library that we already detected bears some responsibility here, and we need to figure out what to do with it, maybe replace it with a more straightforward approach or something like that. Once we have identified the different risks, we have enough information to make an informed decision on what the best migration strategy for this application would be.
C
So
if
we
go
up
we're
presented
with
a
six
r's
or
or
or
the
six
hours
from
for
amazon
the
standard
for
the
different
migration
strategies
to
follow.
In
this
case,
we
will
choose
refactor,
since
we
need
to
perform
some
changes
in
the
source
code
for
the
application
to
be
more
container,
ready
and
and
and
cloud
friendly
cloud
native.
C: We can submit the review, and everything gets stored for later consumption. So now we have this assessment, and we have some clues about what needs to be addressed here. The next step will be to run an analysis and detect, in the actual source code, the things that are preventing us from doing a clean migration towards the cloud.
C: Exactly, that's the idea behind this. We have an analysis piece for Tackle under development right now; we're bringing some other projects into the Tackle umbrella. So for the moment we will have to switch to another tool, but in the future we want to have everything fully integrated, in the same fashion as the assessment: we click on an application and click on assess, and it will be the same with the analysis.
C: Okay, so no need to choose an application server; we need to do a containerization. Of course, I would do a sanity check with the Linux migration path, just to make sure there are no Windows static paths in there from other versions of the application. And I know you have some problems with the licensing of the Oracle JDK and will want to get rid of that.
C: Moving on, once we have selected the migration path we want to follow, it's time to select the packages we want to analyze. We remove everything else; we want to focus just on the business classes from your application and avoid the libraries, so we will choose this konveyor package. It's funny that you use the same package naming that we do, so it kind of feels like this is some sort of staged marketing demo, but it isn't. No, absolutely, the buildings behind me are real, okay, yeah.
C: So once we have selected the business packages we want to analyze, we move on. The next step is custom rules. We've been discussing this custom configuration library; we already had some conversations with your architects, and they told us about this library, so we know how to find it within your code, and we already came up with a strategy to replace it with a more straightforward, standard approach to enable externalized configuration in Kubernetes.
C
So
this
analysis
component
is
a
rules
engine
that
is
extensible,
so
we
came
up
with
another
extended
rule
and
we
added
to
the
rule
set
for
the
for
the
analysis.
So
we
we
upload
it.
We
enable
the
rule
and
then
we're
good
to
go
with
the
analysis.
Oh
great,
okay.
C
Moving
on,
we
won't
be
using
any
custom
labels.
We
also
have
a
gazillion
of
options
to
fine-tune
to
further
fine-tune
the
analysis,
but
for
the
moment
we
will
stick
with
the
with
the
target.
Okay,
so
moving
on
and
last
step
is
to
review
that
we
didn't
make
any
mistakes
which
we
didn't
so
we're
good
to
go
and
start
the
the
analysis.
C: Exactly, that's what it does. It's a form of static analysis using the binaries: decompiling them and then analyzing the source code. So, since we are a bit tight on time...
C: That's it, exactly. This legacy configuration finding is an occurrence of the custom rule we developed. It presents us with the number of incidents within each one of the classes of your application and provides a series of hints and links to documentation on how to fix them.
C
Fix
this
this
this
issue.
So
if
we
click
on
the
class
itself,
we
can
navigate
straight
into
your
circle
source
code
and
see
where
the
the
offending
lines
are
have
been
detected
on
on
on
the
analysis,
and
that
that
that
this
could
be
pretty
useful
to
work
with
the
changes
but
we're
working
on
a
web
console
here.
So
we
cannot
actually
do
the
changes.
C: Yes, so one option would be to keep switching from this window to your IDE, but that doesn't feel very productive to us. That's why we developed a series of IDE plugins for the most popular IDEs out there. My IDE of choice is VS Code, and I already have the project open in VS Code with the plugin installed, so once I have my project open, I can go into the plugin view and configure another analysis.
C: I can click here and run, the analysis will run, and we will get the results. I already did that and have the results here, so we can access the exact same report that we have on the web console, consumed locally. But again, we agree that this is not the most practical approach to follow, so we also have a list of issues that have been detected in the application. If we navigate to this persistence config class and open it, there seem to be two hints.
C
If
I
click
on
here,
I
can
see
what
the
offending
lines
are.
If
I
need
any
more
detail,
I
can
hover
here
and
see
the
description
or
I
can
see
the
details,
so
I
can
basically
get
the
the
details
on
what
needs
to
be
changed
and
the
actual
source
code
side
by
side
and
start
performing
the
changes
which
will
be
pretty
pretty
straightforward
and
and
easy.
So
after
doing
this,
after
doing
all
these
changes,
my
application
would
will
be
ready
to
be
deployed
in
containers,
but
we
we
need
to
go
to
the
next
level.
C
Once
we
have
our
our
source
code
ready
to
be
in
containers,
we
definitely
need
to
generate
all
the
different
manifests
and
images
images
for
this
application
to
run
in
kubernetes,
and
that
is
something
that
moved
to
cube
that
so
we
have
my
colleague
a
shock
here
that
will
show
you
how
this
thing
works.
A: Oh wow, that's great, thanks Ramon. So basically you've taken me through assessing my entire application portfolio, then analyzing the customer app, then understanding what needs to be changed and making those changes. And now you're saying that we could use a tool called Move2Kube inside Konveyor to actually generate all the objects and manifests and things I need to deploy on Kubernetes. Is that right? That's it? Okay, cool! So, Ashok, from what I understood, you're going to focus on the Cloud Foundry pieces, but this could just as well...
B: Absolutely. You have a couple of Spring Boot apps, the gateway service and the order service, and a Node.js application for the front end. Let's look at them and see how we can translate them.
B: So let's have a quick demo of Move2Kube with the UI. Akash, if you can share your screen.
B: Okay, the first thing that we are going to do for Move2Kube is look at the source code. Here is the source code; we have the e2e demo apps, so let's look at what we have there.
B
So
if
you
look
at
the
code
there,
you
have,
the
patented
react
seed,
which
is
the
front-end
app,
and
then
you
have
your
back-end
applications
in
the
rho
oar
micro
services.
Demo,
you
have
your
orders
and
the
gateway
services.
So
what
we
are
going
to
do
now
is
to
take
a
zip
of
this
folder.
We
have
already
separated
e2e
demo
apps
and
we
are
going
to
give
it
to
motor
cube
to
do
the
translation.
B
So,
let's
head
over
to
the
multi
cube
ui.
So
this
is
the
motor
cube
ui.
What
we
are
going
to
first
do
is
to
create
a
project
call
demo,
and
then
we
are
going
to
head
over
to
that
project,
and
the
first
thing
we
need
to
do
is
to
upload
the
source
code.
So
we
upload
the
zip
file.
B: The second thing we will upload is the configuration. There is some environment information, like your ingress and other details, that we need to give to Move2Kube so that it can create exactly the right artifacts. So we upload the configuration to Move2Kube, and you can see both of them are there in the UI.
B
The
next
step
that
we
will
do
is
to
do
the
processing
here.
What
it
will
do
is
we
will
go
through
all
the
files
and
use
the
configuration
try
to
understand
what
are
the
services
there
so
to
save
time,
we
have
done
a
pre-processing
of
this
in
our
other
project.
So,
let's
just
head
over
to
that:
okay,
okay,
so
now
you
can
see
that
the
plan
file
has
been
created,
which
has
information
on
your
different
services
and
the
different
folders
in
which
it
found
it.
B: So let's try that out. Let's click on start transformation; it uses the information in the plan file, the source code, and the configs that we gave, and it will do the translation and give you the output.
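The same plan-and-transform flow shown in the UI is also available from the Move2Kube CLI; a minimal sketch, where the folder names are illustrative placeholders:

```shell
# Scan the source folder and write a plan file (m2k.plan) describing the detected services
move2kube plan -s e2e-demo-apps/

# Review/edit the plan if needed, then generate the deployment artifacts from it
move2kube transform -s e2e-demo-apps/ -o output/
```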
B: Exactly. You can see that the transformation is done. Let's download the artifacts and see what's in there.
B
So
it
will
give
give
us
a
zip
file,
let's
just
unzip,
that
file
and
see
the
artifacts
motorcube
has
created
for
us.
B
So
if
you
open
the
cf2
ocp
folder,
you
have
a
source
folder,
which
is
your
initial
source
that
we
uploaded
and
into
which
it
has
exceeded
some
more
files.
Let's
look
at
what
it
has
added
to
it.
It
has
added
the
docker
files
to
each
of
your
services.
Oh.
B
Exactly
so,
you
can
create
your
docker
images
out
of
it
using
these
docker
files
and
then
what
it
has
also
created
is.
It
has
created
some
scripts
which
help
you
test
locally.
B
We
have
already
used
these
scripts
to
build
their
images
and
push
your
images
to
registry,
and
the
next
step
would
be
to
develop.
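Building and pushing the generated images boils down to standard container tooling, roughly what the helper scripts do; the registry and image names below are placeholders, not the ones from the demo:

```shell
# Build one of the generated services from its Move2Kube Dockerfile (repeat per service)
docker build -t quay.io/globex/gateway-service:latest -f Dockerfile .

# Push it to the registry the cluster will pull from
docker push quay.io/globex/gateway-service:latest
```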
B: Exactly. And if you look at the parameterized folder, it has a bunch of additional Helm charts, Kustomize YAMLs, and OpenShift templates for these files, so if you prefer to use any of them, you can. Now let's deploy this app into Kubernetes. Let's head over to our terminal and check whether we are connected to the cluster by using kubectl version.
B
Okay,
so
we
are
connected
to
the
kubernetes
cluster
and
then
what
we
will
do
is
we
will
now
push
the
yammers.
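Pushing the YAMLs amounts to applying the generated manifests and watching the workloads come up; a sketch, assuming the default Move2Kube output layout and a namespace named demo (both assumptions):

```shell
# Apply all generated manifests from the Move2Kube output folder
kubectl apply -f deploy/yamls/

# Watch the deployments, services, and ingress come up
kubectl get pods,svc,ingress -n demo
```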
B: We built the images, pushed the images, and now we have deployed all your services; it is creating all the deployments, services, and ingress. Now let's look at whether all the pods are up, and then we can see our app running.
B: So we have the ingress; now let's just check this ingress in the UI.
B
Let's
get
here
and
we
are
able
to
check,
have
your
front-end
app
running
with
connections
to
your
backend.
B: Now you are able to see it locally, but you might want to automate your CI/CD pipelines. If you notice, you have the CI/CD artifacts too: it created Tekton artifacts that you can use to automate your build process.
A
Nice,
I
could
put
those
right
into
my
pipelines
and
be
on
my
way,
absolutely
cool
great
thanks,
ashook
this
and
akash.
This
was
really
helpful
to
see
how
we
could
you
know,
use
move
to
cube
to
move
our
spring
boot
applications
over.
Let
me
go
ahead
and.
A
Bring
this
back
so
just
to
kind
of
reiterate
where
we're
at
so
we
had.
We
just
moved
all
of
our
spring
boot
apps
over
to
kubernetes.
Now
so
now
I
have
my
front
end,
my
gateway,
my
order
service
and
my
customer
service
all
running
on
kubernetes,
but
I
do
have
a
question
here.
We
we
kind
of
forgot
about
that
database
with
with
the
customer
service.
So
how
am
I
going
to
move
that
over
miguel?
You
mentioned
moving
that
over
with
forklift?
Is
that
something
we
can?
We
can
talk
through
now.
A: Yeah, definitely, I get it. I'm really nervous to move that over or do anything to it, because there's a lot of old PL/SQL in there and we don't really know what's going on. So getting it over into Kubernetes first and then figuring that out would be a lot better.
D
Yeah,
so
we
have
this
tool
forklift
that
is
going
to
bring
virtual
machines
from
your
environment,
from
your
vmware
environment,
into
kubernetes.
Using
keyboard
keyboard
is
a
capability
that
helps
you
run
vms
in
in
pods,
just
like
container
inputs,
but
but
with
vms.
D
So
what
we
have
here
is
that
we
have
deployed
forklift
and
it
has
configured
provider
that
is
cubeverd,
as
you
could
see
here,
so
we
have
source
and
target.
So
this
is
going
to
be
the
target
we
need
to
add
the
source.
We
can
simply
add
the
source
by
adding
vmware
and
what
we're
going
to
do
is
just
provide
the
name,
the
the
ip
address
or
hostname
and
our
credentials
to
access
it
and
on
a
fingerprint
to
ensure
that
we're
connected
to
the
right
place
and
there's
no
money
in
the
middle
attacks.
D: ...providers, you see, and it has loaded everything. We can see the hosts here; we have been scanning your VMware environment live to get all the data, and right now we are ready to perform a migration. To do that we create a migration plan and give it a name. We're going to focus on retail, since that's what we are migrating right now from what you told me. So let's migrate retail.
D
And
we
select
the
source
provider
in
this
case,
the
center
and
the
target
provider
which
is
automatically
configured
and
say
that
host,
which
is
your
your
kubernetes
environment,
and
we
can
select
the
namespace,
I'm
selecting
here,
globalx
retail,
although
we
can
create
it
from
the
menu
and
go
next
and
then
I
have
to
select
the
vms
that
we
want
to
migrate.
So
I
select
this
this
cluster
and
then
it
will
gather
all
the
information
about
the
cluster
and
I'll
get
here.
D: Okay, cool. Exactly, we don't want to have issues during the migration; we want to warn as early as possible. So here we have the retail database that we want to migrate, and we also have some other VMs, like the business rules for bacteriological warfare and the USB of doom for doomsday. That's a topic for another day.
D
We'll
keep
those
out
for
now.
So
let's
focus
on
the
retail
database.
We
click.
Next
and
now
we
have
to
establish
an
equivalence
between
the
networks
in
the
source
and
the
networks
in
the
target.
So
I'm
going
to
create
a
map,
an
equivalence,
so
I'm
going
to
use
the
vm
network
that
the
tool
has
detected
in
the
vm
that
is
being
consumed
by
by
the
network
interface
and
I
select
the
target
network.
So
I
use
the
port
network
and
I
can
save
it
to
reuse
it
in
the
future.
D
So
I
could
save
it
as
network
map
and
go
next
same
thing
happens
with
the
storage.
We
need
to
establish
an
equivalence
between
source
and
target
storage,
so
it
has
intended
that
we
are
using
an
nfs
data
store
in
the
source
and
I'm
going
to
use
a
standard.
It's
also
nfs
in
the
target,
so
they
are
equivalent.
If
you
have
you
happen
to
need
faster
storage
in
the
source,
you
should
select
also
on
the
storage
class
in
the
target
that
you
have
to
have
pre-configured
in
your
environment
first,
so
I
could
save
it
click.
D
Next,
I
am
going
to
use
code
migration
because
we
don't
have
change
block
tracking,
but
there's
the
chance
to
be
able
to
do
board
migration,
which
copies
the
data
before
shutting
down
the
vm,
and
once
the
data
is
copied,
we
can
shut
down
the
vm
copy,
only
the
changes
that
were
applied
to
the
disk,
reducing
the
time
required
as
downtime
and
increasing
the
number
of
vms
that
you
could
migrate
in
one
intervention
window.
I
like
that,
doesn't.
D: Yeah, we also have hooks in case we want to automate changes. We are not automating changes in this case, so we just complete and finish, and the plan is ready to start. Let's say the intervention window arrives: we click start and the migration gets going. Right now the tool is connected to the source, to VMware, and it is using VDDK, which is what the backup tools use to gather the data.
D
So
if
it
works
for
backups,
it
works
for
the
tool
and
we
are
transferring
the
disk
as
you're
very
busy
man.
I
already
transferred
one
vm.
So
let
me
show
you
I'm
using
here
openshift,
so
we
could
show
you
in
our
new
eye
and
and
but
all
the
tasks
that
I've
shown
you
could
be
done
via
apr
cli
on
kubernetes.
So
we
are
importing
and
I
have
another
vm
already
imported.
So
what
I'm
going
to
connect
to
this
vm?
Okay,
so
I
can
connect
to
the
console
and
then,
as
you
will
see,
oh.
D
Yeah
yeah
yeah
it
is,
it
is
exactly
right
there
if
I
do
not
mistype
it,
so
I'm
I'm
going
to
to
become
the
oracle
user
and
I'm
going
to
check
for
the
connections
right.
We
have
200
connections
and
we
need
more.
We
need
150.
So
that's
one
thing
we
could
do
in
kubernetes.
That
is
very
interesting.
We
have
this
deployment.
Sorry,
this
config
maps,
okay
and
I
create
a
config
map
that
is
very,
very
small
and
straightforward,
and
what
it
does
is
that
it
changes
the
number
of
connections.
For
me.
Oh
wow,
okay,
so.
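A config map along those lines might look like the sketch below. The key name and how the database VM consumes it are assumptions for illustration, not the exact manifest from the demo:

```shell
# Hypothetical ConfigMap that raises the database's maximum connection count;
# the VM would read this value at startup (illustrative wiring)
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: oracle-db-tuning
  namespace: globex-retail
data:
  max_connections: "250"
EOF
```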
D: I mean, we could wait and check it, but the thing is that once the VM is restarted, we'll see that the number of connections has increased to 250.
A: Great, awesome, thanks Miguel. This was super cool, to see how we can basically move this VM into a more modern way of working. Now I can actually use it with config maps and make changes to it a lot more easily. So, just to make sure I'm on track here: we just moved that VM over, that Oracle database.
A
So
now
we
have
the
gateway,
orders
and
customer
service
our
front
end
the
oracle
database,
everything
running
on
kubernetes,
except
for
one
last
thing,
marco,
which
is
my
inventory
service.
So
how?
How
am
I
going
to
do
this
remember?
This
was
kind
of
manually
deployed
and
I
need
to
figure
out
how
I'm
going
to
redeploy
this
into
this
new
kubernetes
cluster,
bring
the
state
along,
but
also
automate
it
with
some
kind
of
git
ops
flow,
because
you
know
I
don't
want
to
leave
it
the
way
it
is.
E: The first thing I will do is export the current manifests, using a command called crane export. What crane export does is look at what's currently actually deployed in the inventory source namespace and export all the manifests. Then we can review those manifests and use another crane command, crane transform, to clean this up: remove everything that is environment specific. It can do all kinds of things to your files, for instance to help you embrace new technologies.
E
And
things
like
that.
So
let
me
just
window
first
show
you
this
new
export
folder.
Now
that
has
all
your
kubernetes
manifest
in
there,
okay
and
then
I'll
use
the
crane
transform
command,
which
will
look
at
those
and
we'll
be
stripping
out
like
service
cluster
ips
and
metadata
information
that
is
specific
to
this
environment.
A
So
those
are
all
the
things
that
would
trick
me
up
when
I'm
trying
to
deploy
into
a
new
environment
right.
Those
are
all
specific,
exactly.
E
And
that's
a
common
mistake
that
we
see
right.
People
are
just
like
deploying
something
on
one
kubernetes
cluster
but
like
if
you
really
want
to
embrace
hybrid
cloud
and
be
able
to
or
even
promote
your
your
apps
from
dev
to
qe
to
production.
This
needs
to
be
automated
and
you
don't
want
anything
to
be
hard-coded
in
there.
So
we
want
that
to
be
programmatically
done
instead
of
like
hard-coded
in
your
manifest.
E
But
to
do
that
now
we
created
like
a
transform
folder,
which
actually
is
all
the
things
we
have
detected.
That
should
not
be
our
coded
in
your
files
and
we're
creating
json
patches
out
of
that,
so
that
this
can
be
reviewed
and
analyzed
to
make
sure
you
know
we're
applying
the
right
changes
and
when
we're
ready,
then
we
can
use
the
crane
apply
command,
which
will
look
into
all
that
apply
the
patches
and
then
create
all
the
files
that
you
want
to
push
to
get
so
that
this
point.
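The three-step Crane flow described above, as a command sketch. The namespace and folder names are illustrative, and the flag names are assumptions based on the flow described, so check `crane --help` for the exact spelling:

```shell
# 1. Dump what is actually running in the source namespace
crane export --namespace inventory --export-dir export/

# 2. Generate reviewable JSON patches that strip cluster-specific fields
#    (status, cluster IPs, UIDs, and similar environment-specific metadata)
crane transform --export-dir export/ --transform-dir transform/

# 3. Apply the patches to produce clean manifests ready to push to Git
crane apply --export-dir export/ --transform-dir transform/ --output-dir resources/
```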
E
This
can
be
fully
automated
using
argo
cd
to
deploy
your
app
on
any
cluster
and
and
and
can
be
done
programmatically
instead
of
manually
as
you're
deploying
or
promoting
your
application.
So,
let's
just
run
the
train
apply
command.
E
And
now,
if
I
go
to
my
git
folder,
I
have
a
resource
folder
that
got
created,
which
is
the
brand
new,
manifest
now
fully
cleaned
up
and
ready
to
go.
So
the
last
step
I
have
to
do
is
to
push
that
to
my
git
repository
so
that
our
go
cd
can
pick
this
up
and
and
provision
my
application
automatically
for
me.
So
if
I
go
back.
E: ...to the folder here, I have, oops, I can type, an Argo file now. It's a very simple Argo CD Application definition with the repository that I want to use to deploy this app and the namespace where I want it to be provisioned. From this point on, Argo takes care of that for me: I can make changes, push them to Git, and have this automatically deployed all the time through our Argo CD automated deployment model.
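A minimal Argo CD Application definition along those lines might look like this; the repository URL, names, and namespaces are placeholders, not the ones from the demo:

```shell
# Hypothetical Argo CD Application pointing at the crane-generated resources folder
kubectl create -f - <<'EOF'
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: inventory
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/globex/inventory-gitops.git  # placeholder repo
    targetRevision: main
    path: resources
  destination:
    server: https://kubernetes.default.svc
    namespace: inventory
  syncPolicy:
    automated: {}   # keep the cluster in sync with whatever is in Git
EOF
```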
E: So it's good that we're moving your application and then we can deploy automatically, but there's a question of state, and the state is all the PVCs with all the data of your database. That also needs to follow the deployment and be pushed from one environment to another. So we have another crane command for that, called crane transfer-pvc.
E
Rsync,
in
the
background
to
actually
copy
the
data-
and
this
can
be
run
multiple
times-
they
will
just
copy
the
data
from
the
last
time.
I
ran
this
command,
but
you
guys
have
a
very
a
big
database
with
a
lot
of
products
right.
So
I
already
read
that
for
you,
so
we
don't
have
to
wait
for
all
those
products
or.
E
So
so
I
already
ran
that
for
you,
the
the
database
already
there
so
actually
now,
I'm
ready
to
run
just
in
our
go.
So
I'm
just
going
to
run
this
cube,
ctl
create
with
the
our
go
definition
file,
and
this
will
launch
our
go
to
actually
provision
my
app.
So
let's
look
at
at
argo
and
see
how
how
actually
argo
is
provisioning.
All
this
just
give
me
one
second
here
so
you'll
see
now
that
argo
actually
looked
at
my
brand
new
manufacturer
created
with
crane.
This.
E: Yeah, and now you are following GitOps principles, which should be the best-practice way of deploying and promoting apps and embracing hybrid cloud: you can deploy on any cluster and it will just follow along, as everything is fully automated. Any time you want to make a change, you make it in your Git repository, and then Argo will keep provisioning the latest changes for you automatically.
A
Great
awesome,
thank
you.
So
much
mark,
that's
a
great
great
demo.
It's
showing
me
now
now,
I
believe
so
with
this
is
basically
the
end
state
we've
gotten
too.
So
we've
seen
how
we
can
now
have
all
of
our
services
running
on
kubernetes
leveraging
a
git
ops
paradigm
and
then
simplifying
our
operations
by
running
on
a
single
kubernetes
platform.
A
So
this
puts
our
retail
application
in
a
great
place
right
now
we
can
start
to
plug
in
cloud
services
to
this
start
to
bring
in
aiml
all
the
new
cool
things
that
are
available
when
you're
running
your
application
natively
on
kubernetes.
So
hopefully
this
demo
by
the
the
conveyor
group
was
helpful,
just
to
reiterate
some
of
the
tools
you've
seen
across
rehosting
re-platforming
and
ring
factoring.
A
You
can
see
them
here
in
the
future,
we're
going
to
be
bringing
in
polaris
as
well,
which
will
actually
help
us
measure
those
dora
metrics
so
that
we
could
see
the
improvements
we're
making
from
a
software
delivery
programming
standpoint
over
time.
So,
if
you're
interested
in
joining
us
on
this
journey,
please
join
us
at
conveyor.io.
A
This
is
where
the
community
hangs
out
we're
interested
in
understanding
how
you're,
modernizing
your
apps
and
interested
in
building
tools
to
re-host
free
platform
and
refactor
your
apps
to
run
on
kubernetes
and
use
other
cloud
native
technologies.
You
can
find
us
on
conveyor
slack.
You
can
learn
about
us
by
joining
meetups
and
if
you're
interested
in
proposing
a
meetup
talk,
you
can
email
us
at
conveyorio
gmail.com.