Description
Project Harvester is a new open source alternative to traditional proprietary hyperconverged infrastructure software. It is built on top of cutting-edge open source technologies including Kubernetes, KubeVirt, and Longhorn.
In this talk, we will discuss why we decided to build Harvester, how we integrated with KubeVirt, and the lessons we've learned along the way. We will also demo the current release of Harvester at the end of the session.
Presenter: Sheng Yang, Sr. Engineering Manager, SUSE. Twitter/GitHub: @yasker
B
Thanks, everybody, for attending this session. My name is Sheng Yang; I'm a senior engineering manager at SUSE. Previously I worked at Rancher Labs, where I drove a few projects like Longhorn and Harvester, and before that you might have heard of some projects I worked on called Convoy and Local Path Provisioner.
B
Those are in my scope of responsibility, and today I'm here to talk about one of the newest projects inside Rancher Labs and SUSE, which is Harvester.
B
So first, let's talk about the why: why do we build Harvester, and why do we do it now? Back when Docker started getting popular, there were in fact already many efforts, many attempts, to unify VM management with container management platforms. Rancher Labs itself started the RancherVM project very, very early, around 2015, which I think was even before Kubernetes became the mainstream orchestrator.
B
At the time, RancherVM was using Docker as the one unified tool to start VMs, and even used the Docker image repository as the VM image repository. We packaged a small QEMU and other components into the VM image, so you could use a docker run command to start a VM instantly and use the VNC on the side to access it. We even did some retrofitting of that project in 2018 to make it compatible with Kubernetes.
B
But later we archived the project, in 2019. For those of you familiar with the history of virtualization with Kubernetes and Docker, you'll also know there was another project called Virtlet, from Mirantis.
B
That project was started in 2017, and it was basically doing a similar thing to KubeVirt at the time, but at a different level: Virtlet runs as a Container Runtime Interface (CRI) implementation, and they aimed to solve the problem at that level. But this project has also been inactive since 2019.
B
So finally, we have KubeVirt. We all know KubeVirt originally came from Red Hat, which has now donated it to the CNCF, where it's a sandbox project, and of course it's the topic we're talking about here. KubeVirt is in fact doing much better and growing much stronger than the other projects. The main reason, I think, is that KubeVirt is rooted in Kubernetes.
B
As the popularity of Kubernetes grew, we all saw the potential of Kubernetes as a single orchestration platform: if we can run VMs on top of it as well, that will be a very big gain in reduced complexity compared to having to operate VMs and containers separately. That is why KubeVirt, I think, has gained a lot more popularity and is still going strong these days. And that's also why we have the first KubeVirt Summit here.
B
Last year, the Nutanix revenue was about 1 billion something, and the VMware revenue, I think, was about 16 billion or something, which is very, very big: probably 10 or 100 times bigger than the entire Kubernetes market. But those people using VMware and Nutanix are not really switching. Of course, some of them switch to Kubernetes and maybe start using KubeVirt, but most of them haven't done that yet. So why is that?
B
Why is there such a big difference in the market? One thing we think is that there's no real open source alternative to commercial HCI. If a user wants a very unified experience, they have to go to VMware. Of course, there's another product you might have seen here, which is OpenStack, from the years before Kubernetes became mainstream.
B
A
lot
of
lots,
big
companies
pouring
tons
of
money
into
trying
to
make
openstack
the
de
facto
standard
for
the
vm
orchestration
on
the
on
the
on
premises.
But
that's,
as
you
know,
doesn't
work
out.
So
that's
a
lot
of
resources.
We
think
there's
a
lot
of
resources
to
spend
on
that
and
that's
and
also
basically
means
that
the
resource
has
diverted
from
like
create
a
new
open
source
alternative
for
the
commercial
hci
which
can
compete
at
the
same
playground
with
vmware
and
nutanix.
B
You might also know there's another open source HCI solution out there called Proxmox. Yes, it's open source, and it's actually a real solution, but the market share Proxmox has is really not comparable to VMware or Nutanix, not even to the extent of OpenStack, which itself wasn't really successful.
B
So we think that may be another major reason there's been no real mass transition to Kubernetes with, for example, KubeVirt. That's one reason we want to try this and see if Harvester can be the product to fill that gap.
B
So we created Harvester. Harvester is, by definition, open source hyperconverged infrastructure software, so Harvester's position is directly competing with VMware vSphere and Nutanix. Think about what vSphere and Nutanix do: you have an ISO, right? If you want to install an ESXi server, you install from the ISO onto the disk of your bare-metal machine, you boot it up, you see the screen, and you know where to access it, or you create a vCenter to connect to those nodes.
B
That's pretty different from what we have right now on Kubernetes, where you run a bunch of scripts using either kubeadm or RKE or k3s, you bring up the cluster manually, and then you deploy the KubeVirt components, probably some CNI components, and some storage as well. The purpose of Harvester is to be the easiest way for the end user to get HCI up and running on bare-metal servers.
B
It should be as easy as installing a VMware ESXi server or a Nutanix server. That's the one goal, and this is the part we spent a lot of effort on. On the other side, because Harvester essentially is still cloud-native software, you can also deploy Harvester as a Helm chart on an existing Kubernetes cluster, but of course that cluster needs to have virtualization support on every single node.
B
In fact, this way of deploying the Helm chart directly is pretty complicated and has a lot of restraints. For example, you have to have a certain version of Longhorn, or rather not have Longhorn installed at all, and also not have KubeVirt installed, because Harvester itself is going to bring up certain versions of those components that we make sure are certified and compatible with that Harvester version on your Kubernetes cluster.
B
In fact, in the future we're still not going to prefer this way of deploying via the Helm chart; it's most likely going to be the app mode, or the developer mode. We envision that when you run production workloads on Harvester, it's still recommended to use the HCI mode, and that's probably going to be the only mode supported for running in production.
B
We're going to keep it that way in the future just to minimize the variety of environment problems you might have with Harvester. Another very important aspect is that Harvester, of course, is based on several cloud-native technologies.
B
But we are designing this project to really require no Kubernetes knowledge to operate. So this is one part.
B
We also spent a lot of time converting the Kubernetes terminologies, and also supporting, as you can see in the next slide here, VLAN networking, which is basically unheard of in the Kubernetes-native world. We're also using Multus, which allows you to have two different NICs on the same VM: one connects to what we call the management network, which in fact is the Kubernetes overlay network, and the other one is, of course, the VLAN network.
B
We are also aiming to add more networking plugins (CNIs) in the future, but it's going to be one overlay plus some type of layer 3 or layer 2 networking plugin.
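To make the two-NIC setup concrete, this is a minimal sketch of how a VLAN-backed VM network is typically declared for Multus using the standard bridge CNI plugin. The bridge name, namespace, and VLAN ID are illustrative assumptions, not taken from Harvester's actual configuration:

```yaml
# Hypothetical NetworkAttachmentDefinition for a VLAN-backed VM network.
# "harvester-br0" and VLAN 91 are illustrative values.
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: vlan91
  namespace: default
spec:
  config: |
    {
      "cniVersion": "0.3.1",
      "type": "bridge",
      "bridge": "harvester-br0",
      "vlan": 91,
      "ipam": {}
    }
```

A VM then attaches one interface to the pod (management) network and a second interface to a definition like this via Multus, giving the two NICs described above.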
B
So what can Harvester do right now? Of course, VM lifecycle management; that's the key piece KubeVirt provides. You can do VM create, edit, stop, and restart in the Harvester UI, and we support SSH key injection and cloud-init, which of course are just exposed from KubeVirt. We also provide the graphical console and the serial port console in our UI as well.
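As a rough illustration of what the UI drives underneath, a KubeVirt VirtualMachine with cloud-init based SSH key injection looks roughly like the sketch below; all names, the root-disk claim, and the key are placeholders:

```yaml
# Sketch: KubeVirt VirtualMachine using cloudInitNoCloud for SSH key injection.
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: demo-vm
spec:
  running: true
  template:
    spec:
      domain:
        cpu:
          cores: 1
        resources:
          requests:
            memory: 2Gi
        devices:
          disks:
            - name: rootdisk
              disk:
                bus: virtio
            - name: cloudinitdisk
              disk:
                bus: virtio
      volumes:
        - name: rootdisk
          persistentVolumeClaim:
            claimName: ubuntu-focal-root   # placeholder PVC name
        - name: cloudinitdisk
          cloudInitNoCloud:
            userData: |
              #cloud-config
              ssh_authorized_keys:
                - ssh-rsa AAAA... user@example
```

Stop and restart actions in a UI map to toggling `spec.running` (or calling the corresponding subresource API) on an object like this.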
B
For
the
user
to
have
very
easy
access
to
the
to
the
vms
right,
so
we
converse
also
has
a
distributed
block
storage,
which
is
powered
by
longhorn.
Underneath
is
highly
available
and
the
in
the
third
one
release.
We
still
expose
this
as
like
a
film
as
fail
system
mode,
but
in
the
0.2
release,
which
is
upcoming
the
next
month,
we're
going
to
expose
it
as
a
roadblock
device
which
should
like
increase
the
performance
as
well.
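The filesystem-mode versus raw-block distinction mentioned here maps to the standard Kubernetes `volumeMode` field on a PersistentVolumeClaim. A sketch of requesting a raw block volume from Longhorn, with an illustrative name and size:

```yaml
# PVC requesting a raw block device (volumeMode: Block) from Longhorn.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: vm-root-block
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Block        # raw device handed to the VM, no filesystem layer
  storageClassName: longhorn
  resources:
    requests:
      storage: 10Gi
```

Handing the VM a raw device skips the nested-filesystem layer, which is where the performance gain referred to here comes from.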
B
So
in
the
harvester
you
can
adding
any
new
images
into
your
harvest
cluster
and
we
are
going
to
download
that
image
into
the
medium
running
inside
harvester,
also
in
edgeway
mode,
and
once
you
get
started
starting
vms,
the
vm
will
put
the
image
from
the
menu
instead
of
like
directly
from
the
internet,
so
you
can
think
of
a
minion.
It's
more
going
to
be
like
a
building
cluster
level
cache
to
help
you
speed
up
the
starting
of
the
vms.
B
All
right,
so
this
is
the
really
the
very
general
architecture
of
the
what
the
harvester
looks
like
now
here.
We
are,
of
course,
assuming
that
you
are
deploying
harvester
in
hci
mode,
and
we
have
two
nodes
here.
On
top
of
that,
you
see
that
we're
running
on
the
k3
os,
which
is
the
life,
light
os
distribution
of
running
like
a
k3s
natively.
B
So
in
fact
those
and
on
top
of
that
we
are
going
to
run
longhorn,
convert
both
deploy
on
top
of
kubernetes
so
and
the
minion
is
also
there
for
to
support
the
image
pool
the
image
management
for
the
whole
cluster
and
the
each
vm
we're
going
to
have
choices,
to
connect
to
the
management,
networking
or
and
and
to
one
of
the
vlans,
and
that
is
to
achieve
the
real
isolation
at
vlan
level.
So,
in
fact,
in
this
picture,
one
components
might
be
found
in
the
future.
B
That is one part we still haven't decided yet, but before GA it's very likely the k3OS part is going to be replaced by something we can better support in the context of SUSE: the SUSE operating system, or a SUSE/Rancher combined operating system, or something like that.
B
I saw a question from Christian asking: can Harvester provision KVM GPUs? Currently no, but in fact it's one thing we want to do. Harvester itself is integrated with KubeVirt very well, so once KubeVirt has the feature (I think it already has the ability to provision KVM GPUs), we can add that on the Harvester side as well, and we'll probably need to do more wrapping around it to improve the user experience.
B
Yeah, I think we're ready. All right, can you see my screen? Someone can give me a thumbs up or something in the chat. Okay, thank you. All right, so this is the Harvester UI when you first install it. On the dashboard, you can see an overview of your hosts, the virtual machines, and everything here, and when you go to the hosts page, you can see those are the two Harvester nodes, which were in fact provisioned by the installer.
B
Okay, I saw another question asking how we handle etcd consistency with only two nodes. When you have only two nodes, only one of them is going to be the management node, so of course it's not going to be highly available. A two-node cluster is not going to be highly available; you have to have at least a three-node cluster to make sure everything is consistent and available.
B
Let me just re-login. You can see we are in fact connected to this virtual machine using ttyS0, and you can see the IP addresses. This machine is configured with two networks: one is on the 10.0 network, which is in fact bridged into the Kubernetes overlay network; the other, the 172.16.91.x network, is located on VLAN 91 in our data center. So I can start... in fact, before that, let's take a look at how we download images. You can see we have two images here: this one is Ubuntu Focal, of course, and the other one is k3OS. But how do you download a new image?
B
You can see that when you boot up, you are going to see the Harvester installer, and now I can just start the installation process. With this installer, you can see a kind of installation wizard that guides you through how to install Harvester, and you can either start a new cluster or join an existing cluster.
B
Now you can see this one already has an IP and everything connected correctly. I saw a question asking how live migration works with Longhorn as the backing storage. For the 0.1 release it doesn't work yet, but we're aiming to get that working in the 0.2 release, along with the raw block support.
B
Yes, so on this 92.1 machine, let's see; I think this one doesn't have the graphical console either, so I'm going to use the serial console for this.
B
The IP is configured by cloud-init here. As root, I need to delete the route for the management network in order to show this demo; otherwise the traffic is going to route through another way.
B
So from this one, I first make sure I can reach the gateway, but I won't be able to ping the other VM, because they're isolated by the VLAN. Of course, from the other VM I also ping the gateway to make sure the network itself is in fact working. The network itself is working, but it's isolated by the two VLANs at the layer 2 level.
B
Yeah, now the Harvester installer has booted up, and you can choose to create a new Harvester cluster or join an existing Harvester cluster here. We are going to join the existing cluster, and I have to make sure I get this IP address right. The IP address for the main node of the cluster is the one ending in 16.41.
B
The cluster token is like a password for you to join this cluster, and you can also set up a password in order to gain access once you boot up your machine. We don't need a proxy, and in Harvester you can choose which NIC you are going to use as your management network; it's going to be this NIC, which is on the 10.0 network. Now this Harvester node is starting. What it does is install k3OS on the node and then copy the necessary components. We have basically packaged all the version-specific KubeVirt, Longhorn, Multus, everything, into that disk, and then when it starts, we just deploy the Helm charts of the necessary components into the system.
B
All right, I hope you can see my slides. So there are a few issues we have encountered when developing Harvester. One is regarding CDI. Currently, the way CDI works is, of course, very general-purpose: you create a PVC and copy the image into it. But the first thing we noticed is that CDI doesn't work well with raw images, because it tends to copy the whole data of the volume you want to import.
B
For example, if a qcow2 file is only one gigabyte but the virtual volume size is 10 gigabytes, CDI is going to copy 10 gigabytes of data into the raw image, which makes the boot-up very, very slow. Another thing is that, in order to speed up the boot-up process, we are thinking about a backing image feature. Because we're using Longhorn there, we also have control over the Longhorn components.
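For context, the CDI flow being discussed is the DataVolume import path, roughly as sketched below (the URL and names are placeholders). CDI expands the qcow2 into the full virtual size of the PVC during import, which is where the cost described above comes from:

```yaml
# Sketch: CDI DataVolume importing a qcow2 cloud image over HTTP.
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: ubuntu-focal-root
spec:
  source:
    http:
      url: http://example.com/focal-server-cloudimg-amd64.img  # placeholder
  pvc:
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 10Gi   # the full virtual size is allocated and written
```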
B
We can have Longhorn use a certain VM image as the backing image, and then every copy of that VM image is only going to be a thin copy on top of it, just like what Docker image layers do. But at least for now, we cannot find a way to get that to work with CDI.
B
So in the future we might just work around CDI. We haven't made a decision yet, but that's the part where we found CDI is not really very efficient at maintaining the copies. We are also checking with the KubeVirt upstream community and looking forward to the upcoming KubeVirt GA, in order to support KubeVirt better in this sense.
B
Here is a very brief roadmap. For the next version, I think some in the audience already asked about live migration: yes, it will be supported, and we are also going to support VM image backup and restore, zero-downtime upgrade, and PXE boot. Once we have PXE boot, you should be able to run Harvester in environments like Equinix Metal, where you don't have physical access to the bare-metal nodes, and you might be able to do it on cloud providers. And Harvester 0.3...
C
You have a bunch of questions, so let me go ahead and take them in order. You are the last talk of the day, so we're going to take all of these questions in overtime.
C
For those of you hanging around for the questions: thank you very much for attending today, and we will have a full day tomorrow. So, first question, from Christian: can Harvester provision KVM GPUs?
B
Yeah, so currently we don't have that yet, but I imagine it should be pretty straightforward, since I think we already got it confirmed earlier that KubeVirt already supports GPUs. So yes, we should be able to add that.
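For reference, KubeVirt's GPU passthrough (once the cluster admin permits the device via `permittedHostDevices` in the KubeVirt custom resource) is assigned in the VM spec roughly like the fragment below. The device resource name shown is an example, not something Harvester exposes today:

```yaml
# Sketch (VM spec fragment): assigning a passed-through GPU to a KubeVirt VM.
# The deviceName is an example host-device resource name.
spec:
  template:
    spec:
      domain:
        devices:
          gpus:
            - name: gpu1
              deviceName: nvidia.com/GV100GL_Tesla_V100
```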
B
Yeah, so as I mentioned before, etcd doesn't really have consistency guarantees with two nodes. In order to have a highly available Harvester cluster, you have to have at least three nodes. Of course, if you go to three nodes or beyond, the high-availability feature will be there. But if you only have two nodes and you somehow bring down the master node for etcd, then nothing can really be guaranteed.
C
Okay, Mihai wants to know: will VM live migration work with Longhorn as the backing storage?
B
Yes. In fact, to go a little bit more into that: on the Longhorn side, we are doing something special for the case where you want to do VM live migration. Even though it's really a raw block device, and Longhorn volumes are supposed to be ReadWriteOnce, Longhorn can still do the live migration, given the way the data is written.
B
As long as you don't write data from both sides at once. The live migration process will be like: I write data on node A, then pause for a brief moment, and then I start switching everything to node B. As long as you don't come back and forth, say, write on node A, then write on node B, and then come back to write on node A again, as long as that doesn't happen, Longhorn at the block level can still support it.
B
So that is how we are going to support live migration: by exposing the raw block device to two different nodes, while making sure the user is not going to write data back and forth.
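At the KubeVirt level, such a migration is triggered with a VirtualMachineInstanceMigration object, roughly as sketched below (the VMI name is a placeholder); the Longhorn-side coordination described here happens underneath this request:

```yaml
# Sketch: requesting a live migration of a running VMI in KubeVirt.
apiVersion: kubevirt.io/v1
kind: VirtualMachineInstanceMigration
metadata:
  name: migrate-demo-vm
spec:
  vmiName: demo-vm   # placeholder VMI name
```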
C
Okay, thank you. I'll take a question from Tal and then another follow-up question from Mihai. Mihai, I apologize if I'm mispronouncing your name. Tal wants to know: I have Rancher HA up and running; can I somehow use Harvester, with the Harvester UI, only to provision VMs and VMIs?
B
Yeah, so I think what you mentioned here is what we call the app mode. In theory you can, because Harvester is just deployed as a bunch of Helm charts, and you can do that from, for example, the Harvester repo branch 0.1, and get it up and running. But first you need to make sure you're on a bare-metal node, and second, there are in fact very big restrictions: you cannot have KubeVirt installed, and you cannot have Longhorn or any Harvester component installed.
B
It's going to be very limited, so currently Harvester wasn't really designed to run inside an existing Rancher setup yet. But in the future, what we have in mind is that you can have a Harvester cluster and import that cluster into a Rancher setup, just like how you import normal Kubernetes clusters. That should give you a bit of a way to operate containers on top of Harvester.
B
But Harvester itself may still only deal with virtual machines, because the resource models differ: how a pod consumes resources can vary and be overcommitted, but with virtual machines, normally if I give this virtual machine one core, it's better that you really give it one core.
B
What we envision at the moment (in the virtualization world things change fairly frequently, but at least for the moment) is that you still need to have a dedicated Harvester cluster, but you might be able to import that into your Rancher setup. We are even considering whether we can ship Harvester bundled with Rancher's single-cluster UI, but that is another thing to talk about.
C
Okay, there's a follow-up from Mihai, but it's complicated and I've pinged you both on Slack, so you can continue the discussion there. So thank you very much, and Pep, over to you.
A
Yeah, just a quick question, now that you mentioned Slack. There was a quick one, hopefully, from the Containerized Data Importer (CDI) team: just a clarification on whether you are currently using CDI or not.
B
Yeah, so currently, as of 0.1, we are still using CDI, because CDI can handle the volume resizing, the image copying, and that kind of stuff. But for 0.2 we are considering removing the dependency on CDI. It hasn't been finalized yet, though.
A
Okay, thanks so much.