From YouTube: KBE Insider (E9) - Bandan Das & David Vossel
Description
For this episode, KBE Insider interviews Bandan Das, software engineer at Red Hat and instructor at Boston University, as well as David Vossel, principal software engineer at Red Hat. Bandan works on KVM (Kernel-based Virtual Machine) and David contributes to the open-source project, KubeVirt, a virtualization API for Kubernetes. In this episode we’ll focus on what the future of VMs looks like in a Kubernetes world.
A
Well, hello everybody, and welcome to KBE Insider, with our fancy, fancy new graphic. You know, it's always nice to have a new intro clip, just to make our lives more entertaining. We also got some cool new swag: I've got the sweatshirt going, so you know we're always happy about swag. So, let's see, I have a couple of new things today.
A
First and foremost, I'd like to introduce my co-host for today, who is Joshua. And, you know, we have a rule that the co-host along with me must be named Josh. So even if it's somebody else, we're just going to call them Josh anyway. That way it's easier for me, yeah.
A
Exactly, exactly. So, a little bit about the show. Oh, actually, Josh, do you want to introduce yourself real quick, and then I'll talk about the show a little bit?
B
Sure. Hi everybody, I'm Josh Wood. I'm a developer advocate at Red Hat, principally focusing on OpenShift, and especially on Operators, as an extension to the Kubernetes API and, in OpenShift, a way of delivering the features that we add atop the Kubernetes core.
A
Awesome, cool. So, the reason we do the show is to try to give you an inside look into what's going on in Kubernetes land. In particular, the idea is that if you talk to, let's say, the lead engineers, or the engineers who are actually doing the work, you'll get much better insight into where Kubernetes is going in the future, rather than the hopes and prayers of a product manager or a press release.
A
So
that
way,
you
know
we
because
it's
open
source
right.
There
is
a
lot
of
contribution.
That's
done
based
on
the
engineer's
recommendations,
and
you
know.
Sometimes
that's
that's
well
reflected
in
the
press
releases
and
sometimes
it's
not.
A
So
we
think
it's
a
good
idea
to
talk
to
the
people
who
are
actually
doing
the
work
and
we
hope
you
do
too
and
so
feel
free
to
put
questions
in
the
chat
and
you
know
or
if
you
have
any
comments
or
whatever
we
are
always
happy
to
have
more
engagement
as
it
were.
A
So
definitely
let
us
know,
but
today's
show
we
are
going
to
focus
a
little
bit
on
virtualization,
and
so
we
have
two
guests
and
the
reason
is
is
because
virtualization
seems
like
something
that
isn't
normally
a
part
of
kubernetes
being
a
you
know,
a
container
orchestration
platform
and
I
think
what
we're
what
we're
starting
to
see
is
that
it's
more
than
that
right.
It's
that
it's
an
orchestration
platform
for
lots
and
lots
of
different
things.
A
If you go back to our first episode, you can see us talking to Clayton Coleman about using it in general as a control plane, and that's actually come up a number of times throughout our interviews. So we think there's a lot more going on there, so we invited two guests. One is David Vossel.
A
I hope I said that right. He's from the KubeVirt team, and the other one is Bandan Das, who works on KVM, or the kernel virtualization manager. Is that right? I was like, I don't...
A
Virtual machine, yeah. I was like, I can't remember what the expansion is. So hopefully we can talk a little bit about virtualization and how it relates to Kubernetes. Let's start with David. Do you want to introduce yourself? I always say it's very difficult to remember, or find out, or discover, or have any consistency to, what titles and roles are within Red Hat, so I find it's much safer to let people introduce themselves.
D
Yeah, sure. I'm David Vossel, I'm an engineer at Red Hat. I'm contributing to the KubeVirt open source project, and it's really kind of evolved into an ecosystem, so at this point I'm contributing to the KubeVirt ecosystem. I was involved with the KubeVirt project early on and got the opportunity to design a lot of the way that it operates today, so I'm coming at this from that perspective.
A
Gotcha, cool. All right, Bandan, how about you?
C
Hi, my name is Bandan. I work in virtualization. As Langdon said, I work mostly on KVM, which is the kernel-based virtualization module in the Linux kernel, and there's an ecosystem on the virtualization side as well: QEMU, KVM, and libvirt, which are the things my team usually takes care of, both upstream and downstream.
C
I was always interested in systems, and I think the best way to deal with different kinds of systems issues is to work on virtualization, because it has all the things you can think of: be it devices, be it CPU, be it interrupts. And so that got me stuck with virtualization for a long time. It has always kept me busy.
C
And then, besides my work at Red Hat, I also teach at BU, which I skipped this semester, but hopefully next semester I'm going to teach again. I also work with Red Hat Research: we have a project going on QEMU with Boston University. That's going well. So, yeah.
A
Cool. So, one of the things we always like to run in this show is... sorry, Josh, did you have a question?
A
Go ahead. Okay, sorry, I was just going to say we always like to ask what brought you into the open source world. So, David, I was wondering if maybe you could tell us: how did you end up here?
D
So I've been on the periphery of open source for a really long time, like back in the 90s. My dad's office would have old computers, and I would inherit these old computers, and I would need something to run on them. So I would use Linux; that's all I could find. And at the time, I think you could find Linux in office stores, like the boxed sets or whatever, yeah. So I might have gotten a few of those, like Mandrake Linux or something like that, but I think I downloaded...
D
...yeah, it did, right. So I would download it. It probably took me like a week to download an ISO, but I would download it and run it right on these old computers, and I had no idea what I was doing, but that was fun. I never really contributed to open source as a result of that, but I was on the periphery. And after university, I was using Linux throughout my studies in computer science, things like that.
D
I needed a job, and I had lots of opportunities, but one opportunity kind of stood out. It was a company called Digium that maintained the Asterisk project, which is the telephony project.
D
Really, yeah, yeah. So I did that, and I got to contribute to the Asterisk open source projects and, you know, more Linux stuff, and it kind of just took off from there. I've been lucky to contribute to open source for the majority of my career at this point. So that's how I got started.
A
I still remember, I had a project a million years ago where we were basically doing an Asterisk implementation, to do lido as part of a back-end system, and discovering that you couldn't actually run it in EC2.
A
You know, for a long, long time. So that was a challenge, but it was actually related to virtualization, which is kind of nice. It was the timing, yeah, yeah. I couldn't quite remember why exactly, but that was an interesting experience. And, speaking of old Linux distros, I still remember a stack of three-and-a-half-inch floppies that was literally this high.
A
You know, of Slackware, and installing it on a computer and being actually concerned that the monitor was going to light on fire. All right.
B
A memory I have to match that is watching folks download those stacks of floppy disks to acquire Linux on what I think were Macintosh LCs in the computer lab at the time, right? Loading disk after disk after disk to copy them off, taking them back to their rooms, and doing that Linux install.
A
Nice. All right, so, Bandan, on to you. What got you into the open source world?
C
I don't think I had access to it; I didn't even know what open source was in the 90s. I think the first time was during my undergrad days, when I had to make a choice between buying software that was available from a shady place for a much reduced price, versus going for a version that was available to download online, even though the download speeds really sucked. That's when I started realizing the importance of open source, without realizing that it was free or open source software.
C
I remember the first time I was browsing in this marketplace back in India, where you could get software for a much reduced price, and I saw CDs for Red Hat Linux 9 (not Red Hat Enterprise Linux 9, but Red Hat Linux 9), and I bought them. I didn't know that I could download them for free, but I just said, I have to try this out.
C
So that was the first time I actually tried something that was, you know, open source, and I installed it on my desktop, only to find that I could run either my network card or my sound card; I could not run both at the same time. And that was how I got interested.
C
I was able to look at the sources for Red Hat Linux 9; they were on the CDs, and I did not understand a bit of it, but I really felt empowered to know that, oh, so this is the code that runs on the system. That's really cool. But I think the first real contribution I made was, believe it or not, on MINIX.
C
When I was doing my masters, we had a project on MINIX 3, and I found a bug in the network stack in MINIX 3. So I sent an email to Andy Tanenbaum with a patch, saying, okay, this is the change that is required, and he acknowledged it. I mean, I don't know if it got into a release; there was no easy way to find that out. But the very fact that you could find an issue, fix it, and get it accepted...
C
I think that was what got me interested in the whole process, and the rest is history for me. I had an internship at Red Hat where I was able to work on kernel build tools, followed by working at a hardware company in Massachusetts where we were working on device driver hardening. That was my first real experience contributing to Linux-based open source software. So I was able to submit patches to drivers, and later on in virtualization, and yeah, here I am.
A
Nice, that's pretty cool. I like both those stories. I agree, though, Bandan, a lot of the origin stories are often the same. At the very least, it's the "oh, I had an itch and I scratched it," right?
B
Listening to Bandan tell that story, I was thinking that a lot of the reason why I know the command line pretty well is because I had this janky laptop for a long time where I couldn't configure the video drivers, right? So...
A
That'll do it, yeah. All right, so, talking a little bit more about what's going on in the Kubernetes world, moving on: why do you think virtualization is still important in a containerized world? And feel free, whoever wants to answer that first.
D
Containers have to run on something, so what are they going to run on? I mean, of course, you can run on bare metal, but I think virtualization is going to maintain itself as kind of the underlying substrate that containers and applications run on, because virtual machines are easy to manage. You can update them easily; they're kind of...
D
...more flexible than bare metal in that aspect, and they can do interesting things. You can overcommit virtual machines on hardware, where you can't really overcommit bare metal. So, as this substrate that applications live on, I think virtualization is going to have a really long life within this new ecosystem that's emerging with containers, and even with, what do we call it, serverless and things like that. Those have to run somewhere as well. So where are they going to run? In virtual machines.
D
Well, it's going to move to the background. I think that's what we're seeing with virtual machines: they're moving to the background. When virtual machines originally started, even when infrastructure as a service originally started, like EC2, we saw people packaging up their applications inside of virtual machines. We saw Netflix do that: they used these image creators, put all their applications into a virtual machine, and they would just scale out that way. And then we saw that transition to containers.
D
But, where was I going with that?
B
So, David, it's interesting that you used the word "flexible" kind of at the center of your answer there: VMs are more flexible than real hardware. In an odd way, and I wonder what your thoughts on this would be, in an odd way...
B
...they're flexible for me if I'm the provider, but I can also lock them down in the form I deliver them to users. And I wonder, in a world where we're mostly renting computers from a cloud provider to run our stuff, do you think that's an important dimension of what the VM surface needs to provide: that ability to lock VMs down, to lock down their resource allocations? When you talk about overcommitment, it kind of touches on that.
D
Certainly, yeah. So isolation is one of those features that virtual machines provide, that containers also provide, but it's a stronger form of isolation. And, you know, we're probably going to talk about KubeVirt a little bit, but I'll just say that in KubeVirt we actually have kind of two layers.
D
Well, there are really multiple layers, there are lots of dimensions here, but one of those is the hypervisor itself, which is a really strong form of isolation that we can give people. And then, within that, we're also running the hypervisor in namespaces, kernel namespaces, and then you also have SELinux, or, I forget the Ubuntu equivalent of that.
D
So it's isolation, security through depth, that you're providing when you're using shared resources, and I think the hypervisor is an important part of that, especially when you're renting out shared resources like that, where multiple companies are running on the same hardware and you have no idea what you're running next to.
A
Yeah, yeah. Is it AppArmor? Is that the one? Yes, yeah. So I was kind of curious, Bandan (sorry, our video seemed to have switched on me): where are you seeing the growth in KVM as a reaction to Kubernetes and containerization and that kind of thing? What seems to be more of the focus?
C
In reaction to containers, that's a good question. KVM is pretty low level in the stack, and that kind of makes it immune to... I mean, the good thing is we don't have to think about these things, because we know that the API on top of us is going to take care of it.
C
But that said, when you talk to me about virtualization, I don't think about only KVM; I also think about QEMU and libvirt. And with respect to how KVM is going to work with containers, I think there's a lot of emphasis now, that I don't think there was in the past, on: how can we expose...
C
...libvirt APIs that can help with these kinds of things? That was something I don't think used to happen previously. So that's one aspect. When it comes to the real feature that KVM is exposing, which is the hardware isolation that David talked about...
C
...the mechanism itself is pretty transparent to containers, because they don't really have to talk to KVM directly, and that makes it very simple for us not to have to worry about those kinds of things. Yeah, I think that pretty much sums it up.
A
So, just for the audience: what is the difference between QEMU, KVM, and libvirt?
B
That's what my question was going to be.
C
So, yeah, I kind of think about it this way. There's this really popular Linux Device Drivers book (I don't know if they still publish or release new versions), and it talks about mechanisms and policy. It says that the kernel implements mechanisms, and user space uses those mechanisms to implement policy. And I kind of think of KVM and QEMU in those respects.
C
So KVM is the code in the kernel that enables hardware virtualization, whatever it is: x86, Arm, and a few other architectures that the Linux kernel supports. KVM is going to enable hardware virtualization and give us an interface to be able to use it from user space, and that's where QEMU comes into the picture. QEMU is the user space: QEMU talks to the kernel via a device file, through a set of ioctls, to request services from KVM, which in this case is...
C
...you know, all things hardware virtualization. QEMU also does more than that: a major part of QEMU, actually, is emulated devices. So, even though you can pass devices through to the guest, that's not really the norm. A lot of the devices that guests use are actually just emulated devices that QEMU implements, or, for that matter, in some clouds it could be some other user space that might not be QEMU. So that's...
C
...the good thing about this mechanism/policy separation: you don't have to use QEMU with KVM. You could use your own user space that talks to KVM and gets those services. And then, coming to libvirt: that's where we're saying that applications will find it really cumbersome to talk to QEMU directly, so let's build an API on top of it that will be easier for applications to use. That's what libvirt does, basically.
C
It's a layer on top of QEMU that helps applications talk to QEMU. If you were a human user, it would probably have been easier for you to talk to QEMU directly, but for an application, it's libvirt. That's how I make the distinction between the three.
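As a rough illustration of the split Bandan describes, the kernel side (KVM) is reached from user space exactly the way he says: a process such as QEMU opens the /dev/kvm device file and issues ioctls against it. Here is a minimal Python sketch; the KVMIO constant and request-number encoding come from the kernel's linux/kvm.h and asm-generic/ioctl.h, and the probe simply returns None on machines without an accessible /dev/kvm:

```python
import fcntl
import os

KVMIO = 0xAE  # the ioctl "type" byte reserved for KVM in linux/kvm.h

def _IO(ioctl_type, nr):
    """Encode a no-argument ioctl request number (asm-generic/ioctl.h _IO macro)."""
    return (ioctl_type << 8) | nr

# Two of the ioctls QEMU issues against the /dev/kvm file descriptor:
KVM_GET_API_VERSION = _IO(KVMIO, 0x00)  # -> 0xAE00
KVM_CREATE_VM = _IO(KVMIO, 0x01)        # -> 0xAE01

def kvm_api_version(path="/dev/kvm"):
    """Ask KVM for its API version (12 on all modern kernels).

    Returns None when /dev/kvm is missing or not accessible, so the
    sketch degrades gracefully on hosts without KVM.
    """
    try:
        fd = os.open(path, os.O_RDWR)
    except OSError:
        return None
    try:
        return fcntl.ioctl(fd, KVM_GET_API_VERSION)
    finally:
        os.close(fd)

if __name__ == "__main__":
    print(f"KVM_GET_API_VERSION request: {KVM_GET_API_VERSION:#06x}")
    print("KVM API version:", kvm_api_version())
```

A real VMM carries on from here: KVM_CREATE_VM returns a new file descriptor representing the VM, which takes its own ioctls for memory and vCPU setup, while the user space (QEMU, or something that isn't QEMU, as Bandan notes) supplies the emulated devices on top.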
B
Just a quick clarifying point on that, because you mentioned it a couple of times: where there is hardware virtualization support, like with VT-x on Intel. Is there, or was there historically, any support in KVM for virtualization on architectures without that hardware support? I remember some of the techniques in Xen for plain old x86 that didn't have VT-x, and they're elaborate, and you wouldn't want to have to use them, but they're fascinating from an implementation point of view. Was there ever any non-hardware virtualization that KVM could do?
C
That's a good question. When it comes to the real hardware extensions that implement virtualization, the answer is no. That said, KVM is really a hybrid model that takes ideas from Xen as well. KVM actually came into being after the hardware virtualization extensions were introduced by Intel and AMD, so that's the first part. That said, we do have a lot of paravirtualized interfaces, which obviously were brought over from Xen. Timekeeping, for example, is one of those things.
C
One of the things KVM does is have this shared space between the guest and KVM, through which the guest is able to use timekeeping features and get accurate timing without having to pay the severe penalty you would have with an emulated device.
C
That said, the other thing is that in the initial days of virtualization, the hardware extensions had a lot of limitations on what kinds of instructions they could execute in what processor mode. For example, in real mode, the processor was not able to execute certain instructions when you were running in a guest, so KVM would emulate them. So it's a mix of all these things together, and my answer to you would be: it's yes and no, both.
A
That's crazy. All right, so, moving on into talking about KubeVirt: where does that relationship live? Does KubeVirt use QEMU? Does it use KVM? Does it use libvirt? Does it use all of the above? Where is the line drawn between KubeVirt and those tools, and does that line stay still? Does it move around? Does it matter?
D
It allowed us to rapidly iterate on this stuff, because it gives us a really nice interface, a user interface, a CLI interface, to manage the life cycle of virtual machines, that we would otherwise have had to create ourselves. But at the end of the day, we're using QEMU-KVM, really, and we are a wrapper around it. We're launching what's essentially just a QEMU process in a container, well, in a pod, a Kubernetes pod, and KubeVirt itself is just a set of controllers to manage the life cycle of that pod, and also to provide that pod the kinds of cluster resources that it needs.
D
So, if we're talking about CPU and memory, we're using the Kubernetes scheduler to provide those resources to the pod, and then we have a small glue layer that's passing that on to the QEMU-KVM process. Same thing with storage and network: we're using the regular pod network, the pod having an IP address, and we have the glue in that pod to give that IP to the virtual machine. Same thing with persistent storage: you're giving it a persistent...
D
...PVC. You're attaching a PVC, a persistent volume claim, to the pod, and then all of a sudden you have a boot image for your virtual machine. So KubeVirt is just layers and controllers around this underlying technology.
A
And actually, for me, that brings up a big question, which is: when I think about a container, or, by extension, a pod, a pod in a lot of ways acts a lot like a virtualized machine. It expects a lot of things, you know: it expects all the ports to work, right; it expects certain kinds of storage. Mostly, I guess, I'm thinking about networking. But so how does KubeVirt, or wherever, right, some toolchain...
A
How does it tell the virtualized OS, right, "oh no, you're not really running in a pod; you're running just like you normally would"? Because I presume you kind of masquerade to it, so that you don't have to change the inside of the virtualized machine.
D
I kind of get what you're getting at. So when we launch the KVM virtual machine, the environment that it's in looks very natural to it. QEMU is going to see the KVM device; it's going to see things like the IP address and the persistent storage all just look like mounts or interfaces within that environment.
D
It looks like you're just running in a normal, non-containerized environment. So we've done a lot to make it appear that way, and that's where I talk about the glue: we actually reach into that pod with a privileged daemon set to set things up for us in a way that mimics what you would expect. So when we actually launch the virtual machine, using, ultimately, libvirt to call QEMU-KVM, it just looks normal to it. It doesn't think anything is different.
D
So we've recreated that environment for the virtual machine.
A
That was kind of what I was expecting, I guess. So are there trade-offs, or are there negatives to that? I mean, like I said, I go back to networking a lot, but part of why that networking works the way...
A
It
does
is
to
kind
of
re
minimize
the
resource
consumption
and
provide
some
level
of
or
different
kinds
of
security,
and
so
are
there
negative
sides
to
kind
of
pretending
that
the
virtualization
or
the
virtualized
os
is
running
in
a
normal.
D
Let's
say:
there's
necessarily
negatives,
it's
just
it's
different,
so
like
if
you,
if
you
had
complete
control
over
a
bare
minimum
machine,
you're
running
livevert
on
it,
you
can
create
your
own
bridge
interface
and
your
own
network
and
do
whatever
whatever
you
want
here.
We
are,
you
can
say
we're
limited
by
the
types
of
networks
that
we
provide
to
the
virtual
machine,
but
really
we're
not.
So
we
have
it's
just
a
different
layer.
D
So
at
the
cluster
layer
we
can
create
multiple
interfaces
and
pass
multiple
interfaces
to
pods
today
using
multis,
and
we
can
provide
like
sr
rov
devices
and
things
like
that.
So
we
can
do
a
lot
of
the
same
things
that
we
could
do
if
we
were
in
complete
control
over
a
variable
machine,
but
we're
doing
at
the
cluster
level
using
kubernetes
like
apis
to
to
kind
of
manipulate
those
sorts
of
things
and
assign
them
to
the
virtual
machine
pods.
So
it's
really
just
a
shift
of
how
we
look
at
these
things.
B
Sorry, I kind of lost my train of thought there. You talked about a set of custom controllers managing resources for these pods that we're going to launch VMs into. Do those custom controllers manage a set of CRDs, and is there a custom set of API endpoints represented in those CRDs for managing, communicating with, and monitoring the set of VMs that we're running in that cluster? Like, what does the implementation of that actually look like?
D
Yeah, exactly. So we have our own API, and if you look at our API, you wouldn't necessarily know that it's KVM or QEMU or libvirt behind the scenes. Maybe, if you're really knowledgeable about what's going on behind the scenes, you might see certain values that make sense only for KVM, and things like that. But our API is unique to KubeVirt.
D
You
have
a
virtual
machine
api
that
describes.
You
know
how
to
create
a
virtual
machine
and
things
like
that,
and
we
have
two
layers
of
controllers,
so
we
have
control
plane,
that's
living
at
kind
of
the
cluster
level
and
it's
managing
the
life
cycle
at
virtual
machines
at
the
cluster
level.
So
when
you
post
your
virtual
machine
to
the
kubernetes
cluster,
this
these
cluster
level
controller
is
going
to
say,
hey,
I
see
a
new
virtual
machine,
I'm
going
to
create
a
pod
for
it
to
live
in.
D
That
pod
is
going
to
get
scheduled
onto
a
node
somewhere
with
correct
resources
assigned
to
it
says
right
now,
cpu
and
memory
that
you've
requested
there,
and
then
we
have
a
daemon
set
that
privileged
daemon
set.
That's
going
to
live
on
every
single
one
of
our
nodes.
It's
going
to
see
when
that
virtual
machine's
pod
gets
scheduled
there.
It's
going
to
see
that
pod
come
up,
it's
going
to
reach
into
that
pod,
and
it's
going
to
manipulate
some
things.
D
So
it's
going
to
be
the
thing:
that's
helping
facilitate
network
and
some
of
our
other
things
around
present,
storage
and
stuff.
Like
that,
and
then
within
that
pod
itself,
we
have
a
really
small
daemon
set,
that's
just
kind
of
gluing,
all
those
things
together,
so
the
things
that
the
privileged
damon
set
went
and
set
up
for
us.
D
You have a virtual machine that starts in a pod, and it really kind of behaves like a pod in some ways too, because the virtual machine can talk to all the other pods and all the other virtual machines. You kind of just have a virtual machine all of a sudden appearing within this cluster, and it looks like just a native application in there.
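To make the flow David describes concrete, here is a sketch of the kind of object you would post to the cluster to kick those controllers off. The field names follow the kubevirt.io/v1 VirtualMachine schema; the container-disk image named here is just a commonly used demo image, and the manifest is deliberately minimal, so treat it as an illustration rather than a production template:

```python
import json

def minimal_vm_manifest(name,
                        memory="64Mi",
                        image="quay.io/kubevirt/cirros-container-disk-demo"):
    """Build a minimal KubeVirt VirtualMachine object as a plain dict.

    Posting something like this to a cluster is what the cluster-level
    controller reacts to when it "sees a new virtual machine" and creates
    the launcher pod for it to live in.
    """
    return {
        "apiVersion": "kubevirt.io/v1",
        "kind": "VirtualMachine",
        "metadata": {"name": name},
        "spec": {
            "running": True,  # ask the controller to start the VM right away
            "template": {
                "spec": {
                    "domain": {
                        # the guest's virtual hardware: one virtio disk, some RAM
                        "devices": {
                            "disks": [{"name": "containerdisk",
                                       "disk": {"bus": "virtio"}}]
                        },
                        "resources": {"requests": {"memory": memory}},
                    },
                    # back the disk with a container image that holds a boot image
                    "volumes": [{"name": "containerdisk",
                                 "containerDisk": {"image": image}}],
                }
            },
        },
    }

if __name__ == "__main__":
    print(json.dumps(minimal_vm_manifest("testvm"), indent=2))
```

Serialized to YAML and applied with kubectl, the same structure is the "post your virtual machine to the Kubernetes cluster" step in the explanation above; the daemon set and in-pod glue take over from there.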
B
Right on. So, one quick follow-up to that, and this is also sort of for you as well, Langdon: if I'm working with KubeVirt, am I working with nested virtualization a lot, because somebody's running the cluster on a VM in the first place and then trying to do KubeVirt things? Or is that a use case that's ruled out, and we don't ever do that?
D
Yeah, so from a community standpoint we definitely support nested virtualization, and I use it every single day; that's what my dev environment is. We see people using it in production on infrastructure as a service, because they can manage their virtual machines in really unique ways and provide levels of uptime guarantees that are difficult even with something like EC2. So with KubeVirt you can...
D
You
can
live
migrate,
your
virtual
machines,
so
you
don't
lose
them
and
things
like
that
and
that's
that's
the
virtualization
there's
limitations
to
what
you
can
do
with
that
and
it's
complicated
and
you
can
hit
some
really
crazy
issues
with
that.
But
it's
definitely
something
that
the
community
uses.
It's
not
something
red
hat
supports
right
now
and
they're
openshift
virtualization
product.
It's
certainly
a
really
strong
use
case
within
the
community.
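For anyone wondering whether their own host can serve as that outer layer, the KVM kernel modules expose nested support as a module parameter under /sys. A small sketch, with the assumptions stated in the comments (Linux paths, Intel or AMD module names; returns None when no KVM module is loaded, for example on a machine without virtualization extensions):

```python
import os

def nested_virt_enabled():
    """Read the 'nested' parameter of whichever KVM module is loaded.

    The kvm_intel module reports a boolean ("Y"/"N"); kvm_amd has
    historically reported an integer ("1"/"0"). Returns that string,
    or None when neither module is present on this host.
    """
    for module in ("kvm_intel", "kvm_amd"):
        path = f"/sys/module/{module}/parameters/nested"
        if os.path.exists(path):
            with open(path) as f:
                return f.read().strip()
    return None

if __name__ == "__main__":
    print("nested virtualization enabled:", nested_virt_enabled())
```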
C
Yeah, so I had a question for David. You mentioned this certain level of generalization, if I understood it correctly, in the API. So is it true that people use something else other than libvirt? Do you know of cases where libvirt is not in the picture at all?
D
Not
for
cuvert
today
so
kevert
does
have
livert
in
the
picture.
It
was
designed
in
a
way
to
isolate
the
usage
of
libvert
and
really
even
kvm
to
that
container.
So
the
pod,
that's
actually
running
the
virtual
machine
itself
and
the
api
was
we
were
trying
to
design
it
agnostic
of
what
that
underlying
technology
was.
At
this
point,
it's
so
ingrained
into
our
design
that
I'd
be
really
surprised
if
liberty
kvm
were
ever
replaced
or
swapped
out,
but
we
have
the
potential
to
do
something
like
that
in
the
future.
If
we
needed
to
yeah.
C
I
wanted
to
add
that
yeah
for
a
long
time
after
nested
virtualization
came
into
being,
I
mean
the
the
best
use
case
that
we
could
think
of
was
testing
and
which
is
you
know,
running
a
guest
inside
a
guest
and
making
sure
we
are
able
to
expose
features
or,
for
example,
as
you
said,
we
don't
have
a
bare
mission,
bare
metal
system
at
hand
in
hand,
so
we
want
to
set
up,
even
though
I
would
say
that
nested
word
and
first
level,
virtualization
are
not
really
the
same
in
terms
of
in
so
a
bug
that
reproduces
on
nested
but
necessary
does
not
mean
that
that
bug
is
gonna.
C
...reproduce on a bare-metal system with virtualization. But this whole new thing, with the newer use cases that are coming up, they're very interesting, because they are stressing nested virtualization in unique ways. One of the things I have personal experience with is that we are able to find new and interesting bugs that we had never found before, and they are not reproducible on bare metal or just through regular virtualization.
B
Right, because, in a sense, or at least in a subdivided sense, you are exercising different hardware when running nested virt, right? You've got the second-level address translation, and some of that's implemented in the MMU, correct? So if I never run nested virt, I never touch that bit.
A
And I always love new and interesting bugs. So, moving on a little bit: what do you think the future of virtualization within Kubernetes is? When we were thinking about the setup for the show, the obvious answer for virtualization in Kubernetes was lift and shift, right? I have a running application; I want to be able to manage it on the same control plane as I'm running everything else.
A
Okay,
let
me
you
know,
let
me
copy
that
virtual
machine
over
and
you
know,
and
then
I'm
done
eventually
that'll
end
and
or
it
may
not
be
the
best
choice
for
all
applications.
So
what's
what's
the
plan,
what
where
we're
gonna
be
in
ten
years?
I
don't
know
five
years
two
years.
D
So there's a story arc here. Lift and...
D
Traditional virt is what we're talking about with lift and shift. So you would take an application, maybe it's a legacy application, and you want to transition to containerized or cloud-native (or whatever the buzzword is) infrastructure. So you want to move to Kubernetes, but you want to bring your virtual machine with you. Great, so KubeVirt allows that, and I think that was the thing that kind of justified us building this to begin with, and we needed that. But that's not the full vision.
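To make the lift-and-shift idea concrete, a KubeVirt VM is declared as an ordinary Kubernetes resource. A minimal sketch (the names, disk image, and sizes here are illustrative, not from the episode; check the field details against your KubeVirt release):

```yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: legacy-app          # hypothetical VM name
spec:
  running: true
  template:
    spec:
      domain:
        devices:
          disks:
            - name: rootdisk
              disk:
                bus: virtio
        resources:
          requests:
            memory: 2Gi
      volumes:
        - name: rootdisk
          containerDisk:
            image: quay.io/example/legacy-app-disk:latest   # hypothetical image
```

Once applied with `kubectl apply -f`, the VM lives on the same control plane as your pods, which is exactly the point being made here.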
D
The next step, I think, is using KubeVirt as infrastructure as a service, so having the same sorts of patterns that you would see in something like EC2 or Azure or GCP. And we're getting there on the development side, where we kind of have that at this point. It's not necessarily adopted by users quite yet, but that's where we're headed, and...
B
So yeah, just to interrupt briefly to help me understand what that means. That's sort of "I want to do Terraform kinds of things, but I want to write Kubernetes-style YAML, because I've got an investment in the tooling, and I know the terms and the way the API works"? Is that kind of the...
D
Think of it like that: there's a set of operational patterns that we see in infrastructure as a service. So if you look at EC2, we see auto-scaling groups, and what people are doing with these auto-scaling groups is they have lots and lots of virtual machines, scaling horizontally. It goes back to the whole pets-versus-cattle analogy.
D
When something goes wrong with a virtual machine in their auto-scaling group, they don't care what actually happened. Nobody's going to go look at it; automation is just going to kill that virtual machine, and a new one's going to spin up to replace it. So that's what I mean by infrastructure-as-a-service-like patterns, and we're introducing those into KubeVirt. We have that today, and then once we have that, which we do...
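The cattle-style pattern described here maps to KubeVirt's pool API, which scales replicas of a VM template much like a ReplicaSet does for pods. A hedged sketch (the API group/version and exact field names should be verified against your KubeVirt version; everything else is illustrative):

```yaml
apiVersion: pool.kubevirt.io/v1alpha1
kind: VirtualMachinePool
metadata:
  name: web-vms
spec:
  replicas: 3              # the horizontal-scaling knob, like an auto-scaling group
  selector:
    matchLabels:
      app: web
  virtualMachineTemplate:
    metadata:
      labels:
        app: web
    spec:
      running: true
      template:
        spec:
          domain:
            resources:
              requests:
                memory: 1Gi
```

As with an EC2 auto-scaling group, an unhealthy VM in the pool is simply replaced rather than investigated.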
D
The next thing we are working on is for KubeVirt to ultimately be the substrate to run more Kubernetes clusters on. So now we have an ecosystem where, traditionally, you would have used something like VMware on bare metal and run Kubernetes clusters on top of that. Now we have pure Kubernetes: you have Kubernetes at the bare metal level running KubeVirt to launch your tenant clusters on KubeVirt virtual machines, and I'm working on that right now.
D
That's kind of the next step. And then the future step, the one that I don't really quite understand yet, that's on the horizon, is that ultimately I think KubeVirt is going to be at the heart of multi-cluster in a certain way, because it's, again, the substrate that you actually run multi-cluster on. So if you're looking at a new project like kcp (go look it up if you're not familiar with it), it has the potential to launch clusters on demand for your workload.
D
So if you have a workload, and the cluster doesn't exist for it and the infrastructure doesn't exist for it yet, then perhaps we can have smart controllers that can spin up infrastructure on the fly for this application to live in, and things like that. And I think KubeVirt has the potential to be the thing that's powering that for bare metal. It doesn't necessarily make sense for public cloud infrastructure...
D
Maybe it does, I don't know. But when we're trying to replicate those sorts of things on bare metal, that's where you would be. So there's a long tail here, I think, for KubeVirt. The thing that kind of justified us making KubeVirt, the traditional virtual machine use case, that's the thing that got our foot in the door, and I think the potential is much greater than that. But it will be behind the scenes; people won't know that they're running a VM. It's there, yeah.
A
So, can you repeat, what project was that? Did you say acp? kcp?
A
Oh,
oh,
okay,
that's
the
united
states
stuff!
That's
that's!
Literally
the
project
we
talked
about
with
clayton
and
I
think
yes
yep
so
turtles.
All
the
way
down
is
what
you're
telling
me
that's
exactly
what
I'm
saying:
yeah
yep!
So
that's
really
interesting,
particularly
the
part
where,
where
you
get,
if
you
have
really
sophisticated,
live
migration
right,
you
can
you
know,
then
you
can
have.
You
can
actually
kind
of
realize
that
dream
right
where
you
can
kind
of
say.
A
Oh,
you
know
what
I
need
that
new
kubernetes
club
or
I
need
this
kubernetes
cluster
to
be
over
there.
You
know
for
performance
reasons
or
whatever
and
you
can
actually
live
migrate,
the
entire
cluster
to
yeah,
wherever
it's
9
a.m,
so
that
you
can
handle
all
those
logins
or
whatever.
D
Migration allows us to do things like an update of the underlying cluster without impacting all the clusters that live on top of it, which is great. So you're shuffling around virtual machines, or running Kubernetes clusters, as you're updating the nodes that are underneath them. That's great. But another thing that's interesting is we can start thinking about the suspend and resume of clusters. So you have clusters that you spin up, and maybe you've already created a quorum and everything within them, and then, when you don't need one, rather than tearing down the persistent part of it, you just kind of suspend it and give all those resources back to the underlying cluster. And when you need it again, it just kind of starts up again, and it's right where it was. So there's lots of really advanced future there.
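Triggering a live migration in KubeVirt is itself declarative: you create a migration object naming the running VM instance. A sketch (the `vmiName` here is hypothetical and must match an existing VirtualMachineInstance):

```yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachineInstanceMigration
metadata:
  name: migrate-legacy-app
spec:
  vmiName: legacy-app   # the running VMI to move to another node
```

This is the mechanism that lets the underlying cluster drain and update nodes without disturbing tenant clusters running inside the VMs.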
A
So
so
that
was
actually,
I
was
kind
of
like
had
a
related
question
in
the
back
of
my
mind
for
a
little
bit,
there
is
that,
but
that
brings
it
up.
Well,
is
that
you
know
virtualization
in
general
is
slow
to
start.
You
know
at
least
compared
to
a
container
is.
Is
that
an
issue
or
is
it
more
of
a?
A
You know, whenever I want to swap one out or whatever, I just start the thing, right, and make sure there's a minute in there, and then bring the other one down, whereas with containers I can kind of do it more instantly. Which, on paper... there are a lot of features that get written up by the tech rags that, a lot of the time, I'm not really sure I care about as much as they give the tech rags some reason to write about something. And that's one of the things that I always wonder about, is, do you think...
D
We like to limit... so there's latency, I can say that. Latency: the amount of time between when you ask for something and when it actually becomes available. And when we look at auto-scaling, if you're trying to auto-scale your application, and behind the scenes we're trying to provision infrastructure to meet that demand, then the quicker we can do that, the better.
D
So
the
idea
of
like
pre-warming
could
play
a
role
just
things
within
the
guest
itself,
so
we
saw
a
huge
improvement
back
in,
like
the
I
forget
what
it
was
whenever
like
systemd
came
around
when
you
started
having
just
the
boot
order
in
parallel,
rather
than
sequential
that
that
improved
boot
times
quite
a
bit,
I
think,
ideally,
anything
under
a
minute
is
probably
good
enough
for
most
people.
What
we're
talking.
D
Spinning up infrastructure: when it gets to like five minutes and things like that, then you probably need to optimize.
D
I
mean
infrastructure
as
a
service.
When
we
look
like
ec2
they
they
take
a
while
to
start
up
your
ec2
instances
try
cloud
formation
right
right,
so.
A
But it's all right, I guess it matters, but there's a threshold where it matters, right? It's almost like a step function. If it's under a minute or something, maybe it's fine, but then there's planning around it, right? You have to build into the tool some level of planning around it, so that you can tolerate that outage. I guess, for me, in my development experience...
A
I
need
to
tolerate
when
something
goes
away
anyway
and
I'm
not
sure
how
spin
up
time
is
different
than
going
away.
So
it's
kind
of
like
I
feel
like
that's
one
of
those
things
that
I'm
building
it
into
my
system
anyway,
you
know
what
differences,
but
I
was
just
kind
of
curious
on
your
take
josh.
Did
you
have
a
follow-up
to
that?
Well,.
B
I
was
just
kind
of
wondering
like
it
is
a
follow-up
to
that
question
really
like
so
so.
Lightning
was
just
asking
david
bonden
about
a
number
of
concerns
specific
to
this
use
case
and
this
kind
of
driving
infrastructure
as
a
service
with
cooper
when
you're
working
on
kvm
and
qmu
qemu,
having
a
really
hard
time
saying
that
this
morning,
for
some
reason
nevertheless,
like
are
these
things.
B
Are these two teams, the KVM and QEMU folks, communicative about these issues? Or does that orientation you described before, of mechanism rather than policy at the KVM layer, mean that you necessarily stay loosely coupled from the concerns of a project that's using KVM, from what it needs versus other ones? Like, are the KVM folks specifically thinking about these KubeVirt and Kubernetes use cases, and in communication with that team?
C
Yeah,
it's
it's!
It's!
It's
yeah!
I
don't
think
it's
very,
very
tight
at
this
point
and
whether
it's
going
to
be
any
advantage,
I
don't
know
we
probably
will,
but
that
said
yeah
I
mean
there
are
people
who
you
know,
focus
on
these
aspects,
even
in
the
in
the
in
the
in
the
virtualization
group.
So
there
are
people
who
you
know
spend
a
lot
of
time,
understanding,
containers
working
on
them.
It's
just
the
nature
of
you
know.
C
KVM work: probably it's not conventional to think about containers. But that said, when you guys were talking about this interesting question that Langdon brought up... Langdon asked me something similar a while back, and I said, "yeah, we don't think about that." But it really depends.
C
Sometimes
we
do
because,
for
example,
you
know
boot
up
times
is
is,
is
that's
a
that's
a
interesting
topic
that
comes
up
once
in
a
while,
and
there
has
been
changes
in
qmu
to
actually
improve
boot
up
times
like,
for
example,
let's
not
get
a
traditional
chipset
emulated,
let's
not
emulate
a
traditional
chipset,
rather
boot
up,
something
that
is
like
just
takes
the
shortest
amount
of
time
possible
right,
not.
D
I can speak for KubeVirt, yeah. We have community members that already run on ARM, so we support it from a community standpoint.
D
It's
there's
two
there's
multiple
layers
here,
so
we
have
arm
at
the
like
the
controller
layer.
So
if
we're
going
to
run
on
arm
infrastructure,
we
probably
need
to
build
our
controllers,
even
the
cluster
level
controllers,
to
to
run
arm.
And
then
we
have
like
the
underlying,
like
that,
the
nodes
that
the
virtual
machines
run
on.
We
need
to
have
all
of
our
controllers
that
are
at
the
node
level
to
the
arm
as
well.
D
But
then
there's
this
complex
use
case
where
we
have
containers
that
might
need
x86
for
the
control
plane
and
then
we
might
have
different
types
of
nodes
within
our
cluster,
some
x86
some
arm
and
that's
where
things
get
really
complicated,
that
we
don't
do
really
quite
well
yet
being
able
to
schedule.
D
...accurately to the right architecture, and then have the right components on there that are built for the right architecture. But we're looking at things like that as well, yeah.
A
So
so
related
to
that
does
do
either
of
you.
You
know
bond
and
david.
Do
you
either
of
you
see
yourselves?
A
You
know,
maybe
not
you,
but
like
your
teams
right
making
direct
requests
of
cpu
manufacturers
where
you
know
like
to
you
know,
bonded
comment
made
me
think
of
this
is
like
there's
a
there's,
a
lot
of
junk
on
that
chip
that
you
don't
care
about.
You
know,
and
so,
if
there's
a
way
where
you
know
ar
64
knocked
out,
you
know
half
of
its
calls
blanking
on
the
right
word,
but
you
know
like.
Would
that
be
better
and
is
there
a
way
you
know
does?
C
So
is
it
I
I
yeah,
I
don't
know
the
answer
to
that.
All
I
can
say
is
I
have
experience
with
x86
and
the
turnaround
time
to
listen
back
from
hardware.
Manufacturers
is
really.
C
You
know
you
get
back
to
hear
from
them.
The
ship
has
sailed,
so
I
don't
know
how
advantages
would
that
be.
That
said,
I
I
think
arm
traditionally
has
been
kind
of
more
in
has
been
more
working
more
in
a
sync
with
software.
You
know
providers
you
know
listening
to
their
requests,
just
because
of
the
nature
of
how
arm
works,
and
maybe
it's
it's
a
possibility.
Yeah.
But
honestly,
I
don't
have
an
insider
good
experience.
B
How much call for that do you hear, people doing that in the community? Is this a really popular idea? Is this stupid and crazy, like "we've got a good networking stack in Linux, why do I want to run a kernel that's built just around my application"? Is that a use case that's important for the KubeVirt shop?
D
It
comes
up,
I
don't
have
a
lot
of
experience
with
it,
so
I
I
know
I
think
some
people
are
even
doing
this
already
with
cuvert.
D
I
don't
know
their
use
case
and
why
my
instinct
is
that
it's
kind
of
a
niche
use
case,
but
I
could
be
totally
wrong
and
it's
something
that
I
don't
understand
well
enough
to
really
speak
to,
but
I
know
that
it's
something
that
exists
within
our
ecosystem
today.
A
Yeah,
it's
it's
one
of
those
like
unicorns,
like
I'm
kind
of
on
both
sides
of
the
fence.
There,
the
there's
a
lot
there.
It
makes
a
lot
of
sense
right
to
use
like
an
x86
chip,
because
it's
standardized
and
it's
got
a
general
use
case
and
all
this
other
stuff
right,
but
then
there's
also,
you
know
hey.
I
I
built
this
chip
specifically
for
this
particular
scenario,
and
so
you
kind
of
have
the
same
idea
right
with
linux
and
like
a
unit.
A
Kernel
right
is
that
I
have
kind
of
this
general
purpose
thing
and
it
does
quite
a
good
job
at
a
large
number
of
things,
but
then
there's
also
a
small
piece.
You
know
that
maybe
you
know
it
could
be
focused
on
so,
if
you're
running
a
golden
image
kind
of
world
right
anyway,
you
know
so
we're
you
know
we're
building
a
virtual
machine
and
then
we're
putting
you
know
golden
copies
of
it
out
or
taking
golden
copy
and
putting
out
instances
which
is
kind
of
the
containerization
idea
as
well.
A
Maybe
something
like
unicornos
makes
sense.
The
problem
I
see
is
that
the
unicorn
doesn't
get
exercised
anywhere
near
as
well
as
the
general
purpose
solution.
So
at
least
for
me,
I
have
yet
to
see
you
know,
there's
a
lot
of
good
research
in
the
united
kernel
space,
but
I
get
to
see
in
the
unicorn
space
kind
of
a
good
reason
that
offsets
the
fact
that
it's
not
getting
it
exercised
as
well
and
that
you
have
to
kind
of
go
through
these
extra
steps.
A
So
that
was
a
that's
a
nice
easy
closing
question.
I
did
want
to
ask
quickly,
though,
to
david.
I
noticed
your
website
redirects
to
a
music
album
and
I
just
wanted
to
ask
what
that
was,
because
I
didn't
actually
listen
to
it.
Oh
interesting.
A
That's
that's
pretty
hilarious,
so
yeah.
So
I
was
yeah.
I
was
looking
for
your
twitter
handle
for
a
tweet
and
it
does
not.
A
So I will say, we actually had a pretty good question in the chat, but we are out of time, and so in the interest of saving time we will not answer it. But basically it's: should KubeVirt, slash OpenShift, be following the Linux model of, you know...
A
Do
one
thing
well,
rather
than
do
all
the
things,
and
what
I
will
propose
is
that
we
will
try
to
bring
that
up
on
the
next
episode
with
our
next
guest
and
we
will
we
will
let
you
off
the
hook,
because
we
know
we
know
our
guests
have
already
planned
to
do
other
things
today.
A
Excuse
me
and,
and
so
we'll
wrap
up
there,
but
thank
you
so
much
david
and
brandon
for
joining
us
on
the
show.
Thank
you
to
the
guests
or
to
the
audience.
I'm
sorry
and
we
really
hope
to
see
you
next
time.
We
are
the
last
tuesday
of
the
month
and
we
have
a
whole
bunch
of
great
guests.
Lined
up.
You
know,
check
out
our
website
at
cubebuyexample.com.