From YouTube: KubeVirt Community Weekly Meeting - 2018-10-17
A: All right, welcome everybody. This is the October 17th, 2018 edition of the KubeVirt community meeting. I've got a few notes up front. The first one, and probably the most salient for everybody, is the namespaces topic. There have been a couple of emails to the mailing list, but just as a reminder, so we can really stress this: we want to move from the kube-system namespace to the kubevirt namespace for our default manifests. On the timeline I'm not entirely clear, but I believe it's maybe the January-to-February timeframe at this point, so we're just trying to give everybody ample heads-up that we intend to do this. The backstory behind it is just to reduce confusion: to make it perfectly clear what is a KubeVirt-related pod, DaemonSet, or what have you, versus other resources.
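For context, the move amounts to the default manifests creating and targeting a dedicated namespace instead of deploying into kube-system; a minimal sketch (the component shown is illustrative, not the actual manifest set):

```yaml
# Sketch only: a dedicated namespace plus one component retargeted to it.
apiVersion: v1
kind: Namespace
metadata:
  name: kubevirt
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: virt-controller    # illustrative component
  namespace: kubevirt      # previously deployed into kube-system
```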
A: All right, fair enough. virtio block multiqueue: not much to say there. The PR was merged, so that will be in the next release of KubeVirt. I believe 0.9.1 was just released, so it will miss that. This allows enabling the multiqueue feature in libvirt. Is that what you're typing? Okay, so there you go. I don't believe there are any questions on that.

One late-breaking change that's worth noting, though: the original name of this feature, in terms of the manifest, was multiQueue. We decided blockMultiQueue might be a better name, so it would be less ambiguous on that front. I believe there's also a network-interface multiqueue, which Vladik has authored. Would you like to speak on that, actually, Vladik?
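For readers following along, the renamed flag sits in the VirtualMachineInstance devices section; a sketch of how it appears (field placement per the KubeVirt API of that era, so treat the details as approximate):

```yaml
# Sketch of the renamed flag in a VirtualMachineInstance spec.
apiVersion: kubevirt.io/v1alpha2
kind: VirtualMachineInstance
metadata:
  name: vmi-multiqueue
spec:
  domain:
    cpu:
      cores: 4                           # queue count follows the vCPU count
    devices:
      blockMultiQueue: true              # renamed from the original "multiQueue"
      networkInterfaceMultiqueue: true   # the NIC counterpart mentioned above
      disks:
      - name: rootdisk
        disk:
          bus: virtio                    # multiqueue applies to virtio devices
```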
C: Yeah, let me just note that this is basic, basic live migration. It was merged, but just to level expectations: it's not comparable to, say, migration in oVirt or anything like that. It's really the basic building blocks to lay out the scheme in which we can continue to iterate on the live migration feature in order to get somewhere usable. So today the feature set is that you're able to use a registry disk and move a VM from one node to the other; but as it's a registry disk, this does not work for persistent volumes or persistent data. That is an important fact. Also, for networking, any network connection will be disconnected, so all the connections have to be reestablished on the target node.

It was intentional to have these limitations, as it was more about identifying how this can conceptually work, because there was a lot of infrastructure work which had to be done in order to allow moving a VM to a different node at all. Now the goal from here is to reach out to other teams, the network team in particular, to find owners for the remaining features.

It's a broad discussion, to be honest, so yeah, we need to get the discussion started; it's nothing for here. The reason why we didn't reach out on storage, or why that large part is more unclear to us, the people working on the core KubeVirt parts, is that we see shortcomings when it comes to exclusive access: avoiding concurrent access to storage, or guaranteeing exclusive access. We see these shortcomings in Kubernetes, and because these gaps are there, in our perception, we did not go down a path that relies on Kubernetes here. The problem is that we don't see that the storage subsystem in Kubernetes gives us enough guarantees to use it for live migration right now. But to be honest, as you can see, my statements are pretty confused, so maybe it's a good time to start just loosely chatting about how storage and live migration work together.

How could that work out? Because we also started to rethink whether it makes sense to revisit having our own drivers, effectively bypassing Kubernetes for block storage, if live migration is required. There are a lot of options which I can't lay out here. I think the bottom line is: we're just starting to think about the next steps now, and we'll reach out to people we think can help, but also feel invited to chime in if you think you've got a solution for anything.
A: No, it's all good. I didn't know how much background any individual necessarily had; that was really helpful. Does anybody have any questions? And no, it does not have DPDK enabled, no hugepages either. So next up we have the goimports changes. This is a quick note, mainly a heads-up: we updated to the latest goimports module, which ultimately manifests as just changing the order of imports in our Go modules. It shouldn't be too much impact to anybody, but just a heads-up that we did that. The motive behind it, of course, was our Travis builds, because they were having issues with that. And next up we have guest agent channels. Peter, if you'd like to speak on that.
E: The PRs are ready and are passing, so we're ready to merge. Unfortunately, I missed Roman for the review today, so probably first thing in the morning. But we collaborated; he helped a lot with doing the guest agent image, and with the time I had on my hands I figured out the libvirt events issue, which David was also solving, as I saw in the PR he pushed; it's about libvirt...
A: All right, and so next up we have vmctl. This was something we mentioned briefly in the open floor last week. The repo we're linking to is Fabian's personal one, as the canonical upstream for it right now. Once again, the basic intent behind this is to allow virtual machines to be controlled from a pod interface, which gives us ReplicaSets, StatefulSets, DaemonSets: basically, all the goodness that Kubernetes brings for pods, we get just by this. All this pod does is start a VM, watch it, and then kill it if the pod gets killed, or delete the resource, which would, you know, cause the resulting actions to happen on the KubeVirt side. So yep, that's basically just the glue we need to inherit all Kubernetes features at the virtual-machine level. Is there anything else you wanted to add to that, David?
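The idea sketched above, a pod whose only job is to start, watch, and tear down a VM, composes with any pod controller; a hypothetical example wrapping vmctl in a Deployment (the image name and flags are made up for illustration):

```yaml
# Hypothetical: scale VMs by scaling the pods that drive them via vmctl.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: vm-fleet
spec:
  replicas: 3                  # three driver pods -> three running VMs
  selector:
    matchLabels:
      app: vm-fleet
  template:
    metadata:
      labels:
        app: vm-fleet
    spec:
      containers:
      - name: vmctl
        image: example/vmctl:latest                # made-up image
        args: ["start", "--vm-template", "my-vm"]  # made-up flags
        # The pod starts the VM, watches it, and deletes it when the pod
        # dies, so rolling or scaling the Deployment does the same to the VMs.
```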
A: Okay, I'll turn it over.

C: Okay, I just want to say I think that's an exciting thing, because we will be able to tie KubeVirt VMs into stuff like the horizontal pod autoscaler and the cluster autoscaler, for example, so the number of VMs can be increased depending on the load of the cluster, and that kind of stuff. So I think that's the area where it comes from. Or DaemonSets, you know: having VMs per node which should be present everywhere.

vmctl will allow us to do that without any additional coding we need to do on the KubeVirt side, so placement and controlling, and also rolling updates, like deployments. The VMs we have today have no upgrade logic like rolling updates or blue-green updates; vmctl allows us to use that. So if we have appliances which run in VMs, then using the deployment rolling-update features will be something attractive to look at, to see if that really works out. I'm not saying that it's all solved: a deployment upgrade would obviously restart the VMs, and it would obviously not take care of the guest inside the VM. So how we could do guest management is anyway an interesting topic beyond vmctl, but that is something else. I just wanted to say I think that's interesting, and we'll all wonder where it goes over time.
C: We know there are things we don't like in the API, or places where something public-facing is off, and we want to clean them up, but that will break stuff. I sent an email to the mailing list and already got the question whether we plan to support upgrading after these changes are made, or to ease the transition with those changes, and we will look at that.
D: So recently we landed the basic CPU pinning. What it does is rely on the Kubernetes CPU manager to provide dedicated CPUs to VMs; we then use those dedicated CPUs and pin the VM's virtual CPUs on top of the physical CPUs. And what happens is that, apparently, the notion of dedicated CPUs is not currently guaranteed by the Kubernetes CPU manager, and it leads to all sorts of issues for us. For example, if you start a workload with dedicated CPUs, it will be pinned to specific CPUs; but then, if a user starts a workload with regular, non-dedicated CPUs, the CPU manager will allow this workload to run on all available CPUs and will not exclude the already-pinned CPUs from the list of available ones. And the opposite happens as well: when a non-dedicated workload is running on a node and we start a VM with dedicated CPUs, the previous workload will still run on the dedicated CPUs too. So really it all comes down to the question of whether the Kubernetes CPU manager was intended to provide dedicated CPUs, or just to pin a workload to specific CPUs, and that's it. Maybe it will require us, on the KubeVirt side, to kind of separate nodes that are intended to run workloads with dedicated CPUs from other nodes where workloads can be mixed on another subset of CPUs. Anyway, I'll try to open an issue about this with Kubernetes, but I'm just kind of scared that it will be dismissed as something that was never intended to work that way.
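For reference, a pod asks the CPU manager's static policy for dedicated CPUs by being in the Guaranteed QoS class with an integer CPU count; a minimal sketch of such a pod (the image is illustrative):

```yaml
# Guaranteed QoS (requests == limits, integer CPUs) is what makes the static
# CPU manager policy assign exclusive CPUs to this container.
apiVersion: v1
kind: Pod
metadata:
  name: pinned-workload
spec:
  containers:
  - name: app
    image: example/app:latest    # illustrative image
    resources:
      requests:
        cpu: "2"                 # integer count, equal to the limit
        memory: 1Gi
      limits:
        cpu: "2"
        memory: 1Gi
```

The complaint above is that pods outside this class are not reliably kept off the exclusively assigned CPUs.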
C: Yeah, let them flame us. Actually, I think it's good to open that issue, because I think it's also good to get some clarity on the matter. Because, to be honest, the documentation sounded like they want to give something like guaranteed CPUs. So to me the question is: is it a bug, in that they just didn't think of those cases, or is it really intended to be just as it is? But then I wonder about the value. So I think we should engage with them and understand what they were aiming at.
C: Yeah, actually, that makes me put this first: I know that we have people working on operators to ease deployment, and if somebody's here to give an update on that, that would be interesting, because that whole topic of deployment and updating would be interesting; but I don't see anybody who can give us detailed insight there. Okay, the other topic: the same is true for networking and storage. I wonder... I know that some work is happening in the networking area and some work is happening in the storage area.
H: I can share some things on the storage side, if you're interested?

C: Oh, yes, for sure.

H: In particular, a number of things I'm working on are upstream Kubernetes changes to do transfer of resources into different namespaces, so moving a PVC from one namespace to another, for example. That's going to be necessary for things like creating a volume from a snapshot and giving it to another user, stuff like that. We can also use it for our cloning purposes as well.

That's one side of what I'm working on. The other thing we're pushing right now: the CSI spec has the ability to do create-from-source; you can specify that source to be a snapshot, a GitHub repo, another PVC, et cetera. Unfortunately, though, there's no mechanism to do that in Kubernetes, so that's the other thing I'm working on right now as well.

Those are two things on the storage side, at least, that I think are pretty interesting. The other thing is, I'm thinking about looking at multi-attach for some of the live migration stuff, to see if that can help us. That helped us in OpenStack, but I don't know if it'll help us here or not.
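The create-from-source idea would surface on the Kubernetes side roughly as a PVC pointing at a source object; a sketch of that shape (names here are illustrative, and the API was still being designed at the time of this meeting):

```yaml
# Sketch: populate a new volume from an existing snapshot via dataSource.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: restored-disk
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: csi-example        # illustrative class name
  resources:
    requests:
      storage: 10Gi
  dataSource:
    apiGroup: snapshot.storage.k8s.io
    kind: VolumeSnapshot               # could also be another PVC for cloning
    name: my-vm-snapshot               # illustrative snapshot name
```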
C: If I may be so bold: isn't it even more interesting for data disks? Because, I mean, we will use image disks today just because it's only possible with them, but the goal would be to really use the data disks, right? And yeah, because today we can allow multiple readers and writers for a volume, but we don't have any guarantees... I mean, we have weak guarantees around that. So it's good that you're starting to look at that area. Yeah.
F: If you want to hear some updates on that: so we are working on extending it, and we will be incorporating those multiple networks. Eventually it would give us DPDK support, maybe, and support for live migration, since we can keep one device that carries an IP address between two different hosts.
C: Yeah, that's really awesome. I'm looking forward to that, and I know that Vladik is also interested in device pass-through. And, oops, I want to reflect on some items: I wonder if that will be helpful for general device pass-through, but we'll see. Yeah, cool. I would encourage both of you.
C: None of my talks was accepted; it was very hard to get a talk in. But I saw in the agenda that at least two or three KubeVirt-related talks from non-Red-Hatters got accepted, which is pretty awesome, and I know that other people who are playing with KubeVirt or using KubeVirt will also be attending KubeCon North America. So I wondered... I'm looking at planning a small gathering there. It's a little bit tricky, because we are not attached to a SIG and we are not a CNCF project, so getting some space there is a little bit difficult, and I'm still sorting out how we can get some space or time to come together. So I'm also giving a heads-up on that one, just to give you guys time to plan.
K: Sorry; so, one of the pieces must provide fencing, and we built all the fencing stuff on top of the Cluster API. The Cluster API has additional controllers for machine and cluster entities, and when it recognizes that a machine object was deleted, it just applies the machine actuator; depending on the implementation of the actuator, it can do different stuff. For example, in the case of the Google cloud provider it can configure GCE instances, but in our case it will just run some fencing mechanics.

It also provides a controller that monitors node conditions, and depending on the node condition it will remediate the node: if the node condition is not fine, it will call the machine API to delete the machine object. So in the background the machine controllers will try to do some actuator logic, and after that you recreate the machine object, so in the background the actuator will also do some re-creation logic. I published both repositories under the community document, so everyone who is interested in fencing at all is welcome to have a look and connect, and also welcome to create some PRs. But again, it's still in an initial phase; we already have some basic initial remediation logic.
H: I think we should probably try and get on the schedule, not for this week's meeting but the following; it's a biweekly meeting, every two weeks. We should probably get on the schedule to actually raise it, mention it, at least make people aware, or go out to the mailing list, one of the two; but it would definitely be good to make people aware. Yeah.