From YouTube: KubeVirt Community Meeting 2019 03 06
C
So this is the KubeVirt community meeting, March 6, 2019. The discussion started with just a little follow-on from a bug we had mentioned. The meeting is starting and the agenda and notes are kind of light, so if anybody has anything to add, we'll take it as we continue. To start, just a quick note on anti-affinity in the KubeVirt pods.
C
The background of this is that we don't have any means right now: we're not using a DaemonSet, and so with our Deployments we don't have any means to make sure that pods aren't being hosted on the exact same node. If the point of having two pods is high availability, then there's absolutely no reason to have them on the same node, because if that node goes down, you lose all the pods. So we're looking at introducing some anti-affinity there.
C
The pull request is 2089; more to follow on that, since it's a work in progress. There are different opinions on how this could be solved. We could have used a DaemonSet as opposed to a Deployment, but if we're going to continue this way, then we need to actually do our due diligence and put in anti-affinity rules so that the pods don't land on the same node. On networking, do we have any updates from that side of things?
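The kind of rule being discussed could be sketched roughly as follows. This is an illustration only, not the actual KubeVirt manifests or the contents of the pull request; the Deployment name, labels, and image are placeholders:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: virt-controller                  # illustrative name
spec:
  replicas: 2
  selector:
    matchLabels:
      kubevirt.io: virt-controller
  template:
    metadata:
      labels:
        kubevirt.io: virt-controller
    spec:
      affinity:
        podAntiAffinity:
          # "preferred" rather than "required", so scheduling still
          # succeeds on a single-node development cluster
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchLabels:
                  kubevirt.io: virt-controller
              topologyKey: kubernetes.io/hostname
      containers:
      - name: virt-controller
        image: example.io/virt-controller   # placeholder image
```

With `topologyKey: kubernetes.io/hostname`, the scheduler tries to keep replicas on different nodes; a zone-level topology key would spread them across zones instead.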
D
On the anti-affinity discussion: well, one issue which we will always have, if we only offer Deployments and not also DaemonSets, which would then start on the masters, is that we would always be much less available than Kubernetes. For instance, with zones, it can always happen that, even though our pods are on different nodes, they are still all in the same zone, and then that zone is disconnected.
D
Communities
itself
can
immediately
continue
working
because
you
have
masternodes
in
all
zones,
but
all
our
pods
will
be
down.
They
wouldn't
have
to
be
pulled
down,
started
again
in
the
era
soon.
So
I
personally
really
think
that
on
deployments,
which
offer
the
possibility
to
to
see
master
notes,
we
should
really
strive
for
scheduling
all
our
all
our
control
plane
on
the
master
notes
by
default
scaled
with
them,
so
that
they
have
the
same
ability.
D
So
I
mean
I
mean
also
in
the
other
case
it
will
recover,
but
depending
on
how
huge
our
images
are,
how
accessible
that
the
registries
are
in
case
of
failures
already,
it
can
take
a
long
time
until
Cupid
comes
back
up
if
it
soon
disappears.
If
we
have
a
demon
set
which,
by
default
skins
with
the
master
notes,
then
then
we
have
our
infrastructure
already
on
all
crucial
place
is
very
fault
and
it
will
just
continue
like
the
rest
of
the
communities
control
them.
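The DaemonSet approach being described would need to tolerate the master taint to land on those nodes. A minimal sketch, with illustrative names and the taint key as it was commonly spelled at the time of this meeting:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: virt-handler                     # illustrative name
spec:
  selector:
    matchLabels:
      kubevirt.io: virt-handler
  template:
    metadata:
      labels:
        kubevirt.io: virt-handler
    spec:
      # Tolerate the control-plane taint so the pod can schedule
      # onto master nodes as well as workers
      tolerations:
      - key: node-role.kubernetes.io/master
        operator: Exists
        effect: NoSchedule
      containers:
      - name: virt-handler
        image: example.io/virt-handler   # placeholder image
```

Because a DaemonSet places one pod per node, this gives the component a presence in every zone that has a node, which is the availability property being argued for.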
D
Otherwise,
we
can
of
course
improve
the
situation
by
adding
anti
affinity
so
I'm,
not
in
general
against
just
adding
the
preferred
and
the
affinity
right
now,
but
it
doesn't
solve
the
whole
picture.
So
that's
one
thing,
but
the
other
thing
is
also
that
there
are
managed
classes
outside
which
don't
even
allow
you
to
see
the
master
notes.
For
instance,
if
you
on
on
the
public
cloud
on
GCE
requests
the
cluster,
you
normally
don't
see
master
notes,
just
your
virtual
notes.
A
I do have something on the community side, and it's the fact that at the moment, regarding the user guide, our understanding was that more or less every developer is responsible for providing the documentation of any feature that they add. But we don't know if we should also maybe be the group taking care of the user guide as a whole and making sure that it's adequate from a user perspective. So I don't know what's your take.
B
So I looked at the document a little bit. I guess the thing I would raise for the open floor is: we're looking at, just at a high level, the VM snapshot objects that we need to introduce to support offline snapshots, or live snapshots for that matter. There are two aspects to this: there's the request to create a snapshot, and then there's the actual snapshot content. The way Kubernetes has handled this so far, and this is even reflected in their snapshot API for volumes, is they have the request, which is the snapshot, and then another resource that's the actual content; I think it's called snapshot content. That's another pattern that they use for PersistentVolumes and PersistentVolumeClaims: the PersistentVolumeClaim being the request, which is then fulfilled by the creation of the PV, if the PV didn't already exist. So when I'm looking at our API, I think we fall within that same pattern, where we might need two objects, and I just kind of wanted to get people's thoughts on that, because right now we're going for a single object.
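For reference, the volume-snapshot pattern being described pairs a namespaced request with a cluster-scoped content object. The API was still alpha at the time of this meeting; the abridged sketch below uses the field shapes of the later stabilized `snapshot.storage.k8s.io/v1` API, with illustrative names:

```yaml
# The namespaced request object...
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: my-snapshot
  namespace: default
spec:
  volumeSnapshotClassName: csi-snapclass    # illustrative class name
  source:
    persistentVolumeClaimName: my-pvc       # the PVC to snapshot
---
# ...is bound to a cluster-scoped content object that holds the
# actual snapshot, analogous to a PV bound to a PVC (abridged)
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotContent
metadata:
  name: snapcontent-my-snapshot
spec:
  deletionPolicy: Delete
  driver: example.csi.driver                # illustrative CSI driver
  source:
    snapshotHandle: snap-0123               # storage-side snapshot ID
  volumeSnapshotRef:
    name: my-snapshot
    namespace: default
```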
E
I can answer this. We had precisely this discussion last week in the synchronization meeting, where we asked John Griffith, who was part of the design of volume snapshots and volumes in Kubernetes. The fact that Kubernetes has these objects split separately is due to the fact that they wanted to allow having the request in a different namespace than the volume, since the request should always be paired in the same namespace as the object that will use it. The fact is that when I specifically asked if we should follow this pattern, the answer was no. If we do not require to have it split across namespaces, which we do not, since the request and the result will always be in the same namespace as the virtual machine that's being snapshotted, then we can simplify our lives and go for a single object. This was the thinking behind placing both objects into a single one, taking that into consideration.
B
That makes sense to me: they had to split because, one, the request is namespaced, and the actual content of the volume snapshot, I guess, is non-namespaced, so they had to be two separate objects. Okay, so if we use a single object, I guess my concern is kind of how the API is structured. Then we have a...
D
Here we have a good example. For instance, we got this from certificate requests: when you have a CertificateSigningRequest, you have in the spec what you want, including the unsigned certificate request, and in the status you get the signed one, if it's there. So I think it makes sense to put it in the status. Okay, but that's what I was thinking.
D
That
also
goes
in
hand
with
the
fact
that
I
mean
the
poor
request
is
still
not
much,
but
in
kubernetes
in
general.
It's
always
works
like
this.
That
despair
can
be
modified
for
users
and
the
status
is
protected,
so
I
mean
it
looks
like
when
you
do
a
patch
or
an
update
on
encore
objects
from
Cornelius
like
I
effete,
fheo
object,
I
changed
my
staffing,
spec
and
posts
back
and
status,
but
that's
actually
not
true.
B
Okay, so that would be my advice. I'll go revise my comment to reflect that we need a spec section, which is essentially what I guess you have in the config part of the API right now, and the actual spec that's generated as a result of the snapshot should go in the status section, which is something that is only written by the system. But yeah, that's my most immediate feedback there.
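Following that feedback, a single snapshot object with the user's request in `spec` and the system-generated content in `status` might be sketched as below. All group, kind, and field names here are hypothetical illustrations, not the API that was eventually merged:

```yaml
apiVersion: kubevirt.io/v1alpha1           # hypothetical group/version
kind: VirtualMachineSnapshot
metadata:
  name: my-vm-snapshot
  namespace: default                       # same namespace as the VM
spec:
  # User-writable request: which VM to snapshot
  virtualMachineName: my-vm
status:
  # System-written result, filled in by the controller
  phase: Complete
  virtualMachineSpec: {}                   # captured VM definition
```

This mirrors the CertificateSigningRequest split discussed above, while keeping the request and the result in one namespaced object, as the single-object design intends.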