From YouTube: KubeVirt Community Meeting 2020-01-22
A
B
A
A
C
I raised that on the mailing list just in case, because it's not just a question to see, but also a call from us asking for any ideas they might have in mind. So even if the official call for participation isn't open yet, if someone has some idea they could start working on it.
A
C
There's a mail on the mailing list about it; just to make sure, is my mic working? I just wanted to check if anyone is planning for KubeCon, since the call for proposals is open. I would like to know if someone has attended it previously and how useful it would be to have a session like a BoF there. English is a second language there, and there's a lot of interest in this topic; probably KubeVirt would be a nice topic for the developer community in China, yeah.
A
C
A
I don't know how that works; honestly, I've not dealt with that before. So, the context for those not familiar with the conversation: the website is currently licensed as MIT, and I think Creative Commons Attribution would be a better fit, so the proposal has been that we should change it to match. But that's a very good question: I don't know if we need buy-in from existing contributors to change it. We might, yeah.
D
License changes need... legally, I mean, it's a little bit of a gray area, right, but usually if you have a large number of contributors you need the buy-in from them, or at least from the majority of them, to do the license change. So I would take it to the mailing list. But to be honest, I don't think it's…
E
A
A
A
A
A
F
Thank you, and sorry for that. So, if someone here is unfamiliar with AppArmor: AppArmor is basically Ubuntu's version of SELinux. It allows a user to set different rules, and Kubernetes allows the user to bind those rules to pods, which then affects the applications within the pods. So the feature the user added actually allows a user who can deploy VMI objects to the API server to add AppArmor rules, and then our controller will render those rules into the launcher pod, so they will eventually propagate.
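For anyone unfamiliar with the binding being discussed: at the time of this meeting, Kubernetes attached AppArmor profiles to pods via a beta annotation. A minimal sketch, assuming a profile named `my-profile` is already loaded on the node (the profile name, pod name, and container name here are hypothetical, not taken from the PR under discussion):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: apparmor-demo            # hypothetical pod name
  annotations:
    # Bind the node-local AppArmor profile "my-profile" to the
    # container named "app". The "localhost/" prefix means the profile
    # must already be loaded into the kernel on the scheduled node.
    container.apparmor.security.beta.kubernetes.io/app: localhost/my-profile
spec:
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
```

The security question raised next in the meeting follows from this mechanism: whoever controls the profile reference that ends up on the launcher pod effectively controls what the profile permits.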
F
So at first glance it looked to me like, okay, just another security limitation, but I'm not sure if it can maybe implicitly allow users to extend the rules, like give their launcher pod more permissions than it needs to have. And maybe the problem can be like: you're a cluster admin, you have CNV, you give someone permission to create VMIs, and implicitly you actually allow them to run applications with greater permissions.
D
Here, to be fair: great for raising it. So, to be honest, due to my day job I'm not so familiar with AppArmor; I'm rather working with SELinux, you know. So one thing that would be really helpful, and I actually just asked Vishesh as well, is if we have a peer who can actually review it from their side, to really get some opinions from people doing it in their day-to-day job. So I hope that we get that at some point.
D
F
D
D
That's good, it's good. Actually, that is, I think, a question for many things like this; the other PR, that QAT thing from Intel, is also very hard to review in my opinion, because we really... I don't have the expertise, and from the people I know, we also don't have the expertise. That's making this stuff difficult in these cases. Yeah, maybe that's a good thing; I think one takeaway is we should encourage these kinds of contributors: first appreciate that they contributed it, but maybe ask if they can pull in peers to do reviews, right.
A
G
B
H
The issue, as far as I saw it, was that the kube API server could not connect to, at least in CDI's case, the CDI API server, which was running on a different node. So the kube API server is using the cluster IP: the kube API server running on node 1 uses a cluster IP to talk to the CDI API server on node 2.
H
H
D
Yeah, that's actually a bug that someone is already working on. He's working on the problem that the API server registration is still present when you, for example, delete a namespace; we also have that in the operator delete flow. So at least for the operator delete flow we can fix it, but for the namespace deletion we cannot. But what I'm not sure about is: are you specifically speaking about the namespace deletion, or the general communication with the API server?
D
B
H
Sure, yeah. So the issue is: in a multi-node 1.17 cluster, the kube API server is running on node 1 and the CDI API server is running on node 2, and the kube API server on node 1 cannot communicate with the CDI API server on node 2; basically the request times out when it tries to connect to the cluster IP.
H
It can connect to the pod: if I SSH into node 1, it can connect to the pod IP, but not the cluster IP. So what I seem to be seeing is that when you're accessing a service, a cluster IP service, it is only accessible from the node that has the container, or an endpoint, running that service. So the kube API server cannot communicate with the CDI API server because they're on different nodes.
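The symptom described here can be checked from a node's shell. A rough sketch, assuming the CDI API service lives in a `cdi` namespace under the name `cdi-api` (both names are assumptions, not confirmed in the meeting; the IPs and port are placeholders to fill in from the first two commands):

```shell
# Find the service's cluster IP and the pod endpoints behind it.
kubectl -n cdi get svc cdi-api -o wide
kubectl -n cdi get endpoints cdi-api

# From a node that does NOT host the CDI API pod (e.g. node 1):
# reaching the pod IP directly works...
curl -k --max-time 5 https://<pod-ip>:<port>/healthz

# ...but the cluster IP times out, which points at kube-proxy / CNI
# routing on that node rather than at the CDI API server itself.
curl -k --max-time 5 https://<cluster-ip>:<port>/healthz
```

If the pod IP responds and the cluster IP does not, the service's virtual IP translation is broken on that node, which matches the cross-node behavior described above.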
I
A
Or to answer your question differently, Alex: we were aware that there were issues with the 1.17 provider; in our CI the failures are showing up quite a bit, and we have not gotten to the bottom of that yet. So we were aware there were issues in general, but in terms of diagnosing specific ones, we're going to have to reach back to you guys and work together on that, sure.
A
B
A
I
I started looking into how to create an OKD 4.4 provider, which runs on Fedora CoreOS instead of RHCOS, and I just wanted to ask if anyone else is looking into it, because we first want to try it out, starting manually with the OpenShift installer to see if it works, and then maybe start creating the provider image. But I don't want to duplicate anyone's work; I'm just raising it.
D
B
D
I
Gotcha. Well, yeah, as much as I like OpenShift CI and letting it do our work, it makes it very hard to do the development, which is already hard because you need a 60-gig machine. But it's still better for us to have access to, like, libvirt machines and all of this, and especially for network we are way more flexible with that. So, yeah.
D
F
So maybe we can use it, you know; if you can just use CRC... I think that they already have something for 4.4, but I'm not sure. But if they already have it, you can at least right now use the CLI to just, you know, deploy KubeVirt on the cluster that is being created by CRC. So that would reduce most of the work for you. Well…