From YouTube: KubeVirt Community Meeting 2023-03-01
Meeting Notes: https://docs.google.com/document/d/1nE09vQWcCTW-9Ohe9oCldWrE0he-T_YFJ5D1xNzMtg4/
B: All right, in that case, let me find my notes, find my place.
B: Anyone is welcome to add things to the agenda notes, or to the open floor, as we speak. If we miss your item, I will be watching to try and make sure that we circle back around to it. Before we dive into the agenda items, I always like to call out: of course, feel free to add any pull requests, mailing list items, or bugs that you would like special attention paid to, if you feel something is being neglected that would benefit from conversation today. That is open to all.
B: If you don't have write access to the agenda notes, all you need to do is join the KubeVirt Google group and then make sure you log into the document with that same account that is joined to the Google group, and you should have everything you need to add items at will. All right, without further ado, I'm going to go ahead and jump into this first item on the open floor.
B: Would anyone like to speak to this?
C: Yes. So this PR is a fix for a bug that we're running into with one of our customers. Essentially, they have a namespace that has a LimitRange in it, and the LimitRange has a request-to-limit ratio, and currently it's breaking. The hotplug disk container has a request-to-limit ratio of, I think, 40, something like that, and their limit ratio is lower than that.
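For context, a minimal sketch of the kind of namespace policy being described (the names and values here are hypothetical, not taken from the customer's cluster): a `LimitRange` with `maxLimitRequestRatio` rejects any container whose limit-to-request ratio exceeds the cap, which is what trips up a hotplug container with a ratio around 40.

```yaml
# Hypothetical namespace policy; the cap of 10 is illustrative.
apiVersion: v1
kind: LimitRange
metadata:
  name: container-ratio
  namespace: vm-workloads   # hypothetical namespace
spec:
  limits:
  - type: Container
    maxLimitRequestRatio:
      cpu: "10"      # any container with limit/request > 10 is rejected at admission,
      memory: "10"   # so a hotplug container with a ratio of ~40 would fail to schedule
```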
C: The consequence of this is that the hotplug attachment pod will reserve a little more memory and a little more CPU. I just wanted to bring this up and see if anybody is really opposed to this.
C: It's just a single container per VM that does hotplug. If you don't use disk hotplug, this will not affect you at all.
D: Hi Alex, I don't have any objection, but as far as I know, the container disk, or however we call it, we don't really need it for the hotplug. We could just bind some arbitrary process to some unique socket, and that should be it, right? So maybe, as an improvement, we can have a look at making it an actual process, which would be more predictable in its memory usage.
C: Yes, yes, and I actually do want to do a follow-up PR to this, where you can essentially set your requests and limits in the KubeVirt CR, and then the container will use that instead of having it hard-coded, so you can sort of play with it. Because I can see a use case where, for some people, the one fixed ratio is a little too much, and maybe they want something else, so having it configurable would be better.
C: But that's, you know, adding a new API, and there's a whole bunch of extra stuff related to that, and this is sort of an urgent issue. So.
C: If we can get rid of that whole part of this, we can actually reduce the amount of memory that we need. But again, I think the limit is like 80 megabytes, you know, compared to probably gigabytes on the actual VM, so it should not hurt that much. I just wanted to bring it up, that's all, and see if anybody was like, "you know, this is terrible."
C: I don't think it's terrible. It's not optimal, and we can improve it, but for now I think it's okay.
C: This is hotplug, not containerDisk. There's a process that runs in the attachment pod that allows virt-handler to find the attachment pod, so it can then find the actual volume, so it can then do some magic to bind-mount it into the virt-launcher pod. We can probably replace that process with something that uses less memory, and then we can reduce the limit and make it better from that perspective.
G: Yeah, hi everyone. So this is actually one thing I would like the community's help with. We have the "good first issue" label, but there are currently no open issues that have this label. And, I don't know if you have noticed, but some people have joined, and those are students that would like to contribute through GSoC. KubeVirt has been accepted as part of the GSoC program.
G: However, we don't have issues that new contributors can pick. This is in the context of GSoC, but generally it would probably help if you start to label issues with this label. For example, Andrew was also asking a couple of months ago for ContribFest for KubeCon. So if you have a feature that you think is pretty easy for newcomers, but you don't have time to work on, maybe just...
E: Yeah, that would probably help facilitate it.
F: Yeah, I'm just happy to announce that Kubernetes has this feature merged; I hope it will get into 1.27. So now it will be possible to resize pods without restarting them. I think this is a nice opportunity to start working on implementing CPU and memory hotplugging, or bringing back memory ballooning, and I was thinking about the best option for doing this. I did some small research; this is not published yet, I will write it up on some issue.
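For reference, the in-place pod resize feature mentioned here shipped as alpha in Kubernetes 1.27 behind the `InPlacePodVerticalScaling` feature gate. A sketch of the per-container API (field names follow the upstream KEP; the pod name and values are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: resizable-demo            # hypothetical pod name
spec:
  containers:
  - name: app
    image: registry.k8s.io/pause:3.9
    resizePolicy:                 # alpha in 1.27, gated by InPlacePodVerticalScaling
    - resourceName: cpu
      restartPolicy: NotRequired  # resize CPU without restarting the container
    - resourceName: memory
      restartPolicy: NotRequired
    resources:
      requests:
        cpu: 500m
        memory: 256Mi
```

With this in place, updating `spec.containers[].resources` on the running pod triggers an in-place resize rather than a recreate, which is what makes VM CPU/memory hotplug feasible on top of it.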
F: I guess there are a few methods for how we can do that from the libvirt side, but the first question I would like to ask: do you know if our libvirt is controlling cgroups somehow, or is that totally delegated to Kubernetes?
F: Okay, so we don't have to change shares, because Kubernetes will do that instead of us, but we have to think about managing the threads and the available memory size for the virtual machine. I found that there is an option to specify a maximum amount of memory, and there was also a memory balloon device, which was disabled in 1.4... sorry, in the 0.49 version of KubeVirt. I'm thinking about bringing it back, because it would allow scaling the virtual machine easily.
D: And I think there is a better way to hotplug the memory, so I can share it with you after the call, if you want.
F: Thank you. And about the CPU, it's very simple, the same thing.
F: We can specify the amount of available cores for extending, not actually cores but threads, and we can manage these threads with the simple command `virsh setvcpus` and the number of cores. And the next question I would like to ask: I found that live resizing of volumes is not working right now, and I was thinking about the best way of implementing it. Are there any ideas, or is it expected behavior, or should I just fix it as I think best?
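The libvirt-side commands being described look roughly like the following (assuming a hypothetical domain name of `vm-demo`; whether KubeVirt should drive these directly or let Kubernetes handle the cgroup side is exactly the open question above):

```shell
# Hot-plug vCPUs on the running guest, up to the domain's configured
# maximum vCPU count (--live applies to the running domain only).
virsh setvcpus vm-demo 4 --live

# Adjust guest memory through the balloon device, within the domain's
# configured <memory>/<maxMemory> bounds. Size is in KiB by default.
virsh setmem vm-demo 4194304 --live
```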
G: Yeah, we should put that in a more visible place, because this question is always coming up; you're not the first one asking. Yeah.
F: Okay, that's actually everything I wanted to ask. I have nothing to speak about now. All right, we can go ahead.
B: I just wanted to make sure we weren't still missing anything. All right then, looks like the guest kernel update topic.
A: Yes, sorry, evidently I muted myself again. Can you hear me okay?
A: Yeah, perfect. So I know that we're not really into confidential stuff yet, as far as, you know, confidential containers and whatnot through libvirt. I know that we've recently gotten started using, you know, legacy SEV and SEV-ES, but I just wanted to bring this to people's attention.
A: There have been a couple of guest kernel bugs, in 5.19 through 6.1, that were found. Not major bugs, but just surrounding the firmware error reporting, as well as a slight ABI breakage released in 6.1, and it's being worked on right now to be backported and fixed. So I just want to make sure that anyone who was playing around with it is aware of it, and that we are aware of it as well.
B: Thank you. Sorry, I'm slightly distracted getting a couple of the bugs listed real quick, since I don't have screen sharing today. Let's see.
A: Yeah, so it looks like the question is: do we have known KubeVirt limitations at scale for deployments documented anywhere? I don't believe so, but someone else can correct me if I'm wrong on that.
C: We have a few known large-scale deployments. NVIDIA's GeForce NOW platform is running KubeVirt at large scale, and CoreWeave is running, I think they said, like 5000 VMs, something like that. So we have some known ones; I don't think we have documentation anywhere, though.
A: And do you know if there have been any limitations that they have run into running at that scale, or has it seemed to work okay for them?
C: Both NVIDIA and CoreWeave have presentations about KubeVirt at the KubeVirt Summit, and maybe other conferences that I don't know about.
B: We also have the SIG-scale group. This might be a really good topic or item to bring up in their community meeting.
I: All right, thank you. Sure, if you see any of those recordings, I'm going to try and find them; please let me know in the chat or anything like that. Thank you.
B: And then maybe watch the announcement for KubeCon when that gets published. I can't remember off the top of my head if we have a hyperscale talk coming up, but I know we have some coming up that do have to do with scale, which might be helpful to the audiences that this matters to, of course.
F: I'd like to... can I? Yeah. I'm just happy to announce the new stable version of Deckhouse, our Kubernetes distribution, where we added the KubeVirt-based virtualization and implemented all our patches for live migration and Macvtap network binding.
B: All right then, jumping ahead to PRs. I went through those and didn't see any from the last week that were idle, so I logged that. We do have, I found, like three bugs that we might be able to open up and drop some comments on, just in case they are idle. I have not responded to any of them yet, but starting with the first one.
B: Do we have... trying to think if I'm getting this mixed up in my head with something not related. Okay, do we have any, like, global labels? I don't think so, because we're installing mostly directly from manifests.
C: So essentially, when they're deploying KubeVirt, there's a job that gets created where the install strategy is generated, and that's basically a ConfigMap, if I remember right. Then the ConfigMap is used to actually deploy KubeVirt, and during that job it's failing.
C: It's looking for some Prometheus namespaces, and there are basically two hard-coded ones: there's openshift-monitoring, which is obviously for OpenShift, and then there's monitoring, which is for vanilla Kubernetes. And this error looks like it just can't reach the control plane for some reason, and I don't know why.
B: Okay, maybe redundant; I haven't read the conversation in Slack yet. So, jumping to the next one, line 305: a check failing with licensing issues. This one, opened last week, is idle, so its tests are also failing.
C: The matching issue.
B: Why is my internet being slow today? It's not cool.
B: You're stuck with mine. Okay, then: so, why won't swords go out of style?
B: Then, in that case, I'm going to go ahead and dismiss the meeting. Going once, going twice.
B: Thank you all for your participation, and we look forward to seeing you, same time, same place, next week.