From YouTube: KubeVirt Community Meeting 2023-06-14
Meeting Notes: https://docs.google.com/document/d/1nE09vQWcCTW-9Ohe9oCldWrE0he-T_YFJ5D1xNzMtg4/
A
So welcome, everyone, to the KubeVirt community meeting of June 14th, 2023. First of all, there are a couple of links where you can join the community inside the document, the link I just posted.
F
This is Chris with Microsoft. We're using KubeVirt in a couple of places, so I thought I would come along and maybe bring up a couple of things.
A
Maybe you could introduce yourself, if you know who you are. Okay, so first of all we have the scheduled check-in of the v1 release. As a side note on that: the release should have been happening tonight, but unfortunately (I will get to this later) the release automation did not work well, so we are currently investigating that. Actually, we should have had the release already, but it just didn't work.
A
So then, let's go straight into the agenda. The first point in the notes is by Ready, and yeah, the stage is yours.
F
I think what we wanted to do is talk about what we see as a lifecycle issue between Kubernetes and KubeVirt. I don't know how you normally run your meetings, and this will be a little vague and a little hand-wavy, so do you want to push this towards the end of the call, or should I just go straight into it?
F
Right, all right. So we have a situation where, for a variety of reasons, we have our own CSI. What will happen is we'll have a KubeVirt VM, we hot plug a volume, it spins up a hot plug pod with a PVC involved, and it then connects from the hot plug pod across into the hypervisor. You've got a file descriptor link from the QEMU process, everything's good, and then you hot plug a second volume. In order to hot plug the second volume, it creates a second hot plug pod, but to do that it has to delete the first hot plug pod, which of course makes the PVC reference in the first hot plug pod go to zero, and we get a whole sequence of unpublish events that basically tears down the I/O for the first volume that was hot plugged. From a distance this looks like an incompatibility between the way Kubernetes lifecycles PVCs and the way that KubeVirt's references behave. I can think of a bunch of ways.
B
Hi Chris, could I clarify one thing? Since you mentioned a CSI of your own, is this a project that you're working on, or is this the hot plug that is native to KubeVirt, or more in the KubeVirt ecosystem?
B
Okay, and then I would just say the name Alex, in case he perks up, and let the discussion go from there.
C
So hot plugging disks in KubeVirt is actually sort of a hack. It's not really natively supported by Kubernetes at all.
C
That's where the whole hot plug attachment pod comes in. Essentially we're trying to work with Kubernetes to do the attachment to the node, then do some behind-the-scenes craziness to make the volume visible inside the virt-launcher pod, and then pass it off to QEMU to actually hot plug it into the VM. The original implementation actually had one pod per volume, so to avoid that, what we do is a flip-flop pod where, as you've already said, we delete the original pod and then create a new pod that references all the volumes that we have. We know that in some CSI drivers that will cause a bunch of unpublish issues; for the main ones that we use it's not really a problem, because the VM keeps a lock on the volume and the unpublish will just error.
C
Saying:
hey
I
can
actually
unpublish
this
and
then,
when
the
new
one
comes
up,
the
error
goes
away,
but
you
actually
get
an
error
because
you're
trying
to
unpublish
something
that
that
you
can't
some
Seaside
drivers
will
ignore
that
and
will
actually
just
disconnect
the
I
o
in
particular
some
iSCSI
based
ones.
Do
that
I
believe
Elysee
or
cecili
mentioned
yeah
Vasily
mentioned
that
it
sounds
similar
to
9263.
C
It does things in a slightly different order, which should stop the unpublish from happening. So maybe you could try the PR and see if it fixes it for you.
F
Ultimately, though, this needs to be robust to a situation where the hot plug pods go away, even though that wouldn't be normal. Without getting into too much detail of how we're using it, this needs to be pretty solid, so I'm just trying to figure out... I think what you described, in terms of it being a hack and not really fitting, was sort of our conclusion as well.
F
I was just trying to understand if there was an extra part of Kubernetes that I was unaware of which maybe helps manage or avoid the unpublish situation. Though it seems to me that the CSI requires it: even to delete the first hot plug pod, you need the CSI unpublish to succeed. That's correct, isn't it, Ready?
F
Yes, most of this is Ready's code, so that's what I'm asking him. When you say it will hold a lock on the volume: it's a virtual machine, so what do you mean? Do you mean that just holding the file descriptor open is sufficient for certain things to just fail? Because surely they would fail, in which case the first hot plug pod would be stuck in a terminating state, because the unpublish would then fail.
C
So the thing is, the attachment pod doesn't actually mount the volume into that pod. It just references the volume, which causes the attach to the node, but we're not mounting it into that pod. Then the behind-the-scenes stuff will mount it into the virt-launcher pod, and the virt-launcher holds the lock on it.
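As a rough illustration of what is being described, here is a minimal Go sketch (not KubeVirt's actual attachment pod code; the pod name, image, node name, and claim name are made up) of a pod that lists a PVC in spec.volumes, which is enough for Kubernetes to attach the volume to the node, while the container declares no volume mounts:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Hypothetical attachment pod: the PVC is referenced at the pod level,
	// so the volume gets attached to the chosen node, but no container ever
	// mounts it or uses it.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "hotplug-attachment-example"},
		Spec: corev1.PodSpec{
			// Pin to the node where the virt-launcher pod runs (made-up name).
			NodeName: "node-01",
			Containers: []corev1.Container{{
				Name:  "placeholder",
				Image: "registry.k8s.io/pause:3.9",
				// Intentionally no VolumeMounts: the volume is only referenced
				// at the pod level, never mounted into the container.
			}},
			Volumes: []corev1.Volume{{
				Name: "hotplugged-disk",
				VolumeSource: corev1.VolumeSource{
					PersistentVolumeClaim: &corev1.PersistentVolumeClaimVolumeSource{
						ClaimName: "my-hotplug-pvc",
					},
				},
			}},
		},
	}
	fmt.Printf("attachment pod sketch: %s references PVC %s without mounting it\n",
		pod.Name, pod.Spec.Volumes[0].PersistentVolumeClaim.ClaimName)
}
```

The point is only the shape: a volume referenced at the pod level, with nothing mounted into the container.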
D
We are not using 4.4, by the way. It's a version below that.
C
Oh, I forgot: I say "attach", but it's actually a different function name in the interface, I forget the exact name. There are two functions: one that essentially attaches the volume to the node, and then the second one is where the CSI driver goes and mounts the volume into the container. That second part is what we don't do for the attachment pod. We just do the attach to the node, and that's how we get around the problem of the pod getting stuck, because the volume is never actually mounted into the container that's in the pod.
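The two functions being paraphrased are most likely the node-side CSI calls, NodeStageVolume (prepare the volume on the node) and NodePublishVolume (the per-pod mount); the speaker says he does not remember the exact names, so treat that mapping as an assumption. A toy, self-contained sketch of the split, not taken from any real driver, with illustrative paths and commands:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

// nodeStage mounts the device once per node at a shared staging path.
// Real CSI drivers do this kind of work in NodeStageVolume.
func nodeStage(device, stagingPath string) error {
	if err := os.MkdirAll(stagingPath, 0o750); err != nil {
		return err
	}
	// Illustrative only: a real driver would inspect/format the filesystem first.
	if out, err := exec.Command("mount", device, stagingPath).CombinedOutput(); err != nil {
		return fmt.Errorf("stage mount failed: %v: %s", err, out)
	}
	return nil
}

// nodePublish bind-mounts the staged volume into a pod-specific target path.
// Real CSI drivers do this kind of work in NodePublishVolume.
func nodePublish(stagingPath, podTargetPath string) error {
	if err := os.MkdirAll(podTargetPath, 0o750); err != nil {
		return err
	}
	if out, err := exec.Command("mount", "--bind", stagingPath, podTargetPath).CombinedOutput(); err != nil {
		return fmt.Errorf("publish bind mount failed: %v: %s", err, out)
	}
	return nil
}

func main() {
	// Hypothetical paths, just to show the call order.
	_ = nodeStage("/dev/sdb", "/var/lib/example-csi/staging/vol-1")
	_ = nodePublish("/var/lib/example-csi/staging/vol-1", "/var/lib/kubelet/pods/POD_UID/volumes/vol-1")
}
```

Exactly which of these steps the attachment pod ends up triggering is driver and kubelet specific, as the discussion above makes clear.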
F
Maybe I can ask a slightly different question, or phrase it differently. It seems to me that the current way Kubernetes works, and probably will for a while, is sort of fundamentally incompatible at a lower level with the way that KubeVirt hot plug works. Obviously you've got it working in certain ways, and there's probably more that can be done to make it work in a variety of other ways, but it is effectively an incompatibility, and some extra work probably has to be done outside the normal CSI Kubernetes PVC lifecycle. Correct? Okay.
C
Correct,
ideally,
kubernetes
would
allow
you
to
dynamically,
attach
volumes
to
containers
and
it
does
not.
We've
tried
to
to
get
them
to
to
add
this
functionality
and-
and
it's
not
really,
there's
no
real
good
use
case
for
containers
and
they
sort
of
rejected
it.
C
So
we're
sort
of
stuck
with
this
hack,
maybe
more
people
you
know-
can
because
kubernetes
is
essential,
containers
right
and
we're
virtual
machines
that
are
doing
different
things
and
in
order
to
get
a
feature
into
kubernetes,
we
need
to
give
them
some
good
container
use
cases
for
this
feature
and
we
haven't
been
able
to
give
good
container
use
cases.
F
Yeah, I think in our particular case I can think of two ways of approaching this. The first one is to teach the CSI about what I've been calling hidden references: references where the QEMU process holds the file descriptor open, and it will defer tearing down the I/O until those processes go away.
F
A couple of seconds later, that would be fine, and if it was to move from one physical node to the next, then the QEMU process would go away and we could deal with that situation. That might be the easiest thing for us to do in the short term. The other thing I can think of is for us to have our volume attach infrastructure be somewhat generic and separate from Kubernetes, have the Kubernetes CSI that we've got consume from that, and then also teach KubeVirt directly how to consume it.
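As a rough illustration of the first idea, a driver that knows about hidden references could check whether any process still holds an open file descriptor on the device before honoring an unpublish. This is a simplified sketch, not production code; it assumes the check runs on the node with access to /proc, and the device path is hypothetical.

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strconv"
)

// pidsHoldingPath lists PIDs that currently hold an open file descriptor on
// target (for example, a QEMU process keeping a hot plugged block device busy).
func pidsHoldingPath(target string) ([]int, error) {
	procs, err := os.ReadDir("/proc")
	if err != nil {
		return nil, err
	}
	var pids []int
	for _, p := range procs {
		pid, err := strconv.Atoi(p.Name())
		if err != nil {
			continue // not a process directory
		}
		fdDir := filepath.Join("/proc", p.Name(), "fd")
		fds, err := os.ReadDir(fdDir)
		if err != nil {
			continue // process exited or permission denied
		}
		for _, fd := range fds {
			link, err := os.Readlink(filepath.Join(fdDir, fd.Name()))
			if err == nil && link == target {
				pids = append(pids, pid)
				break
			}
		}
	}
	return pids, nil
}

func main() {
	// Hypothetical device path for a hot plugged volume.
	pids, err := pidsHoldingPath("/dev/hotplug-disk-1")
	if err != nil {
		fmt.Println("scan failed:", err)
		return
	}
	if len(pids) > 0 {
		fmt.Println("deferring I/O teardown, still referenced by PIDs:", pids)
	} else {
		fmt.Println("no hidden references, safe to tear down")
	}
}
```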
C
So in general we don't like to do special things for KubeVirt that Kubernetes doesn't include, and the biggest divergence from that is obviously hot plug, just because a KubeVirt VM is just a process inside of a pod and pods are essentially immutable. So CPU and memory hot plug is currently not possible, network hot plug is currently not possible, and for disk hot plug we have to do this crazy hack, essentially, to make it work.
C
It's a problem. We've made progress with CPU and memory, and I think networking is getting there too, and we've sort of mostly got disk working, but there are definitely certain CSI drivers that don't like it at all, just because we're doing some crazy stuff. Oh, apparently networking is already working. But yes, we're doing crazy stuff that we really shouldn't be doing, but we have no other options.
G
Alexander, this is a little different than I thought. So when you're attaching a second disk via a new pod, the first pod gets deleted and then the second one is created? I thought the second one was created first and then the other one was deleted.
F
So if you were to hot plug one, then wait a few seconds and hot plug another relatively quickly, maybe you'd end up with two or three pods in that case, and that would be fine. But ultimately that still probably comes up short, because we have a situation where this needs to work reliably even if I delete the hot plug pods. So if I were to cordon and drain on the way down, it needs to sequence somewhat correctly.
F
Right, but we can modify the CSI to not tear the I/O away while there's a reference.
C
Right, and we've seen that with some iSCSI-based CSI drivers, and the PR I referenced changes the way we handle it slightly. That seems to fix it for the known CSI drivers that broke, so it might work for you too.
C
Let's see, the PR that fixes this is 9269.
C
It plays around with what we do. Is this a block volume or a file system volume?
C
I don't think it's been released yet. Let's see: it was merged three weeks ago, so it's probably not released yet. Actually, okay.
F
I could pull this out into our repo and test it that way as well. We'll try that.
D
I went through it earlier. From what I understand, it tries to have all the volumes mounted into the container, not just the one that was just attached.
F
As soon as the first hot plug pod goes away, it's relatively quick between the pod terminating... well, I guess the actual CSI unpublish occurs before the pod is technically terminated. I think it goes into a terminating pending state, the kubelet invokes the CSI, everything cleans up, and then the termination proceeds. So by the time the first pod is terminated, we've torn down the I/O through the unpublish mechanism.
F
It sounds to me like at this point we probably just need to teach the CSI how to be aware of references outside of the normal Kubernetes lifecycle. That's not particularly difficult; we know which sorts of processes to look for on the system, and then...
F
You won't get publish called, I think, on the second pod until the PVC reference goes to zero on the first pod.
F
So I think we could live with that. We also need to make ReadWriteMany work at some point; I'll have to think of that as a problem there.
F
I think for ReadWriteMany it's even easier, because then there will be fewer restrictions around it. So again, it would have that situation where we need that pod around or else the I/O breaks, but I can probably live with that for a little bit. Yeah.
C
No, because there are some CSI drivers that don't actually respect ReadWriteOnce on the node with multiple pods. That's why we went with delete first and then create, instead of create and then delete.
C
It would be better if Kubernetes actually supported adding volumes dynamically to containers.
F
But I do like the idea of, you know, if we can... I think the other change that would be needed, and Ready can correct me here: anecdotally, when we look at the new pods coming in, they only have the new volumes, not the ones that are already attached. So we would actually also need to make sure that the hot plug pods have a full complement of all volumes at all times. It should.
F
Okay, I'll have to look at that; maybe I misread it. Actually, I'll mention one other thing I've noticed, just slightly related, but not exactly this. When the hot plug pod dies, there's some sort of notification event that triggers a sync VMI, at which point it walks through the hypervisor and re-hot plugs all the existing volumes, and that glitches I/O: it causes the I/O to return EIO for a fraction of a second. So I was going to go through and prevent it from doing that.
F
But again, I wasn't sure if there was a better way.
F
We can do that, but yeah, I think it may be...
C
As I said, if you hot plug 10 volumes, then you have 10 pods and you eat 10 IP addresses, so we decided to go with a flip-flop type mechanism where we create a new pod that has all the volumes that are currently hot plugged. Or actually, we delete the pod first and then create the new one, because we saw a CSI driver not respecting ReadWriteOnce on the node for multiple pods.
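To make that ordering concrete, a bare-bones sketch of the flip-flop step using client-go might look like the following. This is not KubeVirt's actual controller code (the real controller also waits for the old pod to disappear and handles many more cases); the function and parameter names are illustrative.

```go
package hotplug

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// flipFlopAttachmentPod deletes the existing attachment pod first, then creates
// a replacement that references every currently hot plugged volume.
func flipFlopAttachmentPod(ctx context.Context, c kubernetes.Interface, ns, oldPodName string, newPod *corev1.Pod) error {
	// Delete the old attachment pod first (tolerating "already gone"), because
	// creating the new pod first tripped over CSI drivers that do not allow a
	// ReadWriteOnce volume to be referenced by two pods on the same node.
	if err := c.CoreV1().Pods(ns).Delete(ctx, oldPodName, metav1.DeleteOptions{}); err != nil && !apierrors.IsNotFound(err) {
		return err
	}
	// Then create the replacement pod listing all hot plugged volumes.
	_, err := c.CoreV1().Pods(ns).Create(ctx, newPod, metav1.CreateOptions{})
	return err
}
```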
C
All right, I have to look at that PR, because what that PR is essentially doing is changing the way that works. For, I want to say, file systems we decided not to do the mount, and for block we did, and that was causing a problem. So now we're changing it to not do the mount at all for all of them. I think that's the gist of the fix.
F
You
know
how
robust
that
is
going
forward
like
I
can
imagine,
given
that
you
have
volumes
in
the
Pod
but
not
mounted,
and
the
containers
it
might
be
possible.
The
future
version
of
kubernetes
decides
to
you,
know
clean
those
out
since
they're
not
referenced.
Is
there
a
guarantee
that
that
won't
happen.
G
I don't think there's a guarantee, but I think the Kubernetes maintainers are usually pretty good about maintaining behavior from the past.
F
No, they are, but they've certainly changed things before, and this would be one of those things that could go on for a month before anyone noticed. There would be very few people, I would think, creating a volume reference at the pod level but not at the container level and then expecting something to happen, because, other than situations like this, you would have no way of actually knowing what's going on from a normal container workload.
G
I would be very surprised, because your pod spec's volume section maps to certain CSI calls, and then it wouldn't be making those calls anymore.
F
Before we delete the old one... you mentioned that that's a problem for some people. Maybe; yes, I don't know, we haven't tried it yet. Certainly we could make that an environment variable or something to change that behavior, maybe.
C
We've discussed doing that, making it a flag or something to accommodate different CSI drivers, and changing the order in which we do this is, you know, not super difficult.
A
I don't want to step too hard on the brakes here, but I think at least we have a path forward regarding the PR fixing this behavior that was mentioned by Alex, and I would want to give the other people in the audience a little bit of time as well. So would you be able to close this up outside the meeting, or do you still want to continue? I'm just asking.
A
Also, by the way, I'm not exactly sure whether it is still maintained, because I never visited that one, but there is a dedicated KubeVirt SIG storage meeting, which is on our KubeVirt community calendar. Alex, I guess... is this still alive, or is it a thing every other Monday?
A
Yeah, it has a lower cadence than the weekly community meeting, so it's every two weeks, I think. So at least that could also be something where you could maybe continue discussing this somehow; I'm not sure. Certainly. Okay, so to go to the next agenda point: this is from Edward Haas, it's network binding as a plug-in. Yeah, Eddy, do you want to go next?
H
I will try to set up a meeting next week, and from there we'll have weekly sync points to see how it goes. If anyone has ideas on how to do it better, please add comments to the document, or start a discussion on Slack, or send me emails, whatever suits you best. Thanks.
A
Up
so
next
point
is
by
me:
there
was
an
attempt
yesterday
to
kick
off
the
release,
automation
which
failed
on
the
release
job,
so
I
created
an
issue
on
this.
We're
still
currently
in
investigating
this
one.
It
looks
like
there
is
something
wrong
with
the
container
manifests
that
have
that
are
being
pushed.
So
if
anyone
is
more
familiar
with
that
part
of
the
build
of
the
qubit
build,
we
really
appreciate
your
help.
A
So
yeah,
that's
for
me
so
and
also
in
to
take
to
replace
or
to
to
in
case
for
Andrew
burn.
He
just
wanted
to
mention
us
that
there
are
a
couple
of
days
to
apply
for
kubecon,
China
and
cubecon
North
America.
The
call
for
paper
will
close
on
June
18th,
so
that's
four
days
which
you
have
still.
If
you
want
to
put
something
in
there,
so
go
ahead.
A
Please
if
you,
if
you
are
interested
also
those
in
the
you
might
be
interested
in
the
SFS
count,
which
is
happening
to
to
be
to
be
starting
in
November,
10th
and
11th,
which
is
the
weekend
I
think
Friday
is
Saturday.
This
is
in
Bolzano
Italy,
because
of
papers
will
close
on
June
30th,
so
yeah.
That's
it
that's
it
for
that.
Next
one
would
be
edamor
holder
with
the
application
policy
for
Cupid.
Please
go
ahead,
and
tomorrow.
I
It
was
a
bit
abandoned
for
a
while
because
of
various
reasons,
but
now
I'm
in
fact,
working
on
it
I
think
it's
a
very
important
subject
and
I
think
it's
very
important
that
a
lot
of
representatives
from
this
repo
will
take
part
in
it,
because
that
it
would
affect
us
all.
So,
please,
if
you
want
to
have
a
look,
if
you
want
to
give
a
review,
it
would
be
very
appreciated
and
that's
it.
Thank
you.
A
Thank
you
tomorrow,
so
I
forgot
something
I,
just
I'm
just
going
to
sneak
that
in
right
away,
so
I've
been
creating
a
PR
in
case,
so
so
to
be
helpful
for
adding
some
documentation
on
we're
proud,
for
example,
stores
its
artifacts,
which
might
be
interesting
for
people
debugging.
The
keyboard
itself
and
I
would
really
appreciate
people
proofreading
this,
whether
that
makes
sense
or
whether
that
is
helpful.
A
So in case you just happen to want to look at some KubeVirt logs, for example, or the virt-operator logs, or whatever you need, then please take a look at this PR. It tries to clear up where those artifacts are actually stored inside Prow.
A
So
anyone
who
has
a
topic
he
wants
to
discuss
he
or
she
wants
to
discuss
they
want
to
discuss.
Please
go
ahead
now.
A
Right, so I just added that to the notes, so I hope that makes sense.
A
I'm going to try to share my screen real quick; I hope everyone can see this now. These are the requests, the recent ones.
A
Okay,
this
one
is
by
me.
We
don't
need
to
discuss
that
this
one's
also
by
me.
Sorry,
for
that
here,
for
example,
I
should
pull
up
right.
A
It has been commented on by Ryan from CloudStack, correct? And it's also lgtm'd, although without being approved yet. This one is also fine; I don't see anything that would be wrong with it.
A
Let's leave it for a while, and then let's try to look at it next time if it still doesn't get any attention. People have already been looking at that one, so that should be okay.
H
This one, for example: what I don't understand about it is how it is tested. I mean, I'm partly clueless on this.
H
I
think
we
don't
have
like
we
are
not
we
don't
know
if
this
works.
A
Because we don't have a test environment that actually simulates it, or that actually gives it room for being tested on a real environment.
A
Yeah, that's a good point, but I think from the community perspective we could just say we would need someone who would provide a test environment, if they want to invest in that, right? So I think if people from the community are really interested in that and would want to invest their time into it, and also some money, they could provide the testing environment for actually checking that. At the moment I don't see that we can somehow create an environment where this would be testable.
H
Another option is to do this somehow externally and not internally, like providing means for someone from outside to add these changes without touching the base code. I don't know if that's possible, maybe with environment variables or stuff like that.
A
Okay, then let's take a quick round of bug scrub.
A
Folks from storage, do we need anything else here? In the description I see something like the CDI version, some kubectl output for the PV, and some additional context regarding the logs. Is that sufficient, or would you want to check whether there is still something missing and continue from there?
A
So let's leave it at that. I think we are nearly at the top of the hour, so I would probably close out now. Does anyone have anything to share that the community needs to know, some famous last words, or something like that?
A
Okay,
then
thanks.
Everyone
for
your
attendance,
have
a
nice
rest
of
your
day
wherever
you
are
and
have
a
great
week
and
see
you
next
week
in
the
next
community
meeting
thanks
everyone.