From YouTube: KubeVirt Community Meeting 2021-11-17
Meeting Notes: https://docs.google.com/document/d/1kyhpWlEPzZtQJSjJlAqhPcn3t0Mt_o0amhpuNPGs1Ls/edit#heading=h.pozqokb2ojl
A: Hello everyone, this is the KubeVirt community meeting, where users and developers get a chance to talk about what's been going on, talk about new features and bugs, and how we're using KubeVirt. I'm going to post the meeting notes out to chat; feel free to open up that document and add your attendance.
A: Okay, we usually give a few minutes at the beginning for new members to introduce themselves. Do we have anybody new this week who would like to say hello?
A: Okay, Ryan plugged a couple of bullet points into the agenda and said he was going to be a few minutes late. Ryan, are you here?
C: Yeah, I'm here, I made it. All right, sure, thanks. Okay, so the first topic I wanted to talk about was a thread I started on the mailing list.
C: It's about KubeVirt's cloud-init and adding a field to the metadata. What I outlined in the thread is basically the current metadata that KubeVirt offers as part of cloud-init, and I was wondering how it came to be, what the current metadata is, and whether there's a possibility to extend it. One of the fields that I was initially interested in adding was instance type.
C: Now, that's not currently there, and it's one that could be used for flavors or whatever, maybe hiding infrastructure details, and passing through the kind of metadata that a lot of other cloud providers use. So I was wondering about thoughts, kind of in general, on this topic.
D: Yeah, hey. The metadata field can certainly be expanded. I'm not super familiar with the instance type value there and its consistency across other clouds or other infrastructure-as-a-service platforms.
D: You mentioned flavors; do you think that's the primary thing it would be used for, or did you think of anything else?
C: Yeah, that's what I would expect. I kind of pulled it from AWS; I looked at the metadata that they have for EC2, and that was one of the ones they offered. But yeah, that's what I would expect it to be used for.
C: Okay, all right. I guess I'll follow up with maybe an issue or an enhancement.
D: Okay. And so if somebody doesn't use a flavor, for example, and we're talking about the flavor API, if they don't use it, then I guess we just don't provide that value. Would that be it?
C: Yeah, that's what I would expect. I don't think it's necessarily something that people always need to use. If there isn't a flavor API, maybe it's something we can add as an enhancement; if there is a flavor API, or some sort of custom thing, we can leverage that.
E: Sounds reasonable. In OpenStack, for example, this instance type represents something very specific: it's a well-defined flavor of that instance type. So I wonder how we're going to shape this here for people who are not using flavors; I'm not sure how they will know what that translates to.
D: To take a step back for a second, Ryan: what would that metadata be used for? How would it be helpful to have that within the guest? What would you use it for?
C: So, one of the things is that maybe we extend flavors to mean a certain thing in the cloud-init scripts. If a flavor means something to the guest during startup, then when they see that kind of flavor they need to do some flavor-specific things, and they would do that as part of the cloud-init.
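To make the discussion concrete, the metadata shape being talked about can be sketched roughly as below. The `instance-type` key is the hypothetical addition under discussion, not an existing KubeVirt field, and the exact key names are illustrative only:

```python
import json

def build_cloudinit_metadata(instance_id, hostname, instance_type=None):
    """Sketch of a NoCloud-style cloud-init metadata document.

    The instance-type key is the hypothetical extension discussed in
    this meeting; it is illustrative, not KubeVirt's actual schema.
    """
    meta = {
        "instance-id": instance_id,
        "local-hostname": hostname,
    }
    if instance_type is not None:
        # When no flavor is in use, the key is simply omitted, matching
        # the "just don't provide that value" behaviour discussed above.
        meta["instance-type"] = instance_type
    return json.dumps(meta, indent=2)

print(build_cloudinit_metadata("i-abc123", "my-vm", instance_type="m1.small"))
```

A guest's cloud-init script could then branch on the presence or value of that key during startup, as described above.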
C: Yeah, thank you. Okay, so the second one; I call it shared CPU QoS. This is an issue I created. I guess the easiest way to explain it is: KubeVirt supports vCPUs, and if we're sharing a physical CPU, when we slice up that CPU and allocate those slices to a VM—
C: There are still shared parts of the CPU between the VM that has a slice, or I guess all the different VMs that have those slices. For example, the L3 cache, the lowest-level cache, would be shared, and other things. There are ways that we can, I guess, regulate how those resources are used, and Linux has these resctrl control groups.
C: That would allow us to do this, and this is exposed in libvirt now, and we can actually control access to that cache, and memory bandwidth, and so on. The use case for this is when we have high-performance or performance-sensitive workloads that need to be on a node.
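As a concrete illustration of the Linux resctrl mechanism mentioned here: cache allocation is configured by writing a schemata line into a resctrl group. The `L3:<domain>=<mask>` line format is the real resctrl L3 cache-allocation syntax; the helper below is a hypothetical sketch, not code that KubeVirt or libvirt contains:

```python
def l3_schemata_line(ways_mask_by_cache_domain):
    """Build the L3 cache-allocation line that would be written to
    /sys/fs/resctrl/<group>/schemata, e.g. "L3:0=ff;1=f0".

    Each entry maps a cache domain (roughly, a socket) to a bitmask of
    the L3 cache ways the group may use. Illustrative sketch only.
    """
    parts = ";".join(
        f"{domain}={mask:x}"
        for domain, mask in sorted(ways_mask_by_cache_domain.items())
    )
    return f"L3:{parts}"

# Give the group four ways on domain 0 and two ways on domain 1.
print(l3_schemata_line({0: 0xF0, 1: 0xC0}))  # L3:0=f0;1=c0
```

Restricting a VM's resctrl group to a subset of ways is what keeps a noisy neighbor from evicting a performance-sensitive workload's L3 cache lines.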
C: We want to make sure that when we're allocating those CPU slices, they're not going to be interrupted in any way, or if we're packing a node with a lot of workloads, we want to make sure that we're not impacting performance at all for those workloads.
C: What do people think about this as a topic? Actually, before I open it up: my assumption here is that the ask for this issue is to expose this as an API, via the VMI API, just like the way that we now expose vCPUs.
C: We would essentially expose these knobs on that API so that they can be controlled, but this wouldn't have anything to do with the scheduling of these resources; the kube-scheduler would still be in control of that, or the CPU manager or whatever would still handle it. It's just exposing them, wiring it up to the virt-launcher pod, and then wiring it all the way down to libvirt and actually to the guest.
E: Ryan, just to take a step back: until now, libvirt was used in a regular environment like, for example, OpenStack or RHV, and libvirt was acting as a node manager. It was creating a cgroup for each of these individual VMs, these resources were controlled by libvirt across the node, and it was easy for libvirt to coordinate—
E: —these kinds of resource allocations to these VMs. In our case this is very different, because the cgroup that is created for our VM is created by kubelet, and the resources that are allocated—
E: Essentially, they are allocated by Kubernetes, by the CPU manager, and from there what we can do in our environment, in virt-launcher, is take the resources already allocated for us and pin, or not pin, the virtual CPUs to those already-allocated resources.
C: Kubernetes doesn't expose this currently, but what I'm saying is: if we were to imagine that it did, then just like the way that Kubernetes exposes the ability to assign vCPUs, KubeVirt needed to come along and also wire this up to the VirtualMachineInstance APIs so that the guests can actually use it, right?
E: Unfortunately not. So, are you familiar with the dedicated CPU placement concept in KubeVirt?
E: When the user requests dedicated CPUs to be assigned for their VM, what KubeVirt will do is force the pod to request guaranteed QoS in Kubernetes, and then this will force the CPU manager to allocate dedicated resources for that VM. Then virt-launcher will essentially wire the virtual CPUs to the physical CPUs that have already been allocated. This is the only way for us to control which CPUs we get.
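The chain described here, dedicated CPUs implying a Guaranteed-QoS pod, can be sketched as follows. The Guaranteed rule (every container sets CPU and memory with requests equal to limits) is standard Kubernetes behavior; the helper itself is illustrative, not kubelet code:

```python
def is_guaranteed_qos(containers):
    """Return True if a pod built from these containers would get the
    Guaranteed QoS class: every container must set both cpu and memory,
    with requests equal to limits. Illustrative sketch of the rule.
    """
    for c in containers:
        req, lim = c.get("requests", {}), c.get("limits", {})
        for resource in ("cpu", "memory"):
            if resource not in req or resource not in lim:
                return False
            if req[resource] != lim[resource]:
                return False
    return True

# Roughly what KubeVirt arranges for the compute container when
# dedicated CPU placement is requested:
compute = {"requests": {"cpu": "4", "memory": "8Gi"},
           "limits":   {"cpu": "4", "memory": "8Gi"}}
print(is_guaranteed_qos([compute]))  # True
```

Only once the pod is Guaranteed (with integer CPU requests) will the static CPU manager policy give it exclusive CPUs for virt-launcher to pin against.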
E: If we expose the knobs that you've mentioned, people will be able to request, or somehow pin, their virtual CPUs to CPUs that we don't have in the pod, CPUs that the Kubernetes CPU manager didn't actually allow us to see in the pod.
C: —how that would be useful. My expectation, the assumption I'm making with this request, is that the CPU manager, or any solution like it, something that plugs into the scheduler, would be able to expose the CPUs in the way that we ask for them, the way that we need them, to actually use these knobs.
E: Yeah, so I think the first task is to extend the CPU manager in Kubernetes, and only then we'll try to find a solution for how to use that in KubeVirt.
C: Yeah, but what I would also argue is that extending the CPU manager isn't necessarily a blocker here. What if we use a custom solution that's not using the CPU manager? You could still have this feature in KubeVirt even without the CPU manager; for instance, you don't have to use the CPU manager today.
C: If you want to leverage attaching vCPUs to VMIs from KubeVirt, you could use—
C: I see, okay. It's already hard, okay, yeah. Okay. I mean, internally we don't use the CPU manager. But even if you do use the CPU manager, I still think—so, Vladik, is your concern that if KubeVirt enables this and the CPU manager does not have it, then we're just going to cause problems that way? That's your concern?
E: Absolutely, yeah. And also, users may express topologies and pinning requirements for CPUs that are not in their cgroup, which just will not work.
E: Also, about these requirements: the libvirt API is very specific; you can say which vCPU you want to be pinned to which physical CPU. And we also don't do that.
E: We do calculations based on what has been presented to us, and there's a certain algorithm that we use to basically arrange the virtual CPUs across the physical CPUs that have been allocated for us. We don't use the specific numbering that the libvirt API suggests, because otherwise it's just not possible; it's not scalable to specify a specific CPU number.
C: Yeah, I mean, I guess where I'm at: as I said, my assumption is that we could do this without the CPU manager. Okay, I guess for now, if that's a requirement in KubeVirt, then—
E: I think one way forward is, for example, to maybe use hooks; if you use such a hook, you would be able to do it without using the CPU manager.
E: Okay, I'm just saying that I think what we need to do is suggest how to modify the CPU manager in a way that it would be able to accept other policies. I know that there were a lot of discussions inside NVIDIA, for example; Kevin, I think, participated in lots of different designs on how to make the CPU manager interact with the scheduler better, represent NUMA, and so on. So I think that effort, for me, is the best path forward.
C: All right, I guess so then. Yeah, thanks. I guess for now we can leave it; let me see what we can do on the CPU manager side. And Vladik, do you mind adding your comments to this issue, just so that—
C: Yeah, just so that we have it on record and can reference it and so on. Okay, thanks.
A: Okay, next topic: AMD SEV.
F: Well, as background, the SEV technology basically allows the encryption of memory for virtual machines at runtime, and this PR tries to enable this functionality in KubeVirt. Basically, there are two steps to this technology: one is launching the VMs with encrypted memory, and the other step is attestation, which allows the end user to verify that the system is running on a genuine AMD platform with encryption, etc.
F: So this PR actually focuses on implementing the launching of SEV VMs. It just adds some fields to the VM spec, which map up through the virt API, and basically, in the end, allows running them. There were several comments on this PR, and apparently there was one concern that without attestation it may not be very beneficial to introduce this now, so I just wanted to discuss how to go further with it.
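For context, the fields being added amount to a launch-security knob on the VMI spec. The exact field names below are an assumption about the PR's shape at the time of this meeting, not a confirmed API:

```python
def vmi_with_sev(name):
    """Sketch of a VirtualMachineInstance manifest with an SEV
    launch-security section. The field names under launchSecurity are
    assumptions about the PR's shape, not a confirmed KubeVirt API.
    """
    return {
        "apiVersion": "kubevirt.io/v1",
        "kind": "VirtualMachineInstance",
        "metadata": {"name": name},
        "spec": {
            "domain": {
                # Launching with encrypted memory only; attestation
                # would be a later, separate step as discussed above.
                "launchSecurity": {"sev": {}},
                "devices": {},
            }
        },
    }

manifest = vmi_with_sev("sev-vm")
print(manifest["spec"]["domain"]["launchSecurity"])
```

Keeping the section an empty object leaves room to add attestation-related parameters incrementally later, which matches the step-by-step plan discussed next.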
F: With this pull request—let's say attestation is a bit of a complex topic, how to do it, and it's still very much work in progress. Attestation is an interactive process between the user and, actually, QEMU, and with KubeVirt the tricky part is that KubeVirt doesn't talk to QEMU directly; it talks through libvirt, and—
F: Now, there are some missing APIs in libvirt, and that basically doesn't allow implementing the attestation step completely. So I just wanted to discuss, to maybe get some opinions: does it make sense to introduce this functionality gradually, step by step, starting with this PR, for example, which just allows launching the VMs without attestation, and then, when the APIs in libvirt become available, introducing the attestation process for those VMs?
E: I think, from my point of view, it makes sense to gradually, incrementally introduce these changes in a way that will allow us to later introduce the attestation part, and I think the current API that you presented makes complete sense.
F: So basically, it's fine to have just the basic support of SEV, and then we can handle attestation. I already thought about the attestation process a bit; what I was thinking is that basically the attestation can be offloaded to some external service, but KubeVirt needs to provide some APIs to interact with the running VM. So I started populating a design proposal; it's still in kind of an early state, but it's also there. Yeah, so basically, yeah.
E: I think going forward we'll need to consult with David Gilbert; at least, he's the one who's more familiar with the advances in attestation and all of that area. My initial concern was that the API was too specific, but the way it is right now, for me, it makes sense.
F: As of now, to be honest, the issue here is that the technology itself is still work in progress, I would say, but the basic APIs for launching, for example, SEV VMs are in place, and currently it's possible to enable that in KubeVirt.
F: Basically, that's what I wanted to discuss for now. If it's fine, I will continue working on this and maybe bring it into good shape so it can be reviewed and merged after that.
A: I see, though, that the last note on here is from 23 days ago, for a rebase.
A: Okay, thank you. And Michael, with ARM?
I: This is Michael. I was working on the proposal for enabling [unclear] on KubeVirt recently, and the original plan was to submit a design document by the end of this month. But recently our management team adjusted some ongoing work, so the plan for the supporting proposal was deprioritized; my investigation and testing on this topic will be suspended, and it may not be resumed in the very near future.
A: Oh, very sorry to hear about the deprioritization, Michael. Are you going to be sticking with the project, or are you moving on to other work?
A: Also, sorry to be losing you, and hopefully you come back to us.
A: Your contributions have been very valuable and it's been really nice working with you. Thanks.
J: All right, one second. Not a problem. Hi, okay. So it's a pretty straightforward question: we have today the ability to do a hot plug—
E: Sorry, my take on this is that it would depend on several things. First of all, not all host devices can be easily unplugged.
E: Some accelerators can be unplugged without any interruptions, but I'm not entirely sure that all of them can be pluggable. Although we can—
E: There are probably ways around that, and probably ways to express on the API whether a device is pluggable or unpluggable. Let's discuss it further; I mean, I'm not sure right now.
J: Okay, so it's not an immediate requirement, so there's no point in discussing it; we could wait until someone asks for it, I guess.
K: I have a question: we support hot plug and hot unplug for disks. Is that something that can be reused by this PR?
A: Yeah, and let's get some comments into this pull request, one way or another.
J: That pull request is just generalizing things in the code, giving it the option to do it with other devices. It's not implementing it; it just takes it from being specific to SR-IOV and makes it available if someone wants or needs to add it for the others.
A: Sure, and it's been many days since—yeah, nobody's even commented on your pull request. So, no.
J: Yeah, the topic was not exactly this request. The topic was: do we need hotplug for all other host devices, or part of them? And the answer at the moment is "maybe", and there's no specific need that was requested by anyone for this, so we can put it on standby, I guess.
A: Oh, I just jotted down a note saying: no specific need at this time; functionality can be placed on hold.
A: Okay, Itamar, you're next.
L: Okay, so I want to share an experience I had and maybe raise a discussion about it. I was dealing with a task, not related to refactoring or anything, and during this task I stumbled upon the renderLaunchManifest function, which is a very, very long function, something like 2,000 lines of code, which is very crucial to our flow and basically converts a VMI manifest into a virt-launcher pod.
L: So I wondered how this function got to be so long and so messy. What I wanted to do was stop what I was doing and open a PR that would simply split this long function into sub-functions, so it would just be initial refactoring work. The idea was to start the refactoring work and encourage others to also make more incremental changes until the code looks better in practice.
L: The experience was very different. Because I moved a lot of code around, if you look at the diff on the PR it seems like I removed a lot of code and added a lot of code, so I got a lot of feedback on code I didn't even write. After a lot of feedback cycles like that, I said: okay, I want feedback specifically on my change, which is the code that I wrote.
L: That is, the splitting into sub-functions, and that's it. But the PR didn't really converge; basically, more and more people kept giving feedback about how we could take it a little bit further and make the code a little bit better. And while I agree that we have more work to do, I asked myself why, specifically when we're talking about refactoring, we don't really use incremental changes.
L: Instead, we have an expectation of the refactoring being perfect, being, you know, 100%. And my question is, I guess—let's be honest: refactoring is not that fun. It's not that impressive a thing to put on your resume.
L: So what I wonder is: what would encourage people to stop their work and do refactoring when this is the case? And we have proof that this is a problem, since this huge and crucial function hadn't been refactored for a long, long time. So yeah, I just wanted to raise a discussion about it. A general discussion.
M: So maybe just to give some background on why I need the split: that function gives interesting information about how the VM will be scheduled. So if we want to add some kind of functionality, or an additional pod that needs to have the same—
M: —schedulability as the VM. Yeah, basically, all of this kind of information is in that function.
L: So, to sum up this discussion, I think that basically what David is saying is that this refactoring work is not complete, and I agree, but I think we should merge it anyway, because we should work in incremental changes and encourage people to do refactoring work without it having to last for months and be absolutely perfect.
E: I think one thing that may help is just to schedule some time with the people who are mainly objecting and try to go over it together.
J: Sure, I wanted to add one thing. Let's start with the fact that there are people who do like to refactor, so you have at least one here. And the fact that the whole refactoring is not done on a regular basis—
J: —is a problem, because then we end up with code that is very complicated: the previous developer added to a specific place and didn't do the refactoring, so the next one may decide not to do it as well.
L: My point is that the cure for this is making small incremental changes on a regular basis, without the expectation of every refactoring work being perfect. It's okay to just do initial work and agree that it gets us to a better situation while there is still a lot of work to do. But the reality is just that nobody touched this function for a really long time, and this is a crucial function which is at the heart of our flow. So yeah.
L: No, actually, I just saw this while I was working on something completely different, and I said: oh my god, a 2,000-line function, let's split it apart. That's it; no other motivation.
M: No, because I really need this kind of refactoring, because I would like to create a new pod that has the same resources and disks as the virt-launcher pod, and this kind of information is only in that function.
L: Yeah, that's the greatest example: if it had already been refactored, I guess your work would be a lot easier.
L: Yeah, I agree it provides more motivation, but I think that basically, for code to be maintainable, you should be able to look at a function and instantly know what it's doing and what stages it goes through. If you see a huge, messy function—yeah, splitting.
M: In that function, for example, we have a boolean just for, I think, the attachment part or another part. I think it's done for the "wait for first consumer" case, and it's very hard to extrapolate the schedulability, that kind of information. I think that's the reason for that boolean.
L: I got a lot of reviews; maybe 10 different reviewers saw this. But my point is that, seeing a refactoring work, it's very easy to say: oh, this could be a little bit more beautiful, we should do extra work to make it a hundred percent good. But this PR didn't intend to refactor this file to be 100% perfect; it just intended to do incremental work.
N: This is a PR that I actually reviewed, and I think I actually gave you lots of pain, asking for more stuff continuously. But one of the issues here is that you took something that is, I agree, a huge, huge, huge mess, and you split it into—
N: While I understand what you're saying, that you just want to split this into tinier pieces, it's hard to get a full understanding of what the pieces are, and that's why I started with the things I started with. But at least I think we've reached an understanding; you've explained it to me, and that's fine; I'm happy with the state you left it in. But knowing the overall direction—like you say, this is the first piece to achieving something a lot better.
L: Okay, so let me reply to you and to what Vladik said earlier. I think that Vladik is right, and a function being big is not a rule that it must always be broken apart, but I do think that every unit of code should be as cohesive as possible, meaning that if you look at a function, you should know what it's doing; it needs to do a specific task and do it well.
L: We have one function to render all aspects of the pod creation, so what I said was: okay, let's break it apart into different rendering aspects. First we're rendering the volumes here, and seeing which volumes we should create; okay, then we're dealing with resources; and so on. And I thought: okay, let's split it up into cohesive functions, so that this way we can see the steps that the function takes in order to complete its goal.
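The split being described, one thin orchestrator calling cohesive per-aspect renderers, looks roughly like this. The function and field names are illustrative, not KubeVirt's actual code:

```python
def render_volumes(vmi):
    # One cohesive aspect: decide which pod volumes the VMI needs.
    return [{"name": v} for v in vmi.get("volumes", [])]

def render_resources(vmi):
    # Another aspect: translate VMI CPU/memory into pod resources.
    return {"requests": vmi.get("resources", {})}

def render_launch_manifest(vmi):
    """After the split: a short orchestrator whose body reads like the
    stages discussed above, instead of 2,000 inline lines."""
    return {
        "volumes": render_volumes(vmi),
        "resources": render_resources(vmi),
        # ...further aspects (networks, node placement, ...) would each
        # get their own cohesive sub-function.
    }

pod = render_launch_manifest({"volumes": ["disk0"], "resources": {"cpu": "2"}})
print(sorted(pod))  # ['resources', 'volumes']
```

Each sub-function then has a single responsibility, which also makes information like schedulability easier to extract and reuse, as mentioned above.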
J: Second of all, the fact that maybe 10 people contributed to the review is also a very good thing, because then 10 people care about it. Now, the problem is that maybe those 10 people don't agree with each other, or it's very hard to continue with the PR because you have 10 opinions, but this is the nature of our work in open source. There is no way to get out of this; the only way to resolve it—
J: —I guess, in a quicker way, is just to set up meetings and try to explain yourself, and maybe get the other side's opinion; maybe you'll understand it better. At least this worked for me in some similar cases, because sometimes what you understand is not what the other side understands, and if you talk with them, you'll just understand it, and that's it.
L: Right, so I fully agree. I would like to hear a lot of opinions; it's very good that I had a lot of reviewers, and that's all great, and if it's not good enough, then I would like to hear about it. But my point is: what should the expectation be? This is my question: should the expectation be that if you refactor something, you should refactor it until it's 100 percent—
L: —the best code that could be there? Or can we say: okay, you can refactor something without completing all the refactoring work, and if this PR gets us to a better state than now, let's merge it right away and then do further PRs for further work. I don't think that what's blocking this PR is people not agreeing with me on stuff; it's that people look at all the code that changed here, which is a lot of code that I didn't write.
L: I know all of it, more or less, but they're looking at the code and saying: is it perfect? Can I say something that will make it better? But again, it's not the code that I've written. All I want to do is make things one step better. That's it.
J: Yeah, so that's the only challenge that you have in this case: to explain what your goal is in a way that makes sense, which sometimes is hard. For example, one way to explain it is: I want to create some basic tests that will show how things are working in a readable way. That's just an example, but you could say: my main goal is to decouple some code out of a specific package, stuff like that.
L: So that's what I did, right there. Oh, no problem: this is initial work that breaks apart some of the logic, but not all; follow-up PRs can continue until we have a clear and clean function. That's the intention: to create initial work that gets us one step better. That's all.
E: The question is: did you reach that goal? That's what I think people are confused about. If people are confused about this, then it's better to take them aside, set up a meeting, and just explain what you did and how you think the goal has been reached.
L: Right. I think this conversation would have been more relevant if David and Roman were here, but—
N: I'd point out that you have an approve and an LGTM here. At least—yeah, I understand that, but at least you convinced two people that this PR achieves your goal.
A: Feel free to ping me as well to set up a Zoom call for you guys to talk specifically about this issue.
A: That's why it's here, and it's a lot easier to go talk to people in person than to go back and forth through email or GitHub comments. Yeah, you're right, so thank you for that. You're welcome. It's 8 a.m. for me, but it's the top of the hour for everybody.
A: So I'm going to say that this will be the last topic, and the rest of the items we have here in the agenda will have to be addressed next week or via the mailing list. Sorry I didn't return any minutes to you guys; I actually took one extra minute. So I'm going to close out the meeting now. Thank you, everybody, for joining, and we will see you next week.