From YouTube: SIG Cluster Lifecycle 2021-07-27
A
Okay, do we have any new meeting participants who wish to introduce themselves to the group?
B
Oh hi, everyone, this is Shivani. I'm from VMware, and I've recently started working on cluster lifecycle, and I think this is the first time I'm joining this Cluster Lifecycle meeting. Nice to meet you all. Thank you. Justin, hi.
C
Hi, I guess I'll go. I'm not new to this group, but I'm new to this meeting. I'm Cecile, I work at Microsoft, and I'm one of the maintainers of Cluster API and Cluster API Provider Azure, as well as Image Builder, so I'm pretty involved in SIG Cluster Lifecycle, and I'm glad that you changed the meeting time so now I can attend this meeting.
A
Potentially, SIG CLI will start discussions with SIG Release about extracting kubectl from kubernetes/kubernetes, which means that kubectl may no longer be versioned with the Kubernetes release version, which you might assume it is; at least, that was my context on the topic right now. Which means that kubectl might get versioned something like 1.0 in the next cycle, and also an important aspect is that the kubectl release cycle will no longer be coupled with the Kubernetes release cycle.
A
So if you have some foresight in mind, you will see that in the future, Kubernetes will be more of a distribution of components and less of a release that bundles a bunch of components that have the same version.
A
I think the direction from SIG Architecture, and SIG Release of course, is that we want to version components independently and release them independently, and this will flow into kubeadm as well. It means that we may have a release of Kubernetes that is something like, I don't know, 1.25, which includes binaries that are not versioned 1.25 and have a different version. This topic also affects, for instance, proposals in Cluster API, like the ClusterClass.
A
Right now, if you have a Kubernetes version, you have to consume a Kubernetes version, and based on this version you have to go and, I don't know, hardcode a manifest that includes what versions map to these Kubernetes versions. For instance, a kube-controller-manager in the distant future might not be 1.25, it could be something else, and it really becomes an interesting topic: how do we keep track of what components map to a particular Kubernetes version? How do we skew them? How do we support the whole thing?
A
There's no KEP; what I wrote was a bit of a presentation-slash-rant document, trying to grab people's attention on the potential problems that we will introduce with this mismatched versioning. But there is, okay, there's a KEP for the kubectl extraction, which I don't think covers the potential future problems that will arise; it basically says that we are going to move kubectl out of staging.
A
They said, hey, you shouldn't do that; you should decouple the release cycle, or couple it, but the version doesn't have to be the same as Kubernetes. And I basically think that, with the way this project is maintained, and I'm talking about Kubernetes as a whole...
E
Specifically for kubectl, I didn't see a place to raise my hand, I don't know if I jumped in, but specifically for kubectl: in the past, kubectl versions have had windows of supported Kubernetes versions. If what they're actually saying is that they will get rid of those windows, that would actually be sort of nice, right? I think everyone would like a kubectl version that was able to work with older clusters and newer clusters; that would be quite a good thing.
E
But yes, I certainly share your concerns around the broader picture and how our users are going to make sense of 20 different version streams. And, you know, Daniel and I have thought about this in terms of etcdadm, and we basically tried to align, well, we've talked about maybe aligning versions to the Kubernetes versions, I think, so we certainly talked about that. And kops aligns versions to the Kubernetes versions, but at the same time we also do Kubernetes version support.
A
I mean, with the level of discipline, and I'm not saying that kubectl is not a disciplined project, but with the level of discipline that we have, it's probably going to be a delayed release, or I don't know. But basically, this is the current skew, and I assume that there is going to be some sort of skew, like an implied one, you know, per kubectl release, of course. But just the numbers, I think just the numbers are also going to confuse people; there has to be some sort of way to consume a machine-readable support matrix for interconnecting component versions.
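(A rough illustration of the machine-readable support matrix being asked for here; this is only a sketch, and the type, component names and version numbers below are hypothetical rather than any existing Kubernetes artifact.)

```go
package main

import "fmt"

// ComponentVersions is a hypothetical, machine-readable record of which
// independently versioned components have been verified together for one
// "Kubernetes distribution" release. The field names, component list and
// version numbers are illustrative only.
type ComponentVersions struct {
	KubernetesRelease string            // distribution version, e.g. "1.25"
	Components        map[string]string // component name -> verified version
}

func main() {
	matrix := []ComponentVersions{
		{
			KubernetesRelease: "1.25",
			Components: map[string]string{
				"kube-apiserver":          "1.25.0", // still release-versioned
				"kube-controller-manager": "1.25.0",
				"kubectl":                 "1.2.0", // hypothetical post-extraction version
				"etcd":                    "3.5.4",
				"coredns":                 "1.9.3",
			},
		},
	}

	// A consumer (kubeadm, kops, Cluster API, ...) would look up the bundle
	// for the distribution version it was asked to install.
	for _, bundle := range matrix {
		fmt.Printf("Kubernetes %s ships kubectl %s and etcd %s\n",
			bundle.KubernetesRelease,
			bundle.Components["kubectl"],
			bundle.Components["etcd"])
	}
}
```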
F
Thanks. Maybe I missed it, but what is the problem that this is aiming to solve? Is that written down anywhere?
A
I think it's mostly tribal knowledge. The original problem was that there's a lot of stuff in kubernetes/kubernetes, a lot of stuff in a single repository, where pull requests and issue trackers are difficult to manage with so much stuff in the same repository.
A
Basically, the original idea was to create the staging concept. Staging means having a bot so that, when you push something to a particular folder, the bot takes it and creates a repository out of it; it basically distributes code out of this huge repository into new, small repositories. That's the whole staging process that we have. If you look at the kubernetes/kubernetes staging directory we have, that's the publishing bot here; sorry, that's the publishing rules for the bot, but here we have, basically, these are the separate repositories.
A
You might have seen them already; here we have API machinery, client-go is here, and also, I think, we should have kubectl, right, but kubelet is confusingly only the kubelet config. But if you go to kubectl, the idea is to stop managing the staging process for kubectl and completely separate it, so that we no longer have to accept commits in k/k. That is the main topic, and now we have the already existing... sorry.
A
That is basically the problem: the coupling, splitting the monolith. That's what the project wants to do, but it's tribal knowledge; I don't know if there was ever a KEP titled "split the k/k monolith" or something like that.
F
So we want to be able to split this into separate repositories, or separate issues, separate everything, but if we do that, then releasing them all on the same version... so, anyway, that sounds like a problem, but is that the right problem? Sorry.
A
Yeah, exactly. Basically, once we start doing that, with kubectl being the first component, we start wondering: what is a Kubernetes release? Is it just a number that bundles things? Is it like a particular Ubuntu version that bundles a bunch of components and, you know, a particular version of GNOME, something like that? How do we...
A
What is the Kubernetes release at that point? Some projects, which are, I'd say, much bigger communities, have strictly decided that every single module will be versioned the same way, which establishes sanity for consumers; but some other projects have decided that it's just a distribution, and we will bundle a particular component at whatever stable version is the latest one: we are going to take it and put it in our distribution.
A
It works; you know, we have successful projects on both fronts. I'm personally just not convinced that Kubernetes will succeed if we don't version everything the same way.
E
I will add one other thing. I made a jokey remark about notification volume; the other hope was, I think, that if we split up the components, there would be better-defined interfaces between them, and that, you know, we would avoid some of the problems where, like, kubectl is tied to a particular Kubernetes version, or where the cloud provider is really tightly coupled deep into the Kubernetes infrastructure, so it's harder to integrate another cloud provider, for example, things like that.
E
So there was the hope of architectural improvements at the same time, but I do think that, yes, the driving force was the volume of notifications and all that sort of thing, just tracking things. I agree with Vladimir, though: it is going to be hard for us to test these things together and even harder for our users to put them together. I feel like this is something that maybe, if SIG Release isn't going to... SIG Release is sort of the natural one.
E
I'm talking about solving the problem in terms of, well, yes, perhaps a document is a better way to do it, but let's assume that the different projects don't want to maintain it. So right now we have a single version that is maintained by k/k, and other projects...
E
At least kops, for example, is able to follow that version and maintain a coherent versioning structure despite not being in the same repo. I know Cluster API, for example (correct me if I'm wrong), does not do that, and that's fine. But assuming that other projects don't want to follow the k/k version, and we end up with multiple streams of versions, someone or something needs to combine them together, so that we know that there is a release that is tested.
E
If we don't have that, there is no way you can install an open source copy of Kubernetes, right? What does it even mean if you can't find a kubectl version that works with your cluster?
E
So if no other group is going to do it and this problem is going to arise, then we should either say SIG Cluster Lifecycle, or something, someone, needs to figure out how we can describe a set of versions that, together, have been in some way verified. Essentially recombining the different repos into a monorepo that is then tested might be one way to visualize it.
E
I think, yeah, it would be nice to avoid this problem, but I am not terribly optimistic based on what's happened so far. It might be that if we threaten to take over this functionality, SIG Release is then willing to do it, or that other people say, well, we actually just want to follow the version of k/k, or whatever it is.
A
If you look at the kubeadm testing today, and that probably applies to kops as well, we already test, like, a distribution of Kubernetes. It's just the fact that we have a Kubernetes version in the kubeadm config, which has the meaning that all the components will match this particular version; so kube-controller-manager, the API server, everything is, like, 1.15 or 1.25.
A
Whatever it is, we match this version. Now, if we consider kubeadm a distribution of Kubernetes, we have to hardcode, for a particular release of Kubernetes, a set of component versions that we have to test, and I think we already cover distribution testing in a way. I think we cover kubectl with some edge cases, but we don't especially cover kubectl as the main client in testing.
A
So we can also add that, but if we start testing the skew in the future, and if this happens, of course, it's going to be an impossible matrix to cover. Like, I don't know, what is going to be the skew between kubectl, kube-controller-manager and an API server that are completely different in terms of versions? How do we skew-test it, how do we document it, how is it machine-readable? And it becomes...
A
So I think, and I'm not sure that SIG Release and SIG Architecture understand these problems, I am going to link a document. I cannot find it right now, but I wrote this document pretty much covering some of these potential problems. The only response was from Dims, and he said, hey, I think it's better to just go with a separate version and a separate release cycle; but again, there's a lot of depth to it.
E
We do have one component already, right, which is CoreDNS, or I think even kube-dns, that is actually totally separately versioned. And I think, at least the way we do it in kops, the version of Kubernetes you're installing ends up determining the version of CoreDNS you're installing, and that is determined by kops. So, in that regard, we have artificially chosen a version schema: it's the k/k version. Yes, thank you, Fabrizio, and yes, he points out also etcd, CNI and CSI.
E
I'm not sure we solve all of those in kops today, but yes. And, I mean, I don't know how kubeadm does it, but it's certainly doable to tie together these different versions, and we sort of do it already. So that's why I feel like it could well be that we say it is a SIG Cluster Lifecycle concern and we have the means to do it, as it were.
A
Yeah, Kubernetes just hardcodes a preferred version of etcd and CoreDNS for a particular release; that's how we do it, but we don't skew-test it. So, I mean...
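(For context, the hardcoding described here looks roughly like the following sketch, written in the spirit of kubeadm's pinned defaults; the map and the version numbers are illustrative, not the actual constants.)

```go
package main

import "fmt"

// defaultEtcdVersions sketches how a deployer pins a preferred etcd version
// per Kubernetes minor release. The numbers are illustrative, not the real
// kubeadm constants.
var defaultEtcdVersions = map[string]string{
	"1.20": "3.4.13",
	"1.21": "3.4.13",
	"1.22": "3.5.0",
}

// defaultCoreDNSVersion is pinned once per deployer release in the same way.
const defaultCoreDNSVersion = "1.8.4"

func main() {
	k8sMinor := "1.22"
	fmt.Printf("Kubernetes %s -> etcd %s, CoreDNS %s (not skew-tested)\n",
		k8sMinor, defaultEtcdVersions[k8sMinor], defaultCoreDNSVersion)
}
```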
E
So I don't think there's anything wrong with that. It's just, you know, eventually we might decide that, as it becomes more complicated, we have to move it into a manifest or something. But I think we have both sort of settled on this idea that the Kubernetes version determines the versions of these other components, and we effectively test a set of versions of these disparate components together, and that's what we call the release, as it were.
A
F
Yeah, it's kind of, I guess, a meta or philosophical comment, but it sounds to me like we're dealing with the complexity as, let's say, Kubernetes maintainers, right, dealing with all these issues and one repo, et cetera, et cetera, so the problem is sort of ours to deal with. Meanwhile, it sounds like users, Kubernetes end users, get some benefit from this, because it's like, hey, one version. Well, I mean, there are disadvantages...
F
Justin pointed out kubectl, right, the version skew. I don't too clearly understand how things would radically change if we had independent release cycles, but to me it sounds like, from the end user's perspective, this is actually quite nice: hey, I get one version. And so what I...
F
What I'm thinking is that maybe we actually want to take on and keep this burden of complexity, because if we don't, if we have independent release cycles, then all of a sudden we've just externalized it, right, we're laying all this complexity onto the end users. And, you know, I guess maybe making things a little bit easier for ourselves, but it also sounds like we're creating more, like, different work for ourselves. So sorry, it's a kind of wishy-washy, philosophical comment, but...
A
I mean, I agree, there are certainly benefits to this. For instance, if you have your own method of distribution of Kubernetes and you don't want to be tied to a certain scheduled date to consume a particular release, you can just take the latest version of a particular component and say, hey, I'm just going to take this version of kubectl or kubelet that I want, and that's really nice. There are also the benefits that we discussed earlier, like splitting the monolith and maintaining things separately.
A
That's, like, the biggest benefit. I think that we introduce a lot of complexity on the side of groups like SIG Cluster Lifecycle and SIG Release: what do we test, what do we distribute, and how do we deal with problems? For instance, if there's a really critical patch that has to land in kubectl 1.0.1 that we want to include in our Kubernetes distribution 1.26, but they don't have enough bandwidth to complete it in time, what do we do? Do we release, like...
A
Do we ship the distribution of Kubernetes with a slightly broken kubectl, or do we wait, or what do we do about it? There are certainly complexities. And what if we have components where multiple of them are broken and nobody's maintaining them; should we just stop the release train of the Kubernetes distribution?
A
I have some worries around the discipline of the project as a whole. Now, if everything is in k/k, we can kind of apply some strong sticks around what goes in and how we delay the whole process. If things are separate, I think we increase the complexity. We want to do it, we want things to be separate, but I'm not sure that people are realizing, basically, all the problems that can arise from that.
A
Like, for instance, how is Fabrizio's Cluster API proposal going to handle the version in the ClusterClass that you recently spoke about? How are we going to consume a Kubernetes version that is bundling a set of completely different versions of components?
A
For instance, you want to set the Kubernetes version of the ClusterClass to be 1.25, but inside this 1.25 you have a kube-controller-manager that is, say, version 1.0.5, and I guess, if you use the kubeadm control plane, kubeadm will do the binding and skew for you. But if it's not kubeadm, what is the meaning of the 1.25 version that you set in the ClusterClass?
D
Okay, so I can tell you what we are doing today, which is based on the current version schema: basically, we create a machine that has a version, and we assume that inside the machine all the components, as they are, are compatible with the version that we are declaring. So it's kind of opaque; Cluster API consumes the output of Image Builder.
D
What I see is that we have two layers to the problem of a release: one is components, and the second one is the distribution. That means that at some point a certain set of components is going to be assembled into a distribution, and this becomes interesting; there it is, the interesting part: how do we define it?
D
Which components are part of the distribution? This could be a bill of materials or whatever. So, how do we define it? And then this should be a kind of, let's say, machine-consumable output, because it should be consumed by testing before doing the release, and it should be consumed by all the tools: for, I don't know, Image Builder to generate the images for Cluster API, for doing upgrades, stuff like that. So, going back another step, there is just an assumption that someone has to own this.
D
Yes, and I think that maybe this is not only on us, because first we have to define, with SIG Architecture, probably SIG Release and SIG Testing, what the Kubernetes distribution is in a componentized future world where there is no guarantee about the version. So the first idea is: okay, let's define the distribution, and then we can decide how to act on it, everyone for each part of the problem.
A
Yeah, it makes perfect sense. I agree that it's not only on us; I think SIG Release should own the potential machine-consumable manifest of what the Kubernetes release is. We can help with testing, we can coordinate with SIG Testing so that we can test it, but we are not going to make the decision of which components are inside the distribution, for sure.
C
Yeah, I think we're sort of already running into that today with the out-of-tree cloud providers, and that's been a problem that we don't really have a solution to right now. Basically, the cloud providers now have their own releases for the out-of-tree repository, and those are supposed to be aligned with Kubernetes versions, but they have their own versioning. And so right now, with Cluster API, it's like...
C
We can't really include it as part of the Kubernetes version anymore, so we have to have that as a separate component. And so we can align it and say, okay, if you're installing this Kubernetes version, you should install this cloud provider when you first create the cluster.
C
But the problem is that then, if they upgrade the Kubernetes version later on, the user needs to know that they also need to upgrade the cloud provider version. And I think, as a user, if we make it easy for them to create the cluster without having to care about the cloud provider version, then it's harder for them to know that they have to care about it later down the line, when they have to upgrade. And at the same time...
C
Right now it's already pretty difficult when there's one component that they have to care about, but if it becomes a thing where there are multiple components where you have to know, okay, this version, this version, that version, that's not going to be a great experience for end users who aren't using managed services. So yeah, that's my two cents.
A
That's a great point, and also CSI drivers: what is the compatibility matrix between CSI drivers and cloud providers? I recently saw that most of the cloud providers have completely arbitrary versions, and I don't think most of them have a version matrix inside the repo, and it's not machine-consumable.
A
If you go to Ubuntu, as an example of a project that is managing different versions, you just run a command and it will upgrade a set of different components for you, and it's part of the package management system. But we don't have something like that in Kubernetes; like, how am I supposed to know what version of the kernel will come when I upgrade my Ubuntu distribution? Yeah, it's going to be interesting, so I just wanted to bring this topic up for discussion. Fabrizio?
D
A
I have read this, but it's basically more a topic of how we package things, what the mechanisms are, the verification process for binary artifacts and things like that. But it doesn't matter...
D
A
It does not cover components outside of k/k, for sure, this particular one.
A
Yeah, it has been a problem already, but what will happen in the future is that it's going to grow and become a much bigger problem to solve. So I think that SIG Release... I got a message from SIG Release today that they are going to start discussing the kubectl extraction with SIG CLI; I'm going to message our mailing list about this.
A
When that happens, you can join, potentially add comments to issues and things like that, so I'm going to keep the SIG basically notified about what's going to happen. But I'm also going to raise the concerns whenever there's a forum, and I'm going to try to get more discussion, especially with SIG Architecture, about how we are going to manage this, because it's going to be a problem, in my opinion. So yeah, I think that's the whole topic.
A
Before going to etcdadm, I just wanted to ask: what do you think about changing the format? Should we continue including a status update about the subprojects, or should we make this more optional?
E
I mean, I like the sort of format where, I feel, if there are projects that want to give a quick update, that's wonderful, and if there are particularly cross-project things, I think that works. We can consider adding a deep-dive section, maybe, you know what I mean, but I like the current quick-overview approach.
E
If you wanted something more required, we could consider having, like, a "kops must give a 10-minute deep dive into what's happened" type of thing, on a quarterly basis or whatever it is, you know, whatever the right rotation is.
A
Communication, cross-project collaboration, some of your ideas about what's going on in a week: we sort of do some of that in the subproject update, so I think it's very nice. A deep-dive section, potentially; what we did right now was a deep dive on the k/k versioning, so I guess the group topics already covered that.
E
In a way, right. I just wasn't sure whether you felt like you wanted to... I don't think we should make the weekly or bi-weekly updates mandatory, but I wasn't sure whether you wanted some sort of mandatory...
A
...thing. Fabrizio, I was talking about a mandatory subproject update, where, in a written format, we require subprojects to submit to the group a summary of what's going on. And do you want this to be a yearly update, like once every year?
D
This is problem number one. Problem number two is that the Steering Committee asks us, as a SIG, basically to follow some best practices or to take care of processes, like, I don't know, keeping the OWNERS files updated, stuff like that. And so the idea of basically having the same document that we answer as a SIG also answered by the subprojects is just to increase our awareness and visibility.
D
Stuff like that. And this is, let me say, something extra; the cadence could be yearly, or every semester. Okay, it really depends on us, because it is a way to ensure the entire SIG follows, let me say, a common set of practices.
D
With regard to the weekly or bi-weekly update instead, my opinion is that it is nice: if people have something that they want to increase visibility for, or make other projects aware of, why not? This is the place where we have to work together as a SIG and not as single projects, so sharing what's going on is fine with me.
A
But if I go, and I just noticed this, for instance: you know, the IBM cloud provider for Cluster API hasn't been updated in 12 months; this is the last commit. Is it even working? What is its state? What is the future of support for v1alpha4? If this is no longer maintained and nobody has any interest in it, should we archive that Cluster API provider?
D
Yeah, I will only add that we don't want to add a burden to the projects, so we will try to keep this as lightweight as possible. But at the end, if we get into a situation where it is even difficult to find the owner of a subproject, that is a flag that we have to consider. So let's go for it and see how it goes.
A
All right, hopefully we can do this in the following cycles. Okay, moving to the next project: etcdadm.
F
I can, yeah. We've got, yeah, sorry: support for more architectures. There's more interest in using etcdadm on arm64, I guess, so that's coming. There's also a Cluster API proposal for an external etcd provider, and that's under review; if anybody's curious about that, I'll leave a link in the doc.
A
That's related to the Cluster API proposal about etcd, but are you talking about... does it require the systemd integration in etcdadm?
F
I think so. I might be wrong about that, but I think we're actually on the way to supporting static pods as well. I know, yeah, that's a PR that's been open for a while, but I think we're close to merging it.
F
Right, yeah, I'll actually raise that, maybe on the proposal; I'll ask a question whether they could do that.
A
All right, thanks. I don't have any more comments. No questions for etcdadm? Any others?
D
A
Do you have a parent issue for the Cluster API tracking?
D
A
Well, that's a good practice to have in general, you know, how we do it in kubeadm, and especially with the recent integration of GitHub tasklists, with the checkboxes in a ticket, it's really nice.
A
Yeah, it's just nice to have; it's just a recommendation.
C
Yeah, we released v0.5.0 just a couple of weeks ago now, and yeah, it's the v1alpha4 support release. We have a first patch coming soon that fixes a few bugs, and then the big thing we're working on right now is a proposal for doing async reconciliation of Azure resources. That means starting resource creation, exiting the controller loop, and then coming back later to check if it's done.
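(A minimal sketch of the pattern being described, in controller-runtime style: start the operation, return, and come back later. The cloud client and VM types are hypothetical placeholders, not the actual CAPZ code or proposal.)

```go
package controllers

import (
	"context"
	"fmt"
	"time"

	ctrl "sigs.k8s.io/controller-runtime"
)

// CloudClient and VM are hypothetical stand-ins for the real Azure SDK
// wrappers; only the reconcile pattern is the point here.
type CloudClient interface {
	GetVM(ctx context.Context, name string) (VM, error)
	BeginCreateVM(ctx context.Context, name string) error
}

type VM struct{ State string }

type MachineReconciler struct {
	Cloud CloudClient
}

// Reconcile starts a long-running cloud operation, returns immediately with
// RequeueAfter, and checks progress on a later reconcile instead of blocking
// a worker for the whole creation.
func (r *MachineReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	vm, err := r.Cloud.GetVM(ctx, req.Name)
	if err != nil {
		return ctrl.Result{}, err
	}

	switch vm.State {
	case "NotFound":
		// Kick off creation but do not wait for it to finish.
		if err := r.Cloud.BeginCreateVM(ctx, req.Name); err != nil {
			return ctrl.Result{}, err
		}
		return ctrl.Result{RequeueAfter: 30 * time.Second}, nil
	case "Creating":
		// Still in flight: free the worker and poll again later.
		return ctrl.Result{RequeueAfter: 30 * time.Second}, nil
	case "Succeeded":
		return ctrl.Result{}, nil
	default:
		return ctrl.Result{}, fmt.Errorf("vm %s in unexpected state %q", req.Name, vm.State)
	}
}
```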
E
It's tricky; I think it's interesting, because the model that kops, and I think most of the controllers, use is they have a limited, effectively, thread pool (I think we call them goroutines, but, you know, a thread pool), and so we effectively limit the number of concurrent operations that way. And that limit is, I think, compiled in, at least by default, and I don't know how people expose it as a flag or anything.
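(The limit being described corresponds to an option that controller-runtime exposes when the controller is built; the snippet below is a sketch of where it plugs in, with a placeholder API type and a caller-supplied concurrency value, not any specific project's flag wiring.)

```go
import (
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/controller"
)

// SetupWithManager shows where the worker-pool size lives in
// controller-runtime: MaxConcurrentReconciles bounds how many Reconcile
// calls run at once (the default is 1). How to wire it to a flag is left
// to each project.
func (r *MachineReconciler) SetupWithManager(mgr ctrl.Manager, concurrency int) error {
	return ctrl.NewControllerManagedBy(mgr).
		For(&infrav1.AzureMachine{}). // the controller's API type; infrav1 import omitted for brevity
		WithOptions(controller.Options{MaxConcurrentReconciles: concurrency}).
		Complete(r)
}
```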
E
The downside is, I'm guessing, Cecile, what you're facing: if you exceed 10, or whatever that number is, concurrent operations, they all get serialized. So if you're launching, like, 100 VMs...
C
Yeah, so that's exactly it. So we're not trying to do concurrency; it's not parallelization. So what we're doing is, let's say you do have a limit of 10 loops, like this controller concurrency, and you don't want to increase it to more than that; then, right now, what would happen is, let's say you're trying to create 20 virtual machines, it will create the first 10 and wait for all 10 to be completed, like created, before moving on to the next 10.
E
Yeah, it's very interesting. You might still need a separate... I guess you might implement your own rate limiting, but, like, if someone launched ten thousand VMs, would the Azure control plane team yell at you? But yeah, it's certainly... I don't think it's outside the bounds of what you're allowed to do in controllers, yeah.
C
But yes, so actually, right now, when we're creating a virtual machine serially, or, like, synchronously, we send the first initial PUT, and then we have this wait-for-completion function, which looks like it's only doing one API call, but it's actually polling against Azure, like, every 10 seconds or something, whatever the default is.
C
So that's a bunch of API calls that you're kind of wasting, because your machine is not ready yet and you're still checking on it every 10 seconds; and so we're not necessarily increasing API calls by doing this, because you're just using your calls more wisely, if that makes sense.
E
Anyway, it makes a lot of sense. Actually, you've... oh sorry, sorry, go ahead.
E
We could do a list, for example, on this operation and get the status of all the VMs at the same time in one call, albeit a more expensive call. I don't know if you have a list-operations call in Azure, but that sort of thing, the ability to aggregate multiple operations... That is a big weakness of the goroutine approach, the synchronous goroutine approach, I guess you would call it (and I like that name): you're sort of compelled to treat each operation completely separately, even if you're doing 20 of them at the same time and could easily do a more efficient, like, batch polling type of thing.
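(A rough sketch of that batching idea: one list call answers many in-flight creations instead of one polling loop per VM. The lister interface and states below are hypothetical.)

```go
package controllers

import "context"

// VMStatus and VMLister are hypothetical; ListVMs stands in for whatever
// aggregate "list" call the cloud offers.
type VMStatus struct {
	Name  string
	State string
}

type VMLister interface {
	ListVMs(ctx context.Context) ([]VMStatus, error)
}

// pollPendingVMs refreshes the status of every VM we are waiting on with a
// single, more expensive list call, instead of one polling loop per VM.
func pollPendingVMs(ctx context.Context, cloud VMLister, pending map[string]bool) ([]string, error) {
	vms, err := cloud.ListVMs(ctx)
	if err != nil {
		return nil, err
	}
	var done []string
	for _, vm := range vms {
		if pending[vm.Name] && vm.State == "Succeeded" {
			done = append(done, vm.Name)
		}
	}
	return done, nil
}
```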
A
E
I'll just point out that it's actually not just traffic. We certainly had a lot of these; I think there's some of this in the AWS in-tree cloud provider, where quota, if we aren't careful about how we do this, we blow through quota very, very quickly. And, you know, the problem with quota, at least on AWS, is that it's at the account level. So, you know, if you had a Kubernetes cluster in the same account as other things, you would effectively DoS them, so it does matter for more than just network traffic.
G
We've had two releases since then, 1.21 and 1.22. They haven't been really big releases or anything; they're mostly integration-testing focused and stability focused, making sure that containerd is a reasonable drop-in replacement for Docker inside of the minikube VM, as well as making sure that arm64 slowly becomes a first-class architecture. We're not there yet, but we do have tests running on arm64 for Docker and containerd, and they're mostly good now. And, yeah, we're slowly making Windows less painful as an OS for running minikube; that's the other thing that we've been focused on.
A
Thanks for joining, nice to have you back. So you're basically phasing out Docker inside the VM in favor of containerd, or no?
G
No; I think our plan is to always have Docker be the default, but since containerd is, like, the blessed Kubernetes runtime, we want to be able to support that by default as well, while also supporting everything that minikube does inside of the VM.
G
A lot of minikube users use the Docker runtime as a way to build images inside of minikube and that kind of stuff, so we've been working on making sure that's as transparent to the user as possible.
G
So we have the minikube image subcommands, where you can run "minikube image build" and it will be able to build stuff with containerd instead of just trying to use Docker.
A
Yeah, we have the same plan in Kubernetes: basically continue supporting Docker in whatever form is necessary to consume it. But something happened around the dockershim extraction and the donation to the Moby project for maintenance: they took it as a responsibility, sent a Twitter message, but they never actually started developing it. So we're now in a phase where it might happen eventually, but hopefully it happens before the removal of the internal dockershim from the kubelet, because if that happens, people will be stuck without a functioning shim.
G
Yeah, this is something we've been keeping an eye on as well. Of course, there are other people who have sort of maintained their own versions of the dockershim, and if we have to use one of those, then we'll use one of those. It's not really...
A
But if we start seeing discussions or anything like that, we are going to bring you in to see what's happening, yeah. We also have a tracking issue in Kubernetes for that. Okay, all right, any questions for minikube?
A
All right, Image Builder; I think we are almost out of time.
C
Yeah, I don't have much, but we've been having issues with our office hours having low attendance and no agenda items and cancelling last minute, so we're trying something new: trying to get the agenda out there the week before, trying to get other people to add stuff to it, and then making a decision on cancelling the night before. It's also an 8 am meeting.
C
So it's hard to get people to show up. Other than that, there have been a couple of PRs, like adding some CI for the CAPI image validations that run on PRs, which has helped us improve our validation; we're doing that on every PR with Azure already right now, and I think there's a CI job that was added for OVA images recently, and we're also doing container image validation. So yeah.
C
Yeah, it's not too bad; we don't build an actual VM or cluster, we just build the image with Packer, so we just validate that the Packer image build will pass, basically.
A
Not too bad. About office hours: I mean, you can potentially, if you want, as maintainers, make the office hours opt-in, which means that it's not mandatory to have the meeting every week; nobody is forcing you to do that. Maybe, if you don't have an agenda, just continuously cancel, or just make it opt-in to the extent where you only have a meeting if there is an agenda. And I think SIG Testing is doing something like that already: they only have a meeting if there is an agenda.