From YouTube: SIG Cluster Lifecycle 2021-05-04
A
Hello everybody, and welcome to the bi-weekly SIG Cluster Lifecycle meeting. I am your moderator/facilitator for today. My name is Justin Santa Barbara; I work at Google. Today is Tuesday, the 4th of May, 2021. May the 4th be with you. We have a fairly light agenda for the day. I have pasted a link into the chat. Please do feel free to add any agenda items.
A
We normally like to start with any new meeting participants being invited to introduce themselves. I don't know if there are any meeting participants that would like to do so. If there are, please feel free to unmute and introduce yourself if you would like to.
B
C
A
Wonderful. Well, welcome back, David; welcome, Alex, good to see you again. I certainly remember you from some conferences. If there's anyone else, last call... otherwise. Oh, we have a chat introduction from Victor, whose mic is not connected. Victor is SIG Release docs lead for 1.22 and says nice to meet you all, so welcome, Victor, as well. Right, moving on to the next section of our agenda, that is, group topics or PSAs.
A
Fabrizio, I know you said you had something you wanted to talk about. We don't actually have any group topics on the agenda. Do you want to talk about that topic here, or later on after the subproject updates?
D
I'd move it up, sorry.
D
We prepared the annual report, which contains a lot of interesting stuff about the health of the SIG: for instance, how we do triage, and how we keep our owners list up to date, stuff like that. The report was interesting, and this year it took a little bit of groundwork to prepare the report; basically, we did this report top-down.
D
So
starting
from
the
siege,
charles
single
tech
lead,
and
we
prepared
the
repo
collecting
information
from
the
from
the
different
projects,
I'm
starting
to
wondering
if
this
is
the
right
approach
or
we
we
should,
let
me
say,
move
to
a
more
federated
approach
where
we
ask
the
the
the
each
project
to
to
to
be
proactive
and
and
to
show
they
are
healthy
in
terms
of
managing
their
own
community,
because
at
the
end
this
was
the
goal
of
the
of
the
of
the
steering
quest.
D
This will kind of help us, because we are going to ask the projects to report in this meeting periodically, not only about the technical activity but also about the overall health of the community, which makes a lot of sense to me. So this was one idea that popped into my mind; let's try to make this happen. So we, as the SIG governance, gain more visibility, but also, as SIG projects, we start discussing processes like triage.
A
I certainly think it sounds like an excellent idea. I know that, as you say, the intent of the steering committee is not necessarily to hear the answers, but to make us answer the questions. And we are a looser federation than some of the other SIGs, which are much more the same group of people working on all the projects, and so will naturally have the same processes.
A
And
so
I
think
it
makes
a
lot
of
sense
to
to
do
what
you
propose
and
encourage
the
subprojects
to
effectively
individually
answer
the
questions.
And
then
we
can.
We
can.
We
should
pre-aggregate
and
pre-like,
discuss
and
cross-compare
ideas,
but
I
think
it
makes
a
ton
of
sense.
D
Okay, maybe we can follow up on this offline on the channel and see how we can shape it out, maybe asking each project to come to this meeting and give a short update, not only technical but also, let me say, community-driven. This could be an idea, and also a way to make this meeting more participatory.
A
That sounds great. Any other thoughts on Fabrizio's idea?
A
All right then, we'll move on to the subproject updates. We only have two subprojects on here, one of which is in absentia, and so I would normally read that, and I will probably do the other one. I don't know if someone else wants to read Daniel's, but otherwise I can do it.
A
There is a new release coming; I'm not sure exactly what features are in that, but there is a relevant thing coming on the kOps side, which I'll talk about in a minute. There's also lots of active work on the etcd bootstrap provider for Cluster API, figuring out how etcdadm and Cluster API can work well together, particularly for the case of what kubeadm calls, I believe, an external etcd cluster. There is a proposal, a HackMD link there in the agenda, which I will not try to pronounce; please feel free to read and comment on that.
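For context, a minimal sketch of the kubeadm configuration shape for an external etcd cluster, the case mentioned above; the endpoint here is hypothetical:

```yaml
# kubeadm ClusterConfiguration pointing at an externally managed etcd
# cluster (for example, one run by etcdadm) instead of stacked etcd.
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
etcd:
  external:
    endpoints:
      - https://10.0.0.10:2379
    caFile: /etc/kubernetes/pki/etcd/ca.crt
    certFile: /etc/kubernetes/pki/apiserver-etcd-client.crt
    keyFile: /etc/kubernetes/pki/apiserver-etcd-client.key
```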
E
Later today, at 11 a.m. Pacific.
A
That's wonderful; so hopefully some people will join at that time. The next update is, I guess, for myself, which is on kOps. We are working on the 1.22 release of kOps, to support Kubernetes 1.22, and the, I guess, more interesting cross-project news is that we have effectively moved etcd-manager to the etcdadm repo, and we are working on actually consuming the artifacts that are built from the etcdadm repo in kOps. That will hopefully land this week.
A
Those artifacts are built using the staging and promotion process that lands them on k8s.gcr.io, so I think that's a good example of, you know, moving projects to the official release, staging, and release pipelines.
D
We are mostly focused on basically getting the v1beta3 API in place. The work has already started: v1beta1 was removed, v1beta3 was added, and now there is a first batch of PRs which are basically making incremental changes on v1beta3, which initially was a clone of v1beta2. So the first set of changes is about removing deprecated info, and then we will start adding new things as defined. That's pretty much all for kubeadm.
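As a rough sketch, a configuration on the new version looks the same as a plain v1beta2 one apart from the apiVersion, since v1beta3 started as a clone; the version value here is hypothetical:

```yaml
# kubeadm cluster configuration on the new API version.
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: v1.22.0
```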
A
D
One is Kubernetes node authentication, which basically will allow nodes to join in a more secure way, and we are still working on it. This proposal is still targeted for v1alpha4, but basically it is a non-breaking change, not touching the API, so it is fine.
D
Basically, we agreed on extending the deadline a little bit for this proposal. And then there is also an interesting discussion going on about the idea of ClusterClass and managed clusters.
D
So there are discussion threads in the cluster-api channel, and there is a proposal being finalized by Gab from VMware, so one of my colleagues, and Vince. And yeah, this is a really interesting topic that will improve the user experience.
D
Moving on: this week I also picked up some work that was originally started by Ben Moss. Basically, I've sent a PR that introduces in CAPD a new kind of machine that is backed by Kubernetes, and it is interesting because of what becomes possible as soon as it gets stable and complete.
D
Basically, it will be much easier to stress all the CAPI machinery by running bigger tests, with bigger clusters, in end-to-end tests. The caveat is that this machine, backed by Kubernetes, cannot host the workloads; but that is fine, it is not relevant for testing most of the Cluster API machinery. So this is an interesting idea.
D
This is interesting. And the last topic is that in Cluster API we have a set of issues regarding the possibility of exposing component config from the Cluster API API.
D
So if someone is interested in this topic, which impacts not only CAPI but also kubeadm, and probably other installers, please reach out to me. I'm really curious to know people's opinions with regard to component config in its current state, and also its possible roadmap, because it is not an easy task to work on component config and we need a critical mass to get this moving. So if someone is interested, please reach out.
A
Thank you, Fabrizio. Those are actually really exciting things, all of them. I have a question personally on the component config, which is: we have a similar thing in kOps. We feel like we should adopt component config, but as far as I know, the versions aren't necessarily stable. Is that true, or have they locked the APIs at this point?
D
There is no clear or automatic way for users to upgrade their own component config; or rather, it is not easy for a tool like kubeadm or kOps to build an automatic upgrade procedure, and there is a lot of component-specific knowledge needed to do this. We want to avoid spreading this knowledge around all the installers; we want to keep it in a single point. This is one problem. Another problem, which is stuck in the design phase, is having instance-specific component config.
D
Basically, how you can split the global cluster settings from the local settings and get them merged. So there are still a lot of problems, or a lot of possible improvements, in the idea, and this makes it really difficult for tools like Cluster API, and I guess also kOps, to chart a path forward in the adoption of component config. That means, in my opinion, that we have to sit down, agree on what will come next, and then try to make that happen.
A
Yes, I agree. I mean, I think if we can get them to commit to not changing too many of the fields in, for example, just the kubelet config, which it sounds like they've implicitly committed to with the beta one, then we can start adopting it.
A
As
far
as
I
know,
we
can
use
flags
and
component
config
and
maybe
we
have
to
all
write
the
same
upgrade
logic
three
times
or
end
times
and
that's
the
way
it
is
for
now.
But, you know, I feel like the thing which was, in my opinion, a blocker before was the idea that the schema could change; and if that's locked at this point, and they're sort of happy with it being the way it is, then I think that will be fine.
A
We should certainly cooperate and, like, trade notes; but maybe we can all do kubelet config and see what commonalities and what problems arise. That might be a way to move forwards, because I feel like there's been this chicken-and-egg, right?
Component config hasn't been adopted, so they can't say it's ready; and then we can't adopt it because it isn't stable. That type of thing.
A
On the topic, I think, if I recall correctly, there is some flag that is only available in component config. I actually don't think it was kubelet, I think it was something else, but hopefully it was kubelet. So I think we're being forced to adopt component config in any case, or will be forced to adopt it. So there is that path to getting adoption.
D
C
I think it's interesting, with the ClusterClass problem as well as managed clusters. We use a combination of kOps-created clusters and eksctl-created clusters, and I wonder whether managed providers will ever expose component config to you. With the kubelet you certainly can, because at the end of the day you're running a node; but for control plane components, I know that providers are quite touchy about what they expose to you.
C
A
I mean, I can't speak for the providers who are here, but I think if they are going to expose a flag, it would be nice if the schema they used was the component config schema, right? So you should be able to provide a partial component config, and maybe you're not allowed to set certain fields because they are not supportable in that scenario, but...
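A minimal sketch of the kind of partial component config being described, using the kubelet's config group; the specific fields chosen here are hypothetical examples:

```yaml
# A partial KubeletConfiguration: only node-level fields are set, and a
# managed provider could reject any fields it considers unsupportable.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
maxPods: 150
evictionHard:
  memory.available: "200Mi"
```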
C
A
I mean, I think the intention with kOps (we're actually talking about this a lot right now) is that kOps has this cluster object, which is a large, sort of monolithic object which contains, you know, CNI configuration and optional API server flag overrides and all these things. And what we're talking about now is whether we can get that into a set of objects that are more modular: so API server and kubelet and scheduler and controller manager would move to component config, and the CNI configurations would ideally move to...
A
You
know:
operators
that
control
cilium
or
calico,
or
whatever
cnn
provider,
you're
using
and
so
on,
rather
than
so,
basically
breaking
up
this
monolithic
object
into
separately,
versioned,
ideally
separately,
ideally
separately,
released
but
cbd
objects,
and
those
should
then
be
a
lot
more
shareable,
because
even
if
gke
says
we're
not
going
to
api
server
flags,
they
might
let
you
set,
as
you
say,
your
flags
or
cni
flags
or
whatever.
It
is.
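For reference, a rough sketch of the monolithic kOps Cluster object being described, with the CNI choice, API server overrides, and kubelet settings all in one spec; abridged, and the field values are hypothetical:

```yaml
# A single kOps Cluster object carries networking, API server,
# and kubelet configuration together.
apiVersion: kops.k8s.io/v1alpha2
kind: Cluster
metadata:
  name: example.k8s.local
spec:
  networking:
    calico: {}
  kubeAPIServer:
    auditLogMaxAge: 10
  kubelet:
    maxPods: 150
```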
D
If
I
can
comment
on
this
in
kubernetes,
we
went
a
little
bit
further
down
this
path
in
the
in
one
order:
version
of
min
api.
We
had
component
config
embedded
in
the
api,
so
we
basically
were
importing
types,
and
this
is
this
is
something
that
at
the
end,
we
decided
to
move
away
from,
because
basically
it
was
linking
the
life
cycle
of
our
api
to
the
life
cycle
of
other
api,
which
are
not
yet
stable.
D
So this was bad, and now we basically have component config as a separate object in a multi-document YAML. This is how we manage it in our configuration, but we are still not super happy about it, because in the end, the lack of an upgrade path, or the lack of a good, let me say, library we can rely on for doing validation, defaulting, stuff like that, is still putting a lot of work on kubeadm, on the component maintainers, and on the end user; work that ideally should be taken care of by the component and offered as a service.
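The layout being described is a multi-document YAML file in which component configs ride alongside kubeadm's own objects as separate, independently versioned documents; a minimal sketch, with a hypothetical kubelet field:

```yaml
# A kubeadm object and a component config as separate documents
# in one file, each carrying its own API version.
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
serializeImagePulls: false
```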
A
Actually, that prompts me.
A
One of the things we have in kOps is these, what are effectively, add-ons, like CNI providers, built into kOps today, and it has similar processes, or similar problems, where we effectively have to maintain that logic and that validation. One of the things that we have teed up for, I guess, 1.23, or maybe 1.22...
A
I can't remember; for the next release, there is initial support for add-ons that effectively can either run on the cluster, or you can sort of run them locally in a sort of generative mode. And so the unit that we have begrudgingly decided to use is a Docker container, but the idea is that, in this way, what you could have is...
A
E
A
It is... sorry, it is a container that can be run somewhere. The current implementation only shells out to Docker using exec, but yes, the intent is that it will be...
A
It
will
be
much
more
generic
in
future,
but
we're
starting
with
the
the
mvp
as
it
were,
but
you
know
if
we
can,
maybe
we
can
figure
out
a
way
to
package
validation
or
defaulting,
or
things
like
that
in
containers
that
we
can
effectively
dry,
run
and
add-ons
in
the
same
way
where
we,
you
know,
we
we
run
an
add-on
and
we
pipe
so
the
way
we're
doing
it
right
now
is
what's
interesting.
Is
some
add-ons
have
dependencies
on
other
things
and
we
pipe
in
the
universe
into
the
add-on?
A
You know, dry-run mode; and then there's some fancy code that, like, mocks out client-go (clientgo, yes), well, the client, so it's able to see the kube-dns cluster IP address as if it was actually running on the cluster. So it's sort of sneaky in that way. But the point being, I think the real point is: maybe we can... I don't think anyone loves the idea of docker run or containerd run and taking that dependency, but maybe it's the best.
A
We've almost got it; I think we have it teed up. And that will also, you know, that's another reason to spread it out into unrelated objects, because if we have to re-synchronize them all into the cluster object, there's no real point. So there must be a set of additional objects.
D
A
We don't have plans for component config there, but I'm wondering if it could be applied to component config, for the same problem; like, you know, it is the same problem, right?
A
There
is
a
different
group
from
us
that
is
not
the
k,
ops
team,
that
is
in
theory,
maintaining
this
configuration
and
this
type,
and
we
would
like
to
not
re-implement
all
the
work
they
do
and
they
they
best
know
how
to
implement
validation,
and
we
would
like,
for
that
validation
to
be
consistent
across
chaops
and
cube
adm
and
cluster
api
and
coupe
spray
and
everything
right.
It's
it's
the
same.
It's
the
same
problem
as
the
add-on
problem.
A
Also,
I
will
put
a
link
in
the
into
our
agenda
for
the
sort
of
work
in
progress,
pr
that
is
influencing
the
core
dns
operator
or
sort
of
client
side
which
includes
piping
in
the
the
cube
dns
service.
So
there's
some
sort
of
non-trivial
back
and
forth
into
the
container,
but
I'll
put
a
link
to
that
right
now.
A
We
have
also
reached
the
end
of
our
agenda.
I
don't
know
if
there's
other
things
other
subject
updates
that
want
to,
because
I
guess
we've
been
talking
about
that
topic
for
a
long
time.
A
Was someone talking, or was that background noise? I'm sorry, sorry.
C
My background noise, I have... As I've alluded to, we use kOps and eksctl at the moment. I don't like eksctl, not gonna lie; I also don't like having to pay for my control plane, so it kind of balances itself out. But I really like the idea of Cluster API. So I guess, as an end user and a part-time contributor to kOps, when I used it every day, not so much now: is there anything that people like me, who run lots of clusters, could do to help with Cluster API?
C
D
I can try to give an answer. First of all, any feedback is valuable, so if you find something that is hacky or missing, or that you don't like in the API, that is valuable feedback. Even ideas are valid feedback; feedback from the field is always valuable, and it gets taken into consideration. With regards to graduating the API, I think this is a topic that we discussed several times. From one side, as a project...
D
I
I
I
think
that
we
can
say
without
doubt
that
that
we
are
already
acting
as
a
ga
api,
because
we
provide
conversion
we
back
for
fix,
so
we
are
for
sure
we
are
not
acting
like
a
alpha
alpha
project.
We
we
have
end
to
estress
coverage,
which
is
continuous,
growing.
B
D
We are already acting that way: we have products built on top of Cluster API, and so every time we do a release, we get a nice round of feedback pretty soon. So the feedback from the community is really helping in keeping up the quality of the code. As for why we are not graduating: I think that we would like to do so, but we are a little bit worried by the fact that being a more, let me say, stable API would slow us down in addressing new requirements, because there are a lot of new requirements.
D
We will move to beta, and the requirement to move to beta is to have the load balancer proposal in, which is a big change for the API; and this will be, let me say, a good middle ground between being GA and being faster, still evolving as a project. But definitely I'm happy to discuss why we don't go further than v1alpha.
C
D
I think that beta will happen soon. We were planning to do so before KubeCon US, but then, basically, the person who was working on the load balancer proposal got...
D
We don't want to make such a big change in v1, and that means that probably we need a new, a last, alpha release to get the load balancer proposal settled and stable, and then we move to v1. But yeah, this is being discussed. Also, let me say, having more API releases during the year is something that we are discussing, to get there sooner.
D
A
There is, that is, another one: we're trying, at the same time as add-ons, to hopefully start... we have code, so there's another PR that's just been sitting there for quite some time, like, I think, two years, but it's been updated and it works, where we are able to back an instance group with a Cluster API machine deployment.
A
So
we're
going
to
start
with
not
trying
to
replace
the
control
plane
but
just
being
able
to
do
the
the
worker
nodes
as
it
were,
that's
a
good
start.
We
obviously
want
to
go
further,
but
yeah
that
it
is
very
much
part
of
part
of
the
plan
and-
and
hopefully
we
will
get
there
this
year.
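For readers unfamiliar with the object in question, a rough sketch of a Cluster API MachineDeployment of the kind that could back worker nodes this way; all names, versions, and counts are hypothetical:

```yaml
# A MachineDeployment manages a set of worker Machines, much like
# an instance group manages a set of worker nodes.
apiVersion: cluster.x-k8s.io/v1alpha3
kind: MachineDeployment
metadata:
  name: nodes-us-east-1a
spec:
  clusterName: example
  replicas: 3
  selector:
    matchLabels:
      cluster.x-k8s.io/cluster-name: example
  template:
    metadata:
      labels:
        cluster.x-k8s.io/cluster-name: example
    spec:
      clusterName: example
      version: v1.21.0
      bootstrap:
        configRef:
          apiVersion: bootstrap.cluster.x-k8s.io/v1alpha3
          kind: KubeadmConfigTemplate
          name: nodes-us-east-1a
      infrastructureRef:
        apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3
        kind: AWSMachineTemplate
        name: nodes-us-east-1a
```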
C
Well, thanks very much. I'll try and convince my team to spend a bit of cycles with Cluster API and see what falls out.
A
And is it fair to say AWS is the best-supported one right now?
D
It is hard to say. We are still considering CAPA, let me say, the reference implementation, because usually things happen first in CAPA and then in the other providers; but definitely we have a good maturity in CAPV and in Azure, and also I see other providers following up pretty quickly, like on the bare-metal side.
D
A
D
If I can take the opportunity to make a call for action: what I really would like to get up to speed is CAPG, the provider for Google, for many reasons. The main reason is that whenever we can get CAPG up to speed, and fix some flakes that we have currently, it will be possible to hook Cluster API into the Kubernetes test grid.
D
And this will be beneficial for us as Cluster API providers, because we get all the feedback from the entire Kubernetes community; and also for the Kubernetes community itself, because we are basically dropping the collection of bash scripts that exists in /cluster and moving to something which is better maintained. So, help wanted on that.
D
A
Yes, yes.
A
I'll see what I can do. I don't know if there are other topics or subproject updates; otherwise, people can have a bit of time back.