A
Hi, this is the Cluster API office hours meeting; today is Wednesday, the 14th of June 2023. Before starting, just a reminder that this meeting abides by the CNCF code of conduct, so please be kind to each other.
A
We have a meeting agenda document; I posted the link to the agenda doc in chat a few seconds ago. If you don't have access to the document, please subscribe to the SIG Cluster Lifecycle mailing list. The address of the mailing list is also in chat.
B
A
Meeting etiquette: please use the raise-hand feature of Zoom, which is under Reactions, if you want to speak. After you have talked, please add your name to the attendee list, and if you have topics you would like to speak about, feel free to add them to the agenda. So let's kick off the meeting. As usual, we give some time for people attending this meeting for the first time.
A
Okay,
it
seems
that
there
are
no,
not
indeed
today's,
and
so
let's
keep
moving
open
proposal
readouts.
If
I'm
not
wrong,
we
have
only
one
proposal
which
is
from
Jake
and
it
is
about
making
the
infacast
resource
option
for
management
for
improving
management,
kubernetes
supporting
copy
Richard.
Sorry
Jake.
Do
you
want
to
give
an
update.
C
Yes, thank you. First off, I'm going to live-stream changing the title in just a second, but the CAEP is basically ready to go. I've been waiting for Vince to respond to a couple of quasi-open threads, but Cecile pointed out earlier this morning that he's actually on leave for a couple more weeks, so I was going to suggest I reread those threads, and I
C
Don't
think
anything
is
super
controversial,
so
I
was
going
to
suggest
that
I
confirm
that
with
say
yourself
and
Stefan
that
the
that
there's
like
an
acceptable
amount
of
call
it
uncertainty
in
those
threads
and
we
can
move
forward
without
waiting
for
Vince
and
then
start
the
lazy
consensus
timer.
A
I
have
to
look
at
the
recent
threat.
That's
fair,
but
let
me
say
what
wasn't
the
proposal
was
fine
for
me.
I
will
take
a
look
at
what
events
suggest
and
then
we
will
find
a
Way
Forward
just
for
confirming.
Are
you
still
aiming
to
get
this
implemented
in
this
release
cycle?
That
means,
basically
in
the
next
two
weeks
or
is
something
that
we
are
aiming
for.
1.6.
C
I
think
I'm
aiming
for
1.6,
but
Stefan
do
I,
see
Stefan
here,
I'll
sync
with
Stefan
he
seemed
oh
you're
you're,
here's
fun.
Do
you
have
any
thoughts
about
wanting
to
do
it
quicker.
D
So
what
I've
thought
would
might
be
a
good
thing
is:
if
we
essentially
have
it
implemented,
then
we
unblock
providers,
so
they
can
already
use
it
and
by
having
it
only
in
one
six
I
mean.
Obviously
we
can
implement
it
on
Main
and
people
can
start
implementing
against
Main,
but
depending
on
all
equal
folks
want
to
pick
this
up.
Just
delaying
it
by
into
106
might
mean
we
delay
the
whole
thing
by
four
months.
So
I
don't
really
know,
but
I
mean.
C
Well,
I
think
so
that's
that's.
Definitely
the
most
important
thing
I
think.
The
second
most
important
thing
was
my
read
on
your
confidence
that
this
was
actually
a
small
amount
of
implementation
work.
Do
you
still
agree
with
that?
Is
this.
D
I
think
if
we
have
consensus
on
that,
what
we
want
to
do
is
basically
just
change
the
core
controllers,
so
in
one
way
that
we're
looking
at
infra
cluster
and
control
plan
for
the
control
plan,
endpoint,
plus
the
change
in
cluster
class-
that
we
can
say
gracefully
deal
with
a
non-existing
intra
cluster,
the
implementation
should
be
fine,
so
I
mean
someone
has
to
find
the
time
and
actually
do
it,
but
I
think
it's
in
general.
It
should
be
doable.
D
The
main
thing
is
really
I.
Think.
Do
we
think
it's
okay
to
move
forward
with
a
proposal
or
not
I
think
that's
this
biggest
uncertainty
in
making
it
happen
in
one
five:
okay,.
C
If
we
want
to
do
it
in
one
five,
then
we
probably
want
to
start
the
implementation
now
and
and
create
a
work
in
progress,
PR
and
wait
for
the
proposal
to
land,
because
I
think
the
proposal
won't
land.
You
know
end
of
week
at
the
earliest
I'm,
not
sure
what
the
lazy
consensus
is,
but
I
think
we
can
take
this
over
async
last
call
for
comments
live
otherwise
see
on
Slack.
A
I
think
that
whatever
posing
is
fair,
let's
continue
with
the
discussion
of
seeing
and
thank
you
for
the
update.
Let's
keep
moving,
so
we
can
start
with
our
discussion
point
and
the
first
one
is
Andreas.
E
Yeah
hi
I've
been
here
for
a
while,
because
I
was
mostly
in
Kappa
topics
and
yeah
this
one
I'll
just
market,
so
this
I
brought
over
from
a
couple
meeting
because
we
found
this
seems
to
be
a
generic
issue.
So
let
me
try
to
explain
it
so.
The
issue
is
essentially
in
a
situation
where
we
have
a
regular
machine
pool
in
this
case
for
copper.
It
refers
to
an
AWS
machine
pool
and
to
cube
ADM
config,
so
totally
nothing
special
and
what
we
then
expect.
E
If
we
make
any
change
to
the
cube,
ADM
contracts
back,
let's
say
the
files
or
anything
else
that
would
configure
the
node
in
some
way
actually
to
recreating
nodes,
but
it
doesn't
in
case
of
Kappa
and
then
we
we
discussed
a
lot
first
of
all,
should
the
machine
pool
even
refer
to
cube
8m
config
and
we
found
yeah.
That
makes
sense.
That's
on
purpose,
so
we
use
the
same
Cube
ADM
config
for
all
the
instances
or
servers
that
the
machine
pool
creates
or
recreates.
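For illustration, a minimal sketch of the setup being described: a MachinePool whose bootstrap config ref points directly at a single KubeadmConfig. The resource names and API versions here are assumptions for the example, not taken from the meeting.

```yaml
apiVersion: cluster.x-k8s.io/v1beta1
kind: MachinePool
metadata:
  name: worker-pool            # hypothetical name
spec:
  clusterName: my-cluster      # hypothetical cluster
  replicas: 3
  template:
    spec:
      clusterName: my-cluster
      bootstrap:
        configRef:             # points at ONE KubeadmConfig, not a template
          apiVersion: bootstrap.cluster.x-k8s.io/v1beta1
          kind: KubeadmConfig
          name: worker-pool-config
      infrastructureRef:
        apiVersion: infrastructure.cluster.x-k8s.io/v1beta2
        kind: AWSMachinePool
        name: worker-pool
```

The point of the discussion is that, unlike a MachineDeployment (which references a KubeadmConfigTemplate), editing this single KubeadmConfig in place does not currently roll out new instances.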
E
So
that
seemed
to
be
intended
and
yeah
is
the
situation
kind
of
clear
what
I'm
talking
about
regular
machine
pool?
Okay
and
then
I
wrote
down
the
three
problems
that
we
have
here
that
lead
to
no
instances
being
recreated
number
one.
That's
in
copy
there's
this
Secret
that
we
create.
So
it's
a
bootstrap
secret
I
think
it's
called
we
created
from
a
cube,
ADM
config,
but
if
we
don't
update
it
when
the
cube,
ADM
config
changes,
we
do
update
the
secret
when
the
bootstrap
token
gets
rotated.
E
That's
before
the
TTL
of
15
minutes
is
over,
and
the
purpose
here
for
this
bootstrap
token
is
so
that
newly
created
instances
can
still
join
later.
So
that's
the
use
case
we
are
done
until
now,
which
means,
if
I,
do
make
a
change
to
cube.
Atm
conflicts
back
I
have
to
wait
for
the
TTL
of
this
bootstrap
token
to
expire
or
otherwise
nothing
will
get
reconciled.
Only
then
the
secret
will
get
updated.
E
So
no
immediate
reconciliation.
As
expected.
That's
problem
number
one
having
to
wait
then
number
two,
that's
in
the
interest
provider,
so
in
this
case
I
will
use.
Kappa
is
an
example
where
it
definitely
does
not
work.
Copper
does
not
watch
the
bootstrap
Secret,
so
it
cannot
react
to
those
changes
at
all,
or
at
least
not
immediately.
It
would
have
to
you
know,
wait
until
it
does
some
reconciliation,
so
it
could
be
minutes
could
be
whatever
or
never
so.
E
A
A
Question
I'm
not
an
aspect
or
machine
pool
machine
implementation,
but,
for
instance,
for
the
machine
deployments.
The
the
suggested
way
to
make
changes
to
template
is
to
rotate
templates.
A
B
A
If you do so, my expectation is that the MachinePool would start reconciling and you wouldn't have problem number two, but I'm not sure about that.
E
Machine
deployments
you
use
not
a
cube,
am
config
but
a
cube,
ADM
config
template,
so
that
makes
things
basically
immutable.
So
a
new
Cube
ADM
currently
gets
generated.
The
operator
watches
that
detects
the
change
rolls
out
new
instances.
Everything
works,
but
it's
different
for
machine
pool,
because
here
we
refer
directly
to
one
single
Cube,
ADM
config,
there's
no
template.
E
F
Yeah, there's a chat going on in the Zoom chat, but essentially there's a PR that merged recently that hasn't been shared here; now I'm wondering if it should have been, because I don't know if it solves some or all of this problem, or if at all. But essentially, if you look at the original issue, the problem... I think Stefan opened this issue, I'll let him speak because he probably knows, but it seems very similar to this.
D
Oh,
should
I
join
so
so
I
think
there's
a
difference
between
what
I
expected
ux
of
this
whole
thing
should
be,
and
and
what
currently
Workshop
doesn't
work.
So
this
issue
is
about
I.
Try
to
do
the
same
with
a
machine
pull
or
I'll
try
to
use
the
machine
pool
in
the
same
way.
I
would
have
expected
a
machine
deployment
to
behave
so
in
a
machine
deployment.
D
If
I
change
the
reference
to
from
One
keyboard
and
confident
to
another
one
I
get
a
rollout,
then
I
was
looking
at
a
machine
pool
and
I
expected.
Okay,
if
I
change
this,
this
bootstrap
ref
from
one
Cube
item
config
to
another,
one
I
expect
to
get
some
sort
of
rollout,
and
in
that
case
that
didn't
happen,
because
the
data
secret
name
wasn't
kept
up
to
date.
So
I
changed
the
bootstrap
config
graph,
but
a
data
secret
name
was
staying
the
same.
So
what
we
essentially
fixed
is
that
the
data
secret
name
is
continuously
reconciled.
D
So
as
soon
as
you
change
the
qubidden
config,
the
data
secret
name
gets
changed
and
that
should
trigger
rollout.
Usually
so
it
doesn't
really
so
that
that
addresses.
Like
my
kind
of
expectation
called
when
a
machine
push
to
trigger
roller
I
think
it
also
has
some
influence
on
your
behavior,
because
it's
probably
maybe
one
one
less
place
that
you
would
have
to
change
to
end
up
with
your
ux.
So.
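As a rough sketch of the plumbing Stefan is describing (field names follow the bootstrap contract as I understand it; the resource names and the checksum-like suffix are assumptions): the bootstrap provider writes the generated secret's name into its status, and the MachinePool carries that name in its bootstrap spec, so a change there is what other controllers can actually observe.

```yaml
# KubeadmConfig owned by the bootstrap provider
apiVersion: bootstrap.cluster.x-k8s.io/v1beta1
kind: KubeadmConfig
metadata:
  name: worker-pool-config
status:
  ready: true
  dataSecretName: worker-pool-config-abc12   # name of the bootstrap data secret
---
# MachinePool: the data secret name is mirrored into the bootstrap spec
apiVersion: cluster.x-k8s.io/v1beta1
kind: MachinePool
metadata:
  name: worker-pool
spec:
  template:
    spec:
      bootstrap:
        configRef:
          kind: KubeadmConfig
          name: worker-pool-config
        dataSecretName: worker-pool-config-abc12  # kept in sync by the controller
```

If the dataSecretName visible on the MachinePool changes, infra providers watching the MachinePool can react without having to watch secrets directly.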
B
E
D
Yeah, I think the main question is probably: do we want the MachinePool to behave differently than the MachineDeployment? Because usually the normal UX with ClusterClass is also the same: basically we're saying, if you reference that stuff, the way to make a change is to create a new object and change the ref, not to change the referenced object in line, because there are some assumptions around that in MachineDeployment, so yeah.
D
E
I'm also, yeah, I think a MachinePool in general is a very different concept. In one case, for machine deployments, we have the template, while for machine pools we directly refer to one single KubeadmConfig, and then my expectation, and I heard the same from some other people, is that we do expect, if that object changes, it kind of gets reconciled up to the parent, and the parent in this case is the MachinePool.
A
F
Yeah, so I think that clarifies it for me. So this won't fix the problem if you're just changing the KubeadmConfig directly, like you said; however, it may provide a workaround for now: create a new KubeadmConfig and then change the config ref, instead of modifying the existing KubeadmConfig in place. However, then you still have problem...
B
F
So in CAPZ we actually implemented something recently to allow external scaling, so relying on the cloud, like the Azure autoscaling feature for VMSS, in which case there won't be a scaling event coming from the MachinePool itself, so it won't update anything. So if you try to scale from Azure, the bootstrap token may be expired.
F
So
what
we
do
is
we
actually,
if
the
bootstrap
token
changes,
we
do
like
update
the
the
MSS
model,
which
is
the
equivalent
of
an
ASG
template
I.
Think.
E
F
Yeah, so I think the main issue is that they're both, like you said, combined: you have the token secret, and then you also have the config for the machine, like for cloud-init. If we want to solve this cleanly, I think we need this, like the original issue says, to be split up, right? Because otherwise there's no easy way to have a token that's getting refreshed, but then also react on changes just for the OS config.
E
B
E
So yeah, that would solve one of the issues. And the workaround we talked about, I guess what we could do right now is just create a new KubeadmConfig, maybe with a suffix of the spec checksum or something; that would solve it, sure. I'm just wondering what the UX should be like; I kind of expect it to get reconciled when I make a change.
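A minimal sketch of the rotation workaround being discussed (the checksum suffix and all names are illustrative assumptions; the mechanism is simply "create a new config, repoint the ref" rather than editing in place):

```yaml
# 1. Create a new KubeadmConfig whose name encodes a hash of its spec
apiVersion: bootstrap.cluster.x-k8s.io/v1beta1
kind: KubeadmConfig
metadata:
  name: worker-pool-config-7d4f1a   # hypothetical: suffix = checksum of spec
spec:
  files:
    - path: /etc/example.conf       # the changed node configuration
      content: "new-value"
---
# 2. Point the MachinePool's bootstrap configRef at the new object
apiVersion: cluster.x-k8s.io/v1beta1
kind: MachinePool
metadata:
  name: worker-pool
spec:
  template:
    spec:
      bootstrap:
        configRef:
          kind: KubeadmConfig
          name: worker-pool-config-7d4f1a   # was the previous config name
```

Because the MachinePool object itself changes, controllers watching it get an event, which is what makes this behave like the MachineDeployment/KubeadmConfigTemplate rotation flow.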
E
F
Yeah, I was going to say something similar. So for problem one, I think the fix seems pretty straightforward; I think that's just a bug, I don't think there's any reason it's not kicking off a new token. So for that one you could use the workaround in the meantime, but really the real fix would be to actually make a patch in CAPI to update the secret when anything changes, not just the bootstrap token.
A
But
I
don't
think
that
the
change
is
so
simple
watching.
Secrets
is
a
thing
that
we
always
avoided
because
as
soon
as
you
create
a
watch
on
secret,
basically
it
can
blow
up
your
memory.
We
already
experienced
it.
This
is
the
reason
why
secrets
are
not
captured
and
I.
Also
looking
at
a
discussion,
it
seems
to
me
that
we
are
doing.
We
are
going
down
a
road
where
we
are
asking
provider
to
watch
for
Secrets,
which
seems
wrong
provider
should
just
work
for
the
much
for
the
machine
pole.
A
F
Yeah, sorry, we're getting into the weeds of implementation, so maybe we should follow up async, but just before I forget, I think for problem two
F
This
solution
might
be
as
simple
as
just
changing
the
data
secret
name,
every
time
that
we
make
a
new
secret
version
so
like
versioning,
the
data
secret
name
itself,
because
we
do
update
in
the
machine
code
controller
for
Cuban
content.
We
do
update
the
machine,
pull
itself
with
the
data
secret
name,
and
so,
if
that
changes
that
would
trigger,
we
can
sell.
F
So
it
doesn't
know
about
the
contents,
but
it
knows
about
the
name,
so
yeah
I'm,
just
thinking
at
the
top
of
my
head,
might
need
to
look
at
the
code
for
but
yeah.
If
you
want
to
start
a
select,
the
red
and
dress
or
we
can
follow
up
on
the
issue,
I
think
that'd
be
best.
E
The data secret name is part of the MachinePool, I think?
F
Sorry, yeah, okay, okay, yeah, I'll send you the code line I'm talking about.
B
A
Yeah, this was also what I meant before when I said that if we rotate, then everything happens, because you are applying a change to the MachinePool object, and the infrastructure provider only has to watch the MachinePool. So whatever we do has to surface on the MachinePool object. Stefan?
D
I
just
want
to
highlight
the
application
between
what
whatever
the
right
way
is
to
update
a
cubism
config
in
a
machine
pool
and
the
cluster
class
machine
pool
support
we
are
implementing.
D
So
if
you
support
either
only
change
the
ref
or
both,
we
have
to
make
call
at
some
point
for
the
classical
support
what
we
are
doing
there.
If
rotation
is
the
cleanest
thing
that
we
can
do
or
if
that
one
here
is
better
for
some
reason:
I
don't.
A
Since
there
is
another
issue
where
we're
discussing
cluster
class
support
for
Machine
Tools,
okay
is
that
is
that,
okay
for
you
Andreas,
if
you
continue
offline
on
the
issue.
E
A
The linked, sorry, the linked issue here is on CAPA: do we want to continue there, or, since it is a CAPI problem, do we open a new issue in CAPI and start a fresh discussion there?
A
Okay,
that's
I
agree.
Also,
Cecile
is
a
place
plus
one
in
in
chat.
So
if
you
can,
if
you
kindly
can
open
the
issue
in
copy
repo
and
then
I
don't
know
of
right
in
the
in
the
slack
thread,
the
linking
people
that
discuss
participated
to
the
discussion,
we
can
restart
their
async.
A
That's
great!
Thank
you.
Thank
you
very
much
for
releasing
the
moving
on
I
have
a
couple
of
PSA
from
other
meetings,
so
first
set
of
PSA
from
the
C
cluster
recycle
meeting
yesterday,
so
kubernetes
play
is
planning
to
cut
a
new
API
version
of
the
API
of
the
config.
Api
dates
are
not
defined,
but
they
have
a
an
umbrella
issue
with
a
list
of
possible
change
for
this
release.
A
Okay
I
have
asked
that
the
government
team
to
update
this
list,
adding
something
that'll
allow
us
to
understand
which
kind
of
change
our
addition,
so
basically
no
breaking
or
which
kind
of
change
are
breaking
I
know
that
some
of
them
are
breaking,
for
instance,
is
one
handling
esta
ARG,
which
are
allowed
multiple
times.
We
basically
change
the
type
of
the
current
estera
arcs.
This
is
a
breaking
change,
so
my
ask
for
this:
Community
is:
please
check
the
list
of
proposal,
changes
and
comment
there.
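To make the extraArgs example concrete, this is roughly the kind of shape change being referred to; the exact field layout in the new kubeadm config API version may differ, so treat this as an illustrative assumption:

```yaml
# Current kubeadm config API: extraArgs is a map, so a flag can appear only once
apiServer:
  extraArgs:
    enable-admission-plugins: NodeRestriction
---
# Proposed shape: a list of name/value pairs, which would also allow repeating a flag
apiServer:
  extraArgs:
    - name: enable-admission-plugins
      value: NodeRestriction
    - name: audit-log-path
      value: /var/log/audit.log
```

Changing a field from a map to a list like this breaks existing consumers of the type, which is why it matters for Cluster API's own stable API surface.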
A
Second, if you find that there is something that can impact Cluster API or providers, etc., something that is relevant for this group, please report back in this meeting, so we can discuss solutions beforehand and give a common answer to the kubeadm team. Comments, questions on this topic?
D
Stefan here, just one quick comment: if there are breaking changes in there, we have to figure out how we make it work with our stable API, and how we hold back or deal with breaking changes until we can do our next breaking change with v1beta2, so that...
B
A
Okay, moving on. There was an email on kubernetes-dev about migrating CI jobs to the community cluster. This is part of the effort for reducing cost on Google infrastructure, and it is a follow-up of the work that we did on registry.k8s.io.
A
Still,
some
limitation,
this
doesn't
work
for
jobs
which
are
creating
a
standard
resources,
not
relying
on
gcp
secret.
A
There are some PRs already opened by the test-infra team, but considering everything, my personal proposal is the following: to bank on the recent efforts for chasing flakes and keep things stable during the last part of the 1.5 release, and then task the next CI team with this job and start doing it from 1.5.0 on, in the 1.5.0 time frame. Comments, opinions?
D
Yeah
I
think
it
makes
sense
because
with
150,
basically
almost
freezing
on
the
version
of
the
code
on
one
five
Branch.
So
basically
you
can
clearly
say
it
was
stable
on
our
cluster
and
then,
if
it
becomes
unstable,
then
it's
unstable
because
of
the
new
cluster
or
stuff,
and
not
because
a
lot
changed
in
that
branch.
So
the
risk
and
release
makes
sense
and
also
being
able
to
pinpoint
a
bit
better
if
it's
like
in
our
code
or
in
the
infrastructure
when
you
move
on
afterwards.
A
Thank
you.
What
do
you
think
if
I
open
an
issue
in
copy
for
tracking
these,
and
maybe
also
some
providers,
so
they
can
start
planning
for
the
same
as
well,
I
see,
plus
one
for
Stefan.
B
A
This is not good for the health of the community, and so there are discussions ongoing about how to handle this, whether to prune, basically to remove org membership for those folks that are not active. But yeah, in the short term the ask is for every one of us that can to sponsor new members.
A
H
Hi, this is Chris. I have a little concern here: I feel like Kubernetes is trying to crack down on folks, but at the same time we have a number of SIGs, like in CAPI, that are very tied to a company, and it can really hurt that company's ability to get things done if they can't bring people in to work on the SIGs related to that company's prerogative. So there needs to be some sort of consideration
H
That's
not
there
today
both
that
it
incorporates
this,
but
also
makes
it
easier
for
companies
to
get
people
in
to
cigs
that
are
related
to
that
company's
business
and
and
I
think
we're
just
missing
it
entirely.
I,
don't
think
what
we
have
today
works
I,
don't
think
what
this
proposes
would
necessarily
fix
that
at
all.
H
But
that's
where,
on
my
mind's
been
a
lot
in
this
regard
lately-
and
maybe
some
of
that
is
still
gonna
happen
with
people
coming
in
even
for
just
a
single
company
and
then
abandoning
because
they
are
a
contractor
or
something,
but
some
flexibility
to
get
people
in
to
do
work.
That
needs
to
get
done
that
the
only
people
who
care
about
it
is
that
company
that
Pro
sponsors
that
sub-project
and
then
let's
get
them
back
out
more
easily
than
using
the
org
membership.
H
A
I
A
F
Yeah, but I think you're right that this won't necessarily reach the right people, so I'll just comment on the Slack thread. Thanks for the PSA.
A
Yeah
just
quick
comment
on
on
Chris
feedback:
I,
think
that
is
fair,
but
as
far
as
I
know,
getting
people
in
should
not
be
driven
by
company
and
affiliation,
but
should
be
driven
by
your
tracker
records.
This
is
what
at
least
we
tried
to
do.
A
I
think
that
the
issue
here
is
not
getting
people
in
the
issue
is
that
there
is
no
not
yet
a
clear
content
when
where
people
goes
out
but
yeah,
let
me
say
interesting.
Discussion
I
think
that
as
a
contributor
as
a
member
of
this
Arc,
we
all
have
a
valuable
opinion
going.
Kubernetes
contributor
Channel
participate
to
the
discussion.
J
Yeah
sure
I
posted
this
issue
a
little
while
back
and
Jack
responded.
The
the
reason
why
I
brought
it
up
is
just
because
I
think
it
has
well.
It
does
have
implications
upon
the
manage
cluster
and
people
putting
this
with
Git
Ops
in
code.
So
if
you've
got
a
copy
definition,
you
know
that
is
all
in
code
and
the
management
cluster
could
be
ephemeral
or
it
could
die
and
come
back.
J
That
obviously
has
a
lot
of
implications
and
so
I
think
it's
important
for
people
to
know
if
you
know
what
they
created
in
the
infrastructure
as
code
yaml
is
truly
item.
Potent
or
not,
and
it
sounds
like
really
depends
on
if
there's
a
mutating
web
hook
and
I,
think
that
would
be
really
good
for
for
people
to
know
because
yeah
be
nice
to
not
necessarily
have
to
run
a
management
cluster
if
you
didn't
have
to
or
want
to
or
have
a
recovery
mechanism
from
just
what
you
store
in
source
code.
J
E
May I ask, I'm totally out of context, but we have similar tooling in my company, also for some Docker build tooling previously. I have the same experience: whenever something works online, let's say by connecting to the cluster, and the versions differ, then the output would also be different, or it could lead to an error if you cannot connect. So maybe an offline mode for generating YAML templates, if that's somewhat feasible, because then you should get the same output every time.
J
Well,
generating
the
yaml
templates
right,
it's
just
from
the
cluster
CTL,
so
you
you
really
have
no
I
I
mean
today
you
have
no
idea
if
that's
gonna,
last
or
not
right,
so
it's
not
about
I
mean
it's
I
mean
the
ideal
scenario
in
my
opinion,
would
be
I.
Think
as
Jack
is
trying
to
propose
there
of,
if
there's
a
way
to
detect
if
any
kind
of
mutating
web
focus
is
going
to
take
effect
with
the
definition,
that's
there
and
then
at
least
notify
people.
J
If,
if
that's
the
case
at
a
minimum,
and
then
that
way,
you
can,
you
can
kind
of
pre-validate
before
I
deploy
a
cluster
I
want
to
know.
If
this
is
going
to
be
item
potent
or
not
to
to
be,
you
know
to
make
that
decision
for
the
for
the
future.
I
A
Go ahead. Yeah, if I think about this, basically it seems to me that the issue is that when an object gets created, we have defaulting and validating webhooks, so we add field information, and then we have controllers that maybe add other fields in the spec and so on. But honestly, if I think about this, it is more or less the same as what happens for other Kubernetes objects. So I'm not an expert in GitOps tools, but I really would like to understand what the problem is.
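For illustration, a minimal sketch of the gap between what is stored in Git and what the API server ends up storing once defaulting webhooks and controllers have run; the specific added fields below are illustrative assumptions, not an exact list of what Cluster API mutates:

```yaml
# What the user stores in Git
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: my-cluster
spec:
  clusterNetwork:
    pods:
      cidrBlocks: ["192.168.0.0/16"]
---
# Roughly what ends up in the API server after webhooks and controllers run
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: my-cluster
  finalizers:
    - cluster.cluster.x-k8s.io        # added by the controller
spec:
  clusterNetwork:
    pods:
      cidrBlocks: ["192.168.0.0/16"]
  controlPlaneEndpoint:               # populated from the infra provider
    host: 203.0.113.10
    port: 6443
```

The GitOps question in this discussion is essentially whether re-applying the top document onto a fresh management cluster reliably reproduces the bottom state, and whether mutating webhooks make that answer unpredictable.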
I
J
Yeah, let's just go with the fundamental question: if I have my secrets all the same for initializing the management cluster, and I have all of my cluster definitions right in YAML in source code, I deploy those clusters from those YAMLs onto the management cluster, and then the management cluster goes away for whatever reason, I delete it.
J
A
B
A
Have you tried it, and do you have evidence that this doesn't work, and what doesn't work? Because personally I haven't tried it. I know that we, in Cluster API, can do better for GitOps, but the problem is that until someone commits to doing the research work for making this happen, it is not going to happen. So this is... I'm not saying this is not a problem, but in order to make it happen we need actionable items, and so we need people that research what doesn't work, come back with issues, and then we work on the proposals, right? So.
D
Okay,
what
just
confused
me?
The
issue
was
basically
that
what
you
were
posting,
what
didn't
work.
The
second
time
was
basically,
you
couldn't
deploy.
I
think
a
few
Azure
objects,
because
the
web
Hook
was
down
and
immediately
will
never
work
as
long
as
we
have
a
validating
mutating
the
book.
If
nothing
is
answering
on
that,
webhook
no
chance
to
apply
that
object,
and
that
was
like
what
confused
me,
the
most
about
I
mean
is.
D
J
A
Yeah, okay, David, I think that this is important. I really think that if we can make CAPI more GitOps friendly, it will be a win for the project. If you have evidence, bring it up, and we will try to find a way to make this happen. Next up, Leonard.
A
G
Hello, I think I started this on Slack about a week ago or so; I have since also created an issue for it. Basically, we see a need for a kind of provisioning strategy to limit the number of machines that are created or in provisioning state simultaneously. So basically, when you create a big cluster, then not all of the machines should be created at once, like we want to...
G
I
think
it
was
Killian
that
suggested
that
we
we
look
at
this
concurrent
machine
deployment
upgrade
thing
also,
and
that
looks
good
to
me.
I
just
wanted
to
to
bring
it
up
here
also
and
see
what
people
think
is
this
a
good
way
forward.
I
have
started
prototyping
a
bit
trying
to
add
basically
an
annotation
for
the
machine
set,
and
then
it
could
also
be
set
on
the
machine
deployment
and
propagated
to
the
machine
set.
G
A
I think I will not call it a provisioning strategy; I will call it a scale-up or scale-down strategy, because probably we need both, and this is basically about how a single MachineDeployment goes up or down. Today it goes up or down all at once; if you want to do a more gentle ramp, you have to do it by external automation.
A
They
they
issue
that
kylian
pointed
out
is
slightly
different,
and
this
is
something
that
we
experimented
in
in
cluster
class.
Basically,
when
you
have
many
machine
deployments-
and
you
are
upgrading
the
cluster,
maybe
you
want
them
to
upgrade
in
parallel
or
with
certain
order
of
stuff
like
that,
so
it's
slightly
different,
even
if
it
is
again,
the
underlying
issue
is
again
to
how
much
we
want
to
be
gentle
or
aggressive
on
the
underlying
infrastructure.
So.
B
A
My own take is that the problem that you are proposing basically belongs to a single MachineDeployment and should be solved with a clean API, because we already have an object that is responsible for this, that clearly owns the set of machines, so we can make this a first-class field. What we are experimenting with in ClusterClass is different, because ClusterClass was one of the first cases where we are trying to make a group of machine deployments behave somehow concurrently, but we don't have anything that really owns this.
A
G
Right, so you're basically saying skip the annotation and make a field directly in the MachineDeployment itself.
A
G
Yeah, that sounds good, and your point is completely valid; I see the point, it's very good. Any other thoughts?
D
So
so
today
would
be
best
to
have
something
similar
to
the
upgrade
strategy
we
already
have
for
machine
deployment.
Of
course,
what
we
configured
there
is
I
know
it's
slightly
different,
but
in
a
similar
Style
Just
for
for
the
scale
up
use
case
instead
of
the
upgrade
use
case,
it
might
be
interesting
how
those
overlap.
Basically,
if
you
have
an
upgrade
strategy,
we're
saying
oh
I
can
burst
like
to
five
upgrades
at
the
same
time,
and
then
you
have
your
scale
up
strategy
that
says,
or
you
can
only
do
like
two
scale.
D
Ups,
at
the
same
time,
how
should
they
work
together
or
not?
Is
it
like
just
different
each
just
care
support
itself
or
is
one
red
limiting
the
other
as
well?
That
could
be
interesting
and
then
the
other
questions.
It's
basically
is
it
enough
to
solve
the
numbers,
or
are
you
also
worrying
about
the
use
case
for
people
who
are
only
using
machine
sets
I,
don't
know
just
an
open
question.
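To make the shape of this discussion concrete, here is a rough sketch of what such a field could look like next to the existing MachineDeployment rollout strategy; the scale-up part is purely hypothetical, it is not an existing Cluster API field, and the final proposal may look completely different:

```yaml
apiVersion: cluster.x-k8s.io/v1beta1
kind: MachineDeployment
metadata:
  name: workers
spec:
  replicas: 50
  strategy:
    type: RollingUpdate
    rollingUpdate:          # existing upgrade/rollout knobs
      maxSurge: 5
      maxUnavailable: 0
  # hypothetical addition discussed here: limit concurrent provisioning
  scaleStrategy:
    maxConcurrentProvisioning: 2   # at most 2 machines in "Provisioning" at once
```

The open questions raised above map directly onto this sketch: whether the rollingUpdate burst and the provisioning limit should rate-limit each other, and whether an equivalent knob is also needed for standalone MachineSets.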
G
A
Thank you. Yeah, I think those are the kind of discussions that we have to have while we try to sketch the API and see how this scale-up thing works, how it interacts with rollout, etc., etc. I think that for sketching an API we can use a Google doc or a HackMD, but we need something where we can have fast and quick iteration; it is basically a short proposal.
A
G
I can try creating something like that. I will need to play a bit with it, and then I'll try to do it. Thank you very much for the input. Thank you.
A
You
I
think
this
is
an
interesting,
really
interesting
one.
If
there
are
no
more
comments
on
this
one,
we
can.
We
have
four
minutes
left
and
we
can
move
to
the
last
Topic
in
agenda
for
today,
which
is
cast
API
operator
from
mic.
A
I
So yeah, I posted a PR this week that adds up-to-date documentation instead of the old proposal, so yeah, if you have the opportunity, please take a look. Also, at the next meeting, I think next week, we want to show a demo to the Cluster API community, and there we want to demonstrate how easy it is to manage the providers' lifecycle with the operator in a declarative, GitOps way. Yeah.