From YouTube: SIG Cluster Lifecycle - Cluster API 22-08-24
A
Hello everyone, today is Wednesday, August 24th, 2022. This is the Cluster API office hours meeting. Just as a reminder, we do have a meeting etiquette: if you would like to speak up or say something, please use the raise-hand feature; you can find it under Reactions in Zoom. And before we start, we would like to welcome new attendees. Does anybody want to introduce themselves?
B
Hey, hello, I'm Florian, I'm from AWS, and yeah.
A
All right, let's move on. Are there any proposals that need review that we should talk about?
B
Hi, so the managed Kubernetes in CAPI proposal: after the long discussion last week, we got a lot more reviews, and I'm just wondering what the next step is.
A
So today is August 24th. Let's give it, I guess, two more days, so until Friday, and...
A
Perfect. Any other proposals to discuss today?
D
If you want to implement this in your provider, I've given a link to the kubemark PR that implemented it; if you need another reference, that would be another place to look. I also wrote a small blog piece about the process we took to get there and one of the interesting things we learned about client-go and informer caching along the way. So I just wanted to share that with everybody, and if anybody has questions, feel free to reach out, or open an issue, or whatever. So, thanks.
A
All right, Jack, you have the next one.
C
Okay, can everybody hear me? Okay, I'm not used to not using AirPods. I see Mike nodding his head. Cool, so, real quickly: I opened up a PR last week in response to doing some work in the CAPZ managed cluster space around the external autoscaler, and Cecile pointed out that another contributor also opened up a comparable PR. That's Dane Thorsen; I'm not sure if Dane is here, but I really wanted to just sort of field the room for any...
C
If anyone has any preferences about how we might do this in CAPI from a machine pool state perspective: the PR I opened is a sort of prototype that suggests we could set an annotation to identify that the machine pool is under the external management of an autoscaler. So what this really has to do with is who is authoritative for the replica count.
C
So if you're in a managed cluster scenario and you're using that managed cluster's autoscaling feature, then Cluster API needs to sort of do the non-intuitive thing of being passive...
C
...with respect to the authoritative state of the replica count, and defer that to the managed cluster provider. In CAPZ we have a solution that sort of gets us halfway there, just in the CAPZ space, which could work for CAPA or anything else, but CAPI is still sort of wanting a kind of authoritative solution. So hopefully that background makes a little bit of sense. Dane's solution is to add a first-class property to the machine pool, so sort of evolve...
C
Dane's property is going to be a much more sort of integral solution, but it's also less flexible. So, anyone have any thoughts?
A
I forget if I actually commented on this, but I was in favor of an annotation instead of a field, because this field seems like something that's kind of dictating the API between controllers, and that's exactly what annotations are for. Also, one question I had: is this annotation going to be set by another controller, or do we expect it to be set by users?
A
Yeah, and if it is the former, I think an annotation is a little bit more common for having kind of contracts between controllers: annotating an object to say, hey, this is managed externally, and I know what I'm doing if I use it, so the controller goes out of the way. If it's in the spec field, I would understand if the user's like, I don't want to...
A
Maybe I don't know what I'm setting, or I know what I'm saying and this is right here. Spec fields are usually, I guess, managed by users, more so than annotations, which are more internal; that's kind of the feeling around it, at least. So personally, I would vote for the annotation approach. I think I did say something, probably on the other PR, but I'm also in favor of hearing other opinions.
D
Yeah, I just want to say I think the annotation is a fine way to start this off. I would expect it to start as an annotation, because we're using annotations in very similar ways to denote exactly the kind of things Vince was talking about: these kind of not-API, but weak relationships between different controllers. So I think starting it off as an annotation is pretty good. My question would be, at some point...
D
If we determine through usage that users just want to set this on the spec when they create the machine pool, then we might want to revisit this later and say, okay, maybe we're going to promote this to a field in the spec. But if we don't know ahead of time that users might want to be doing that, then an annotation is probably the simplest way to approach it, I would think.
C
Okay, cool. Well, clearly not everyone is in this CAPI office hour, so I'll take as my next step to reach out to Dane and other folks on that original PR thread and see if we can get consensus from those folks as well, and also Winnie.
A
Awesome. Matt, you have the other topic, for Ginkgo.
E
Yes, I do, if I can share successfully. Yeah, hey everybody. This is another public service announcement, and hopefully it isn't surprising to anyone, because we've talked about it here a couple of times, but we followed through on our schedule and went ahead and merged this change. As we talked about before, you can't generally import Ginkgo version one and Ginkgo version two at the same time; they don't play nice with each other.
E
So at some point, when you need to import CAPI head or the next release of CAPI, you'll have to catch up with this change in your tests, if you're using the framework stuff.
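For providers catching up with this, the change is essentially the module path bump that Ginkgo v2 documents; this fragment is an illustration of that documented import change, not code from the CAPI PR itself:

```go
// Before (Ginkgo v1):
import (
	. "github.com/onsi/ginkgo"
	. "github.com/onsi/gomega"
)

// After (Ginkgo v2) -- only the Ginkgo path changes; Gomega stays the same:
import (
	. "github.com/onsi/ginkgo/v2"
	. "github.com/onsi/gomega"
)
```

Because the two major versions have different module paths, a single test binary can end up importing both transitively, which is why downstream consumers have to move at the same time as the framework they import.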
E
Yeah, I mean, you can look at the PR; it changes imports everywhere, which, as we know, can maybe trip us up on a cherry-pick PR, but it's not really that substantial otherwise.
C
Cool. I think it's worth just an FYI to folks that this might be something you have to work around. So, yeah, you might have to give out your personal cell phone number, Matt, for...
E
I mean, we don't need to go that far, but I'm definitely willing to help if people have problems, and at the very top of the PR we had some brief suggestions about how to deal with the changes.
E
Basically, that's the blurb there. And then, as I put in the doc, controller-runtime was trying to do this too, and we kind of tried to stay in sync with them on these packages, but we realized that wasn't strictly necessary, so we went ahead with this in CAPI. On the other hand, the effort to upgrade to Ginkgo v2 in controller-runtime finally got resurrected, and it looks like it might land pretty soon. So that'll be nice, that we're all on Ginkgo v2 in the ecosystem at some point soon.
A
Just one clarification: I don't think we're backporting this, right? I think we're just going to keep it all on one.
A
Cool. Well, thanks for doing this; I think this was long overdue. I'll review the controller-runtime PR later this afternoon as well, so maybe we can get it...
A
...in soon as well. Cool, cool, awesome. Provider updates: Joe, you're first, and then somebody from CAPZ.
F
Nothing huge on our part; we're just kind of chugging along, making progress. We ran into a bug where we allow our tenancy owners to create auto-tags for all their resources, you know, such as who created things and when and whatever, and one of the bugs we ran into was attempting to update resources based on those tags: when they were auto-created, our cluster didn't know about it and would start to fail.
E
We talked for a while and finally decided to move our office hours: instead of bi-weekly at eight Pacific, we moved them an hour later, and we're doing them every week. So tomorrow at nine Pacific, please come join us. And then the 1.4.1 release is mostly bug fixes, but it's an important set of bug fixes; it actually went out 12 days ago, but I don't think we announced it here, so now we have.
A
Thanks so much, folks. We're at the end of our agenda for today. Are there any last-minute topics that we want to discuss?