SIG Cluster Lifecycle - Cluster API Provider AWS Office Hours - 20210111
A: Oh, this is the Monday 11th of January Kubernetes SIG Cluster Lifecycle and Cluster API Provider AWS meeting. Please be aware we abide by the CNCF code of conduct: be excellent to each other. We are also using the hand signals, so if you can, pull up Participants and use the raise-hand feature if you want to make a…
A: I'll throw the link to the doc one more time in the chat; you can add your name there as attending, and we'll get started. So I guess, on to PSAs: happy new year to everyone; it's good to be in 2021. It's not got off to the greatest of starts, I suppose, but hopefully things can improve, and hopefully it won't be as bad as 2020. If you've got stuff to add to the agenda, please do; I've added just two things.
A: So, first of all, I don't know if we want to create a new doc; Cluster API's got a new doc for 2021. We don't need to, but one reason we might need one is that this document is owned by Ryan, and we went through a G Suite migration which broke all the links, and the recommendation is not to use corporate accounts for docs. So if everyone's happy, I will create a new doc from my personal Gmail, share that, and then update. That will mean I'll have to update the community repo, and then hopefully a new calendar invite should go out with the new document link. So unless anyone's got any objections, I'm going to do that this week; basically, just make one that isn't at risk of having its URL changed.

A: I'll take silence as consent on that one.
A: Okay, the next one is me, on v1alpha4. We had two meetings last year about it, I believe, and then I've kind of gone silent; we had a bit of a reorg, and I've been doing other things, so I haven't done a lot of work on it. In terms of the API design, there's an upcoming proposal for UX in Cluster API core which is going to address things like cluster templates.
A: Just to remind people, as it's been some time: the idea for v1alpha4 is potentially deconstructing the AWSCluster object into its separate VPC components, but then you have a lot more boilerplate YAML, which makes it difficult for users to get started. So there is going to be a proposal on core that will address that; hopefully the v1alpha4 design of CAPA can rely on that feature. The other bit of it is the load balancer proposal that's going forward, which is another thing inside Cluster API core, and we will have load balancer constructs in CAPA, for example to be that load balancer; so there's some interaction with stuff in core that needs to be handled.
A: We are now looking at the middle of the year for v1alpha4, because there are going to be quite a few changes to Cluster API core, and we're going to have to spend some time dealing with that. And then, finally, my last question is on AMIs.
A: Now, there are two issues here. One is: how do you rotate AMIs other than at a Kubernetes release? Say you have a CVE in the Linux kernel and you want to replace that AMI: there isn't an easy mechanism today to roll out the new AMI. And if you're using the default publishing mechanism, there's no way to guarantee which AMI your EC2 instances launch with. This should affect all of us, so I'd be interested to see what other people think.
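As a side note on why the default lookup can't be pinned, here is a minimal sketch, not the actual CAPA code; all AMI names, IDs, and dates below are invented. It contrasts resolving an AMI by name filter with pinning an explicit ID:

```python
# Hypothetical sample data: two builds carrying the same AMI name,
# the second rebuilt after a kernel CVE. Everything here is invented.
AMIS = [
    {"id": "ami-aaa111", "name": "capa-ami-ubuntu-20.04-1.19.6", "created": "2020-12-01"},
    {"id": "ami-bbb222", "name": "capa-ami-ubuntu-20.04-1.19.6", "created": "2021-01-08"},
]

def resolve_by_name(amis, name):
    """Filter-style lookup: always resolves to the newest matching image,
    so a rebuild silently changes what new EC2 instances launch with."""
    matching = sorted((a for a in amis if a["name"] == name),
                      key=lambda a: a["created"])
    return matching[-1]["id"]

def resolve_pinned(amis, ami_id):
    """Explicit pin: reproducible, but rotating requires a deliberate rollout."""
    return next(a["id"] for a in amis if a["id"] == ami_id)
```

Because the filter-style lookup always picks the newest match, republishing an AMI after a CVE changes what new instances get, with no explicit rollout step in between; pinning gives reproducibility but needs its own rotation mechanism.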
C: Yeah, I think it sounds reasonable to me. It's not something I'd overly considered, to be honest, but yeah, it sounds good. I like the explicit nature of it as well, whether it's a new CRD or some other mechanism.
A: Okay, I'll mull away at that; I'll pull that together in two weeks, so we'll have a joint load balancer proposal for core and the CAPA v1alpha4. I don't know what's happening around the UX, so that bit will probably stay a gap for a while. Vince is working on that; I don't know who else is, but oh well, we'll see where we are. Now, all right, next item: the /exp folder.
C: Yeah, so this came up with a change that Mike did regarding removing a field from the launch template; I think he asked whether it was a breaking change. Is there any relaxing of the rules if a CRD is defined within the exp folder, or does it have to abide by the usual rules of an alpha API, where if you have a breaking change, you should increment the actual version number?
A: Yeah. Just to be clear, we're not using AWSMachinePool, so VMware doesn't have a specific concern around that; I had actually forgotten it was in exp before I asked that question. That was more of a general comment. It was more like: if you have end users who are using that, then it's up to you, basically.
C: I also had a quick question on the API design one as well, about the UX, because there was also a proposal to support another templating language, or templating mechanism, for the cluster flavors.
A: With ytt, concretely: we can't use ytt, because ytt is not a CNCF project, so it would be unreasonable to put a requirement on that, given that it's a VMware project. It's not great to make Cluster API rely on something that isn't a community-sponsored project; you should be able to plug ytt in, though. Maybe there's something else in the CNCF that could be used; I don't know, maybe around CDKs.
A: Yeah, maybe ping Fabrizio. I think it was intentionally designed so that you could plug something external in, but I've not played with it enough to know concrete details, but Fabrizio…
C: The next one was me, wasn't it? It's more just an update on the EKS end-to-end tests. They basically started to fail just before everyone went on holiday: they were passing, and then they started to fail. So I've made some changes to make the tests clean up better when you do a delete of the cluster.
C: I've added some changes that say: don't try to delete the EKS cluster until its dependencies, i.e. its node groups, have been deleted; so I've added some orchestration of the delete into it. Then I've literally just re-enabled the upgrade test after adding all of this clean-up stuff, but it takes a long time, and because of that I've written the test in a very specific way, to be additive: it starts off, performs an operation, then does a test on the same cluster. So I guess: I've written some very, very basic documentation, but do we think better, more involved documentation is needed? Or maybe a session on it, and why it's that way; I don't know.
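The deletion ordering described here can be sketched roughly as follows; this is an illustrative sketch only, and delete_in_order, delete_fn, and list_fn are invented names, not the real test helpers:

```python
def delete_in_order(cluster, node_groups, delete_fn, list_fn):
    """Delete dependents first, then the owning EKS cluster.

    delete_fn(name) issues a delete; list_fn() returns the node
    groups that still exist. The loop blocks the cluster delete
    until every node group is gone (a real test would poll with
    a sleep and a timeout rather than spin).
    """
    for ng in node_groups:
        delete_fn(ng)
    while list_fn():
        pass  # waiting for node-group deletion to finish
    delete_fn(cluster)
```

The point is simply that the cluster delete is gated on the dependents being gone, which is what stopped the clean-up from failing half-way.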
A: I don't know; I guess that's the same for all of the e2e tests, because we did a similar thing with the unmanaged one.
C: I personally think it's probably a good idea. I found it quite impenetrable initially, and time-consuming; so if I just think back to my experience, maybe.
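A minimal sketch of the "additive" layout described a moment ago, where each stage runs against the cluster the previous stage left behind; the harness and stage names here are invented for illustration, not the actual suite:

```python
def run_additive(steps, cluster):
    """Run e2e stages in order against one long-lived cluster.

    Each stage acts on the cluster the previous stage left behind
    and returns it, so the suite pays the slow create cost once
    instead of once per test.
    """
    for step in steps:
        cluster = step(cluster)
    return cluster
```

The trade-off, as noted, is that stages are order-dependent, which is exactly why a write-up of the layout helps new contributors.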
A: Okay, yeah, I'm happy to work with you on that. Why don't you start a HackMD, write in there, and then change that to a PR? Yeah, cool.
C: Sounds good. But I also noticed that a lot of the CAPI test framework assumes kubeadm, so there are instances where I've had to re-implement methods just temporarily, and raise issues upstream to make them less hard-coded to assume that, say, a kubeadm control plane is there. So we will have to remove certain bits in the future when that's fixed.
C: I put it into the controllers to make the tests pass. Also, it's a bit of a weird one: I've done it so it tests on the CRDs, because Mike raised the question about whether we could query the node groups directly, and I was like: the managed control plane relies on the managed machine pool, not on the fact that there's a node group behind it; that's too much information. So I just do it based on: is that CRD instance still available? Yeah, cool.
B: Yes, so our CAPI e2e tests are working now, but they take too much time, especially the cluster upgrade; but that's working also. I had to do some changes in AWSMachinePool, but these are non-breaking, so we should be okay. I'd appreciate it if someone could review those. Also, I'll create a periodic job for it, since it takes too long. I mean, how long is reasonable for a periodic job for the CAPA e2e tests? Like, should it be daily? I think daily might make sense. So I just wanted to check that.
A: So we currently run on changes to master, and daily, I think, which I think should be okay.
A: Oh wow, I wasn't expecting that. Yeah, okay, let's do on master and let's do daily; might as well do it on master too. And if we get lots of messages, or if other people complain, let's keep an eye on Boskos, I guess, to check we're not using too much capacity.
B: Okay. And also, I'm going to upgrade CAPI to version 0.3.12; I don't think there will be any breaking changes between them, and there are some fixes that the tests require. The other thing is: I will proceed to rebase the v1alpha4 work, as we spoke about before the new year, so that we can merge it to the master branch and start innovating on it.
A: Cool, that's all we had on the agenda. Oh, Richard?
C: Yeah, sorry, just related to the v1alpha4: I was helping a new contributor over the Christmas break. They went through the process of creating forks and pulling everything down, but they automatically just pulled down the latest CAPI, so that was on v1alpha4; they didn't use the 0.7 branch.
C: So then there were problems with the different API versions, and the conversion webhooks won't work using that combination. So it's probably just something to be aware of.
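One way to think about the mismatch described above is as an API-contract check; the branch names and mapping below are invented for illustration (check the compatibility table in the Cluster API book for real values):

```python
# Invented mapping: which Cluster API contract each checkout serves.
CONTRACTS = {
    "cluster-api@master":      "v1alpha4",
    "cluster-api@release-0.3": "v1alpha3",
    "capa@release-0.6":        "v1alpha3",
}

def compatible(core_ref, provider_ref):
    """A provider built against one API contract can't be mixed with a
    core serving another; conversion webhooks don't bridge that gap."""
    return CONTRACTS[core_ref] == CONTRACTS[provider_ref]
```

This is why pulling the latest CAPI alongside an older provider branch fails: the contracts differ, so no combination of webhooks will reconcile them.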
A: Yeah, good point. Okay, yeah, I don't know what to do about that, but I guess we just need docs; we need to, yeah.
C: I basically told him to pull down the version of CAPI that was still on v1alpha3. But yeah, we could put it in the docs, if you want. All right.
A: And t178: yes, I guess so. Is this for the AWS managed cluster?
A: Yeah, I think if you're using existing AWS infrastructure, we shouldn't be dependent on the gateway type; that's correct, I think. So if you've got a PR in mind, yeah, I would do that, sure. And t170: consuming existing security groups without requiring the cluster name tag, so that different clusters can consume the same security group. Okay.
D: Yeah, this is the use case: we have another team that manages security groups, so they will create a security group for all our accounts in advance. In our security model, multiple clusters could use the same security group; it's a flat network, so they can communicate with each other as a group. But in our AWS model we somehow have a one-to-one relationship that says each cluster needs to have its own security group.
D: So we want to see if we can make it more generic, to allow multiple clusters to use the same security group.

D: Thanks, then I'll try to implement it. Thanks.
A: All right, yeah. And by the way, once you do start a PR, if you just put /lifecycle active, or just say you're starting work on it, just so that we know; then somebody else doesn't start trying to also do the work without knowing you've already started. Great.
A: Right then, if there's nothing else, I'll see you in two…