SIG Cluster Lifecycle - Cluster API Provider AWS Office Hours - 20210125
A: All right, this is the Kubernetes SIG Cluster Lifecycle meeting for Cluster API Provider AWS on January 25th, 2021. Please be aware we abide by the CNCF Code of Conduct, which generally is: be excellent to each other. First of all, PSAs: we've got version v0.6.4 released for the unmanaged side, the actual AWS controller side. The latest feature is rate limiting, so if you are running loads of stuff, there should be some more fine-grained rate limiting. We took the EC2 API docs and sort of made rate limiters based on those. Do you want to say anything about changes for EKS, Richard?
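[Editor's note: a minimal sketch of the kind of per-operation, client-side rate limiting described above, using golang.org/x/time/rate. The operation names are real EC2 calls, but the rates and bursts are illustrative placeholders, not the provider's actual configuration or the documented EC2 limits.]

```go
package main

import (
	"context"
	"fmt"
	"time"

	"golang.org/x/time/rate"
)

// One token-bucket limiter per EC2 operation. The numbers below are made up
// for the example; the real values would be derived from the EC2 API docs.
var limiters = map[string]*rate.Limiter{
	"DescribeInstances": rate.NewLimiter(rate.Every(200*time.Millisecond), 5), // ~5 req/s, burst 5
	"RunInstances":      rate.NewLimiter(rate.Every(time.Second), 2),          // ~1 req/s, burst 2
}

// callEC2 blocks until the operation's limiter grants a token, then runs it.
func callEC2(ctx context.Context, op string, do func() error) error {
	if l, ok := limiters[op]; ok {
		if err := l.Wait(ctx); err != nil {
			return err
		}
	}
	return do()
}

func main() {
	ctx := context.Background()
	for i := 0; i < 3; i++ {
		n := i
		_ = callEC2(ctx, "DescribeInstances", func() error {
			fmt.Printf("DescribeInstances %d at %s\n", n, time.Now().Format(time.StampMilli))
			return nil
		})
	}
}
```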
B: I guess the big thing is that the e2e tests are now in for EKS, so we've got CI signal, so you should be fairly confident that you're good to go with EKS clusters.
A: Just on the agenda, I put a few little things down. First, v1alpha4 progress, so I don't repeat myself. There's the Cluster API load balancer work: there's going to be a new load balancer construct in Cluster API, primarily to support things like data center environments and bare metal. The interesting case for AWS is VMC on AWS, or whatever the VMware equivalent is for Azure, where you're running vSphere clusters on a public cloud, but then you actually want to use the load balancer construct from that cloud in front of the API servers hosting that Kubernetes cluster in the vSphere service, or whatever it happens to be. This is pretty similar to something that Red Hat raised around external clusters.

Like, you want to provision an AWS machine, an AWS construct, from outside of an AWSCluster: how do you know where to provision that machine or that load balancer? We've not figured that out. I will put a link to the issue, which is, well, this doc that's been written. It links to a doc from Red Hat that describes what they want. I'm not sure what they're suggesting is the right solution; I think it works for what they're running it for, but not for what we're running it for. That's something we need to figure out, and it will become even more intriguing once we have the account multi-tenancy.

The other thing I wanted to mention is a CRD for AMIs. I think this relates to something that Dane had raised a while ago, around: if a new AMI comes out for the same Kubernetes version, you don't have a constant identifier. We had talked about resolving that AMI within the MachineDeployment, but there's also another use case where you want to use EBS encrypted volumes, but you also want to use some publicly published AMIs. So the thinking around this is that we have a separate CRD for AMIs that gets resolved and maybe automatically gets copied into your account, and then you can reference that in your MachineDeployment. I haven't done much more than that, but I think those are the additional things that I'm probably going to add to the v1alpha4 doc.
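[Editor's note: a hypothetical sketch of what such an AMI CRD could look like, written as a kubebuilder-style Go type since that is how the provider defines its resources. None of these names exist in the provider; they only illustrate the shape of the idea.]

```go
package v1alpha4

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// AWSAMISpec selects a published AMI for a Kubernetes version and optionally
// asks for it to be copied into the workload account, e.g. so it can be used
// with EBS encryption under a customer-managed key.
type AWSAMISpec struct {
	// KubernetesVersion the image must match, e.g. "v1.20.2".
	KubernetesVersion string `json:"kubernetesVersion"`
	// CopyToAccount, if true, copies the resolved AMI into the account.
	CopyToAccount bool `json:"copyToAccount,omitempty"`
}

// AWSAMIStatus records the resolved image, which a MachineDeployment's
// infrastructure template could then reference by name instead of a raw ID
// that changes every time a new AMI is published for the same version.
type AWSAMIStatus struct {
	// ResolvedImageID is the concrete AMI ID, e.g. "ami-0abc...".
	ResolvedImageID string `json:"resolvedImageID,omitempty"`
}

// AWSAMI is the hypothetical resource tying the two together.
type AWSAMI struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`
	Spec              AWSAMISpec   `json:"spec,omitempty"`
	Status            AWSAMIStatus `json:"status,omitempty"`
}
```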
C: Sorry, I'm just trying to think about how that would interact with the upgrade flow, or like the change of the version field, and things like that. Yeah, well...
A: Yeah, I'll take that away. Next topic is release notes.
B: My brain's engaged now, so this is just about: we've sort of talked about adding the release-note scope to PRs, so that when we generate the releases we can pull that information out. I've just been playing around with that and the Prow release-note plugin, and we're gonna carry on playing with that, really. Just in case anyone's interested in also looking at it.
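[Editor's note: for anyone who wants to follow along, the Prow release-note plugin and the Kubernetes release-notes tooling key off a fenced block in the PR description; the note text below is just an example, and PRs with no user-facing change use `NONE` instead.]

```release-note
Added more fine-grained rate limiting for EC2 API calls
```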
A: Sure. Fabrizio, as you're on the call, do you know anything about, do you know why we didn't go for the release-notes tool for the Cluster API release?

No, I'm not aware of that, I'm afraid.
A: No probs, yeah, I'm happy to experiment with that. Has anyone else got any comments? Then we'll move on to the next one: e2e for EKS. Richard?
B: Yep. Actually, on that last point as well, I'm actually using CAPI to generate the EKS cluster that I'm running Prow in for that, so it's actually quite a good test for the provider as well. Yeah, so the e2e tests: basically we had a problem running them. They were taking too long and they were really, really flaky.
So I've got a PR in to split them into two separate jobs. One is like a general happy-path use case that will just flow through creating a cluster, adding machine pools, scaling it and stuff like that, just to test as much functionality as quickly as possible. The second one will be a fuller test; it includes things like upgrades to the control plane and the machine pools. So yeah, that's failing at the moment for some reason, but I'll get to that.
And it changes: sometimes it runs out of Elastic IPs, so it's not cleaning them up quickly enough, which is strange, because we actually put a check in there to make sure, it was my delete test. And then there's another one I just never got to the bottom of: it just can't communicate with the control plane on the second cluster, the second run, which is a bit strange.
A: All right, yeah, okay. The preference would be to eliminate the flakes, but...
B: Yeah, yeah. Well, it's more that it was blocking the IRSA stuff that Marcus was working on: he'd run the test and it would fail like two hours later. So that's why we split it into the happy path and the fuller tests, to unblock people while we sort the problems out.
A: Yeah, okay. I mean, the other option, and I know we also need to refactor it in the unmanaged cluster because we haven't used Ginkgo properly in the controller tests, is to have tests with the calls to AWS mocked out, so that for some PRs where there's sufficient unit testing or controller testing we don't need to run a full PR job, and then it's only things that are particularly high risk where we need to run the full suite.
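[Editor's note: a minimal sketch of the mocked-calls idea, assuming a hand-written fake of the aws-sdk-go ec2iface.EC2API interface and Ginkgo/Gomega; it is not the provider's actual test code, just the pattern.]

```go
package controllers_test

import (
	"testing"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/ec2"
	"github.com/aws/aws-sdk-go/service/ec2/ec2iface"
	. "github.com/onsi/ginkgo"
	. "github.com/onsi/gomega"
)

// fakeEC2 embeds the SDK interface, so only the call we stub is implemented;
// any other call would panic, which keeps a focused controller test honest.
type fakeEC2 struct {
	ec2iface.EC2API
	instances []*ec2.Instance
}

func (f *fakeEC2) DescribeInstances(in *ec2.DescribeInstancesInput) (*ec2.DescribeInstancesOutput, error) {
	return &ec2.DescribeInstancesOutput{
		Reservations: []*ec2.Reservation{{Instances: f.instances}},
	}, nil
}

func TestControllers(t *testing.T) {
	RegisterFailHandler(Fail)
	RunSpecs(t, "Controller Suite")
}

var _ = Describe("instance reconciliation", func() {
	It("sees the running instance without touching real AWS", func() {
		client := &fakeEC2{instances: []*ec2.Instance{{
			InstanceId: aws.String("i-0123456789abcdef0"),
			State:      &ec2.InstanceState{Name: aws.String(ec2.InstanceStateNameRunning)},
		}}}

		out, err := client.DescribeInstances(&ec2.DescribeInstancesInput{})
		Expect(err).NotTo(HaveOccurred())
		Expect(out.Reservations[0].Instances).To(HaveLen(1))
	})
})
```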
A: Yeah, no, it's a very small subset. golangci-lint has like 40 linters, of which golint is one and staticcheck is another. There are some we probably need to revisit: we've disabled some because we disagree with them, and we've disabled some just because the code has never passed them.