A: Hello, good afternoon, good morning. This is the AWS Cluster API meeting, part of the Kubernetes Cluster Lifecycle SIG. Please be aware that we are operating under the CNCF code of conduct: be excellent to each other. If you want to raise a point, please use the raise-hand feature, and hopefully someone else will notice it for me, because I can't see the participant screen right now.

A: So, to get started: yeah, please add any agenda items if you've got them. I don't have any PSAs right now. I think we released version 0.6.0 the other week and we're working on 0.6.1. So I guess we'll go on to the group topic. Sedef, do you want to talk about the AMI build automation?
A: Bit more, yeah. So here you see a file; there are a lot of pieces that we need to decide what to automate and how to automate.

B: But I have a possible workflow that may work, where we have a single YAML file. Can I go up a little?

B: Okay, yeah, that's good. So here, this is my proposed format for the YAML file that would trigger the auto build and publish jobs. Here you see we have some default parameters we can use for each Kubernetes version, or we can specify whatever we want. For example, if there's an error in one of the AMIs, I think we need to wait until the next image-builder version, so with this YAML format we will be able to trigger builds on error as well.
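(Illustrative only: a rough Go sketch of how such a trigger file's schema could be modelled, with YAML tags. The field names and layout are assumptions based on the discussion above, not the actual format in the proposal document.)

```go
// Hypothetical schema for the proposed AMI build/publish trigger file.
// Every field name here is an illustrative assumption, not the real proposal.
package amibuild

// BuildDefaults holds parameters applied to every Kubernetes version
// unless a per-version entry overrides them.
type BuildDefaults struct {
	Regions             []string `yaml:"regions"`
	ImageBuilderVersion string   `yaml:"imageBuilderVersion"`
	Publish             bool     `yaml:"publish"`
}

// KubernetesBuild describes one Kubernetes version to build AMIs for.
type KubernetesBuild struct {
	Version             string   `yaml:"version"`                       // e.g. "v1.19.4"
	ImageBuilderVersion string   `yaml:"imageBuilderVersion,omitempty"` // override of the default
	RebuildOnError      bool     `yaml:"rebuildOnError,omitempty"`      // re-trigger a failed build
	Regions             []string `yaml:"regions,omitempty"`
}

// AMIBuildConfig is the top-level document the bot would watch for changes.
type AMIBuildConfig struct {
	Defaults BuildDefaults     `yaml:"defaults"`
	Builds   []KubernetesBuild `yaml:"builds"`
}
```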
B: The only thing I think we are not covering here is: if a new OS patch comes out, I don't know how to trigger a build using this file. That's the only part that is missing. I mean, feel free to add any comments or ideas in this document.

B: Okay, yeah. So here you see a possible workflow. We are going to have a CAPA bot, similar to the Kubernetes release bot, and the YAML file that is above: we are going to create that. When a new Kubernetes release is cut, there are two ways we can detect it. Actually, I am trying to get in contact with the release team, because when a new release is cut, a PR is automatically created; maybe we can do that.
B: I think this is important, because we do not want to build AMIs for all existing Kubernetes releases when there's a single change. And then, once the AMIs are built, we carry out some conformance tests, and after that the PR is merged. I think this part can be manual.

B: We may want to check whether the images were built successfully, and maybe check the test results, and then a post-submit job needs to rebuild all those images, because there's no way we can... I couldn't find a way to save them.
C: Thanks for looking into this, Sedef. I would raise my hand, but I'm a co-host, so I can't. Just one quick comment on building versus pushing AMIs: I think there's no distinction between the two, isn't that right? Like, it's either an AMI or it's not.
A: We could actually do a promotion process, which would involve creating a new AMI from the same EBS volume snapshot. So we could use a tag that CAPA does not pick up by default, some way to mark it as a staging AMI, and then once we pass the tests on it, we can create a new AMI from the same snapshot.
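(Illustrative only: a minimal sketch of that promotion step with the AWS SDK for Go v1, registering a new AMI from the same EBS snapshot once tests pass. The image name, device path and tag key are assumptions, not CAPA's actual tagging scheme.)

```go
// Sketch: "promote" a staging AMI by registering a new AMI from the same
// EBS snapshot after conformance tests pass. Names and tag keys are
// illustrative assumptions.
package promote

import (
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/ec2"
)

func promoteFromSnapshot(snapshotID, name string) (string, error) {
	svc := ec2.New(session.Must(session.NewSession()))

	out, err := svc.RegisterImage(&ec2.RegisterImageInput{
		Name:               aws.String(name),
		Architecture:       aws.String("x86_64"),
		VirtualizationType: aws.String("hvm"),
		RootDeviceName:     aws.String("/dev/xvda"),
		EnaSupport:         aws.Bool(true),
		BlockDeviceMappings: []*ec2.BlockDeviceMapping{{
			DeviceName: aws.String("/dev/xvda"),
			Ebs: &ec2.EbsBlockDevice{
				SnapshotId:          aws.String(snapshotID),
				DeleteOnTermination: aws.Bool(true),
			},
		}},
	})
	if err != nil {
		return "", err
	}

	// Tag the promoted AMI with a key the default lookup would use, in
	// contrast to the staging AMI, which would only carry a staging tag.
	_, err = svc.CreateTags(&ec2.CreateTagsInput{
		Resources: []*string{out.ImageId},
		Tags:      []*ec2.Tag{{Key: aws.String("promotion-status"), Value: aws.String("released")}},
	})
	return aws.StringValue(out.ImageId), err
}
```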
B: Yes, that would be great. Also, if there are any other pieces that I didn't add into this file, like how to store secrets or access keys securely, or if we want to, for example, combine pre-submit and post-submit together, or if there are any other pieces that we want to automate, I'd love to hear about those.

C: Okay, that sounds good.
D: Yep, it's really just for, I guess, some advice on whether the way I was thinking of doing this is correct. Basically, we're not rendering the config maps required by aws-iam-authenticator at the moment, so that is essentially stopping the nodes joining the cluster. There's also an open ticket to allow adding other mappings in there in a declarative way in the future as well.

D: So, looking at how to do this: one option is essentially a controller that watches the control plane and the machines, and when they get to a certain state, renders out the config map on every reconciliation loop if it's changed. Or, alternatively, put the functionality into a package and call that across the multiple controllers, so the control plane controller, the AWSMachine controller and, I guess, the machine pool as well.

D: So really it's just whether a typical controller is a good approach just to render that out, because at the moment it's rendering the EKS functionality pretty much useless unless you manually add that config.
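(Illustrative only: a sketch, using controller-runtime, of what rendering the aws-auth config map on each reconciliation could look like. The node role ARN wiring and the create-or-update handling are assumptions; the real implementation would live in a shared package, as discussed.)

```go
// Sketch: render the aws-auth ConfigMap read by aws-iam-authenticator so
// that nodes assuming the given IAM role can join the cluster.
package awsauth

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

func ReconcileAWSAuthConfigMap(ctx context.Context, c client.Client, nodeRoleARN string) error {
	mapRoles := fmt.Sprintf(`- rolearn: %s
  username: system:node:{{EC2PrivateDNSName}}
  groups:
    - system:bootstrappers
    - system:nodes
`, nodeRoleARN)

	cm := &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "aws-auth", Namespace: "kube-system"},
		Data:       map[string]string{"mapRoles": mapRoles},
	}

	// Create the ConfigMap if it is missing, otherwise update it so every
	// reconciliation loop converges on the desired mapping.
	if err := c.Create(ctx, cm); err != nil {
		if apierrors.IsAlreadyExists(err) {
			return c.Update(ctx, cm)
		}
		return err
	}
	return nil
}
```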
A: Okay, so I'm not super familiar with aws-iam-authenticator. There's a comment from Peter R in chat that the authenticator can use CRDs, so the config map doesn't have to be required for the authenticator pods to start.

D: Cool, perfect. I can just push that in, then. Cool, that's even better.

A: So if there's nothing else, we'll move on to issue triage; we'll just go through the open issues right now.
A: Yeah, okay. Same goes for 1956 and 1958.

D: Yeah, that's basically it: it doesn't work. If you enable the bastion, you can't connect to the nodes that are created, so I'll put a fix in for that, but yeah, definitely for 0.6.1.
D: SSM: yeah, me again. I've not actually started looking into this, but obviously I was following the docs on how to connect to the node instances, and it's not available for the nodes we're using for EKS.
A: Okay, yeah. I think we did enable it for EC2, so yeah. Do you want to get this into 0.6.1, or is it sort of optional? We can go...

D: Yeah, yeah, that's right; obviously, because we introduced the second kubeconfig.
D: So there is another issue that has been opened in CAPI to sort of formally discuss how you can have a CAPI-focused kubeconfig and one separate one for your users.
A: Oh, I've just done that wrong: e2e tests for AWSMachinePool. So this is me; I broke this out of the existing machine pool implementation. Machine pools, for those who aren't aware, are for auto scaling group constructs, essentially, so the AWSMachinePool implementation will add support for scaling groups. We probably do want to do some e2e tests around that. I don't think we've actually got the machine pool stuff in completely, so it's probably not for 0.6.1, but we do want to do this ASAP, so I'll put it in 0.6.x. And then what I think we should do is, after the 0.6.1 release, we do a planning meeting again, because we didn't do much planning in detail past 0.6.1 anyway, so we should probably do that again and just see where we want to put stuff.
A: Fair enough. This one is in progress, 1935: for people in air-gapped environments, the resource tagging API for some reason is not available, and that blocked ELB deletion. There is a PR for that, so it's good to go in for 0.6.1. The next one: there is actually an open PR for that, so we are going to have cluster-api-aws.sigs.k8s.io as a website.

A: The DNS is all set up, so it's almost all ready to go, so I'll leave it in for 0.6.1 and we can look at it again, I guess. Oh, and this one: the VPN gateway, DX gateway attachment, VGW routes. Does someone want to speak to that one? Sam, I believe this is yours.
E: The sort of context to give here is that, in our environment, we're trying to avoid having to switch over to BYO VPC for as long as we possibly can, because that's part of the attraction of the whole setup for us. And one requirement that we have is that nearly all Kubernetes clusters that we build eventually do get hooked up to a Direct Connect gateway, which is routed back to our data centers.
E: So, I understand there are things that just are not automatable, right, like accepting the VIFs, for example; we have external dependencies on our networking team, and so on and so forth. But there is a specific thing that has to happen for each VPC, and that is creating a VPN gateway, attaching it to the VPC (sorry, AWS terminology: associating it with the VPC) and then attaching it to a Direct Connect gateway, and it is a lengthy process. The association with the VPC doesn't take long; it's even faster than creating the gateway.
E: Attaching to the Direct Connect gateway is about a 20-minute process, so we have code in a fork that does all of that. I obviously didn't want to block a reconcile loop for 20 minutes, so the way I implemented it on our side is essentially to have a function that checks the state and returns a specific error type, which the caller can then catch and basically return an appropriate reconcile result. So it's just that every 15 seconds, which I think is the default right now, it'll reconcile the AWSCluster and go, oh...
E: ...it's not finished attaching yet, come back later. Just so we're not completely blocking a thread. So that's something we have in a fork, and I was wondering whether there is an opportunity to kind of clean this up and bring it upstream again. The reason it's up for discussion is that I'm fully aware there's a larger discussion around...
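(Illustrative only: a minimal sketch of the non-blocking pattern described above, assuming a controller-runtime reconciler. The error type, state check and 15-second requeue interval are placeholders, not the fork's actual code.)

```go
// Sketch: a state check that returns a sentinel error while a long-running
// AWS operation (e.g. attaching a VGW to a Direct Connect gateway) is still
// in progress, and a caller that requeues instead of blocking the loop.
package network

import (
	"errors"
	"time"

	ctrl "sigs.k8s.io/controller-runtime"
)

// errNotReady signals that the attachment has been requested but is not
// yet in the attached/associated state.
var errNotReady = errors.New("direct connect gateway attachment not ready")

type AWSClusterReconciler struct{}

func (r *AWSClusterReconciler) reconcileDirectConnect() error {
	attached := false // placeholder for the real association state check
	if !attached {
		return errNotReady
	}
	return nil
}

func (r *AWSClusterReconciler) reconcileNetwork() (ctrl.Result, error) {
	if err := r.reconcileDirectConnect(); err != nil {
		if errors.Is(err, errNotReady) {
			// Not a failure: come back later rather than blocking the
			// reconcile loop for the ~20 minutes the attachment can take.
			return ctrl.Result{RequeueAfter: 15 * time.Second}, nil
		}
		return ctrl.Result{}, err
	}
	return ctrl.Result{}, nil
}
```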
E: So that's why it's in here. Obviously I'm more than happy to keep maintaining our fork, but also more than happy to make this upstream; it makes our life easier, obviously, and if someone else benefits, that would be great.
A: Yeah, I mean, I think we'd love to unfork people as much as possible. So the concern here is just the increasing complexity of the AWSCluster spec, versus possibly going to more of a set of CRDs representing the various VPC constructs, which would then rely on using AWS Controllers for Kubernetes, which needs all the relevant bits added to it to support that, which is obviously a long way off.
A: Sorry, yeah, no. So I think in your implementation this would be extending the existing AWSCluster object, adding in the appropriate reconciliation and making it non-blocking, which is not what we're doing right now; it's what we should be doing, but the existing code is currently blocking. I think the question for other people is: do we want this upstream? Yeah, I don't have an opinion.
A: Those are the things that we have talked about, but I don't feel massively strongly either way.

A: Yeah, pretty much, okay. But the consensus was that we wouldn't do that without relying on AWS Controllers for Kubernetes.
E: The thing with the spec, right, is that my assumption would have been that the Direct Connect gateway is still something the user would have created before coming to CAPA. So we would just hold a reference to it, or filter attributes to find it, rather than actually creating it. So it would not include creation of a Direct Connect gateway.
A: You could say that; you can open up a PR and we can start reviewing what it looks like.

A: No, not necessarily. I think we will probably call v1alpha4 0.7.0, and 0.6.x will continue to be on v1alpha3. Okay.
A: All right, cool. Okay, secret backend support: this is in progress right now, I think. So, for those who aren't aware, in certain environments, basically if you're in an AWS secret region, you can't use AWS Secrets Manager, so there's a PR coming in to add support for using AWS Systems Manager Parameter Store, or whatever the AWS acronym is, which was actually one of the options we were considering. And I think it's been done in such a way that S3 could be added as well.
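(Illustrative only: a sketch of writing bootstrap data to SSM Parameter Store as a SecureString, assuming aws-sdk-go v1. The parameter naming is an assumption, not the PR's actual scheme.)

```go
// Sketch: store machine bootstrap data in SSM Parameter Store as an
// alternative backend to Secrets Manager. Parameter names are illustrative.
package secretsbackend

import (
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/ssm"
)

func putBootstrapSecret(name string, userData []byte) error {
	svc := ssm.New(session.Must(session.NewSession()))
	_, err := svc.PutParameter(&ssm.PutParameterInput{
		Name:      aws.String(name), // e.g. "/cluster-api-provider-aws/<cluster>/<machine>"
		Type:      aws.String(ssm.ParameterTypeSecureString),
		Value:     aws.String(string(userData)),
		Overwrite: aws.Bool(true),
	})
	return err
}
```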
A: So that's in progress; we'll try and get it in for 0.6.1. I think it's in pretty good shape; there have been a couple of comments. Clean up code across EKS controllers: I think I must have just pulled this out of a comment. Yes, I did, but I can't massively speak to it. Richard?
D: Yeah, there's a small amount of duplication between the managed control plane and the bootstrap side of things. So things like, you know, checking the version number, normalizing the version number of Kubernetes for EKS and stuff like that. I guess it would be good to do it, but it's not breaking anything at the moment.
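(Illustrative only: the kind of shared helper that cleanup could extract, normalizing a Kubernetes version string to the major.minor form EKS expects. Assumes the blang/semver library; not the project's actual code.)

```go
// Sketch: shared version normalization for EKS.
package version

import (
	"fmt"

	"github.com/blang/semver"
)

// NormalizeForEKS takes a version such as "v1.18.9" and returns the
// "major.minor" form EKS expects, e.g. "1.18".
func NormalizeForEKS(kubernetesVersion string) (string, error) {
	v, err := semver.ParseTolerant(kubernetesVersion) // tolerates a leading "v"
	if err != nil {
		return "", err
	}
	return fmt.Sprintf("%d.%d", v.Major, v.Minor), nil
}
```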
C: So there's a Cluster API proposal for handling node termination, regardless of the infrastructure provider, and there's been some active discussion this morning on a related pull request in CAPI about syncing node labels from machines to nodes, for the specific purpose of labeling them so that we can get the termination handler pod installed via a daemon set.

C: CAPI pull request 3668 is where the conversation is happening. I'd encourage y'all to read that if you're interested, but I think we're probably going to be able to close the CAPA-specific request for spot termination detection, or at least come back to it later, after we resolve what we're going to do in...
A: Okay, I'll close it in a bit.

C: Yeah, I mean, certainly we could put it in the next milestone and come back to it later.

A: Add e2e test coverage for spot instances. Andy, you opened this?
C: Yep. Just looking for some more e2e coverage, since we do have support for spot instances. I don't know if it's a good first issue; it's definitely help wanted, and whenever we can get around to it, so 0.6.x is fine. Yeah.
A: Yeah, I'll put it in 0.6.x, and then I suspect we'll want it for 0.6.2. Oh yeah, this one I opened: I noticed in the EKS bootstrap scripts that there's some logic to set kube-reserved across EKS and EC2 node types.

A: There was also, based off our discussions last week in CAPI around cluster autoscaling support, the idea of the infrastructure provider providing some more of this sort of information, and setting some of it up so that, say, the cluster autoscaler has got better knowledge around scaling. So it's not urgent in any way, and I think I might put this in "next" unless anyone feels strongly about it.
A: Okay, next one: e2e conformance tests fail on macOS. I was going to close that because I have a PR that fixes it in Cluster API. Well, it won't be fixed by that alone; anyway, there's work in progress to get that fixed, so I'll put it in 0.6.1, because there's an upstream PR in Cluster API that makes it work, and then we need to import that code into CAPA and it should be okay. So I'm putting it in 0.6.1.
A: Support user data privacy beyond cloud-init. So this is for the Kinvolk folks who are working on Flatcar Linux support; it's really about how do we support secrets on anything beyond cloud-init? I haven't got any thoughts on this right now, and I think we might need to look at the bootstrap provider contracts as well, so I'm going to stick this in "next".

A: We might have a workaround based on the other PR for SSM Parameter Store, but in terms of solving it long term, I think this might be a Cluster API v1alpha4 thing, so yeah.
C: We've talked in the past about the possibility of creating some sort of agent or process that can run on the OS, that's baked in via image-builder into the AMIs, and that can have whatever code in it we want. So we could potentially have our cloud-init just call this agent, or have it... excuse me.

C: And then we don't have to worry about the crazy boot hook thing that we're doing with cloud-init. But it needs a design and brainstorming and a proposal, and it could be just for CAPA, or it could be something for CAPI as well, and maybe there are some pluggable aspects, like how do I get my bootstrap data? Definitely not anything for 0.6, though.
A: Yeah, so I might have volunteered myself to write that proposal. So, well...

C: And if anybody's interested: I can't say that we've done a lot of brainstorming beyond what I just described, but I think it'd be a fun thing to explore. So if anybody wants to play around with some ideas and get together and meet, just let us know. Yeah.
A: So it's kind of a bit difficult to do all the secrets manager stuff. Yes, so this is just about whether there's something we can do short term to make ASGs secure. And how do you stop token reuse? Is that even possible, and what are the contracts we want to set around that? What do we want to say around the security of that?

A: So I think, if anything, there's a documentation issue: we just need to make a statement, and then whatever improvements we can do about it, we can put in 0.6.x, at least so that we put documentation on the website around what security guarantees we have. And then it's probably going to be related to the previous issue, in "next", around providing a comprehensive solution to it.
A: That makes sense. All right, I think we've just gone through all the issues. Yep, there we go, done. We'll just take a quick look at the...

A: ...milestone, which has got 31 issues, so we may need to punt some of these for a bit; I can already see a few. So, what have we got? Yeah, so this is potentially the last meeting before when we said we were going to do the 0.6.1 release.
A: Do we still think we want to do the 0.6.1 release at the end of the month, or do we want to delay it, or do we want to remove some of these things and do the release anyway? I think probably, Richard, a lot of these are bug fixes for EKS.

A: Okay, so I think I might go through these after this meeting; there are probably some bits which are on me which can be moved out. We'll just see where we are, and maybe we meet later in the week and just have a status check on these issues.
A: All right, thanks a lot everyone. I will post the recording. Sorry, Sam had his hand up.

E: Okay, sorry, I couldn't find the actual raise-hand reaction, so I'm just doing it literally. Very tiny question, loosely related to the VPN gateway stuff: the SSH security group. Is my understanding correct that right now there is no real way of changing the default SSH security group? It will always default to only allowing SSH from the bastion security group. The reason I ask is, again, obviously relevant in our setup.
E: All the hosts that we create are internal only; nothing has a public IP address, and every single EC2 instance is directly routable through our VPN or data center. So we would like to be able to adjust the SSH security group rule to allow SSH from, like, subnets, CIDR ranges and so on and so forth. It feels like right now it's hardcoded to only allow SSH from the bastion, and we don't actually run a bastion, right? We don't have to, because everything's routable.
A: It probably is feasible; it's just how you do that API bit to make it neat enough. Yeah, you're right, it is bastion-only at the moment; it's a reasonable case. Just as a workaround, you can use SSM Session Manager as well. So if you...

A: All right, thanks. Sorry. Anything else? Is anyone else raising their hands, which I can't see right now?
C: Not at the moment. With the SSH thing, would the additional security groups option work on machines or not?