From YouTube: 20190923 - Cluster API Provider AWS Office Hours
A
We still have a relatively short agenda today, so if there are any topics you would like to discuss, please go ahead and add them to the agenda; I've linked that in the chat. To start with, I wanted to cover a couple of PSAs. The biggest one is that the Cluster API planning session was last week. We are still working on providing the summary, and I believe Vince is actually going to be working on that and is looking to have something to present at the weekly Cluster API meeting.
A
The other thing is, you should probably expect proposals to start appearing for review within the next couple of weeks. I don't think we identified anything that will directly impact the AWS provider, but I suspect there is potential for some downstream reverberations, so to speak. So if you do want to try to keep up before it comes out, I'd highly recommend following that work.
A
Alright, the next item on the agenda is mine: we're looking at aligning the image build process that we currently have for v1alpha1 and v1alpha2. Basically, what we're trying to address is that currently v1alpha1 requires a fork of cloud-init with the kubeadm module installed; v1alpha2 does not, but the image builder that we use to build images for both of them today uses the fork, and we're looking, for at least the v1alpha2-and-forward work, to remove the use of that fork.
A
So
we
basically
have
two
options
that
we're
looking
at,
and
that
is
one
to
use
two
different
image:
builders
for
B
1,
up
a
1
and
B
1
out
the
2
and
changing
the
naming
convention
that
we're
using
for
those.
We
would
probably
not
like
to
pursue
that
route,
because
that
means
we're
maintaining
basically
two
different
image:
builders,
the
upgrade
if
somebody
is
upgrading
from
B
1
up,
want
to
be
one
out
for
2.
There's,
not
really
a
clean
way
to
upgrade
that
as
well
you're.
A
You're looking at, basically, running on different OS images depending on whether the cluster was brought up on v1alpha1 or v1alpha2. So the work that we're potentially looking at doing is to backport the changes needed to use the upstream cloud-init to v1alpha1. So unless anyone has objections to that latter approach, that's probably the path we're going to take.
C
Yeah, so this is actually pretty related. I understand that the ignition support question is mostly a bootstrap provider concern, but I'm curious about hashing out some of the AMI lookup, and I had a couple of questions about the initial direction. For one, CoreOS and Container Linux patterns are not really in favor of just making a bunch of AMIs and switching between them.
C
The best practice is typically to always pull in the official AMI and then use systemd units to configure what happens on boot. However, the concern is that this is vastly different from the way the AMIs work in Cluster API today: we have a different AMI per Kubernetes version, with kubeadm pre-baked.
B
So I've had some ideas around composition of bootstrap. This may be a horrible idea, but: create a new multi-bootstrap provider whose sole job is to coordinate multiple bootstrap providers, like in a list. So we create one of these things, and then, if you need to have something (I think you added a note about pre-bootstrap steps to get kubeadm installed), there could be a stage-zero bootstrapper that gets kubeadm installed.
B
It could even pre-pull images, if you didn't want to use kubeadm to pre-pull, and then switch over and have, as the last element in the list, the upstream Cluster API kubeadm bootstrapper, if something like that would work. If the behavior that's in the bootstrapper as it exists today just doesn't fit in line with CoreOS practices, then maybe that's a non-starter. But if you could get kubeadm installed, and any other sort of prerequisites, before calling the kubeadm bootstrapper, would that work for you?
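(A rough sketch of the shape such a chained bootstrap object could take, as hypothetical API types. ChainBootstrapConfig and all of its fields are invented for illustration; nothing like this exists in Cluster API.)

```go
// Hypothetical API types for the chained-bootstrap idea discussed above.
// None of these exist in Cluster API; they only illustrate the shape.
package v1alpha1

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// ChainBootstrapConfigSpec holds an ordered list of real bootstrap
// providers; the chain provider's only job is to run them in order,
// e.g. a stage-zero step that installs kubeadm, then the upstream
// kubeadm bootstrapper as the last element.
type ChainBootstrapConfigSpec struct {
	// Steps are references to the underlying bootstrap configs,
	// executed in order.
	Steps []corev1.ObjectReference `json:"steps"`
}

// ChainBootstrapConfigStatus aggregates per-step progress so a stuck
// bootstrap is visible without SSHing into the machine.
type ChainBootstrapConfigStatus struct {
	// CompletedSteps counts how many steps have finished.
	CompletedSteps int `json:"completedSteps"`
	// Ready is true once every step has completed.
	Ready bool `json:"ready"`
}

// ChainBootstrapConfig is the object a Machine's bootstrap ref would
// point at, instead of pointing directly at a single provider's config.
type ChainBootstrapConfig struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	Spec   ChainBootstrapConfigSpec   `json:"spec,omitempty"`
	Status ChainBootstrapConfigStatus `json:"status,omitempty"`
}
```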
C
When we have kind of CAPA's, you know, semi-opinionated lookup of AMIs, and the bootstrap provider... I'm just trying to coordinate what things would need to change to make that whole flow go, and there's probably changes across the board to enable, you know, the version on the cluster. But that's kind of the perspective I'm coming from.
C
Ultimately, we could make ten AMIs, you know, and just run with that for a little bit, and then kind of work towards getting where we want to get to with the immutable base image. But I think what you're describing kind of makes sense, though my concern would be... I think both the ignition and cloud-init output types would theoretically support doing something before running the kubeadm init. Yeah.
A
So I think the big challenge that would potentially present is... and to give some background on the reason why we chose to bake all of the bits into the image in the first place for CAPA: it was basically for two different reasons. One was reliability around the bootstrapping, because once you introduce additional external dependencies, the reliability tends to drop on those bits, especially if you're looking at ever doing those in parallel. We've seen instances before, when trying to spin up even relatively small clusters in parallel, where:
A
You can overwhelm some package sources and things like that, to where all of a sudden you're only able to bring up, you know, a percentage of the hosts each time, instead of reliably bringing up all of them. The other reason was around being able to do pre-qualification of both the binaries installed and the prerequisites needed, and all of that, so that we can...
C
Yeah, definitely. I guess I think a good starting point, maybe... I mean, the guidance right now is that if you're going to be running Cluster API in production, you bring your own AMIs. If we could start by, I don't know, maybe hardening some of that lookup logic, and formalizing some of that lookup logic in a way where consuming it is really simple. Like, we've never tried this, to be honest; we've been using the CAPA AMIs. There's a cap...
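(A minimal sketch of what a formalized, version-aware AMI lookup could look like, assuming a hypothetical capa-ami-&lt;os&gt;-&lt;kubernetes version&gt;-* naming convention. The function name, owner parameter, and filter pattern are illustrative, not CAPA's actual lookup code.)

```go
// Sketch of a version-aware AMI lookup, assuming AMIs are published
// under a predictable name pattern. Names here are illustrative.
package ami

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/ec2"
	"github.com/aws/aws-sdk-go/service/ec2/ec2iface"
)

// LookupAMI returns the newest available image whose name matches the
// hypothetical convention capa-ami-<baseOS>-<kubernetesVersion>-*.
func LookupAMI(svc ec2iface.EC2API, ownerID, baseOS, kubernetesVersion string) (string, error) {
	out, err := svc.DescribeImages(&ec2.DescribeImagesInput{
		Owners: []*string{aws.String(ownerID)},
		Filters: []*ec2.Filter{
			{
				Name:   aws.String("name"),
				Values: []*string{aws.String(fmt.Sprintf("capa-ami-%s-%s-*", baseOS, kubernetesVersion))},
			},
			{
				Name:   aws.String("state"),
				Values: []*string{aws.String("available")},
			},
		},
	})
	if err != nil {
		return "", err
	}
	if len(out.Images) == 0 {
		return "", fmt.Errorf("no AMI found for %s/%s", baseOS, kubernetesVersion)
	}
	// Pick the most recently created image among the matches;
	// CreationDate is an ISO 8601 string, so it sorts lexically.
	latest := out.Images[0]
	for _, img := range out.Images[1:] {
		if aws.StringValue(img.CreationDate) > aws.StringValue(latest.CreationDate) {
			latest = img
		}
	}
	return aws.StringValue(latest.ImageId), nil
}
```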
A
Yeah, and it becomes tricky, and I know that the Talos folks are taking a very similar approach to CoreOS, but I think they're still... I think they're looking at tying their images to a very specific version of Kubernetes as well, so I don't think it has the exact same limitations as CoreOS.
C
Also, we don't want to build AMIs; that's the real deal. We want to pull the official CoreOS AMI and go from there, but there's a ton of things that have to happen to enable that. Like, potentially, if we had the CAPI image builder process not install these versions, and then we had the bootstrap provider basically pull them in...
B
That's fair. I mean, at that point you're basically back to: let's start with an official AMI, boot it up as an actual VM, and then, whether it's cloud-init or whatever, configure it as needed to install everything and configure everything. So at that point, yeah, image building probably isn't super useful.
C
Yeah, I wish we had, you know, one or two more use cases to draw from, from people consuming Cluster API directly, because I'd be curious to hear, you know: are they just building an AMI per Kubernetes version and just kind of slotting themselves in, or do they have bigger needs? And this all just goes back to the upgrade question, around how the Kubernetes version configuration item will move through all the pieces. Yeah.
B
And with pretty much every client I've ever worked with when I was in consulting, and in my experience in engineering for the past several years, I've yet to find somebody who says, yeah, that base CentOS AMI is just perfect. Everybody customizes everything, and especially when you're in corporate environments, they are doing golden images that are built in-house, whether it's running on Amazon or in a data center. Nobody uses, like, a vanilla ISO anymore. So, yeah.
A
You know, because they're expecting that they just have a pool of hosts that have, like, an OS-type install, and you install the proper version of Kubernetes and stuff on provisioning. So I think there are other use cases that have been presented in the community that could also benefit from this work, too.
B
Yeah, but I mean, you still have... well, I guess you could do one image per cluster with your certs baked in, but the reason I'm asking is along the lines of status reporting, and the fact that there is no status when it comes to cloud-init. It is possible to SSH into a machine and figure out why it is stuck bootstrapping, but it would be nice if we had a relatively universal bootstrapping mechanism where we could report status on the bootstrapping.
A
I think one thing to keep in mind there is that, at least on the AWS side, one of the main reasons we avoided SSH to begin with was the potential security implications of having basically a root-level private key available to remote into the machines. I think that might be less of a concern now that there's actually a way to remote into a machine, but that would require adding the AWS agent, which we don't currently do today, and it's an Amazon-specific thing. Well, and I think for Amazon...
B
The details, yeah. It was just: rather than using the kubeadm bootstrap provider as your only provider, if you had other things you wanted to do as part of bootstrapping, you would have the bootstrap ref pointing to a chain bootstrap provider, and then in your chain config you'd just have an array of actual bootstrap providers, and likewise for status. So you'd have... let's keep it simple, let's say there's two things you want to do: step one is install kubeadm.
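(A sketch of the coordination loop such a chain provider might run, aggregating per-step status as described; all names are hypothetical, and no such controller exists in Cluster API.)

```go
// Sketch of the chain controller's core loop: run each referenced
// bootstrap provider in order and surface per-step status, so a
// failing "step one: install kubeadm" is visible on the object itself.
package controllers

// StepStatus reports where the chain currently is.
type StepStatus struct {
	Name     string
	Complete bool
	Message  string
}

// BootstrapStep is anything that can produce its part of the bootstrap
// data and report whether it has finished.
type BootstrapStep interface {
	Name() string
	Reconcile() (done bool, err error)
}

// reconcileChain walks the steps in order, stopping at the first one
// that is still in progress, and returns the aggregated status.
func reconcileChain(steps []BootstrapStep) ([]StepStatus, bool) {
	statuses := make([]StepStatus, 0, len(steps))
	for _, s := range steps {
		done, err := s.Reconcile()
		st := StepStatus{Name: s.Name(), Complete: done}
		if err != nil {
			st.Message = err.Error()
		}
		statuses = append(statuses, st)
		if !done {
			// Later steps (e.g. the kubeadm bootstrapper as the
			// final element) must wait for earlier ones.
			return statuses, false
		}
	}
	return statuses, true
}
```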
C
I was gonna say, the second two here are things that we can take. So I think extending AMI lookups is definitely something we're happy to take; on the predefined steps, I just want to do a little research first on potential solutions. I'm trying to understand better, you know, why... or, I guess, I haven't looked at the image-builder repo very much, so I don't know exactly what it does.
B
Right, yeah. So somebody was trying to add validating webhooks to CAPA, and one of the changes that they made was to change the user in the Dockerfile from nonroot to root (or maybe they took it out), and that's because the default value for the webhook server port is 443, and you need to be the root user to bind on that port.
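(For context, controller-runtime lets you point the webhook server at an unprivileged port instead, which avoids running the container as root. A minimal sketch, assuming a controller-runtime release of this era where manager.Options still exposes Port; newer releases configure this via webhook options instead, and 9443 is just an example value.)

```go
// Minimal sketch: start a controller-runtime manager whose webhook
// server listens on an unprivileged port, so the container can keep
// running as a non-root user instead of needing root to bind 443.
package main

import (
	ctrl "sigs.k8s.io/controller-runtime"
)

func main() {
	mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{
		Port: 9443, // webhook server port; anything above 1024 avoids the root requirement
	})
	if err != nil {
		panic(err)
	}
	if err := mgr.Start(ctrl.SetupSignalHandler()); err != nil {
		panic(err)
	}
}
```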
B
So this is just about making it configurable. I think these are fairly... well, I guess it depends on how important, in terms of priority, the validating webhooks are, because if we consider them fairly important, I would say that the priority is important-soon, and we do have somebody interested in working on it.
B
The one concern, at least for proper Kubernetes APIs that are governed by the Kubernetes API policies (which we're not, at this time), is: if you have an object, say an AWSMachine, and you've created it, it's in etcd, and then you add a validating webhook, or some additional validation that is more restrictive than what was originally allowed, then you end up with an object that is no longer valid under the new validation rules, and so you potentially could end up with a reconcile loop.
F
I would also point out that the issue that this was raised in doesn't currently have a milestone, nor does the pull request which is adding the validating webhooks, so it's probably something that we want to maybe communicate there as well: that we may or may not want to get that merged. Yeah.
B
Okay, I opened this next one, asking whether events that are specific to AWS and CAPA should go against AWSCluster and AWSMachine, or Cluster and Machine. This is probably something we should not change from what we have in 0.4 for v1alpha2, and it's still up for debate as to whether or not we do this.
B
They can't create new security groups; they can't really do much with IAM for managing that. So they want to be able to predefine security groups and then just say: use these, instead of having CAPA deal with them. I don't know if this would end up being a backwards-compatible change or a breaking change in the API, so it's probably worth seeing a proposal from Andrew, or whoever's interested in writing this up, to see if they have some ideas around what this would look like.
B
The proposal is to add an optional createSecurityGroups boolean to the network spec in AWSCluster. It would default to true, which may or may not be difficult, and when set to false, the controller would not attempt to create security groups and would instead only use the groups defined in additionalSecurityGroups. I agree, that's definitely a way to go about it.
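(A sketch of the proposed knob's shape; the trimmed NetworkSpec and the helper below are illustrative, not CAPA's actual v1alpha2 types.)

```go
// Illustrative shape of the proposed knob; not the actual CAPA types.
package v1alpha2

// NetworkSpec is a trimmed-down stand-in for the real network spec.
type NetworkSpec struct {
	// CreateSecurityGroups controls whether the controller manages
	// security groups. A *bool distinguishes "unset" from "false",
	// so omitting it preserves today's behavior (defaults to true).
	CreateSecurityGroups *bool `json:"createSecurityGroups,omitempty"`

	// AdditionalSecurityGroups are user-supplied groups; with
	// CreateSecurityGroups=false these would be the only groups used.
	AdditionalSecurityGroups []string `json:"additionalSecurityGroups,omitempty"`
}

// ShouldCreateSecurityGroups applies the default-to-true semantics.
func (n *NetworkSpec) ShouldCreateSecurityGroups() bool {
	return n.CreateSecurityGroups == nil || *n.CreateSecurityGroups
}
```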
B
Thank you, Vince. Alright, so we had a request to modify the deployment that is in our manifest to use host networking for the CAPA manager pod. Jason and I have weighed in here. I don't have any problems if anybody wants to use host networking, but I don't think that it should be our default configuration, and there's nothing stopping anybody from modifying it locally to meet their needs.
B
I think, like, you'd need to run journalctl on the controller or whatever to see what the kubelet is doing, and I'm not sure this is particularly useful, so we will await more evidence and stick it in the Next milestone. And: "cluster network field is always overwritten". Jason and Vince, it looks like you both have comments on this. Is this an actual bug, or...
B
Yeah, I would like clarity on this one, because, you know, maybe it's related, maybe it's not, but this code is definitely not what we're running now, nor what we will merge. So we will force this part for the shared value: you can't use this key, we force it, but anything else you set will come through, right?
B
We have: validating webhooks, e2e test updates, being able to adjust the spam filter on the event creation, the go mod verification, and the crazy log names at high verbosity. Other than validating webhooks, which maybe we pull in or maybe not, I would say all of these should just go into the 0.4.x milestone.
B
And then we can decide. I don't even remember how this one moved or who was doing it. So, this one moves all of the code that we have in master now, that does validation after the object, after the machine, has been created or updated, into a validating webhook. So it's the same, you get the same failures, but instead of allowing the object to be created or updated and persisted, this rejects those, if that makes sense.
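(A compressed sketch of that move, in the style of controller-runtime's webhook.Validator pattern, assuming a release where the interface returns plain errors. The AWSMachine here is a trimmed stand-in for the real type, and the specific checks are illustrative.)

```go
// Sketch of moving post-hoc validation into a validating webhook: the
// API type implements ValidateCreate/ValidateUpdate, so invalid objects
// are rejected before they are ever persisted.
package v1alpha2

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime"
)

// AWSMachineSpec is a trimmed stand-in for the real spec.
type AWSMachineSpec struct {
	InstanceType string `json:"instanceType"`
}

// AWSMachine is a trimmed stand-in for the real API type.
type AWSMachine struct {
	metav1.TypeMeta `json:",inline"`
	Spec            AWSMachineSpec `json:"spec"`
}

// DeepCopyObject satisfies runtime.Object for this trimmed stand-in
// (the real type's deepcopy is code-generated).
func (m *AWSMachine) DeepCopyObject() runtime.Object {
	out := *m
	return &out
}

// ValidateCreate rejects invalid objects at admission time, instead of
// letting the reconciler discover the problem afterwards.
func (m *AWSMachine) ValidateCreate() error {
	if m.Spec.InstanceType == "" {
		return fmt.Errorf("spec.instanceType is required")
	}
	return nil
}

// ValidateUpdate rejects updates that would mutate immutable fields.
func (m *AWSMachine) ValidateUpdate(old runtime.Object) error {
	oldMachine, ok := old.(*AWSMachine)
	if !ok {
		return fmt.Errorf("expected an AWSMachine, got %T", old)
	}
	if oldMachine.Spec.InstanceType != m.Spec.InstanceType {
		return fmt.Errorf("spec.instanceType is immutable")
	}
	return nil
}
```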
B
I'm happy to put this in the milestone. Like I said before, I think it's a better user experience, and I mean, we all expect that alpha 2 is a transitional version. So if you end up with an AWSCluster or AWSMachine that the validating webhook is breaking for you, you can either fix the data so it can be validated, or delete the thing, or just don't deploy the validating webhooks for right now. So, does that sound fair? Alright.
C
The TL;DR is, basically: the EC2 APIs let you bring your own ENIs, up to two, for the primary and secondary interface. This allows that, but in doing that, you can't specify both an ENI and a security group or a subnet, so there's some reordering of logic. Where we're basically at now is: the reconcile loop checks if you specified a network interface first, and then goes through the motions.
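(A hedged sketch of that reordering. The helper and field names are illustrative, not the actual CAPA reconciler, but it encodes the EC2 constraint described above: an ENI ID cannot be combined with instance-level subnet or security-group settings.)

```go
// Sketch of the reordering being described: honor a user-supplied ENI
// first, and only fall back to resolving subnet plus security groups
// when no interface was brought along.
package instances

import (
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/ec2"
)

// buildNetworkInterfaces returns the RunInstances network-interface
// specs for a machine. EC2 rejects requests that set both an ENI ID
// and instance-level subnet/security groups, hence the early return.
func buildNetworkInterfaces(eniIDs []string, subnetID string, securityGroupIDs []string) []*ec2.InstanceNetworkInterfaceSpecification {
	if len(eniIDs) > 0 {
		// Bring-your-own ENIs (primary and optional secondary).
		specs := make([]*ec2.InstanceNetworkInterfaceSpecification, 0, len(eniIDs))
		for i, id := range eniIDs {
			specs = append(specs, &ec2.InstanceNetworkInterfaceSpecification{
				DeviceIndex:        aws.Int64(int64(i)),
				NetworkInterfaceId: aws.String(id),
			})
		}
		return specs
	}
	// Default path: let the controller pick subnet and security groups.
	return []*ec2.InstanceNetworkInterfaceSpecification{{
		DeviceIndex: aws.Int64(0),
		SubnetId:    aws.String(subnetID),
		Groups:      aws.StringSlice(securityGroupIDs),
	}}
}
```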