From YouTube: 20190701 - Cluster API Provider AWS Office Hours
A: We have a decent-sized agenda today, but if you have any additional topics, please go ahead and add them to the agenda, and if you would like, please add yourself to the attendee list as well.

To start off today, we did have version 0.3.2 released late last week. That included updates to the Cluster API components to version 0.1.4, and it also supports logging of the user data size. I know there have been some concerns that we're going to be bumping into size limits there soon, so we want to go ahead and start logging that to be a little proactive. Support was also added for the Cluster API node ref controller. It did include a bug with the RBAC rules, so we also had a version 0.3.3 release that went out this morning and included the fixes for those RBAC rules that were breaking the clusterctl delete workflow.

I also wanted to let people know that we are going for another round of Cluster API t-shirts. So if you haven't already received one and you would like to, please click through the link in the notes and add yourself to that list. Also, please reach out to me privately, either over email or Slack, with your address so that we can get those mailed off to you as well.

Other than that, there's also a PR to add Andy to the maintainer role, which aligns with the permissions that he already has today on the GitHub repo. So if everybody can please click through there, we'll give that, what do you think, a one-week timeout.
B: Thanks, Jason. I wanted to talk about milestone naming, and this applies to Cluster API as well, but we'll start here since it's this meeting today. I would like to propose that in GitHub we start naming our milestones based on semver instead of the API version, mainly because an API version potentially spans multiple semver releases; for example, CAPA 0.2 and 0.3 would both technically be v1alpha1 when you look at the API version. That way I think we can hopefully track breaking changes that are code-wise breaking but not necessarily API breaking changes.
B: Okay, I don't know if we want to talk about this one explicitly or just do it all in the backlog grooming, which I'd like to do, but we are interested in being able to do API server webhook token authentication configuration, which is something we can do with kubeadm as part of the init process. But it means that we need to get some additional files into the user data so that they can make their way ultimately into the EC2 VM and then finally into the API server pod.
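For reference, kube-apiserver enables webhook token authentication through the --authentication-token-webhook-config-file flag, which kubeadm can pass along as an extra API server argument. Below is a minimal sketch of how such an extra file might be shipped through cloud-init user data; the file paths, the writeFilesEntry helper, and the webhook endpoint are illustrative assumptions, not the provider's actual implementation.

```go
package main

import (
	"fmt"
	"strings"
)

// writeFilesEntry renders a single cloud-init write_files item.
// Illustrative helper only; not code from cluster-api-provider-aws.
func writeFilesEntry(path, owner, perms, content string) string {
	var b strings.Builder
	fmt.Fprintf(&b, "- path: %s\n", path)
	fmt.Fprintf(&b, "  owner: %s\n", owner)
	fmt.Fprintf(&b, "  permissions: '%s'\n", perms)
	b.WriteString("  content: |\n")
	for _, line := range strings.Split(content, "\n") {
		b.WriteString("    " + line + "\n")
	}
	return b.String()
}

func main() {
	// Hypothetical webhook kubeconfig that kube-apiserver would read via
	// --authentication-token-webhook-config-file (a real apiserver flag).
	webhookKubeconfig := `apiVersion: v1
kind: Config
clusters:
- name: token-authn
  cluster:
    certificate-authority: /etc/kubernetes/pki/authn-webhook-ca.crt
    server: https://authn.example.com/authenticate`

	// Appending the file to the rendered user data is what adds to its size.
	userData := "#cloud-config\nwrite_files:\n" +
		writeFilesEntry("/etc/kubernetes/authn-webhook.yaml", "root:root", "0600", webhookKubeconfig)
	fmt.Println(userData)
}
```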
B: So it's a couple of CA certificates and a key and a config file, and that was one of the reasons we added the logging of user data size, because there is a hard limit that AWS has, and I think we are around 10 kilobytes right now, which gives us about five or six kilobytes of wiggle room. Hopefully we won't hit it, but I did link to the issue, and ideally we would find some way, or set of ways, that could apply regardless of cloud provider or on-premises bare metal. But I think it's worth trying.
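For a rough sense of the numbers: EC2 caps instance user data at 16 KB once decoded, so logging the rendered size is mostly about spotting when a cluster configuration starts eating into the remaining five or six kilobytes. A minimal sketch of that kind of check, with the log wording and threshold as assumptions:

```go
package main

import "log"

// ec2UserDataLimitBytes is the documented EC2 cap on instance user data (16 KB).
const ec2UserDataLimitBytes = 16 * 1024

// logUserDataSize reports how much of the EC2 user data budget the rendered
// payload consumes; a sketch of the proactive logging discussed above.
func logUserDataSize(userData []byte) {
	size := len(userData)
	log.Printf("rendered user data is %d bytes (limit %d, %d remaining)",
		size, ec2UserDataLimitBytes, ec2UserDataLimitBytes-size)
	if size > ec2UserDataLimitBytes {
		log.Printf("warning: user data exceeds the EC2 limit; instance creation will fail")
	}
}

func main() {
	logUserDataSize(make([]byte, 10*1024)) // e.g. ~10 KB of rendered cloud-init
}
```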
E: I can speak to some of the suggestions. I'm not saying this is the way, but I listed three APIs that AWS has that would enable us to insert data onto a machine beyond the limits of user data. One is AWS Simple Systems Manager Parameter Store, which is a key-value store for a region; it supports prefixes and you can set ACLs on them.
E
There's
dynamo,
DB,
a
SS
use
that
a
lot
I
think
internally
and
finally,
AWS
s3
they've
all
got
different
door
backs
and
positives
from
a
cross-platform
perspective.
The
only
one
that
is
Bailey
cross-platform
is
s3
because
you
can
get
off-the-shelf
open-source
equivalent
equivalents,
but
there's
a
trade-off
in
complexity
for
the
end
user,
yeah.
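As an illustration of the Parameter Store option, here is a sketch of writing and reading a value with the AWS SDK for Go; the hierarchical parameter name, the use of SecureString, and the surrounding flow are assumptions for the example rather than anything the provider does today.

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/ssm"
)

func main() {
	sess := session.Must(session.NewSession())
	client := ssm.New(sess)

	// Hypothetical per-cluster prefix; Parameter Store supports hierarchical
	// names, which is the "prefixes" point mentioned above.
	name := "/cluster-api-provider-aws/example-cluster/apiserver-webhook-config"

	// Store the payload encrypted at rest as a SecureString.
	_, err := client.PutParameter(&ssm.PutParameterInput{
		Name:      aws.String(name),
		Value:     aws.String("contents that would otherwise bloat user data"),
		Type:      aws.String(ssm.ParameterTypeSecureString),
		Overwrite: aws.Bool(true),
	})
	if err != nil {
		log.Fatalf("put parameter: %v", err)
	}

	// On the instance, the bootstrap process could fetch and decrypt it.
	out, err := client.GetParameter(&ssm.GetParameterInput{
		Name:           aws.String(name),
		WithDecryption: aws.Bool(true),
	})
	if err != nil {
		log.Fatalf("get parameter: %v", err)
	}
	fmt.Println(aws.StringValue(out.Parameter.Value))
}
```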
B: I'm just getting my browser ready to share. To try and do this quickly, I would appreciate it if one or two people would like to volunteer to either close out things as we go through them or add comments, so that you're not all sitting there watching me type. That would be quite appreciated. Thanks, I'm starting.
B
Can
ping-pong
whatever
so
I've
got
all
the
issues
open
for
Kappa
I'm,
starting
at
the
oldest
one,
so
69
variants?
This
was
opened
a
while
ago
and
I
know
this
is
about,
but
you
know
this
was
before
we
were
working
on
the
V
1
alpha,
2
ideas
and
everything
so
I
think
there's
probably
some
stuff
in
here.
That
is
definitely
worth
keeping,
but
I.
B
Don't
know
that
we
need
exactly
all
the
things
they're
in
here
in
terms
of
because
I
think
they'll
span
different
portions
of
the
API
as
it
evolves
for
like
control,
plain
flavors,
for
example.
So,
and
do
we
all
like
his
first
question?
Do
we
want
to
keep
this
open,
or
does
it
make
sense
to
try
and
split
this
into
smaller
actionable
things
as
we
get
to
V
1,
alpha,
2
and
beyond.
B: My understanding is yes, it is a bunch of use cases, but if this were going to be implemented, my guess is that it could turn into a series of configuration settings where you can pick, for example, one VPC for a group of nodes versus a shared VPC with different subnets and security groups. So I personally feel like we could close this and reference it to create new issues that are smaller in scope and more directed. Yeah.
B: I think that there are aspects that do, yes, in terms of saying a variant could be whether I'm using Kubernetes or Rancher or OpenShift; that fits in pretty well. Trying to say something like EKS versus a full-blown kubeadm-based cluster doesn't really fit in, because we don't have the control plane data structure yet to distinguish between those types of things, but we're working on it. Got it, alright. So could somebody close this one, please?
E: Yeah, it depends how important it is that you get rid of the secret on the pivoted cluster and switch it to using an instance profile. No, I think the main thing we talked about was using annotations on objects we didn't want to pivot, so we would do that in the Cluster API repo, in clusterctl, I think. Yep.
B: I mean, it feels like this is the same thing that we just talked about a couple of minutes ago. Yeah, okay, I'm fine. The one about clusterctl requiring an... I'm gonna skip over. Actually, let's just close anything related to clusterctl and it'll get dealt with as part of the re-envisioning of that flow. Sounds good. Next, handling AWS resources for cluster add-ons.
A
Specifically,
this
is
a
route
we
discussed
this
around
calico
because,
right
now
we
hard-code
the
security
group
rules
for
calico.
But
if
somebody
wanted
to
swap
out
a
different
CNI
provider,
we
don't
have
a
way
to
account
for
the
security
group
rules
for
that
and
that's
what
this
is
basically
tracking.
That
said,
considering
that
we're
talking
about
breaking
out
aspects
with
the
v
1
alpha,
2
and
beyond
work,
this
might
be
something
that
we
need
to
track
in
capi
itself.
So.
B: There haven't been to date and, like I said, it's an extreme, but were we to do that, or some aspect of it, then we could have some defaulting. If you don't specify the security groups we expect to see, then CAPA could default them, and if you wanted something else, you could supply your own. It doesn't necessarily have to be fully modeled as individual CRDs, but as we're designing the types for v1alpha2, we could potentially decide to include some sub-structs.
A
Think
it
would
probably
be
better,
especially
considering
the
way
that
we're
trying
to
break
apart
the
components
if
we
were
going
to
go
that
route
I
think
we
need
to
go
something
that
is
relatively
cloud
agnostic
in
an
abstraction
that
is
relatively
safe
across
multiple
providers.
Otherwise
we're
pawning
off
on
the
end-user
to
know
all
of
the
details
of
the
implementation
to
be
able
to
just
get
them
like
a
CNI
provider
to
work.
Yeah.
D: I was gonna say that maybe your issue for the AWS provider can be solved by just letting the users customize the security group rules and adding the defaults when we don't have them. So keep a backward-compatible way to deal with those, but let the user specify their own if they want to change the add-ons, because they could, right? But it definitely wouldn't solve the larger issue for v1alpha2 as well.
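A rough sketch of that customize-or-default idea, assuming a simplified, hypothetical rule type and network spec rather than the provider's real v1alpha1/v1alpha2 types:

```go
package main

import "fmt"

// IngressRule is a hypothetical, simplified security group rule.
type IngressRule struct {
	Description string
	Protocol    string
	FromPort    int64
	ToPort      int64
}

// NetworkSpec sketches an API surface where users may optionally supply
// their own CNI ingress rules.
type NetworkSpec struct {
	CNIIngressRules []IngressRule
}

// defaultCNIIngressRules fills in Calico-style defaults only when the user
// supplied nothing, preserving backward-compatible behavior while letting
// users override the rules for a different CNI provider.
func defaultCNIIngressRules(spec *NetworkSpec) {
	if len(spec.CNIIngressRules) > 0 {
		return // user-specified rules win
	}
	spec.CNIIngressRules = []IngressRule{
		{Description: "bgp (calico)", Protocol: "tcp", FromPort: 179, ToPort: 179},
		{Description: "IP-in-IP (calico)", Protocol: "4", FromPort: -1, ToPort: -1},
	}
}

func main() {
	spec := &NetworkSpec{}
	defaultCNIIngressRules(spec)
	fmt.Printf("%+v\n", spec.CNIIngressRules)
}
```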
A: So we would need the infrastructure for that. Well, maybe with v1alpha2; because we're not addressing control plane management right now, it would still be covered under this. But this is specifically about being able to have managed external entities that CAPA is not responsible for managing, or knowing what CAPA is responsible for managing. Yep.
A
Yeah,
so
this
is
something
that
has
come
back
as
requirements.
Every
time
we've
talked
to
kind
of
a
lot
of
customer
hosting
folks,
because
they
that's
that's
the
architecture
that
is
being
modeled
the
most
there
to
date,
with
with
the
management
methods
that
they're
familiar
with
so
not
necessarily
co-locating
and
managing
at
CD
on
the
same
nodes
and
shifting
that
failure
domain
over
to
a.
D
Different
set
of
gloves
I
got
a
question,
is
at
CD
X
Iran
today,
as
consider
I
swear
of
controlling
yes,
so
we
do
have
the
label
upstream
to
determine
the
control
plane
and,
as
far
as
I
know
the
user
that
doesn't
actually
preclude
you
to
actually
run
on
CD
onion
and
not
run
any
kubernetes
services.
So
technically
that
could
be
done
in
the
heck.
You
a
really
good
yeah.
E: Yeah, so that was a UX thing after somebody ran into an issue, even just doing the bootstrap. I think it's not high priority, but it would be good to document.
A: So the only way to ensure that there's consistency is to have a fork of cloud-init right now, and even if we did upstream it, we would still be installing a version of cloud-init that isn't necessarily the default for the OS image, because who knows when Fedora or CentOS or any of the other OSes would rebase and consume the new version from upstream.
B: Okay, Naadir, you were working on this, but then... yeah.
E: I tried it, but it was gonna take quite a bit of testing and I was about to move on to some customer work at the time, so I've kind of left it open for now. Okay, Fitz, if you want to do it, like, you could, I don't know.
B: Okay, so let's just leave it where it is. This one I think I want to close, because Michael is working on doing this for Cluster API itself now that we have remote node references. He is working against master, but it should be easy to backport it to CAPI 0.1.x, so I think this is something we can just close.
C: Oh yeah, there were just a few things that were a priority in the PR feedback. You can leave that open and set it aside. Okay.
A: So we've said in the past that we could potentially have one of two behaviors for this. We could either just have it be a blocking condition like it is today, or we can go ahead and clean up those Services of type LoadBalancer, those load balancers, automatically: going through, introspecting them via tags, and killing them off ourselves.
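A sketch of the introspection half of that second behavior, assuming the classic ELB API and the kubernetes.io/cluster/<name> ownership tag that the in-tree cloud provider puts on load balancers it creates; the function name and wiring are illustrative:

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/elb"
)

// findClusterOwnedELBs lists classic load balancers and returns the names of
// those carrying the cluster ownership tag, which is how Service-created load
// balancers could be discovered (and then deleted) during cluster teardown.
func findClusterOwnedELBs(client *elb.ELB, clusterName string) ([]string, error) {
	ownershipTag := "kubernetes.io/cluster/" + clusterName

	lbs, err := client.DescribeLoadBalancers(&elb.DescribeLoadBalancersInput{})
	if err != nil {
		return nil, err
	}

	var owned []string
	for _, lb := range lbs.LoadBalancerDescriptions {
		tags, err := client.DescribeTags(&elb.DescribeTagsInput{
			LoadBalancerNames: []*string{lb.LoadBalancerName},
		})
		if err != nil {
			return nil, err
		}
		for _, td := range tags.TagDescriptions {
			for _, tag := range td.Tags {
				if aws.StringValue(tag.Key) == ownershipTag {
					owned = append(owned, aws.StringValue(lb.LoadBalancerName))
				}
			}
		}
	}
	return owned, nil
}

func main() {
	sess := session.Must(session.NewSession())
	names, err := findClusterOwnedELBs(elb.New(sess), "example-cluster")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("load balancers to clean up:", names)
}
```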
B: Okay, I have another video call for another meeting. Yeah, same here. So I'm gonna say, I'm going over the meeting minutes in another window, and just noting that the last issue triaged was 773. We'll pick it up again either asynchronously or at the next meeting in two weeks. Thank you, everybody, and see you on Slack.