From YouTube: 20200518 - Cluster API Provider AWS Office Hours
A
Hello and welcome to the May 18th edition of the Cluster API Provider AWS office hours. The subproject falls under SIG Cluster Lifecycle and Cluster API. Just a reminder: this meeting is recorded and will be posted up to YouTube later, and all of the Kubernetes community guidelines are in effect for the meeting, so in general, please be excellent to one another. All right, to start with: today we have Naadir with a heads-up on the multi-tenancy proposal.
B
I'm grabbing the link right now.
D
In support of some of the EKS work, it looks like Michael (or Richard Case is his name, apologies) has opened a PR to change the default subnetting that we do when creating a VPC. Before, it was one AZ with one public and one private subnet, and it's going to a more standard multi-AZ configuration, with one public subnet and then a bunch of different private ones.
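For readers following along, the multi-AZ layout being described can be sketched roughly as an AWSCluster network spec. Everything concrete here (API version, subnet CIDRs, names) is illustrative, not taken from the PR itself:

```yaml
# Hypothetical sketch of a multi-AZ default layout: public plus private
# subnets spread across availability zones, instead of a single AZ with
# one public and one private subnet. CIDRs and names are invented.
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3
kind: AWSCluster
metadata:
  name: example
spec:
  region: us-east-1
  networkSpec:
    vpc:
      cidrBlock: 10.0.0.0/16
    subnets:
      - availabilityZone: us-east-1a
        cidrBlock: 10.0.0.0/20
        isPublic: true
      - availabilityZone: us-east-1a
        cidrBlock: 10.0.64.0/18
        isPublic: false
      - availabilityZone: us-east-1b
        cidrBlock: 10.0.128.0/18
        isPublic: false
```

The key change under discussion is simply that the defaulted subnets span multiple availability zones rather than one.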
A
Awesome, thank you. I will definitely take a look later. I know there are some potential challenges, especially when looking at backwards compatibility and that sort of thing, and I know I've discussed it with, I think it was Ben Moss earlier, who had taken a stab at it, so I'll definitely take a look and provide feedback. It'd be great if we can move along in that manner as well, because that would also help out our more general kind of HA story around control planes as well.
D
Yeah, so we at New Relic have been doing some customer interviews with our customers who are users of CAPI and CAPA, and these two topics are sort of both my learnings so far from that, that I wanted to mostly share with you all, but also to check in if there's something I'm missing. So the big one was that the "three MachineDeployments to get three AZs" thing was a rough spot for our customers. They would prefer to have MachineDeployments that know how to span across failure domains.
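The pattern being described (one MachineDeployment per availability zone) looks roughly like the sketch below; names are illustrative and the template is trimmed, so this is not a complete manifest:

```yaml
# One MachineDeployment pinned to a single failure domain; to cover three
# AZs you create three of these, varying only the name and failureDomain.
# bootstrap and infrastructureRef are omitted for brevity.
apiVersion: cluster.x-k8s.io/v1alpha3
kind: MachineDeployment
metadata:
  name: workers-us-east-1a
spec:
  clusterName: example
  replicas: 2
  template:
    spec:
      clusterName: example
      failureDomain: us-east-1a
```

The customer complaint is about having to manage three such objects rather than one object that spreads replicas across failure domains itself.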
A
However, I can give some of the background as to why we chose not to go that route by default in the past, and it mainly comes down to the implementation of persistent volumes and those being tied to specific AZs. You get into weird scaling conditions, and, you know: where do you scale up properly, you know, to handle those types of situations, and how do you handle scale down?
A
MachineDeployments and MachineSets also made it easier to target, you know, those specific nodes when deploying applications to them, based on the different labels that are applied to the different AZs and that sort of thing. That said, it's definitely a discussion that we need to have, because it's a request that's come up multiple times, and, you know, if we can get agreement in the community that it's something that we want, then we can figure out what the next step is going to be.
D
Yeah, that's great; those are great points. The thing that made me realize is that we have two discrete sets of customers: the ones that don't particularly care (like, they don't use EBS volumes and they don't particularly care about the latency pieces), and then the ones that really, really do. And I think, for the ones that really, really do, I would feel pretty justified in saying: well, here are your three MachineDeployments; you know, now you have the control.
A
We haven't really broached the idea of adding a new field and only handling, you know, the value of that field on an experimental basis yet, so we probably need to figure out how we could possibly do that as an experiment, if we wanted to go that route, or if we wanted to take the more cautious approach and try to do it on, you know, a non-experimental basis, based on, you know, the data model changes needed.
D
I wonder, because the pattern that I observed in upstream Kubernetes was that they would introduce things like init containers as an annotation before it was part of the pod, you know, spec. I wonder if there isn't a path there, where we add stuff in annotations, and then, if it survives the experimental phase, it gets promoted into something else. I guess I should probably open up an issue more specifically about that, and we can think about design there. Yeah.
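The upstream precedent mentioned here can be illustrated concretely: before init containers graduated into the pod spec, they were expressed through an annotation (since removed from Kubernetes). Both forms of the same configuration:

```yaml
# Historical example of the annotation-first pattern: init containers were
# configured via an alpha/beta annotation before becoming a first-class
# field in the pod spec.
apiVersion: v1
kind: Pod
metadata:
  name: example
  annotations:
    # Old, long-removed annotation form:
    pod.beta.kubernetes.io/init-containers: '[{"name": "init", "image": "busybox"}]'
spec:
  # First-class field after graduation:
  initContainers:
    - name: init
      image: busybox
  containers:
    - name: app
      image: nginx
```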
A
Definitely. I know in the past we've generally avoided the annotations, because they end up, you know, kind of becoming part of the API regardless, and you still have to provide certain guarantees around that, and if you're already doing that, you might as well go ahead and do it with the field anyway.
D
Yeah, so the other piece that we've been hearing from our customers a lot is that Amazon is somewhat wanton with their capacity and availability for scaling. I unfortunately don't have anything particularly actionable on this, but it is a pain point for our customers. They try to scale a MachineDeployment, and it scales; CAPI scales it perfectly fine, and CAPA starts taking the best action it can, and often gets things like: there are no more of that instance class available in that availability zone, or you've hit your vCPU limit for this.
A
You know, more accurately defining the types of conditions that we want to surface, and those types of things, especially on the infrastructure provider side; I would expect that that would be kind of our first course of action. And then, in addition to that, I'm hoping that we can start leveraging some of those conditions, once they're there, to help better, you know, expose some of that on the command line, whether it's with kubectl or with clusterctl as well.
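As a rough illustration of the idea, a condition surfaced by the infrastructure provider might look something like the fragment below; the condition type, reason, and message are invented for the example, not a settled API:

```yaml
# Hypothetical status fragment on an infrastructure machine object: a
# failed EC2 request surfaced as a condition that tooling such as kubectl
# or clusterctl could then display directly.
status:
  conditions:
    - type: InstanceReady
      status: "False"
      reason: InsufficientInstanceCapacity
      message: "No m5.xlarge capacity is currently available in us-east-1a"
      lastTransitionTime: "2020-05-18T17:00:00Z"
```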
D
Yeah, that's really great. I hadn't put together the conditions and the surfacing of stuff like that yet, but that's a really great approach, I think, because you're exactly right about the following-the-breadcrumbs there. The other thing that we've heard is that it's tough to do that; like, knowing where to look, and, you know, which things to inspect and whatnot, takes some effort. Yeah.
A
You really do need to know the architecture now, and know where things could potentially break down, to start looking. Longer-term, that shouldn't be the case: you should always be able to start with the Cluster and follow the breadcrumbs down as far as you need to, to be able to inspect further and get to the, for lack of a better word, kind of root cause of an issue.
A
Beyond that, the other thing that I would say is that we probably want to look at how we can better define which types of these errors we should treat as terminal versus recoverable, because right now we default to the kind of recoverable use case, you know, with the idea being that, you know, if you're running into account limits, you can always reach out to Amazon to increase your limits, and you shouldn't necessarily need to, you know, try to recreate things to kind of get things back where you expect them to be.
A
Maybe we need to reassess some of those, especially in light of the KubeadmControlPlane resource existing now. That can, you know, potentially attempt to do the retries for scaling at that level, instead of, you know, keeping retrying to create the same instance at the machine level, so we can probably start considering some more of these...
A
You know, more terminal kinds of conditions, rather than, you know, retriable conditions, and that should help a little bit with some of the visibility as well, because, you know, at that point the failure becomes more of a failure on the MachineSet and MachineDeployment, or the KubeadmControlPlane, versus, you know, being an issue on a Machine that just continually retries.
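For reference, the terminal path being described maps onto the existing failureReason/failureMessage fields in a Machine's status; once set, the Machine is treated as failed rather than retried, and its owner can react. The values below are illustrative:

```yaml
# Sketch of a terminally failed Machine: failureReason/failureMessage are
# set instead of endlessly retrying instance creation, so the owning
# MachineSet, MachineDeployment, or KubeadmControlPlane can respond.
status:
  phase: Failed
  failureReason: CreateError
  failureMessage: "VcpuLimitExceeded: requested vCPU capacity exceeds the current account limit"
```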
C
Semi-related question: was there something a while back about trying to surface bootstrapping errors as well? That's like a pretty big undertaking, if I remember correctly; I was just curious if that was still a plan. That might not be CAPA-related, though.
B
I'm currently writing a doc on that. There's many different use cases, and there's different ways to solve them, so I'm currently collecting them, with a bunch of pointed questions that I want people to answer, because it strongly influences what we can actually do in terms of what we can provide, whether it would be, you know, boot error detection; and there's differences across the cloud providers, and the assumption around one-way or two-way communication strongly affects what we can do.
B
Yep, that's the proposal in the PR that's at the top, so just looking for people to say yes, no, comments; once that's approved, we'll get working on it. So the main change from previous discussions is that accounts are going to be cluster-scoped, and, copying what the upstream Ingress v2 work looks like, you would specify an allowedNamespaces selector, and then that will allow you to say which namespaces are allowed to use the account, such that you don't get privilege escalation.
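As described, the shape of the proposal would be something like the following sketch; the kind and field names here are approximations of the proposal under review, not a settled API:

```yaml
# Hypothetical cluster-scoped identity object for multi-tenancy: the
# allowedNamespaces selector restricts which namespaces may reference this
# AWS account/role, preventing privilege escalation across tenants.
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3
kind: AWSClusterRoleIdentity
metadata:
  name: team-a-account
spec:
  roleARN: arn:aws:iam::123456789012:role/capa-manager
  allowedNamespaces:
    selector:
      matchLabels:
        team: team-a
```

Because the object is cluster-scoped, the selector is what scopes its use back down to particular tenant namespaces.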
B
That's a fun one, because we look up security groups by whether the name starts with the cluster name, and if you actually call your cluster "sg"-something (which happened, because we have an individual whose initials are S.G., so a super reasonable enough request) they couldn't create the cluster there. So there's two options here: one is we prefix all the security group names with something, or we special-case it when you actually just have "sg". Yeah, so there's a decision to be made, but otherwise it's an easy fix.
A
And I think this comes down to the fact that we created the types overly broadly, but I'm not sure there's much that we can do to address that, because I don't think we ever intended for security groups to be able to be searched by filter. But yeah, at the same time, it would be a breaking API change to change it before v1 as well. So.
C
It is, and I think the implementation will be interesting to look at. I think, from our perspective, we're interested in the spot fleet version of this, so, like, any kind of generic reference implementation will help us get there. It's Seth who's looking more at the cost side, I think. I'm curious how maybe we can get some use out of using ASGs, but yeah.