From YouTube: 20200629 - Cluster API Provider AWS Office Hours
A: Hello, and welcome to the Monday, June 29th edition of the Cluster API Provider AWS office hours. CAPA is a sub-project of SIG Cluster Lifecycle and the Cluster API project. Just a reminder that this meeting is being recorded and will be posted to YouTube afterwards, and please do abide by the Kubernetes community code of conduct.
A: It looks like we have a relatively light agenda today. I can go ahead and paste the link into the chat for anybody who's present at the moment. If there's anything that you would like to bring up, please go ahead and add it to the agenda, and please add yourself to the attendee list in the doc as well.
B: I just wanted to say that we're going to try and get an alpha of 0.5.5 out sometime in the next day or two. The reason it's going to be an alpha is because it incorporates conditions from Cluster API, which is itself still alpha: the 0.3.7 release of Cluster API is not final yet. So we'll do an alpha of 0.5.5, and then at some point in the next week or two, when Cluster API itself moves from alpha to a released 0.3.7, we can do the same thing for CAPA. So if there are any issues or pull requests that are open, that are not currently merged and not assigned to the 0.5.5 milestone, please feel free to nominate them: either set the milestone yourself if you've got privileges, or just add a comment asking if you can get it in.
D: Exactly. So let me briefly describe our strategy for cross-account resource management, because I think it might be interesting for you. Each AWS service (S3, SNS, SQS, etc.) will have its own service controller within the ACK project. We did that for security reasons, and I'm not going to go too far into the decision-making, but in any case, the pod that the service controller runs as will have an IAM role associated with the service account for that pod, through IRSA, the IAM Roles for Service Accounts functionality.
D
Well,
that
I
am
role
with
will
be
associated
with
a
particular
AWS
account.
What
we
didn't
want
was
for
users
to
have
to
install
a
full
kubernetes
cluster
if
they
wanted
to
manage
resources
across
multiple
AWS
accounts.
We
just
thought
it's
like.
Well,
we're
not
we're
not
having
a
very
great
user
experience
if,
if
we
were
requiring
people
to
set
up
an
entire
kubernetes
cluster,
just
to
manage
resources
across
accounts.
D: So the approach that we've taken is to have a ConfigMap that will store a relationship, a mapping, between an AWS account ID and an IAM role ARN, and that IAM role ARN will represent the target IAM role that the service controller will assume into. So the service controller will call the STS AssumeRole API call and essentially pivot its client into a target IAM role, and that IAM role will be in a different AWS account than the IAM role that the service controller pod is running under via its service account. The pivoted client will then manage resources in that target AWS account. On the custom resources that each service controller manages, there will be an annotation called services.k8s.aws/owner-account-id, and that will signal to the service controller that the user wants to create or manage resources in a different account.
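As a sketch of what that might look like on the Kubernetes side (the ConfigMap name and data layout here are assumptions for illustration; the `services.k8s.aws/owner-account-id` annotation key is the one named above, and the exact S3 `Bucket` group/version may differ by ACK release):

```yaml
# Hypothetical admin-managed mapping: AWS account ID -> assumable role ARN
apiVersion: v1
kind: ConfigMap
metadata:
  name: ack-role-account-map      # illustrative name
  namespace: ack-system
data:
  "111122223333": "arn:aws:iam::111122223333:role/ack-target-role"
---
# A custom resource asking the S3 service controller to manage
# a bucket in the mapped account rather than its own
apiVersion: s3.services.k8s.aws/v1alpha1
kind: Bucket
metadata:
  name: my-bucket
  annotations:
    services.k8s.aws/owner-account-id: "111122223333"
spec:
  name: my-bucket
```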
D: So we're using this annotation for that signaling constraint, and what this essentially means is you can have these ACK service controllers installed into one Kubernetes cluster, and that's it: you don't need to install the ACK service controllers, of which there will be many, because there will be a service controller for each of the services, anywhere else.
D: So the App Mesh controller for Kubernetes project just recently got redesigned, like another full rewrite, and they have created an API throttling package, a Go package, inside the App Mesh controller for K8s. So what we're doing is taking essentially the ideas from that throttling package and making them generic in the ACK service controllers, because in ACK all of the service controllers, including the implementation, are generated. We're not hand-writing any service controllers; we can't do that for 160-plus services, so we have to generate everything, including the controller implementation.
D: So anyway, we're going to be pulling in that throttling Go package from the App Mesh controller for K8s and making it essentially generic for each of the services. So that's how we're going to be handling that. Anyway, it's early days, but that's just a sneak peek into some of the things that we're working on.
D: So we kind of see these as two different personas, right? You have the application developer, the Kubernetes user, who, frankly, probably only knows the AWS account ID. These are folks that may not even have permissions to log in to the AWS console, or have any experience using the AWS CLI tools or API at all. And then you've got a persona that's more like that central IT team or admin.
D: That central admin team will be the ones that manage the ConfigMap mapping between an AWS account ID and the target IAM role. They'll still need to manage all of the IAM roles and, you know, how you can essentially set, in the permissions for an IAM role, the ability to assume-role into a different role, which will be associated with the different AWS account ID. The application developer user of Kubernetes will be the one doing kubectl apply on, you know, the S3 bucket YAML.
D: Well, I suppose they could guess the account ID, but unless that central IT team has created the mapping between that AWS account ID and an appropriate IAM role that has permissions to be assumed into, the service controller will just respond back saying the resource is out of sync, and it'll have a condition that basically says permissions failure. Okay.
B: Nadir, Jason: does that sound like it'll be enough for CAPA? Where, you know, at least today we're either using the IAM role from the instance profile or credentials associated with a single account, and I don't know how this jives with the multi-tenancy work that you've got going on, Nadir. Yeah.
C: I think we tried... I think the concern from our end was we wanted a stronger guarantee to protect against just guessing the ID. So what we have (I've dropped a link in the chat) is a bunch of CRDs that could potentially be made generic. They're going to be in Cluster API Provider AWS, I think, long term.
C: We might want to have it in the cloud provider and other places as well, but with this you would explicitly link global account resources to a cluster, and you have the ability to restrict which namespaces are allowed to consume each type of account principal. So, for the purposes of assuming role, you can restrict by namespace, and then in the future we do want to look at supporting the service accounts as well. We're thinking of allowing those to be namespace-scoped objects, because there's... if you're in...
D: So yeah, I kind of left out the namespace defaulting thing that we're thinking about. So the service controller for, say, S3 has a service account associated with the pod that's running that service controller. That service account has its IAM role, which is obviously what the client that is in the pod is communicating with AWS using. Each of the CRs for, say, an S3 bucket, you can create within a Kubernetes namespace, and we were planning on building a defaulting mechanism into that Kubernetes namespace.
D
So,
for
instance,
you
can
set
the
AWS
account
ID
for
all
objects
within
that
particular
namespace
to
a
specific
value,
and
if
the
way
that
the
service
controller
would
would
do
it
is
if
it
sees
that,
there's
an
override
for
that
namespace.
It
would
use
that
AWS
account
ID
and,
if
not,
then
it
would
use
the
account
ID
from
the
IAM
rule
that
the
service
account
for
the
pod
is
running
under
that's
one
way
we
could.
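That defaulting order can be sketched as a small Go helper; the annotation key here is illustrative, not necessarily ACK's actual key:

```go
package main

import "fmt"

// defaultAccountAnnotation is an illustrative key for the namespace-level
// override described above; the real ACK annotation name may differ.
const defaultAccountAnnotation = "services.k8s.aws/default-owner-account-id"

// ownerAccountID implements the defaulting order described above:
// a namespace-level override wins; otherwise fall back to the account
// of the IAM role the controller pod itself runs under (via IRSA).
func ownerAccountID(nsAnnotations map[string]string, controllerAccount string) string {
	if v, ok := nsAnnotations[defaultAccountAnnotation]; ok && v != "" {
		return v
	}
	return controllerAccount
}

func main() {
	ns := map[string]string{defaultAccountAnnotation: "222233334444"}
	fmt.Println(ownerAccountID(ns, "111122223333"))  // namespace override wins
	fmt.Println(ownerAccountID(nil, "111122223333")) // falls back to the pod's own account
}
```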
D
We
could
do
it
so
that
you,
you
restrict
the
because
only
basically
cluster
admins
can
you
you
can
change
the
kubernetes
arbok
right
so
that
only
cluster
admins
can
mutate
a
namespaces
in
values
right,
and
so
we
could.
We
could
essentially
set
it
up
so
that
a
normal
kubernetes
user
would
not
be
able
to
override
the
AWS
account
ID
from
a
namespace
if
the
namespace
has
it
set
on
there.
Does
that
make
sense
with
that
I
mean.
Would
that
be
the
sort
of
level
of
security
that
you'd
be
looking
for?
A: I think we also have some options here as well. You know, because generally we're talking right now in the context of a Cluster API management cluster that's able to create clusters in different accounts, we could also limit, you know, the access to the service operator to the users in the management cluster as well, and that's another way that we can limit some of this.
D: The permission set, from a cloud provider perspective, the IAM permission set that that Kubernetes user needs to have, is essentially superuser privileges on the AWS account, right? Because it needs to create VPCs, security groups, instances, all sorts of other things. Does it need to create IAM entities as well? No.
A
That,
specifically,
so
that
you
know
we're
not
interacting
automatically,
but
I
am
but
to
give
a
bit
more
context.
The
scenario
that
I'm
saying
is
you
know
we
would
like
to
leverage
the
service
operator
to
interact
with
the
AWS
api's
for
us,
so
that
when
we're,
you
know,
building
clusters
with
cluster
API.
You
know
we're
basically
interacting
with
the
kubernetes
object.
Exactly.
B: Yeah, I think doing it at the namespace level, not allowing users to override, would be helpful, along with what Jason said: just let CAPA have the RBAC permissions to work with the ACK, but don't give them to other users on that cluster. The scenario that I was thinking about was just wondering: where does the account ID go, and is it exposed to the end user? Because what I would not want to have happen, in a multi-user, multi-account environment where there's just one management cluster, is to say to the user:
B
Okay,
you
can
go,
create
a
cluster.
Give
me
your
yeah
mole
and,
by
the
way,
put
the
account
ID
that
you
want
to
use
on
there.
I'd
want
to
have,
because
that
bypasses
any
need
for
credentials
right,
like
you,
just
give
it
an
account.
Id
the
management
cluster
has
the
has
that
privilege
and
I'll
just
do
it
because
you
gave
it
an
account
ID.
B
So
if
we
can
say
the
person
who
is
an
IT
administrator
setting
up
this
management,
cluster
configures
each
namespace,
with
the
account
ID
that
it's
allowed
to
use,
and
then
you
use
our
back
to
give
somebody
access
to
the
namespace,
presumably
you're,
giving
them
permission
to
use
that
IMS
account.
It's.
D
B
B: It comes back to RBAC, yeah: like, if you're an admin and you want to fiddle around with CAPA using the ACK, then you go configure everything and it's all good, and you stick the account ID on your namespace and everybody's happy, or you just do it on the ACK. But if you're working in one of these multi-tenant environments, there needs to be some segmentation between who can assign the account ID and who can create the clusters.
B: You can create CRDs that store information like that mapping that you were talking about that's going to be in a ConfigMap, and restrict them to just cluster admins, so that regular users can't even, like, list them; they can't get them, they can't create them. You can put it as annotations on a namespace and not give regular users the ability to edit namespaces. But yeah, it just has to be on some resource that a non-admin, or, you know, or...
A: And, you know, at least from my perspective, I'm super interested in seeing how we can take this, because I think it helps us on, you know, the Cluster API side, ensuring that the things that we're doing are doing what they say, a lot easier than having to interact with and build all that custom logic around the AWS SDK.
B: Oh, just one more comment on that. So, Jay, you did mention conditions; I think that y'all were going to be using different resources. So what I wanted to request that you take to your engineering team is: conditions would be fantastic for us, to have visibility into what's going on. So I don't know how closely you've followed events and conditions in Cluster API, and I've been...
B: I don't know... I would say it's because of the IAM role that I'm wavering here. Like, if you have that permission, then it's not harmful and could wait. But if you're like Andrew here and you have that block, then you just flat-out can't delete with an unmanaged VPC. So I don't know; let's maybe do it next and see if anybody's available to work on it.
A: So I think the biggest challenge we'd have here is if, for some reason, somebody did want to create the bastion after the fact: we would end up in a weird situation where we could create the security group, but we'd have to wait for the machines to re-reconcile to have that security group, or subset of rules, applied.