Description
Ever wanted to use the Kubernetes API to describe not just your Kubernetes objects but also the resources on which your applications
depend? Well, with AWS Controllers for Kubernetes (ACK), now you can! Describe that RDS database instance using a Kubernetes manifest and let
ACK manage its lifecycle. Need to ensure that an S3 bucket exists for your application to store objects in? ACK can handle that for you as well.
Just describe the S3 bucket in a Kubernetes manifest.
Come learn about the design and usage of ACK from one of its authors and see how you can contribute to its roadmap and development.
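As a sketch of the idea, an S3 bucket described as a Kubernetes manifest might look like the following — the apiVersion and field names here are illustrative approximations of the ACK CRDs, not an exact copy:

```yaml
# Illustrative ACK manifest: declares an S3 bucket as a Kubernetes
# custom resource; the ACK S3 controller reconciles it against the
# real AWS S3 API. Field names are approximate.
apiVersion: s3.services.k8s.aws/v1alpha1
kind: Bucket
metadata:
  name: my-app-bucket
spec:
  name: my-app-bucket-0123456789   # the actual S3 bucket name
```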
Presenter:
Jay Pipes, Principal Open Source Engineer @Amazon Web Services
A
I'm Jerry Fallon and I'll be moderating today's webinar. We would like to welcome our presenter today, Jay Pipes, Principal Open Source Engineer at Amazon Web Services. Just a few housekeeping items before we get started: during the webinar, you are not able to talk as an attendee. There is a Q&A box at the bottom of your screen — please feel free to drop your questions in there and we'll get to as many as we can at the end. This is an official webinar of the CNCF and, as such, is subject to the CNCF code of conduct.
B
Thank you, Jerry. I really appreciate the opportunity. So yeah, today we're going to talk a little bit about this ACK project, which is a brand-new set of open source service controllers for Kubernetes that allows a bridging of the AWS universe of managed services with Kubernetes. We're going to get started here with what should be a pretty familiar story for lots of folks, and it's a story that sort of highlights what the benefits of ACK are and where it fits into things. So we've got Alice.
B
She is a web developer and she's a huge Kubernetes fan, of course. She's developed this Node.js application for her internal department at her company, and she's using modern development practices and building her application into an immutable Docker image. At least initially, she chose to use SQLite as sort of a simple storage database for her application, and that was all fine and dandy. So Alice, being a huge Kubernetes fan —
B
She goes and deploys her application into a Kubernetes cluster, and she does this using the normal kubectl apply for a Deployment, and a Service for some sort of top-level networking stuff, and maybe she also creates some Ingress or volume mount resources for her application. That's all fine and dandy; everything is running great until, you know, like 10 users try using her site at once and, kind of predictably, SQLite falls over, because it's just not built for that, right? And Alice —
B
She realizes she's got to set up some sort of real database, and she knows that Postgres is a real RDBMS, right — a real relational database management system that supports concurrent access and all this kind of stuff. And Alice, she's, like I said, a huge Kubernetes user, and so she googles, you know, hey, how do I set up Postgres in Kubernetes? And of course there's lots of tutorials out there, and they all kind of boil down to what you see here on your screen, right?
B
She creates a Secret using kubectl, and then PersistentVolumeClaims so she's got some persistent storage for the database, and the Deployment file and the Service manifest, right? And she goes and deploys Postgres and changes her application so that it is connecting to her Postgres cluster instead of SQLite, and this all works great.
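The tutorials Jay describes generally boil down to manifests along these lines — a rough sketch, with names and sizes invented for illustration:

```yaml
# Hypothetical self-managed Postgres setup: a Secret for credentials
# and a PersistentVolumeClaim for storage (the Deployment and Service
# would follow the same pattern).
apiVersion: v1
kind: Secret
metadata:
  name: postgres-credentials
stringData:
  POSTGRES_PASSWORD: change-me
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-data
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 10Gi
```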
B
The only problem with that is that now Alice is in the DBA game, right? And that's not really what she had in mind. She wanted to focus on writing her application and not necessarily administering databases.
B
So what is she to do, right? She hears about AWS's RDS database service, which provides a managed relational database experience, and she thinks, oh, that's great. You know, now I don't need to be the DBA — I'll just set up an RDS instance and Amazon will do all the heavy lifting around managing the database instances. But she notices that there's a problem, right? She goes to create this RDS database instance and she logs into the AWS console, and everything is just kind of incongruent for Alice, right?
B
She really loves her cozy Kubernetes experience, and having to go into the web console and click through a wizard to create database instances is just not really what she wanted. I mean, she didn't have to use the AWS console, right? She could have also used the AWS CLI tool. She could have used something like CloudFormation or Terraform. You know, all of those things are perfectly good tools, but at the end of the day, those aren't Kubernetes, and Alice really likes Kubernetes.
B
So instead of logging into the AWS web console or using CloudFormation or the AWS CLI or any of those non-Kubernetes tools, she just simply writes the Kubernetes manifest to the Kubernetes API and — boom — an ACK service controller for RDS takes over the management of the lifecycle of that particular resource. And that's pretty much what ACK is, right? That's what it kind of boils down to.
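As a sketch, the manifest Alice writes might look something like this — the apiVersion and spec fields are approximations of the ACK RDS custom resource definition, not an exact copy:

```yaml
# Illustrative ACK DBInstance: kubectl apply this, and the ACK RDS
# controller calls the RDS API (e.g. CreateDBInstance) to realize it.
apiVersion: rds.services.k8s.aws/v1alpha1
kind: DBInstance
metadata:
  name: alice-postgres
spec:
  dbInstanceIdentifier: alice-postgres
  dbInstanceClass: db.t3.medium
  engine: postgres
  allocatedStorage: 20
  masterUsername: alice
```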
B
Let's allow Kubernetes users to stay in the Kubernetes API, use the familiar Kubernetes manifest and configuration language, but have a custom service controller for Kubernetes manage those resources in the AWS APIs. So, you know, hopefully ACK was solving Alice's problems. Let's take a look sort of under the covers and see if it can help solve some of yours too.
B
So, like I mentioned, it's a Kubernetes experience for AWS services — it's kind of providing a bridge, right, this sort of integration bridge between Kubernetes and the AWS services. And I say AWS managed services here, but it's really any AWS service, regardless of whether it's a managed service like RDS or something like that, right?
B
So there are custom controllers within the ACK project, one for each AWS service — so there's an S3 service controller for ACK, an SNS service controller for ACK, etc. — and, like all custom controllers in the Kubernetes universe, Kubernetes stores the desired resource state, right? So when Alice writes a Kubernetes manifest for an RDS DBInstance kind to the Kubernetes API — she does so using kubectl apply — the Kubernetes API server stores what Alice had requested as the desired resource state for her DB instance. And then the ACK service controller, which is the Kubernetes custom controller for that particular service, handles the lifecycle of that managed service resource. So in the case of the RDS ACK service controller, it will call CreateDBInstance in the RDS API and manage the lifecycle of the DB instance for the Kubernetes user.
B
One important thing that I like to bring up early on is that there is no use of CloudFormation in ACK. The reason I bring this up is that ACK — the AWS Controllers for Kubernetes project — is a sort of redesign, or a rethink, of a project called the AWS Service Operator, or ASO, which an ex-colleague of mine, Chris Hein, created back in 2018. And ASO, the AWS Service Operator, was a fairly thin shim across CloudFormation.
B
So when you, for instance, created an S3 bucket via the AWS Service Operator, what actually happened behind the scenes was that a CloudFormation stack was created, and within that CloudFormation stack an S3 bucket was created. And when we were thinking about how we redesign the AWS Service Operator and sort of bring it onto some of the more modern Kubernetes client libraries and controller-runtime and that kind of thing, we were thinking: well, isn't that user experience kind of surprising?
B
You know, I mean, if someone creates an S3 bucket via a Kubernetes manifest, and the service controller actually creates a CloudFormation stack behind the scenes that creates that S3 bucket, and then someone logs into the AWS console or looks at CloudWatch or something and sees that a CloudFormation stack was created — we thought that was a surprising user experience. And so we decided not to use CloudFormation within the design of ACK, and that's why I put it here, just to warn people.
B
After discussing with a number of our more security-conscious folks, we decided that it was a better idea to have separate service controller binaries for managing the resources in one particular AWS service. The reason for that was so that we could promote and encourage a best practice of having a very finely scoped set of IAM role policies that only allowed the IAM role that the service controller was executing in to manage the resources in one particular API.
B
If we had a single binary, the IAM role — and the policy associated with that IAM role — running that single binary would essentially need to have this sort of superuser, god-level scope, and that's something that we didn't really want to promote. That's the reason why we chose to create separate service controller binaries, one for each service: so we could finely scope that IAM role to what we would like it to do.
B
This is a little bit aspirational, as I'll explain here in a second when I talk about our release process, but you will install ACK service controllers using Helm or static manifests that we will distribute as artifacts for each of the releases. We're actually putting together helper scripts too, since we do have lots of these separate ACK service controllers, one for each AWS service — and we do have lots and lots of AWS services. I mean, I think there's like what, 107 AWS service APIs at this point, or more?
B
We knew that it's not a great user experience to actually ask people to manually install — either with helm install, or manually with like kubectl or Kustomize or something — over 100 different service controllers. So we're writing some helper scripts that essentially automate this process of installing service controllers for a list of services, so that you don't have to repeat the installation process.
B
Another important aspect of the ACK design is that everything, including the controller implementation itself, is generated. Many of you might be familiar with a project called Kubebuilder, right? Kubebuilder is, frankly, an awesome project. It generates the code for custom Kubernetes controllers and the API types, and it uses a set of libraries called controller-tools, which has this controller-gen binary in it.
B
What Kubebuilder does not do, however, is generate the controller implementation for you. So basically, what it does is output a stub of a controller, and then it's up to you to go ahead and write the Go code implementing that particular controller. And that's all fine and dandy. We have a sort of small ACK runtime that provides this linkage, you know, between a reconciling controller and the various aws-sdk-go calls that we make, but at the end of the day, each service controller is fully code generated, and that's kind of what makes ACK different from some other things, right?
B
Two important things to point out. First, we consult with the AWS service teams in question to make sure that what we are generating for their service controller is actually, you know, calling their API in a semantically and behaviorally correct way. So, for instance, we're working hand in hand with the ElastiCache team and the Step Functions and Lambda teams to make sure that the ACK service controllers for ElastiCache and Step Functions and Lambda and SageMaker, and these other services, actually behave the way that they, you know, expect them to behave.
B
It's making calls in the way that it should be. And then, finally, there is absolutely nothing that is specific to EKS. ACK service controllers can be installed on any target Kubernetes cluster whatsoever, regardless of whether you choose to use the managed control plane flavor of EKS.
B
Let's talk a little bit more about the code generation. I mentioned that we generate the entire controller implementation, and that is true. We actually have this multi-phased approach to code generation, and we use as the source of truth the aws-sdk-go API models that are actually included in the aws-sdk-go source repository.
B
We generate the RBAC configuration stuff as well, so it's this sort of multi-phased waterfall of code generation that happens for each of the services.
B
I put a link here which — if you go and download the files, or you can just follow this link — has a diagram on that page, and that diagram is primarily there to focus your attention on the fact that there are two different RBAC — role-based access control — systems in place with ACK at any given time, and that they don't overlap with each other. It's very important to understand how these different RBAC systems are used, right?
B
So Alice, the Kubernetes user who calls kubectl apply and passes in, like, an rds-dbinstance.yaml file — Alice is a Kubernetes user who is associated with a Kubernetes Role, and that Kubernetes Role has a RoleBinding which allows Alice to read or write custom resources of a particular kind. In Alice's case, it would be rds.services.k8s.aws/DBInstance — that would be the kind that she has permission to create.
B
That is the Kubernetes role-based access control system in play, right? Once the Kubernetes API receives a request from Alice and determines the role that she is operating under, it then performs its authorization and access control to determine whether or not Alice, the Kubernetes user, has the ability to write a custom resource of that kind to the server.
B
However, once that's done and the Kubernetes API server writes the custom resource representing the RDS database instance to etcd behind the scenes, it returns success to Alice. That is the end of the Kubernetes RBAC scope.
B
The ACK service controller then sees that desired state for, you know, the RDS DBInstance, and at that point it's going to need to call the AWS RDS API, right, to manage the lifecycle of DB instances in a particular AWS account. And that RBAC system — the IAM-role-based RBAC system — is in place for the IAM role associated with the service account that the ACK service controller runs as. There is no overlap whatsoever between the Kubernetes RBAC that Alice, you know, is controlled by, and the IAM role that the ACK service controller is using in order to determine whether it has the rights to manage the lifecycle in the RDS API. It's very important to understand the scope of where those two different RBAC systems come into play.
B
For those of you who are not familiar, we have something called IRSA, or IAM Roles for Service Accounts, for pods. It is our recommended way of providing fine-grained IAM permissions for a specific pod, and this is in contrast to the default setup, where the permissions of the IAM role associated with the worker node that the kubelet is running on are used by default for pods.
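As a sketch of the IRSA pattern — the role ARN and names here are invented for illustration — you annotate the controller's service account with the IAM role it should assume:

```yaml
# IRSA: bind a finely scoped IAM role to the ACK controller's pod via
# its ServiceAccount, instead of inheriting the worker node's role.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: ack-rds-controller
  namespace: ack-system
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::111122223333:role/ack-rds-controller
```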
B
All right, so one last thing around authorization and access control — something I'm super excited about. One of the contributors to the ACK project, Amin Hilali, has been working on this project called Cross-Account Resource Management, or CARM. When we realized that, okay, we're going to be having lots of these different ACK service controllers, we didn't want to have a user experience where, in order to control resources across multiple AWS accounts, the user would have to install an ACK service controller in lots of different Kubernetes clusters.
B
Another thing: what about secret stuff? Any of you who are familiar with the RDS CreateDBInstance API call know that it has a little bit of an issue: you send the master user password in plain text in the CreateDBInstance API call. Clearly that's not a Kubernetes best practice. Obviously the Kubernetes best practice is to store secret stuff in Secrets and then reference that Secret where you need to in your resources — in your custom resource.
B
So what the secret reference project does is implement basically that: it replaces the master user password's data type underneath, in the custom resource definition, from string to a secret reference — actually, it's a key reference within a Secret. This allows a cluster admin to set up a Secret called, say, db-secrets, with a key within that Secret called master-user-password, and they can control the access and RBAC and all that kind of stuff on the Secrets themselves.
B
And then all Alice needs to do is reference that by name; she doesn't need to do anything other than that.
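A rough sketch of what that looks like in the DBInstance spec — the field shape is an approximation of ACK's secret key reference, and the Secret and key names are invented:

```yaml
# Instead of a plain-text string, the password field references a key
# inside a Kubernetes Secret that the cluster admin controls.
apiVersion: rds.services.k8s.aws/v1alpha1
kind: DBInstance
metadata:
  name: alice-postgres
spec:
  dbInstanceIdentifier: alice-postgres
  engine: postgres
  masterUsername: alice
  masterUserPassword:
    name: db-secrets            # Secret name
    namespace: default          # Secret namespace
    key: master-user-password   # key within the Secret
```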
B
Some other things I'm excited about are coming soon — and when I say soon, I mean within the next few months. Okay, so: standardized AWS tag representation for all ACK resources, and then the second bullet point, tags that all custom resources within a namespace should have — kind of related. The first one refers to the fact that, across the universe of AWS service APIs, the way that tags are represented — meaning the data type that a tag takes — is very inconsistent, and, you know, some of the APIs —
B
They allow tagging a resource on the create call — basically setting a set of tags — and some don't allow that. Some of the service APIs represent it as a map of string to string; other APIs represent it as a list of structs with a key and a value; and then there are other representations as well. This first bullet point is about having ACK standardize that representation, so that for any custom resource that ACK manages, you specify the tags in spec.tags and it is a map of string to string. That's it.
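Sketched out, the standardized shape would look like this on any ACK resource (the bucket name and tag values are invented):

```yaml
# One tag representation everywhere: spec.tags as a plain
# string-to-string map, regardless of how the underlying AWS API
# models its tags.
apiVersion: s3.services.k8s.aws/v1alpha1
kind: Bucket
metadata:
  name: my-app-bucket
spec:
  name: my-app-bucket-0123456789
  tags:
    team: web
    env: production
```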
B
No inconsistent representation of the tag data structure. The second bullet point is allowing a specific set of AWS tags that all custom resources within a namespace should always have. So if the cluster admin wants to make sure that any RDS instance created within namespace foo should have an AWS tag of bar, then they would annotate the namespace with that set of tags that should always be placed on DBInstance custom resources. Finally: common rate limiting and throttling support.
B
So I was actually talking with Jason DeTiberus and the Cluster API folks about how we can have a common rate limiting and throttling support library for AWS APIs in ACK that can then be referenced from Cluster API and projects like Crossplane, so that we don't have to constantly repeat ourselves, with all of us working on variations of the same theme. This common rate limiting and throttling support for AWS API calls is something that I'm really excited to get done in the next few months. And then, finally, there is this idea that, look, you've created an S3 bucket, or an RDS database instance, or an SNS topic, or an SQS queue, or whatever, in the AWS console — completely outside of ACK's knowledge — and you want to essentially have ACK start managing that resource.
B
Well, in this resource adoption GitHub issue and project, we are allowing that. You will annotate the custom resource with an ARN — an Amazon Resource Name — and that is an indication to the ACK service controller that it should expect that the resource with this particular ARN already exists, and that it should just essentially place that resource under its own management, as opposed to attempting to recreate a resource with that name.
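A rough sketch of that adoption flow, using the services.k8s.aws/arn annotation Jay mentions (the bucket name and ARN are invented):

```yaml
# Adopting an existing resource: the annotation tells the controller
# the bucket already exists, so it takes over management rather than
# trying to create it.
apiVersion: s3.services.k8s.aws/v1alpha1
kind: Bucket
metadata:
  name: legacy-bucket
  annotations:
    services.k8s.aws/arn: arn:aws:s3:::legacy-bucket-created-by-hand
spec:
  name: legacy-bucket-created-by-hand
```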
B
Okay, all right — this is the final set of things. I just want to discuss sort of how we're handling the release cycle, or the release cadence, for ACK. As I've mentioned a few times now, there are well over 150 AWS service APIs. We want to get to all of them, right? We want to support all of them in ACK, but it's just not feasible to do that —
B
All in one go, that is. So the way that we're thinking about it is we have these phases where a group of services will get their controllers generated and then included in the ACK source repository, and get binaries, Docker images, and Helm charts created and pushed up to a Docker registry and Helm repository.
B
These phases of services are documented on the AWS Controllers for Kubernetes GitHub page; we have a project that shows the sort of release map for these phases of controllers. We're going initially into what we're calling developer preview, and that essentially just means the Helm chart is not currently available for easy installation, and the way that you work with these service controllers is, frankly, not particularly user friendly. It's very sort of developer-y — you use test credentials — and, anyway, long story short, it's not particularly user friendly in developer preview.
B
The services that we initially placed into developer preview are listed here: S3, SNS, SQS, ECR, DynamoDB, and API Gateway v2. Of those, unfortunately, SQS had a bit of an issue and it's not yet in the ACK source repository; DynamoDB should be by the end of the week, as well as API Gateway v2 — we're just waiting on a couple of end-to-end tests. The next phase of ACK service controllers is RDS.
B
ElastiCache, some parts of CloudFront, some parts of EC2, and EKS — those should be coming out in the next couple of weeks. And then after that, we're looking at the Kafka service, we're looking at Lambda, Step Functions, and more. So the project that you see here linked —
B
You can go there and see the release roadmap of what we have planned, what is currently targeted for developer preview and currently work in progress, and then beta and GA after that. I'll just wrap up by saying everything about ACK is open source, and we are absolutely jazzed to get feedback from everybody — and contributions, if you feel like it — and these two links should get you started going in the right direction.
B
All right. So, Najib — I hope I'm pronouncing your name right — ACK is entirely different from EKS. EKS is an AWS service that installs a managed control plane — and recently a more managed data plane, with managed node groups — but a managed control plane for Kubernetes.
B
ACK, on the other hand, is a set of Kubernetes-native applications — Kubernetes custom controllers — that allow a Kubernetes-native way of managing resources that live outside the cluster, in the AWS APIs.
B
I think it depends on the resource. Okay, so if it's within a particular API — for instance, within RDS — if you look at an API call that references another resource object within that API, we may be replacing the custom resource definition field from, let's say, an ARN to instead be an object reference that refers to a different custom resource within the RDS set of custom resource definitions.
B
Now, if the cross-resource reference is across APIs — for instance, if it's API Gateway to EC2 VPC, or ElastiCache to EC2 security groups, things like that — we will likely continue to refer to those things via ARN and not have an object reference type. Thanks so much — I hope that answers your question. Please, please let me know if it didn't; I think that's what you were asking. Okay, so Ryan's asking: what kinds of tags does ACK apply to a created AWS resource?
B
And is there a way to guard against an accidental kubectl delete — even if it is just an "I really don't want to delete this" flag? Very nice. We haven't decided this yet. There is an issue — if you go to the GitHub site that's on your screen now and go to the issues list, there are two issues you should search for, something called — gosh, I'm trying to remember — I think it's either "delete operations" or "destructive operations" or "destructive behavior."
B
There is an issue that talks about basically how we prevent deletion of important resources. I think what'll end up happening is that we will have some annotations on a Kubernetes namespace that will allow the ACK service controller to be configured in a certain way for CRs in that particular Kubernetes namespace — to essentially allow some sort of deletion propagation or deletion policy or protection.
B
That kind of thing is likely going to be fairly dependent on the AWS API behind it, and it's likely going to be up to a cluster admin to configure a specific CRD — a specific custom resource type or kind — to behave in certain ways, because we've frankly run the gamut as far as feedback that we've gotten from people.
B
There's an issue that is around the standardization of AWS tags and the representation of those tags for custom resources that ACK manages, and there is also an issue about what AWS tags should be auto-created on any custom resource that an ACK service controller is managing. So there's two issues there, Ryan — I definitely encourage you to check them out, comment, and, you know, plus-one or whatever on each of those issues.
B
Let's see — Najib is asking: will I still be charged for invoking APIs through ACK, like one pays for invoking native AWS APIs? Yes, absolutely. So look, ACK doesn't remove the charges for resources that it creates; the charges are exactly the same. Regardless of whether ACK is the thing that ends up calling CreateDBInstance on the RDS API, the charges that you will accumulate are exactly the same — very similar to CloudFormation, right?
B
So Anonymous is asking: there was an announcement for something similar in 2018 — was it admission controllers? Not entirely sure about that, sorry, Anonymous. You may be thinking about the AWS Service Operator, which is sort of one of the things that originated the idea for AWS Controllers for Kubernetes. Yeah, you got it — it's the Service Operator for Kubernetes, right. This is sort of the next generation of that, the reincarnation of that.
A
If anyone would like to ask any more questions, please go right ahead and do so; we only have about 15 minutes left. "Will ACK provide deeper visibility into the AWS services?"
B
I think that it will provide a different type of visibility, Najib. For those users — those AWS customers or AWS users — that prefer the Kubernetes environment, prefer the Kubernetes API and tooling and, you know, the kubectl experience, the way that they will have visibility into AWS resources will be different, right? They'll be able to run kubectl get dbinstances and see a list of their RDS database instances, as opposed to calling the AWS CLI tool or logging into the AWS web console. If you're referring to things like CloudTrail or CloudWatch Logs or that kind of thing, there's nothing about ACK that's going to change the setup and the auditability or traceability of a particular AWS service.
B
We need to get our Prometheus metrics story started, and one of the things that we would like to do is have Prometheus metrics that are dimensioned based on the AWS API call that ACK service controllers are making, so that you can see specifically how many calls, and of what kind, the AWS client is making to a specific AWS API. So you'll be able to say, you know, okay, how many times is — I don't know — CodeDeploy GetDeployments being called per hour, or something, right?
B
We want to provide those types of metrics via a standardized set of Prometheus metrics that are dimensioned by what is called the operation identifier within the AWS API.
B
Is there any way of enabling cross-account resource management? Oh, hi, Harish — yes, there will be. We're probably a couple of weeks out from the cross-account resource management being fully enabled. I merged the largest part of the code, which incorporates some caching mechanisms for namespaces and config maps, earlier last week.
B
We still need a little bit of work there. You will be able to, quote, enable the cross-account resource management by setting an annotation on a namespace.
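A sketch of what that namespace annotation could look like — the annotation key here is a guess at the eventual convention, and the account ID is invented:

```yaml
# Hypothetical CARM setup: resources created in this namespace would
# be managed in the annotated AWS account rather than the controller's
# default account.
apiVersion: v1
kind: Namespace
metadata:
  name: team-a
  annotations:
    services.k8s.aws/owner-account-id: "111122223333"
```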
B
So look for that in the next two to three weeks.
B
All right, let's see — Najib, I'm sorry, I'm not entirely following what you mean by native visibility of Kubernetes; perhaps you can elaborate a bit there. Okay, Fahad is asking how to import existing resources in AWS into Kubernetes manifests under ACK management, for example.
B
You will have the owner account — I'm sorry, the ARN — the services.k8s.aws/arn annotation, and that will indicate that the ACK service controller should expect that that resource already exists and not try to create it again.
B
Hi, Harish — Harish is a member of the EKS team. "So what do you think about leveraging ACK to do the heavy lifting for the AWS cloud provider behind the scenes, for managing and provisioning AWS resources, instead of the current implementation of the AWS cloud provider?" I've actually thought about that, Harish, and I've had some conversations with some of the Cluster API folks.
B
I've had conversations with Crossplane folks from Upbound about how we adapt the ack-generate command-line tool, which is the primary code generator inside of ACK, so that, instead of spitting out, you know, Kubernetes API types and a custom controller implementation for ACK service controllers, it instead spits out basically all the generated code for the AWS cloud provider or, in the case of Crossplane, the —
B
I think it's the cloud-provider-aws package, right? So I've actually got some prototype code going locally, where I've been playing around with this idea of making the ack-generate CLI tool a lot more extensible, so that it can kind of spit out Go code that fulfills sort of non-ACK core use cases. So yeah, I think, in the future —
B
It definitely will be possible to at least have ACK service controllers provide a sort of lower-level layer of functionality that could then be built upon in things like Cluster API and Crossplane. Okay, so: how is ACK different from Crossplane? I'll just knock this one out real quick — they're actually very complementary technologies.
B
ACK's entire mission is to provide a Kubernetes-native API for managing AWS resources. That's it — it's not trying to do anything more than that. Crossplane has a much broader mission, right? Crossplane has a mission to support cross-cloud — meaning, you know, GKE and EKS and AKS and all the different cloud providers — and to have some sort of standardization for Kubernetes cluster creation, as well as some of the managed service creation for each of those different cloud providers.
B
So it's got a much broader mission. I hope that ACK — at least the code generator inside of ACK — can, in the future, be a library or a sort of input to the Crossplane AWS provider, at least. And let's see: will there be a performance penalty for using ACK because of two hops — one being ACK and then the native AWS API?
B
No, there is no performance penalty — there actually aren't two hops. The Kubernetes user is communicating with the Kubernetes API, right, and the ACK service controller is communicating with the AWS API. So it's not like the Kubernetes user is communicating with the AWS API; instead, they're only talking to the Kubernetes API, and then the service controller for ACK is the thing that's communicating with the AWS API.
B
I'd also like to point out something that I didn't include here, unfortunately: I'm on the provider-aws channel in the Kubernetes Slack community. So please feel free to hit me up with any questions that you might think of after this webinar.
B
How is failure handled for ACK controllers if the controller crashes in the middle of an RDS or S3 creation API call? Good question. The way that we've built the service controllers should not depend on leader election within Kubernetes. I still need to work on some test cases to ensure that multiple ACK service controllers — multiple pods running the same ACK service controller — can have concurrently executing reconciliation loops and not trample on each other. But there's nothing that we're doing inside the ACK service controller like setting a latest-observed-version or latest-observed-state — we're not setting that from the ACK service controllers. And the reason we're not doing —
B
That is because, by having that latest-observed-version field within the status of a custom resource, you essentially force the architecture of the controller to be a single writer, and we did not want that, right?
B
We want to be able to have multiple concurrent service controllers for the same service, executing in multiple pods, and not have them trample over each other. One of the ways to do that is to ensure that you're not writing bits of information into the status struct — the status field of a custom resource — that represent the view of only a single writer, and that's what latest-observed-version actually is: it's not the latest observed version for the resource.
B
It's the latest observed version for that particular controller that is observing the resource, and by getting rid of that, we hope to have a more concurrent approach. Hope that answers your question. Will ACK provide Kubernetes Secret integration? Yes, absolutely it will. There's a slide — I'll kind of go up here — oops, whoops, I stopped screen sharing by accident. Yeah, there is a set of slides that explain that. There are some fields within AWS back-end API calls — for instance, CreateDBInstance — where you're passing in a plain-text string. We will be replacing those types of fields with secret reference fields — fields with a secret reference data type — and that means that you'll be able to set up a Kubernetes Secret ahead of time and then reference a key within that Secret from your custom resource.
B
Any planned integration with Parameter Store and Secrets Manager as an alternative to managing secrets in Kubernetes? Not within ACK, but actually I had a meeting with the AWS Config team recently about a similar topic — find me on the provider-aws channel on Slack and we can chat about it.
B
Yep — Fahad was asking earlier in the chat: is there support for Lambda in ACK? Are there any plans for serverless services support? Not currently — I'm aiming for mid-to-end November for both Lambda and Step Functions.
B
Luckily, both those APIs are actually fairly reasonable and sensible and concrete, with very few exceptions to them and very few inconsistencies. So yeah, we're aiming for mid-to-late November for both Step Functions and Lambda. And once again, thank you very much, Jerry, and to the CNCF for inviting me out here — it's a pleasure.
A
It's our pleasure — thank you so much for joining us today. That should just about wrap up our webinar for today. As I said before, today's recording and slides will be posted on the CNCF webinar page. We'd like to thank everybody once again for joining us today — and you as well, Jay. Everyone take care, stay safe, and we will see you next time.