Description
While Kubernetes is primarily thought of as a container orchestration platform, its inherent extensibility makes it a great choice for orchestrating any infrastructure. In this session, we will dive deep into how we can leverage Crossplane (https://crossplane.io/) and custom Kubernetes operators to provision and manage cloud services (such as Amazon S3 and RDS) in a Kubernetes-native way.
Kubernetes as a universal infrastructure control plane
Kubernetes is generally regarded as a piece of software that is useful for orchestrating and managing containerized applications. However, Kubernetes, through its inherent extensibility, can be used to manage even infrastructure outside of a Kubernetes cluster.
So if we have, let's say, some kind of custom automation that we want to run, we can write our own operator, where we define what the spec, or the schema, looks like through a Custom Resource Definition (CRD).
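As an illustration, a minimal CRD might look like the following sketch (the `example.org` group and the `Backup` kind are hypothetical names, invented just for this example):

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  # CRD names must follow the <plural>.<group> convention
  name: backups.example.org
spec:
  group: example.org
  scope: Namespaced
  names:
    kind: Backup
    plural: backups
    singular: backup
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                schedule:
                  type: string   # e.g. a cron expression
                target:
                  type: string   # what to back up
```

Once such a CRD is applied, users can create `Backup` objects, and the operator's reconcile loop watches those objects and performs the automation.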
Then, when a user creates a custom resource, the user essentially provides the desired set of inputs required for the custom resource to be created, and a piece of logic takes those inputs and performs whatever automation is required. While you may not have written an operator yourself, you have very likely used a lot of operators: whenever you install any kind of add-on to a Kubernetes cluster, it is essentially an operator written by the respective vendor.
So operators allow us to perform these kinds of custom automations. Why not, then, use operators to manage cloud resources as well? Why should we use a third-party tool to orchestrate cloud resources such as a database or an object store? We can use Kubernetes to orchestrate cloud resources through these custom operators, and the key benefit is that users can then use the familiar Kubernetes-based models, like writing declarative manifests.
In the past, there have been attempts to achieve this from a Kubernetes cluster itself. One of the earliest implementations was built on the Kubernetes Service Catalog API, where an Open Service Broker implementation could be used to provision cloud resources through Kubernetes. And today, all the cloud providers offer their own specific operators.
For example, AWS offers ACK (AWS Controllers for Kubernetes), Azure has the Azure Service Operator, and GCP has Config Connector. All of them are custom operators provided by the specific cloud provider: you install these operators on the cluster and then provision the respective cloud services through them.
One of the other interesting things happening in various organizations currently is the rise of platform teams. Many organizations are building internal platform teams tasked with building abstractions that are offered as shared services to developers, and these abstractions take care of implementing a lot of best practices, policies, and guardrails.
For example, if a developer needs, let's say, a database, a queueing service, or a messaging service, the platform team builds this kind of abstraction and offers it to developers. A developer simply says, "I just need this queue," and the service takes care of provisioning the actual infrastructure.
Developers, on the other hand, expect a self-service way of consuming these abstractions, where they can use a self-service experience to provision infrastructure so that they can move fast without being limited by a DevOps team's bandwidth. And the key thing is that they don't want to deal with the infrastructure directly, where they would be required to provide a lot of configuration and parameters.
A
This
is
exactly
where
cross
plane
comes
into
the
picture
right,
so
crossbringer
is
an
open
source
project
where
it
can
be
used
to
orchestrate
any
cloud
infrastructure
from
a
kubernetes
cluster
right,
so
cross
plane
of
exists
as
what's
called
as
a
universal
control
plane
and
using
cross
plane.
Platform
teams
can
actually
compose
various
abstractions
and
offer
them
to
developers,
and
these
abstractions
hide
away
all
the
complexities
after
provisioning.
A
The
infrastructure
The abstraction can also implement all the guardrails, policies, and controls required by a particular organization. When developers want to provision infrastructure, they simply use these abstractions, and the key thing is that they can use the same Kubernetes-style, declarative way of defining the infrastructure and provisioning it directly. Crossplane has out-of-the-box support for all the major cloud providers, and currently you can provision pretty much any cloud service through Crossplane.
A
So,
let's
dive
a
little
bit
deeper
into
cross
plane
and
at
a
high
level
there
are
six
major
building
blocks
that
cross
plane
offers.
We
will
dive
into
each
one
of
them
to
understand
what
they
are
and
how
we
can
use
them
to
compose
our
infrastructure
so
before
I
dive
into
those
concepts.
So
let's
look
at
a
simple
use
case.
Everything will be automatically taken care of by this particular PostgreSQL abstraction that the platform team is offering. So with that as the context, let's look at how we can use Crossplane to offer such an abstraction, or service.
The next building block is called a managed resource. A managed resource within Crossplane represents an external resource, such as a cloud service: it could be an RDS instance or a GCP Cloud SQL instance, for example, or any other AWS, GCP, or Azure service. This is the fundamental core building block used to create a corresponding cloud service.
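For example, with the community AWS provider, an RDS instance can be declared directly as a managed resource. A minimal sketch (the API group and field names vary between provider versions, so treat this as illustrative):

```yaml
apiVersion: database.aws.crossplane.io/v1beta1
kind: RDSInstance
metadata:
  name: my-rds
spec:
  forProvider:
    region: us-east-1
    dbInstanceClass: db.t3.micro
    engine: postgres
    allocatedStorage: 20
    masterUsername: adminuser
    skipFinalSnapshotBeforeDeletion: true
  # Connection details (endpoint, password, ...) are written to this Secret
  writeConnectionSecretToRef:
    name: my-rds-conn
    namespace: crossplane-system
```

Applying this manifest causes the provider's controller to create and continuously reconcile the actual RDS instance in AWS.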
A composite resource definition (XRD) essentially has a spec, or schema, that defines what kinds of composite resources exist within the system, and it can be used to create various resources downstream. Think of an XRD as very similar to a Kubernetes Custom Resource Definition, if you have dealt with those before. Or, if you're coming from the Terraform world, XRDs can be thought of as similar to the variable blocks you would have in a typical Terraform module.
Here is a simple example of a composite resource definition. In this case, for our PostgreSQL example, we are defining a kind called XPostgreSQLInstance, and we are simply defining the spec of this composite resource definition. This is again left to the platform team: they create a custom definition, and this spec defines the actual abstraction they are going to offer.
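The XRD being described might look roughly like this sketch (the `example.org` group is a placeholder; the kind and claim names follow the talk's example):

```yaml
apiVersion: apiextensions.crossplane.io/v1
kind: CompositeResourceDefinition
metadata:
  name: xpostgresqlinstances.example.org
spec:
  group: example.org
  names:
    kind: XPostgreSQLInstance
    plural: xpostgresqlinstances
  claimNames:                 # the namespaced kind developers will use
    kind: PostgreSQLInstance
    plural: postgresqlinstances
  versions:
    - name: v1alpha1
      served: true
      referenceable: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                parameters:
                  type: object
                  properties:
                    storageGB:
                      type: integer
                  required: [storageGB]
              required: [parameters]
```

The schema under `openAPIV3Schema` is what defines the abstraction's inputs, here just a single `storageGB` parameter.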
A
Okay,
so
now
we
have
a
spec,
then
the
next
step
is
to
actually
create
a
different
compositions.
So
a
composition
essentially
allows
us
to
create
various
resources
in
the
form
of
a
composition
right,
so
you
compose
various
resources
using
a
composition,
and
these
can
be
used
by
cross
plane
to
create
the
actual
underlying
resources.
A
So
the
the
composition
in
turn
creates
the
corresponding
managed
resources,
such
as
the
rds
instance,
or
a
cloud
sql
instance
in
the
respective
cloud
provider.
So
as
a
platform
team,
what
we
will
be
essentially
doing
is
creating
different
types
of
compositions.
Whatever
abstractions
we
want
to
offer,
we
will
create
different
variants
of
the
abstraction
through
compositions
right.
A
So
in
our
example
of
trying
to
offer
a
postgres
sql
instance,
we
could
have
a
production
postgres
which
automatically
configures
high
availability
and
encryption
while
a
dev,
postgres
composition
restricts,
let's
say,
instance,
type
so,
let's
say
small
or
medium
right
so
like
that
we
can
create
different
flavors
of
our
composition
and
offer
them
to
developers
to
actually
provision
the
resources.
A
So
for
our
example
of
trying
to
offer
an
abstraction
for
creating
postgresql
instance
on
both
aws
and
gcp.
A
platform
team
essentially
creates
two
compositions,
but
both
of
them,
if
you
see,
are
of
kind
x,
postgresql
instance.
This
is
also
of
exposure,
sql
instance,
and
the
gcp
one
is
also
of
x,
postgres
sql
instance,
but
where
they
differ,
is
that
in
the
aws
case,
the
actual
underlying
managed
resource
that
is
getting
created
is
of
timed
rds
instance.
A
Whereas
on
the
gcp
side
the
managed
resource
is
actually
a
cloud
sql
instance,
then,
as
part
of
the
composition
itself,
we
can
pre-configure
a
lot
of
parameters.
This
is
where
you
start
breaking
in
a
lot
of
those
best
practices
and
guard
rails,
and
so
on
so
forth.
So
we
can
do
that
as
part
of
the
composition
itself
and
then,
when
you,
the
user,
gives
the
inputs.
Let's
say
in
this
case,
we
want
to
accept
the
storage
gb
as
an
input.
Then
we
can
merge
those
user
inputs
to
the
actual
managed
resource
that
is
getting
created.
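The AWS-side Composition described here might be sketched as follows, assuming the hypothetical `example.org` XRD group and a classic provider-aws API version (exact fields vary by provider release):

```yaml
apiVersion: apiextensions.crossplane.io/v1
kind: Composition
metadata:
  name: xpostgresqlinstances.aws.example.org
  labels:
    provider: aws                  # lets claims select this variant
spec:
  compositeTypeRef:
    apiVersion: example.org/v1alpha1
    kind: XPostgreSQLInstance
  resources:
    - name: rdsinstance
      base:
        apiVersion: database.aws.crossplane.io/v1beta1
        kind: RDSInstance
        spec:
          forProvider:
            # Pre-configured by the platform team: guardrails and defaults
            region: us-east-1
            dbInstanceClass: db.t3.small
            engine: postgres
            skipFinalSnapshotBeforeDeletion: true
      patches:
        # Merge the user's input into the managed resource
        - fromFieldPath: spec.parameters.storageGB
          toFieldPath: spec.forProvider.allocatedStorage
```

The `patches` section is how a user-supplied parameter like `storageGB` flows into the underlying managed resource.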
Now let's switch persona and go to a developer who wants to create a resource. Essentially, we create a composite resource: we say that we want an XPostgreSQLInstance database, with 20 GB of storage and AWS as the provider. That's all we specify, and when we apply this YAML, the respective composition is automatically picked up and the corresponding managed resources are automatically created by Crossplane.
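The manifest the developer applies might be sketched like this (again assuming the hypothetical `example.org` group):

```yaml
apiVersion: example.org/v1alpha1
kind: PostgreSQLInstance        # the namespaced claim kind
metadata:
  name: my-db
  namespace: default
spec:
  parameters:
    storageGB: 20               # the only input the developer supplies
  compositionSelector:
    matchLabels:
      provider: aws             # picks the AWS composition variant
```

Everything else (instance class, engine, encryption, and so on) comes from the composition the platform team wrote.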
A
There
is
also
another
concept
called
as
composite
resource
claim.
A
composite
resource
claim
is
very
similar
to
a
composite
resource.
It
actually
one
to
one
maps
to
a
composite
resource
and
also
has
the
exact
sca
same
schema
like
a
composite
resource
right.
The
key
difference
between
xrc
resource
claim
and
a
composite
resource
is
that
typically
xrc,
the
composite
resource
claim
is
used
by
developers
and
devops
teams
and
platform
teams
can
use
the
composite
resource
to
create
resources
right
and
the
other
key
difference
is
that
the
composite
resource
claims
are
all
name
space
scope.
A
They
reside
within
the
name,
space
that
is
specified
and
the
composite
resources
are
actually
clustered
scoped
right
so
but
one
to
one,
they
have
a
exact
same
spectrum
schema
and
they
relate
to
each
other
in
a
one-to-one
fashion.
A
So
how?
How
does
this
all
these
things
actually
get
assembled?
And
you
know,
and
you
put
together,
how
does
it
look
like
right?
So
the
first
step
is
the
platform
team
defines
the
spec
through
a
composite
resource
definition,
and
then
the
platform
team
can
create
different
compositions
which
create
the
actual
composite
resources
and
as
a
developer,
they
initiate
a
composite
resource
claim
to
be
created
by
creating
a
claim
which
in
turn
creates
a
composite
resource
and
the
composite
resource
starts.
A
Creating
all
the
required
managed
resources
in
the
underlying
cloud
provider,
as
defined
in
the
composition,
composition
defined
by
the
platform
teams.
Okay,
so
that's
how
this
whole
thing
is
going
to
work?
A
Okay,
so
with
that,
let's
actually
jump
into
a
demo
where
we
will
see
things
in
action,
so
I've
got
a
kubernetes
cluster
running
on
aws
eks
and
I've
got
it
connected
to
my
local,
and
you
can
see
that
my
cube
config
is
currently
connected
to
that
right
now.
A
The
first
step
to
do
is
that
we
can
start
installing
cross
plane,
so
that
can
be
installed
just
by
using
a
helm,
and
you
know
install
the
cross,
plane,
template
and
once
cross
player
is
installed,
then
the
first
step
is
to
actually
install
the
provider
that
you
would
want
to
enable.
So
in
this
case,
you
can
simply
say
I
want
to
have
provided
aws
to
be
installed
because
we
are
going
to
deal
with
aws
primarily,
and
I've
already
got
that
as
well
installed
on
the
cluster.
So
if
I
just
say.
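Installing a provider is itself declarative. A sketch of a `Provider` package manifest (the package reference and version here are illustrative; check the provider's documentation for the current one):

```yaml
apiVersion: pkg.crossplane.io/v1
kind: Provider
metadata:
  name: provider-aws
spec:
  # OCI package image; pin a real, current version in practice
  package: xpkg.upbound.io/crossplane-contrib/provider-aws:v0.39.0
```

Applying this manifest makes Crossplane's package manager pull the provider and install its managed-resource CRDs into the cluster.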
If I list the providers, I can see provider-aws installed on the cluster right now. So let's say we want to create an SQS abstraction, where we offer an SQS service through which developers can create queues on the fly.
A
So
what
we
will
first
do
is
we
will
create
a
composite
resource
definition
for
sqs
and
we
define
okay,
it's
a
composite
squares
and
it
can
be
claimed
by
the
developer
using
the
name
sqs,
and
we
simply
define
the
spec
for
this
particular
abstraction
that
we
want
to
offer
it's
a
very
simple
spec,
where
we
simply
accept
name
region
and
the
visibility
time
mode
from
the
end
user
right.
So
that's
the
first
step
to
define
a
composite
resource
definition.
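The SQS XRD might look roughly like this sketch (the `queue.example.org` group is a placeholder; the parameter names follow the talk):

```yaml
apiVersion: apiextensions.crossplane.io/v1
kind: CompositeResourceDefinition
metadata:
  name: xsqs.queue.example.org
spec:
  group: queue.example.org
  names:
    kind: XSQS
    plural: xsqs
  claimNames:
    kind: SQS                  # developers claim queues with this kind
    plural: sqs
  versions:
    - name: v1alpha1
      served: true
      referenceable: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                parameters:
                  type: object
                  properties:
                    region:
                      type: string
                    visibilityTimeout:
                      type: integer
```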
In the composition, we are predefining the message retention period, and we are accepting the visibility timeout and region from the user. The user-specified parameter is mapped to the actual visibility timeout property in the underlying managed resource, and similarly, region is mapped to the region parameter in the managed resource.
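A sketch of such a composition, assuming the hypothetical `queue.example.org` group and a provider-aws SQS `Queue` managed resource (field names vary by provider version, so treat this as illustrative):

```yaml
apiVersion: apiextensions.crossplane.io/v1
kind: Composition
metadata:
  name: xsqs.aws.queue.example.org
spec:
  compositeTypeRef:
    apiVersion: queue.example.org/v1alpha1
    kind: XSQS
  resources:
    - name: queue
      base:
        apiVersion: sqs.aws.crossplane.io/v1beta1
        kind: Queue
        spec:
          forProvider:
            # Predefined by the platform team, not user-settable
            messageRetentionPeriod: 86400
      patches:
        # User inputs mapped into the managed resource
        - fromFieldPath: spec.parameters.visibilityTimeout
          toFieldPath: spec.forProvider.visibilityTimeout
        - fromFieldPath: spec.parameters.region
          toFieldPath: spec.forProvider.region
```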
So that's the composition. As a user, when I want to use this composition and create a particular SQS queue, I simply say: okay, I want to create a queue.
That should happen, yeah, so we have got the composite resource definition created. The next step is to install the composition, so we'll go ahead and install it. Typically, this activity will be performed by the platform team, and the compositions will be made available; once a composition is available to the developer, they simply initiate a claim, which actually creates a queue. Now that the composition is available, we can go ahead and create a particular queue as well. Before that, let's switch to the AWS console: we don't have any queue available here yet.
A
Sqs
claim
dot
ml
to
actually
initiate
creation
of
a
particular
sqs
queue
so
yeah,
so
there
you
go,
sqs
demo
is
created.
So
if
I
actually
say.
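The claim being applied might be sketched as follows (again assuming the hypothetical `queue.example.org` group):

```yaml
apiVersion: queue.example.org/v1alpha1
kind: SQS
metadata:
  name: sqs-demo
  namespace: default
spec:
  parameters:
    region: us-east-1
    visibilityTimeout: 30     # the value Crossplane will keep enforcing
```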
If I check, I can see that I actually have the sqs-demo queue created. What you see here is that you now start managing SQS itself from Kubernetes: you have the custom resource created here, and you can manage that particular queue from Kubernetes without even going to the AWS console or using any other tool. If I switch back to my AWS console, the queue should get created in another few seconds. There you go: the queue has actually been created. As simple as that.
A
So
now
the
power
of
cross
plane
now
comes
in
the
picture
where,
like
I
mentioned,
you
can
now
manage
everything
from
kubernetes
itself
and
because
it
runs
in
kubernetes
itself
all
the
aspects
of
kubernetes,
like
your
reconciliation
loop.
All
those
things
also
apply
to
even
this
particular
cloud
distance
right.
A
So,
let's
take,
for
example,
somebody
goes
and
modifies
this
particular
cube
where
they
actually
go
and
change
the
visibility
time
out
to
let's
say
60
seconds
instead
of
30
seconds
right
and
say,
go
and
save
right
so
now,
because
the
the
queue
was
created
from
from
kubernetes
through
cross
plane.
It
will.
B
A
Automatically
try
to
maintain
the
state,
as
defined
in
your
desired
state
right,
so
the
reconciliation
loop
is
always
going
to
keep
reconciling
and
finding
out
whether
there
is
any
change
and
automatically
detect
the
drift,
and
you
know,
revert
that
change.
So
if
I
actually
go
back
and
refresh
this,
so
it's
it's
one
minute
now
and
if
you
give
another
a
few
seconds,
while
the
cross
plane
kicks
in
and
finds
out
that
you
know
this
particular
thing
has
changed,
it
will
actually
revert
it
back.
B
So
let
me
actually
see
if
that
happens,
I
think
we
just
have
to.
A
Yeah
there
you
go
right
so,
the
time
mode,
visible
timeout
has
actually
come
back
to
30
seconds,
as
defined
in
our
claim
right.
A
So
that
is
the
the
key
difference
between
you
know:
cross
spin
on
the
other
infrastructure
as
code
tool
right
where
drifts
can
be
automatically
detected
and
automatically
reconciled,
and
you
can
go
back
to
your
desired
state
as
defined
in
your
spec
right.
So
that
was
a
simple
demo
and
if
you
just
want
to
delete
this
particular
cube,
we
can
actually
again
simply
delete
it
from
here
itself
right.
The
entire
life
cycle
can
be
managed
from
kubernetes
itself.
Right, so that was a simple demo of how you can use Crossplane to provision a cloud service, like AWS SQS, right out of your Kubernetes cluster.
Now, the next question is: what about pod identities? This queue has been created, but how does the application connect to that particular queue? It needs credentials to talk to it, in terms of AWS IAM roles. Every pod, when it wants to connect to a particular cloud service, needs fine-grained permissions that are specific to what that particular pod needs to connect to.
A
Let's
say
there
is
a
part
that
connects
to
sql.
It
needs
only
squeeze
permission
if
there
is
a
part
that
needs
to
connect,
let's
say
dynamodb,
it
needs
to
have
only
that
particular
permission,
so
this
whole
thing
needs
to
be
again
dynamically
created
whenever
the
product
comes
up
and
you
need
to
create
those
respective
iem
roles
and
policies
and
also
create
those
respective
service
accounts
right.
So,
if
you're
operating
as
a
platform,
then
this
whole
process
needs
to
be
again
automated
right.
So
how
do
we
do
that?
A
So
again,
the
operators
are
to
the
rescue
right.
So
what
we
can
do
is
we
can
actually
have
our
own
custom
operator
to
manage
the
whole
pod
identity
aspect
right,
so
you
can
have
your
own
custom
operator
written
where
what
it
does
is
whenever
a
part
comes
up
and
it
needs
to,
let's
say,
connect
to
a
particular
cloud
service.
It
identifies
which
servers
it
needs
to
connect
to.
It
automatically
creates
those
required
iem
roles
and
policies
and
also
creates
the
required
kubernetes
service
accounts
attaches
to
the
particular
part,
and
you
know
immediately.
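On EKS, the association between a pod and an IAM role is typically expressed through IAM Roles for Service Accounts (IRSA): the operator would generate a ServiceAccount annotated with the role it created. A sketch (the service account name and role ARN are placeholders):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: sqs-consumer
  namespace: default
  annotations:
    # IRSA: pods using this ServiceAccount assume this IAM role
    eks.amazonaws.com/role-arn: arn:aws:iam::123456789012:role/sqs-consumer-role
```

The pod spec then sets `serviceAccountName: sqs-consumer`, and the AWS SDK inside the pod picks up the temporary credentials automatically.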
A
The
part
can
actually
start
connecting
to
that
particular
queue
right.
So
hopefully
that
gives
you
an
idea
of
how
we
can
automate
the
entire
infrastructure
of
provisioning
cloud
services
and
even
provisioning
iem
roles,
everything
through
operators
through
a
combination
of
cross
plane
and
our
own
custom
operators.
A
So
in
terms
of
the
benefits
that
cross
plane
offers,
I
can
fun
first
fundamental
benefit.
Is
that
because
we
are
managing
everything
through
kubernetes
itself,
now,
as
as
users,
we
can
use
the
kubernetes
ecosystem
and
its
tools
to
even
manage
cloud
resources
right.
You
don't
have
to
use
another
third-party
tool
to
to
manage
cloud
resources.
A
The
entire
state
of
your
cloud
resources
are
available
within
your
kubernetes
clusters
itself
right
and
that's
a
big
advantage
in
terms
of
simplifying
the
entire
infrastructure
pipelines,
and
we
also
saw
in
the
demo
where
the
reconciliation
happens
automatically,
so
that
any
drifts
that
happens
in
the
infrastructure
outside
of
cross
plane
is
automatically
detected
and
it
maintains
the
desired
state,
as
defined
in
our
definitions
right.
This
is
very,
very
powerful
where
any
drifts
is
actually
automatically
rolled
back,
so
that's
pretty
much
it
so
hopefully
that
was
useful.
A
So
if
you
have
any
follow-up
questions,
so
please
do
reach
out
to
me.
Thank
you
and
have
a
great.