From YouTube: OCB: Using Crossplane in OpenShift
Description
Crossplane is a CNCF Sandbox project that allows you to extend your Kubernetes cluster to provision and manage cloud infrastructure, services and applications.
Tuesday, January 19, 12pm Eastern
Agenda:
- Overview of the Crossplane project
- Demo of deploying infra and workloads in different regions
- Demo of work being done with the Quay operator and Crossplane
- Discussion of future integrations
Speakers:
Daniel Mangum (https://www.linkedin.com/in/ACoAABl3Jt8B5qZQ1UXH-zN5ROeJ8rg-Ssrnm4E), Upbound (https://www.linkedin.com/company/upbound-io/)
Krish Chowdhary (https://www.linkedin.com/in/ACoAACMv1m0BSi6xfoTOlduVsDkvKVnjDEW8E7o), Red Hat (https://www.linkedin.com/company/red-hat/)
A: Hello, everyone, and welcome to another OpenShift Commons. It's been a while since we've all seen each other, and today we have Dan Mangum and Krish Chowdhary (hoping I'm saying your names right) from the Crossplane project. Crossplane is a CNCF Sandbox project that is gaining more traction, and it allows you to extend your Kubernetes cluster to provision and manage your cloud infrastructure as well as your services and applications. So Dan is going to kick us off with a demo, and we'll get right into it.
B: Absolutely, thanks, Karina. It's definitely an honor to be on the show, and I'm excited to be here with Krish, who has become quite a large contributor to the Crossplane community; he's going to talk about some of the stuff he's been working on lately towards the end. But as Karina said, I'm going to start off. We're going to break the demo in two, which is kind of a frequent thing to do when you're provisioning infrastructure that takes some time to come up.
B: So I'm going to go ahead and jump into the demo, then take a step back and look at the Crossplane project as a whole, and then we'll circle back to see what's happened behind the scenes while we were talking. With that, I'm going to go ahead and share my screen here.
B: All right, so you should see an editor here. Krish, go ahead and let me know if anything goes haywire while we're running through this. Here I basically have a little bit of a guide, and this will be available for anyone to use after this recording as well.
B: I'm going to go ahead and create a cluster while we're waiting. Then we're going to install Crossplane, and we're going to install a configuration package, which I'll talk about more later in the show. We're going to set up a GCP provider, which basically gives us the ability to provision infrastructure on GCP from our Kubernetes cluster, and then we're going to create some east and west clusters, as we'll call them for now, which are kind of an abstraction on infrastructure defined by our configuration package. Once again, I'll circle back to all of this a little down the line.
B: It looks like our cluster is ready; we're just running a local cluster in Docker with kind. Before we do anything, I'm going to go ahead and build my configuration package. That's the package I've defined in this repo; you'll see it's just called cross-cluster, and it has some specification of its dependencies, specifically on Crossplane itself and then on GCP and Helm, because those are the two providers it's going to use to abstract infrastructure.
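A configuration package declares those dependencies in its package metadata file. A minimal sketch of what that could look like (the package names and version constraints mirror what's said in the demo, but the exact `meta.pkg.crossplane.io` API version varies across Crossplane releases, so treat the details as illustrative):

```yaml
# package/crossplane.yaml (illustrative sketch)
apiVersion: meta.pkg.crossplane.io/v1alpha1
kind: Configuration
metadata:
  name: cross-cluster
spec:
  crossplane:
    version: ">=v1.0.0"
  dependsOn:
    - provider: crossplane/provider-gcp
      version: ">=v0.15.0"
    - provider: crossplane/provider-helm
      version: ">=v0.5.0"
```

The package manager uses these constraints to pull compatible provider versions when the configuration is installed.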
B: Let me go ahead and make sure that I push it to the right place here, and hop back over to the README. This will just go to Docker Hub; Crossplane packages are actually just OCI images behind the scenes, so we're going to go ahead and push up to Docker Hub. It looks like that was successful.
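Building and pushing a package like this was done with the Crossplane kubectl plugin of that era; a hedged sketch (the registry path and tag are placeholders, not the ones used in the demo):

```shell
# Build the configuration package from the directory containing crossplane.yaml
kubectl crossplane build configuration --name cross-cluster
# Push it to an OCI registry (placeholder image reference)
kubectl crossplane push configuration docker.io/example/cross-cluster:v0.0.1
```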
B: The next thing we're going to do is install Crossplane into our cluster. These are just the instructions I've pulled off the Crossplane website, which we'll also take a look at in a little bit. This will just helm install Crossplane, and if we get the pods in the crossplane-system namespace, we should see that they are present or still creating. Once those are ready,
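The install steps from the Crossplane docs look roughly like this (in practice you would pin a chart version rather than take the latest):

```shell
kubectl create namespace crossplane-system
helm repo add crossplane-stable https://charts.crossplane.io/stable
helm repo update
helm install crossplane --namespace crossplane-system crossplane-stable/crossplane
# Wait for the crossplane and rbac-manager pods to become Ready
kubectl get pods -n crossplane-system
```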
B
We
can
go
ahead
and
install
our
configuration
that
we
just
built
and
pushed
I'll
go
ahead
and
get
ready
to
do
that
and
once
again
I
know
we're
moving
a
little
fast
here,
but
we're
going
to
circle
back
to
what
all
of
this
means.
So
those
are
up
and
running
I'll.
Go
ahead
and
install
our
configuration
like
I
said
that
has
a
dependency
on
provider
gcp
as
well
as
provider
helm.
So
we
should
see
those
also
get
installed
with
compatible
versions.
We've
said
anything
greater
than
0.15
and
0.5.
B: For these two, we should see compatible packages installed by Crossplane, and it looks like they are installed; they're still becoming healthy. We got the latest versions of each, which were 0.15 and 0.5, and shortly they'll become available. This will just take a little bit of time, and then we'll start to see things like CRDs added for the different GCP types. So, for instance, let's look at a GKE cluster, which is something we're going to create.
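Installing the configuration itself is a one-object apply; a sketch, with a placeholder image reference in place of the one pushed during the demo:

```yaml
apiVersion: pkg.crossplane.io/v1
kind: Configuration
metadata:
  name: cross-cluster
spec:
  package: docker.io/example/cross-cluster:v0.0.1   # placeholder reference
```

Crossplane's package manager then resolves the dependencies declared inside the package and installs compatible provider-gcp and provider-helm versions automatically.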
B
Today,
you
can
now
basically
provision
a
gke
cluster
right
from
your
kubernetes
manifest,
and
then
we
can
also
see
with
helm.
B
We're
gonna
have
some
resources
as
well,
and
then
the
last
thing
I
want
to
touch
on
is
that
we
have
xrds,
which
is
abstractions,
that
we've
defined
over
these
granular
managed
resources
and
you'll
see
we
have
an
app
abstraction
and
a
cluster
abstraction,
and
these
are
part
of
our
cross
cluster
package.
Here,
all
right
so,
like
I
said
we're
just
going
to
do
a
few
more
things
before
we
jump
into
more
information
on
crossplane.
B: All right, so you'll see I created an east cluster and a west cluster, and we'll come back to the actual whole architecture here. Now that we've sprinted through that, as I promised, we're going to go back and actually talk about what Crossplane is and everything we just did. I've got a short presentation here; we'll try to spend more time on demos and actually interacting with Crossplane, because I feel like that's a little bit more tangible, but we'll go ahead and show the presentation as well.
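The east and west clusters created here are instances of the package's cluster abstraction. A hedged sketch of what such a claim could look like (the API group and kind are hypothetical stand-ins for whatever the cross-cluster package actually defines; only the single region field comes from the demo):

```yaml
apiVersion: example.org/v1alpha1   # hypothetical group
kind: Cluster
metadata:
  name: east
spec:
  region: east   # the only field exposed by the abstraction in this demo
```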
B: Awesome, sounds good. We'll expect that folks at home can also see it. Like this slide says, I'm just going to give a short Crossplane overview that's current as of today. Starting off, the important thing to think about with Crossplane, though it has evolved since its early days, is that looking back on the roots of the project and when it was released really informs the capabilities that we have today.
B: Crossplane was announced in December 2018 at KubeCon Seattle that year, and since then, like I said, it's changed quite a bit, but one thing it has remained is open source, with open governance, and standardized on the Kubernetes control plane.
B: Now, there are lots of different infrastructure provisioning systems: there are infrastructure-as-code tools, there are cloud provider APIs, all sorts of things like that. But one of the things the founders of the Crossplane project noticed early on is that we were standardizing around this Kubernetes API, whether you're provisioning workloads or orchestrating containers, which was the initial purpose of Kubernetes, or doing the wide variety of other things that the Kubernetes control plane has been extended to do.
B
Integrating
on
that
api
allows
all
those
different
tools
to
work
together
and
for
you
to
be
able
to
have
consistent
workflows,
so
that
was
a
core
tenant
from
the
beginning
and,
as
it's
evolved
over
time,
we've
kind
of
developed.
These
three
main
feature
areas
we're
going
to
primarily
look
at
the
first
two
today,
but
you'll
also
get
a
flavor
of
the
third,
so
the
first
one
is
provisioning
infrastructure
from
kubernetes.
That's
pretty
straightforward!
That's
what
we're
doing
right
now
behind
the
scenes.
B
That's
what
adding
that
gcp
provider
and
that
home
provider
said
you
know
you're
now
able
to
create
these
types
and
crossplane
knows
how
to
provision
those
on
the
cloud
provider
publishing
your
own
declarative
infrastructure,
api!
That's
basically
saying!
Yes,
you!
You
want
to
be
able
to
create
these
different
managed
resources,
these
granular
infrastructure
representations,
but
likely
within
your
organization
or
even
just
personally,
you
want
some
abstraction
on
that
right.
You
don't
want
to
have
to
fill
out
the
100
fields
in
an
eks
configuration
every
time
you
might
just
want
to.
B
You
know
auto
populate
some
of
those
or
set
some
of
those
fields
as
hard-coded
and
then
allow
for
configuration.
You
know
a
common
module
approach,
that
you'll
see
with
infrastructure
tools,
and
you
may
even
want
to
publish
that
and
distribute
it
and
let
other
folks
use
it
or
build
on
top
of
it.
So
we
have
a
nested
infrastructure,
composition
concept
in
crossplane,
which
we'll
also
touch
on
a
little
bit
later,
and
the
last
one
is
just
running
and
deploying
applications.
B: I'm guessing a lot of folks who watch OpenShift Commons are fairly familiar with Kubernetes and OpenShift, and are probably very familiar with Kubernetes operators. Kubernetes itself internally has a number of controllers that make it possible to provision container workloads onto different nodes in a cluster, but you can also extend that and add your own controllers and your own API types in the form of custom resource definitions.
B
So
the
the
core
thing
that
we're
going
to
see
with
lots
of
projects
is
creating
their
crds.
So
what
crds
does
that
expose
and
how
does
that
extend
your
api
for
your
kubernetes
cluster
and
then
your
controllers,
which
register
with
the
api
server
and
they
watch
for
changes
to
those
crds
or
instances
of
those
crds,
and
then
they
take
some
action
on
your
behalf
in
a
declarative
manner,
so
crossplane.
For
instance,
we
saw
earlier
that
we
had
a
provider
gcp
that
we
wanted
installed.
B
So
basically,
what
that's
doing
is
saying
I
would
like
these
crds,
which
represent
gcp
resources
and
this
pod
to
run
with
a
container
that
has
kubernetes
controllers
in
it.
That
talks
to
the
api
server
says
you
know
when
someone
creates
a
cloud
sql
database,
let
me
know
and
I'll
go
and
create
it
on
gcp
and
then
I'll.
Let
you
know
about
the
status
of
it.
That's
a
common
pattern,
you're
going
to
see
across
kubernetes
projects
and
that's
what
you
see
in
crossplane
as
well,
so
we
have
a
number
of
different
providers
here.
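Installing such a provider is itself declarative; a sketch (the package tag mirrors the 0.15 version mentioned in the demo, but treat the exact image reference as illustrative):

```yaml
apiVersion: pkg.crossplane.io/v1
kind: Provider
metadata:
  name: provider-gcp
spec:
  package: crossplane/provider-gcp:v0.15.0
```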
B
These
are
just
the
major
cloud
providers
that
are
listed,
but
we
have
countless
others.
You
can
go
check
those
out
in
the
cross
plain
org
or
the
crossplane
contrib
org
on
github.
We
also
have
things
that
are
not.
You
know
traditional
cloud
provider
apis,
so
one
that
we're
seeing
today
is
provider
helm.
So,
basically,
anything
with
an
api
is
something
you
can
write:
a
crossplain
provider
for
the
provider
helm's
case
that
api
is
the
kubernetes
api.
B
You
can
also
write
things
for
you
know:
we've
seen
kind
of
like
toy
providers
for
things
like
ordering
pizza
or
creating
a
github
org,
or
something
like
that,
or
sending
a
slack
message.
So,
while
traditionally
folks
are
more
interested
in
these
large
cloud
providers,
it
can
be
really
helpful
to
know
within
your
organization.
If
you
have
proprietary
apis,
you
can
easily
write
your
own
provider
as.
B
Well,
so
kind
of
moving
on
to
the
next
level.
Beyond
these
managed
resources,
we
want
to
be
able
to
create
abstractions
over
them
right,
so
whether
you're,
multi-cloud
multi-region
or
just
have
anything
that
differs
between
your
infrastructure,
which
I
think
includes
every
every
category
of
organization.
B
You
want
to
be
able
to
create
abstractions
on
top
of
those,
so
we've
seen
over
the
time
of
cloud
computing,
which
is
still
a
relatively
short
history,
but
even
in
that
short
history,
we've
seen
a
lot
of
different
kind
of
levels
of
abstraction
right
and
when
I
say
that
I
mean
you
know,
within
a
single
large
cloud
provider
like
aws,
for
instance,
you'll
have
granular
resources
like
an
ec2
instance,
all
the
way
up
to
something
like
a
lambda
function
right,
so
there's
these
different
levels
of
abstractions
between
or
within
a
single
cloud
provider.
B
Then
you
see
specialized
cloud
providers
like
something
like
heroku
that
says
you
know
we're
going
to
go
ahead
and
create
those
abstractions
and
only
expose
the
abstractions
to
you.
So
this
is
really
useful,
but
the
problem
with
that
is
that
your
organization
is
not
choosing
those
abstractions
right.
You
are
having
to
be
forced
into
another
cloud
providers
abstractions
and
if
you
know
the
one
that
you've
chosen
to
integrate
with
suddenly
doesn't
have
functionality
that
you
need.
It
takes
a
lot
of
effort
to
add
a
new
cloud
provider.
B
So
what
crossplan
wants
you
to
be
able
to
do
is
create
abstractions
with
pluggable
back
ends
to
them,
so
we're
using
provider
gcp
today
and
we're
going
to
look
at
a
multi-region
example.
You
could
also
very
easily
have
a
provider
aws
back-end
for
the
same
abstraction
that
we've
created,
which
today
that's
a
cluster.
So
our
cluster
consists
of
basically
a
kubernetes
cluster
and
a
postgres
database.
We're
satisfying
that
with
a
a
gke
cluster
and
a
cloud
sql
database.
B: So once again, you can see here that you're hiding that infrastructure complexity, and the last thing is that you can include policy guardrails. I'm sure on this show before there's been talk of things like OPA, the Open Policy Agent, where you can write policies that control who can provision what configuration of Kubernetes objects. Once again, when you expose these concepts as Kubernetes objects, these different projects integrate really well.
B
So
this
is
just
kind
of
a
visualization
of
what
it
looks
like
to
create
abstractions
in
front
of
those
resources.
So
this
is
a
little
more
complex
of
an
example
where
you
see
we're
using
azure
and
aws,
and
it
also
shows
a
little
bit
of
how
you
can
expose
these
at
the
name
space
level.
So
if
you
have
a
multi-tenant
cluster
right,
where
you
have
folks
in
different
name
spaces
that
need
different
capabilities,
you
can
choose
what's
exposed
to
what
namespace
all
right.
B
So
this
is
kind
of
an
overview,
but
we're
getting
towards
the
end
of
this
presentation
and,
like
I
said
I
really
like
sticking
with
the
demos,
so
I'm
going
to
go
ahead
and
swap
back
over
to
all
that.
So
let's
get
this
out
of
there
and
let's
see
how
our
provisioning
is
going
so,
like
I
said,
we
created
an
east
and
west
cluster
right
before
we
left.
B
We'll
see
here
that
both
of
those
are
now
ready.
So
what
does
that
mean
right
when
we
before
we
went
over
to
the
presentation?
The
first
thing
we
did
was
build
this
configuration
package.
So
when
you
have
these
abstractions
and
you
have
these
different
providers,
they
need
to
be
able
to
be
distributed
easily.
So
crossplane
has
its
own
concept
of
packages
that
go
through
the
cross
plane
package
manager,
which
does
some
things
like
enforce
constraints
around.
B: Obviously, if you go outside of the Crossplane packaging ecosystem and have other things running in your Kubernetes clusters, it can't make guarantees about those, but that's where we see a lot of folks migrating towards having a cluster dedicated to infrastructure provisioning. That's kind of what we're doing today, and you'll see a little bit more of that down the line as we provision new clusters and then put services into them.
B
So
the
first
thing
we
did
was
build
our
configuration
package.
That's
in
our
package
directory
here.
This
configuration
type
here
is
actually
not
a
crd.
It
just
follows
the
same
schema.
It's
basically
just
informing
the
crossplane
package
manager,
I'm
a
configuration
package
type
as
opposed
to
a
provider
package
type.
I
mean
I
depend
on
these
two
other
types
which
are
providers
so
provider
gcp,
as
we
saw
earlier,
brought
all
those
gcp
types
and
their
controllers.
B
If
we
actually
look
over
at
the
pods
and
crosswind
system
now
we
can
see
that
we
have
provider
gcp,
running
and
provider
helm
running
in
addition
to
our
rbac
manager
and
crossplane,
so
those
were
installed
successfully.
Crossplane
took
care
of
that
for
us
and
the
other
thing
we're
doing
in
this
configuration
package
which,
once
again
configuration
packages,
carry
these
abstractions
around
and
have
dependencies
on
providers
and
providers,
bring
the
granular
resources
and
controllers.
B
So
in
this
configuration
package,
we've
defined
a
composite
resource
definition,
which
is
frequently
abbreviated
to
an
xrd
in
cross
plane
parlance
and
we're
calling
this
a
cluster.
So
we're
saying
we
want
an
abstraction,
that's
available
to
folks
in
our
cluster
that
you
know
in
our
infrastructure
cluster
that
allows
them
to
create
clusters
and
then
behind
the
scenes,
we're
creating
a
single
composition.
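An XRD like the one described could be sketched as follows (the group, kind names, and schema details are hypothetical; only the single region field, west or east, comes from the demo):

```yaml
apiVersion: apiextensions.crossplane.io/v1
kind: CompositeResourceDefinition
metadata:
  name: xclusters.example.org
spec:
  group: example.org
  names:
    kind: XCluster        # cluster-scoped composite resource
    plural: xclusters
  claimNames:
    kind: Cluster         # namespaced claim exposed to users
    plural: clusters
  versions:
    - name: v1alpha1
      served: true
      referenceable: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                region:
                  type: string
                  enum: [east, west]
              required: [region]
```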
B
Here
that
says,
I
satisfy
the
cluster
type
that
was
defined
as
an
xrd
and
when
a
cluster
is
provisioned
based
on
some
of
the
fields
that
is
given
to
it,
which
here
we're
only
saying
that
the
only
field
in
the
spec
is
a
region
west
or
east.
Here
I'm
going
to
provision
a
gke
cluster,
a
node
pool
a
cloud
sql
instance
and
a
helm
provider
config,
which
we'll
touch
a
little
bit
more
on
in
a
moment.
B
But
essentially
this
allows
you
to
create
this
huge
abstraction
over
some
pretty
complex
resources.
Like
a
gk
cluster
and
a
node
pool
in
a
cloud
sql
instance
and
make
sure
that
they're
always
provisioned
right
so
here
we're
creating
a
database
in
the
same
region
as
the
gke
cluster
in
node
pool,
which
means
later
on
when
we
provision
a
helm
chart
into
the
gk
cluster
that
was
created,
it's
going
to
consume
a
database
from
the
same
region
and
likewise
when
we
go
to
the
west,
we're
going
to
have
the
service
there
consume
the
west
postgres
database.
B
So
we're
doing
that
by
basically
using
this
light.
Patching
system
that
cross
plane
has
in
compositions
that
allows
us
to
overwrite
the
spec
we
have
here.
So
we're
basically
saying
you
know
based
on
the
region.
That's
provided
go
ahead
and
overwrite
the
region
in
the
cloud
sql
instance
in
the
node
pool
and
in
the
gk
cluster
and
that
maps
to
the
actual
regions
on
gcp.
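A composition entry with such a patch could look roughly like this (the CloudSQLInstance field paths follow provider-gcp's schema as I recall it, and the region-to-GCP-region map is illustrative):

```yaml
resources:
  - base:
      apiVersion: database.gcp.crossplane.io/v1beta1
      kind: CloudSQLInstance
      spec:
        forProvider:
          databaseVersion: POSTGRES_11
          region: us-east1          # placeholder; overwritten by the patch
          settings:
            tier: db-custom-1-3840
    patches:
      - fromFieldPath: spec.region   # "east" or "west" from the claim
        toFieldPath: spec.forProvider.region
        transforms:
          - type: map
            map:
              east: us-east1
              west: us-west1
```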
B
You
could
also
imagine
that
we'd
have
many
compositions,
so
we
could
have.
Instead
of
you
know
having
the
the
values
come
through
here,
we
could
have
a
whole
separate
composition
and
label,
one
of
them
east
and
one
of
them
west
and
based
on
the
labels
on
the
definition
or
the
instance
of
the
definition,
that's
created,
select
one
or
the
other
or
select
an
aws
cluster,
and
you
could
create
different
criteria
for
doing
that.
B
So,
if
we
jump
back
over
to
our
cluster,
we
should
see
that
those
different
resources
we
defined
are
actually
present
in
our
kubernetes
cluster.
So
if
I
do,
the
shorthand
get
gcp
you'll
see
that
we
have
two
cloud
sql
instances
running
so
one
east
and
one
west
looks
like
they're
all
ready
to
go,
and
if
we
go
up
a
little
further,
we
should
see
we
have
two
gk
clusters,
one
east
and
one
west
and
a
node
pool
for
each
of
them
that
are
attached
and
and
they're
all
ready
and
run.
B
So
we
also
created
a
provider
config
here
which
is
basically
how
you
configure
a
given
provider.
You
say
you
know,
point
it
to
the
secret
and
then
I'll
reference
it
from
instances
for
that
provider.
So,
for
instance,
for
the
gcp
provider
we
actually
just
created
one
default,
which
means
that's
the
one
that
provider
gcp
will
use
to
provision
infrastructure
if
there's
not
another
one
specified
since
we're
provisioning
to
two
separate
clusters,
so
two
different
apis
with
our
application.
That's
going
to
go
into
our
gk
clusters
that
we
provisioned.
B
We
need
two
separate
provider
configs.
So
likewise
with
the
east
and
west
variant
of
the
gke
cluster
and
the
database,
we
also
are
creating
a
helm
provider
config
that
points
to
the
secret,
that's
actually
created
from
provisioning
that
gke
cluster.
So
up
here,
if
we
look
at
the
gk
cluster
you'll
see
we
have
right
connection
secret
to
ref
and
we're
saying
write
it
to
the
crosswind
system,
namespace
and
name
it
cubeconfig
dash
whatever
region
it's
in.
So
if
we
actually
look
in
the
cross
plane
system,
namespace.
B
You
can
see
that
we
have
a
cubeconfig
east
and
a
cubeconfig
west
you'll
also
see
we
did
the
same
thing
with
the
database,
so
we
have
a
database
east
and
a
database
west
all
right.
So
if
we
hop
back
down
to
the
helm
provider
config,
we
should
see
that
we
have
two
home
provider
configs,
one
of
them
referencing
cubeconfig,
east
and
one
of
them
referencing
cubeconfig
west,
which
basically
is
giving
the
ability
to
provide
our
helm
to
provision
things
into
either
one
of
those
clusters.
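One of those Helm ProviderConfigs could be sketched like this (provider-helm's API group version has changed across releases, so treat the apiVersion and the secret key name as assumptions):

```yaml
apiVersion: helm.crossplane.io/v1alpha1
kind: ProviderConfig
metadata:
  name: helm-east
spec:
  credentials:
    source: Secret
    secretRef:
      namespace: crossplane-system
      name: kubeconfig-east   # connection secret written by the GKE cluster
      key: kubeconfig
```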
B
So
once
again
we'll
take
a
look
at
provider
config.
These
are
the
cluster
scope.
You'll
see
we
have
helm
east
and
helm
west.
If
we
want
to
make
those
a
little
bigger,
we
can
see
that
we're
referencing
pube
config
west
in
the
helm
provider,
config
west
me
all
right.
Next
thing
we
want
to
do
is
now
that
we
have
all
this
infrastructure
provisioned
and
managed
by
crossplane.
B
We
want
to
provision
applications
into
them,
and
specifically,
what
we
want
to
show
today
is
that
when
I
put
something
in
the
east
cluster,
it's
going
to
use
the
the
database
in
the
east
region,
I'm
going
to
put
something
in
the
west
cluster,
it's
going
to
use
the
database
in
the
west
region.
So
what
we're
going
to
be
provisioning
today
is
actually
a
documentation
site
for
crds.
B
So
I'll
pull
this
over
here
right
now.
Doc.Crd.Dev
is
a
website
where
you
can
go
in
and
look
at
documentation
on
different
crds.
So,
for
instance,
if
we
installed
all
of
those
provider
dcp
crds
and
we
want
to
see
which
ones
are
available,
it's
basically
going
to
parse
those
and
discover
all
of
them,
for
us
is
running
a
little
slow
right
now,
but
you
can
see
it
looks
at
all
the
versions
in
that
repo,
it's
going
to
show
us
the
different
ones
available.
B
So,
for
instance,
if
we
needed
to
see
that
gk
cluster
spec,
while
we
were
writing
our
configuration,
we
just
go
ahead
and
click
on
it.
Once
again,
my
my
internet
is
a
little
slow
here,
but
it
will
come
up
momentarily
and
you'll
see
it's
just
the
v1
beta1
container.gsp.crosspin.io
and
we
can
see
all
the
different
fields
that
are
available,
and
this
basically
just
helps
us
fill
it
out.
So
this
is
just
the
application
that
we're
going
to
be
provisioning
today
and
we're
going
to
do
it
in
two
different
regions.
B
So,
once
again,
in
our
composition,
we
defined
an
app,
and
this
also
just
has
a
single
field.
So
it
looks
very
similar
to
our
cluster
xrd
and
you'll,
see
here
that
I'm
referencing,
a
doc
helm
chart
that
I
just
published
this
morning
and
then
giving
some
values
from
the
db
secrets
that
we
provisioned
so
we're
basically
saying
provision
this
home
chart
with
these
secrets
for
the
given
cluster.
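Behind that abstraction, provider-helm models each deployment as a Release. A hedged sketch (the chart repository, names, and the secret-backed value wiring are illustrative, and the field names are from provider-helm's API as I recall it, so verify against your provider version):

```yaml
apiVersion: helm.crossplane.io/v1alpha1
kind: Release
metadata:
  name: doc-east
spec:
  providerConfigRef:
    name: helm-east          # deploy into the east GKE cluster
  forProvider:
    namespace: default
    chart:
      repository: https://example.github.io/charts   # placeholder repo
      name: doc
      version: 0.1.0
    set:
      - name: db.host
        valueFrom:
          secretKeyRef:
            namespace: crossplane-system
            name: database-east
            key: endpoint
```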
B: So if we hop back over and once again look at our XRDs, we can see that we have apps and clusters. We want to create two apps now, one for east and one for west, and I believe I have the commands here for that.
B
Yes,
we
do,
and
so
I'll
go
ahead
and
create
those,
and
what
that's
going
to
do
is
once
again
looking
at
our
composition
is
create
a
helm
release
which
basically
is
a
representation
of
a
release
that
gets
deployed
in
a
cluster
from
our
infrastructure
management
cluster.
B
And
so,
if
we
look
behind
the
scenes,
we
can
see.
There's
two
releases
they've
both
been
deployed
and
installed
successfully
one
in
east
and
one
in
west,
and
if
we
look
at
our
abstraction
on
that,
they're
not
quite
ready
yet
but
in
a
moment
should
show
that
they
are
also
ready
because
they're
one
composed
resource
is
ready
and
what
we
can
also
do,
which
is
kind
of
handy,
because
we
have
those
cube,
config
secrets
in
our
cluster.
B: We can see that we have our kubeconfig-east and kubeconfig-west, and I'm going to pull those out into kubeconfig files here, because instead of exposing these via a Service or something like that on the public internet, I'm just going to port-forward them both locally. What we want to see is that they're talking to different databases, because one should talk to east and one should talk to west.
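Extracting those kubeconfigs and port-forwarding could look like this (the secret key, service name, and ports are assumptions for illustration):

```shell
kubectl -n crossplane-system get secret kubeconfig-east \
  -o jsonpath='{.data.kubeconfig}' | base64 -d > east.kubeconfig
kubectl -n crossplane-system get secret kubeconfig-west \
  -o jsonpath='{.data.kubeconfig}' | base64 -d > west.kubeconfig
# Forward each remote app locally instead of exposing it publicly
kubectl --kubeconfig east.kubeconfig port-forward svc/doc 8081:80
kubectl --kubeconfig west.kubeconfig port-forward svc/doc 8082:80
```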
B: We have our east setup and our west setup. Let me open a browser window here and pull it over. All right, so first I'm going to look at, let's see, which one did we do? East, okay, yeah, 8081 is going to be the east, so I'm going to go ahead and connect to that. All right, so we are able to connect, and our app is running successfully, but we haven't actually hit the database for anything yet.
B
Yet
so
what
I'm
going
to
show
is
doc.crds.dev
if
it
hasn't
indexed
a
repo
on
the
first
request
for
it
it's
going
to
do
that
behind
the
scenes.
So
the
first
time
we
asked
for
it,
the
repo
should
not
be
available
and
it's
gonna
say
we'll
index
that
please
try
again
soon
so
I'll
just
do
the
the
cross
plane
repo
here.
B
So
there
we
see
we're
not
able
to
index
it
yet
so
they're
working
on
that.
So
that's
great,
and
actually,
if
we
go
ahead
and
refresh
the
page,
we
can
go
ahead
and
get
results.
This
isn't
the
latest
version.
It's
indexing
all
of
them
right
now,
but
you
can
see
that
behind
the
scenes
the
website
has
gone
and
indexed
the
repo
and
if
we
kept
refreshing,
this
we'd
eventually
get
all
of
it.
B
So
what
we
should
see
is,
instead
of
it,
showing
us
the
documentation
for
crossplane
since
it's
using
a
separate
database-
and
you
can
imagine
a
real
world
scenario-
you
might
want
something
like
a
replica
of
the
database.
We
should
see
that
it
has
not
already
indexed
the
crossplane
repo
perfect,
so
it
hasn't
because
this
is
using
a
separate
database.
That's
a
separate
app
instance.
B
It
is
taking
it
a
little
bit
of
time
here
but
anyway,
you're
able
to
see
that
it's
using
a
a
different
postgres
database
and
it's
done
indexing
now
and
so
that
way,
you've
seen
now
that
you
can
con
excuse
me
provision
applications,
two
different
clusters
and
have
them
consume
the
infrastructure,
that's
close
to
them.
This
is
kind
of
a
trivial
example
right
because
we're
actually
just
passing
through
the
region
that
we
want.
You
could
do
things
since
you're
standardized
on
a
kubernetes
api.
B
You
could
do
things
like
you
know,
evaluate
the
capacity
of
each
cluster
or
use
some
other
metric.
That
says
this
should
be
provisioned
here
and
consume
this
infrastructure.
So
here
we're
talking
about
you
know
consuming
the
database,
that's
close
to
them.
You
can
think
about
edge
scenarios
where
you
may
want
to
use
an
n
cluster
solution
as
opposed
to
an
external
database,
or
something
like
that,
and
that's
getting
a
little
closer
to
what
chris
is
going
to
talk
about
here
and
some
of
the
work
that
he's
been
doing
so
krish.
B: Looks like we're just getting the "you are sharing your screen" notice again here. Oops, let me...
C: Try that again. No worries. Okay, how about now? Fingers crossed. Let's see... it looks good. Okay, perfect. Hello, everyone. Yeah, I'm going to be talking about a hybrid cloud deployment of Red Hat Quay (or "key," if you want to say it the technically correct way) using Crossplane on OpenShift. I'll skip the intro slide, since we've already gone over that, but the basic idea here is that we want to integrate and compose different AWS resources, along with the already existing Quay operator, to create a reusable and automated
C
You
know
package
for
for
deploying
key,
so
the
the
real
thing
that
brings
everything
together
in
this
demo
is
the
in
cluster
provider,
which
is
a
newer
provider,
which
I
am
the
primary
maintainer
of
at
this
point,
and
so
the
idea
of
the
ink
cluster
provider,
as
the
name
suggests,
is
to
provision
resources
within
your
kubernetes
or
openshift
cluster,
and
the
benefit
of
this
is
that
you
know
we
can
mimic
the
interface
of
you
know
all
the
cloud
provision
resources.
So
now
you
can
interchange
provisioning
an
rds
instance
or
a
google.
C
You
know
cloud
sql
instance
or
postgres
locally
inside
of
your
cluster,
and
so
you
know
we
can
then
fulfill
a
requirement
with
any
implementation
based
on
you
know
what
your
use
case
is
or
where
you
want
to
run
it
as
dan
mentioned.
You
know
you
might
want
to
be
running
it
in
cluster,
if
you're
in
an
edge
scenario-
or
you
know,
on
our
on
rds,
if
you
are
running
open
shift
inside
of
aws,
but
there's
trade-offs
right
because
of
course
we
can
replicate
all
resources.
So
it's
something
proprietary.
C
So
the
aws
provider
is
is
what
is
primarily
going
to
be
provisioning
all
of
our
resources,
so
redis,
s3
and
and
postgres,
and
it's
also
going
to
handle
you
know
the
networking,
security
and
iam
the
helm
provider
is,
is
going
to
be
responsible
for
more
granular
resources,
along
with
jobs
for
configuring,
the
database
prior
to
actually
starting
up
key.
C
So
how
do
we
bring
it
all
together?
Well,
you
know
we
first
installed
crossplane
the
providers.
Then
we
set
up
the
configuration
set
up
the
provider
configs
configure
and
create
our
requirement.
C
So
let's
run
the
demo.
So
first
things.
First,
we
have
to
run
the
make
crossplane
command
and
the
make
provider
command.
So
these
are
commands
that
are
already
set
up
within
the
cross,
plane,
query
repo,
which
have
which
there's
a
link
to
at
the
end
of
the
presentation.
C
So
if
we
run
make
cross
plane
we'll
see
that
we
start
creating
cross
plane
within
our
cluster,
so
just
gonna
take
a
second.
C
And
once
this
is
done,
we
will
have
created
the
cross
plane
system
namespace
up
here,
along
with
the
actual
cross
button
deployment.
So
we
can
check
that.
C
Perfect
so
crosstalk
is
up
and
running,
and
then
we
can
run
the
make
provider
command,
which
sets
up
all
three
of
the
providers.
So
something
to
note
for
for
openshift,
you
also
have
to
set
up
a
controller
config,
which
is
basically
responsible
for
doing
some
setup
work
for
the
deployment
for
each
of
the
providers.
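That pairing could be sketched as follows (the version tag and the security-context tweak are illustrative assumptions; what a ControllerConfig actually needs to change depends on your cluster's security context constraints):

```yaml
apiVersion: pkg.crossplane.io/v1alpha1
kind: ControllerConfig
metadata:
  name: aws-config
spec:
  # Leaving these empty lets OpenShift's restricted SCC assign UIDs itself
  podSecurityContext: {}
  securityContext: {}
---
apiVersion: pkg.crossplane.io/v1
kind: Provider
metadata:
  name: provider-aws
spec:
  package: crossplane/provider-aws:v0.16.0   # illustrative version
  controllerConfigRef:
    name: aws-config
```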
C
Then
we're
going
to
create
our
authentication
secrets
for
aws,
along
with
this
kubernetes
cluster
or
this
openshift
cluster,
that
I'm
running
on
and
in
a
second
all
three
of
the
providers
will
be
created.
So
we
can
check
that.
You
know
these
are
all
spinning
up
now.
C: Next up, we can set up all of our CRDs using the make configuration and then the make catalog commands. make configuration installs a Crossplane package.
C: So we wrap the IAM user, an IAM user access key (which creates a set of access keys as a secret), the actual bucket, along with the bucket policy, and there are similar XRDs for Redis, that is ElastiCache, and RDS, which is Postgres, along with some networking resources and, in the end, the actual Quay resources for the operator.
C
So
if
we
do
make
catalog
all
of
our
xrds
are
are
now
created,
and
so
the
final
step
to
actually
get
everything
spinning
is
to
run,
make
key
and
then
make
watch.
But,
as
dan
mentioned
oftentimes.
C
Spinning
up
things
on
aws
can
take
a
bit
of
time,
so
I've
just
pre-recorded
what
it
would
look
like
if
you,
if
you
were
to
run
this
so
once
we
run,
make
key
we'll
see
that
it
creates
the
requirement
for
the
the
xrd
resource
and
we
can
then
watch
all
the
resources.
Spinning
up.
So
we'll
see
that
you
know,
we've
already
created
the
subnets
for
our
vpc
and
then
we
create
our
our
different
networking
resources.
So
the
security
group
and
route
table.
C
Sorry,
my
internet's
not
that
great
today
and
we'll
see
that
you
know
our
iam
user
and
the
I
am
user
access
key
are
both
already
up
and
running.
C
And
the
bucket
has
also
been
created
at
this
point
and
now
we're
just
waiting
for
the
replication
group
for
redis
and
the
rds
instance.
So
rds
is
done,
our
operator
has
been
created
and
the
release
will
finish
in
just
a
second
perfect.
C
So
now
that
all
of
those
are
done,
we
can
validate
that
our
actual
pods
for
a
key
are
coming
up.
If
we
just
watch
that
those
will
take
another
second
or
two,
so
oops.
C: So that's all for my demo; I'll hand it back to Dan.
B: Yeah, that was awesome, Krish. If anyone else wants to see more demos like this, we did have a Crossplane Community Day before the end of the year that Krish and a lot of other awesome individuals presented at, so there are a lot of great demos and solutions you can check out.
B
Also,
we
have
a
live
stream,
the
binding
status,
which
we
have
every
two
weeks
which
you
can
see
on
the
crossplane
youtube
channel,
but
as
we're
kind
of
like
nearing
towards
the
end
of
you,
know
our
demos
and
an
overview
of
crossplay
and
stuff
like
that,
we're
obviously
on
openshift
commons
here
and
chris-
and
I
have
been
talking
about
for
you-
know
the
last
few
weeks
or
so
along
with
other
folks.
You
know
what
is
kind
of
like
the
future
for
cross
plane
and
open
shift.
B
However, if there's something else that's also trying to manage that, for instance the OLM from Red Hat and OpenShift, there can be conflicts there, and I think we've gone a long way to reduce some of that with things like the ControllerConfig, which Krish referenced, which basically allows you to customize different parts of how that installation process happens.
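A ControllerConfig referenced from a Provider might look like this. It's a minimal sketch with placeholder names and an illustrative package version; the exact fields you'd override on OpenShift depend on the cluster's policies.

```yaml
# Illustrative ControllerConfig: it customizes how a provider's
# controller Deployment is created -- e.g. to satisfy a stricter
# security context -- and is referenced from the Provider object.
apiVersion: pkg.crossplane.io/v1alpha1
kind: ControllerConfig
metadata:
  name: aws-config
spec:
  securityContext:
    runAsNonRoot: true            # example override for a stricter platform
---
apiVersion: pkg.crossplane.io/v1
kind: Provider
metadata:
  name: provider-aws
spec:
  package: crossplane/provider-aws:v0.18.1   # placeholder version
  controllerConfigRef:
    name: aws-config
```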
C
Yeah, I'm really excited to see the continued growth of Crossplane as a project, along with seeing more and more people getting involved.
C
I do think there's still a lot of room to grow and a lot of interesting things that we can do using Crossplane and OpenShift, whether that's multi-cluster or looking at edge scenarios.
C
I think those are two areas where we're definitely going to be seeing a lot more interest in the next few months, as the project grows.
B
Yeah, for sure. One of the things that was a larger focus earlier on in the Crossplane project was this concept of intelligent scheduling, and in my demo and in Krish's demo we talked about scheduling infrastructure, scheduling the consumption of infrastructure to different places, whether it's regional or whether it's the location, in cluster or externally, that sort of thing. But that was mostly a somewhat manual process, in terms of we had to say, I want this in cluster.
B
I want this in us-east, or something like that. But I alluded a little bit earlier to potentially the ability to automate some of that. One of the nice things about being on the Kubernetes API is you could do something like I mentioned, of having a mutating webhook that says, put this in the east if the west is really overloaded or we're seeing more traffic. And you could actually design a whole separate system that integrated really well with Crossplane, but, you know.
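The mutating-webhook idea above could be wired up with an ordinary admission webhook registration. Everything here is hypothetical (the group, claim kind, and service names are invented for illustration); the point is only that a region-picking service plugs into the same Kubernetes machinery Crossplane objects flow through.

```yaml
# Sketch of registering a region-picking admission webhook. A service
# behind this configuration could mutate incoming claims -- e.g. set
# spec.region to us-east-1 when west-coast capacity is constrained.
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: region-scheduler.example.org
webhooks:
  - name: region-scheduler.example.org
    admissionReviewVersions: ["v1"]
    sideEffects: None
    clientConfig:
      service:
        namespace: schedulers
        name: region-scheduler
        path: /mutate
    rules:
      - apiGroups: ["example.org"]          # hypothetical claim group
        apiVersions: ["v1alpha1"]
        operations: ["CREATE"]
        resources: ["postgresqlinstances"]  # hypothetical claim type
```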
B
I hear that from folks on OpenShift a lot, and admittedly I don't have as much experience with OpenShift as I do with vanilla Kubernetes. But what are some of the customer use cases that you see, Krish, of folks that are wanting to do this sort of intelligent scheduling?
C
One of the teams that we've been working with a lot, one of the customers, is looking to adopt Crossplane for their own project. Something that they've seen as one of the core features of Crossplane, and one of the reasons they're looking to adopt it, is that they're really interested in being able to schedule workloads intelligently. Whether it's, if you want to create a workload for production, being able to deploy that with a custom XRD.
C
That's set up for AWS, for example, and then, with the flip of a switch, being able to switch that to a development workload where things are provisioned in cluster. So when it comes to scheduling, just that level of control and flexibility, and also the fact that it's opaque to developers, is, I think, something that is really appealing for commercial teams and enterprises.
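The "flip of a switch" can be expressed with a composition selector on the claim. This is a sketch with invented names: the same developer-facing type is satisfied by different Compositions selected by label, so switching one label moves the workload from a cloud-backed to an in-cluster Composition.

```yaml
# Hypothetical claim: which Composition satisfies it is chosen by
# label, so "production" might map to an AWS-backed Composition and
# "development" to one that provisions in cluster.
apiVersion: example.org/v1alpha1
kind: PostgreSQLInstance
metadata:
  name: app-db
spec:
  parameters:
    storageGB: 20
  compositionSelector:
    matchLabels:
      environment: production   # flip to "development" to land in-cluster
  writeConnectionSecretToRef:
    name: app-db-conn
```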
B
Yeah, absolutely, that's a great point. I think there are folks all across the Kubernetes ecosystem that are looking for that kind of workflow, and you can see that through CI pipelines and that sort of thing. That's where we see a lot of folks using Crossplane with something like Argo, where they have...
B
Based on the stage that they've defined in Argo. Or, just last week, I did a live stream with Keptn, which is from Dynatrace, and they do things like different development, staging and production steps in a workflow, and run tests against them, and that sort of thing. One of the things we were talking about in that case was temporarily spinning up development infrastructure to run these tests.
B
So maybe I spin up a new Kubernetes cluster using Crossplane and run load tests on it, and that's isolated away from my production workloads and that sort of thing, and then I can just tear it down with these same systems, like Argo or Keptn or whatever your favorite one is. So there's a lot of flexibility in being able to do all these operations from the Kubernetes API. But yeah, those are the main things I wanted to cover.
B
Yeah, absolutely. And there's lots of opportunities even if you're not that familiar with Kubernetes development. If you're some type of operator, or something like that, and you want to put together a demo or a guide for using the different projects that you like to put together for your cloud recipe, into a guide or a live stream or something like that.
B
We'd love to have you. Also, another thing that just started up, which we're looking to get more folks involved with, is an on-duty rotation, which may not sound super fun, right? It sounds like being on call for free, but it's a little bit different from that.
B
There's obviously flexibility with your time and that sort of thing, but it does give a good opportunity to continue to cultivate our Slack community, which, in my opinion, is a pretty good one, with lots of folks helping each other, which I think is a common thread across the cloud native ecosystem.
B
But I've been particularly happy with a lot of the folks who have decided to join our community. All right, well, I think that's what we have for you today. Like Krish said, definitely feel free to reach out to either one of us individually, and we'd love to have a conversation with you or talk about your use case or anything like that. But I'll hand it back to Karina here to round this out.
A
Thanks, Dan. Thanks, Krish. So, speaking of maintainers, and I hope you don't mind, but we do have a couple other contributors that are watching right now. The IBM team has been doing a lot of work integrating IBM Cloud right into Crossplane. So hello, I don't know if you wanted to jump in and say anything, or now you're not going to speak to me anymore because I'm calling you out in a briefing. But can you talk a little bit about the work that you've been doing?
E
Yeah, sure, thanks for mentioning the work we've been doing. It's been a very interesting journey together with the Crossplane community. Dan has been very helpful for us, and I think in a very short period of time we were able to deliver an initial release of the IBM Cloud provider.
E
This will allow us basically to integrate a few IBM services from the IBM catalog, and this gives us this kind of portability. I think one of the most important aspects I see now in Crossplane is this aspect of portability. We have customers that want not just the portability of the workload, but also of the infrastructure services they require. So I'm running, for example, OpenShift, maybe on AWS, but now I want to somehow move this to...
E
I don't know, GCP or IBM Cloud or another cloud, and have this portability of the services using these common definitions. So this for us is a very important aspect, and one of the reasons I think Crossplane has a lot of value for our customers. And, as you said, there's also this area of scheduling.
E
Some kind of smart scheduling, I think, is a very interesting aspect as well. And probably also having operators that can use the Kubernetes API provided by Crossplane, so we can make some of this more autonomic, in a way more intelligent. We can do this kind of autopilot now for workloads and infrastructure. I think there are a lot of possibilities here that we definitely want to explore.
A
That's awesome. I love all the community coming together, right? I mean, we've got Upbound, IBM, Red Hat. So for other people that want to join and write service providers, when is your community meeting?
B
Yeah, so we have a community meeting every other Monday. It would have been this week, but it was cancelled due to MLK Day yesterday, so not next Monday but the following one we'll have our community meeting. That is at 10 a.m. Pacific.
B
I think. I need to double-check that, but it's posted on the Crossplane website and also on the Crossplane GitHub page. And that's definitely a place where you can come and bring ideas that you have, bring use cases or questions or anything like that, and we would love to have you there and love to help out as best we can.
A
Awesome, thanks. And for anybody that's watching: Chris, do you have any questions in the live stream chat? Feel free to drop any questions into this chat. And thanks, Paulo, always love having you online in these calls.
A
I saw there was a lot of going back and forth in the chat.
F
Yeah, I've been talking to people about what they've been doing with Crossplane. It's pretty fun to see users and how they're enjoying the experience of using any product, and Crossplane is one of those niceties where kubectl becomes the de facto interface for everything, and that's a huge win for people.
B
Really cool to talk about it. One of the things that I love that Paulo was talking about there is the flexibility that provides, right? Providers are kind of just Kubernetes operators, if you will, so you can really get this nice interaction between these different pieces of the ecosystem. And one of the things that I know I've been chatting with Krish, and also Scott from Red Hat, about recently is getting an even smaller kind of operator deployment unit, which we're referring to as functions.
B
That kind of gives you some of that day-two operations feel. So something like, I want, every time I delete a Cloud SQL database, to send a Slack message or do some extra cleanup, because if you've worked with cloud infrastructure, I'm sure you're very aware that it has the capability to leak resources. And in a lot of these cases, we're solving for the generic use case in the provider.
B
Right, you create the instance, you bring it back down. We can't really make assumptions about your day-two desires, of how you specifically would like to work around that. But if we make a smaller deployment unit that's really easy for you to just script out some actions with, then that could really enhance the workflow for a lot of people as well.
A
So we have a couple great questions in chat. Can you talk about the intersection between Crossplane, operators and Helm, and why you'd want to use them together versus standalone?
B
Yeah, absolutely. There's a couple different facets of this, and I see that some of the other provisioning solutions are mentioned as well. So right off the bat, one of the things that we've talked about is the standardization on the Kubernetes API.
B
So, a single interface. If you've worked with other infrastructure-as-code tools, you know that having an imperative system, or having just a different set of tooling to provision your infrastructure, separate from your Kubernetes environment, can lead to some fragmentation. It's also hard to synchronize a lot of those operations and create useful abstractions in front of the infrastructure.
B
So that's one of the benefits. Another one is that Kubernetes and the Crossplane providers are running continuously, so it differs from infrastructure-as-code tools in that it's always watching your infrastructure. So let's say I provisioned a Cloud SQL instance in this demo.
B
Let's say I went and tried to scale that up to 10x its current capacity. Crossplane would say, hey, that's not what you said you wanted when you provisioned this, I'm going to scale that back down. And if you want it scaled up, you come to the source of truth, which is your Kubernetes API.
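Concretely, the source of truth is the managed resource's spec. The sketch below uses provider-gcp's CloudSQLInstance type with illustrative values; the idea is that the declared tier is what the controller continually reconciles the external database back to if someone scales it out of band.

```yaml
# Minimal sketch: the object declares the intended machine tier, and
# the provider's controller reverts any out-of-band change to it.
apiVersion: database.gcp.crossplane.io/v1beta1
kind: CloudSQLInstance
metadata:
  name: demo-db
spec:
  forProvider:
    databaseVersion: POSTGRES_11
    region: us-central1
    settings:
      tier: db-custom-1-3840    # desired capacity; drift gets reconciled away
      dataDiskSizeGb: 20
  writeConnectionSecretToRef:
    namespace: crossplane-system
    name: demo-db-conn
```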
B
So that's one of the benefits. Looking more at the operator side, one of the questions we get asked sometimes, in terms of the different providers and that sort of thing, since they are Kubernetes operators themselves, is: why not just Helm install them in the same way that we Helm install Crossplane, or something like that? And that kind of goes back to some of the benefits of the package manager that I was talking about earlier.
B
One of the things that we saw is that in Crossplane you can establish dependencies, which is a real problem in the Kubernetes community: having dependencies between things that are deployed in your cluster. Being able to have dependencies managed in a single central location means that I can install a configuration, and in that case it brought in Helm and GCP. It was kind of a simple case; let's say they were already installed, but they were the wrong version.
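The dependency declaration lives in the Configuration package's metadata. Version constraints and provider names below are placeholders; the point is that the package manager installs, or upgrades, the declared providers when the Configuration is installed.

```yaml
# Sketch of a Configuration package's crossplane.yaml declaring its
# provider dependencies for the Crossplane package manager.
apiVersion: meta.pkg.crossplane.io/v1
kind: Configuration
metadata:
  name: platform-config
spec:
  crossplane:
    version: ">=v1.0.0"
  dependsOn:
    - provider: crossplane/provider-gcp
      version: ">=v0.15.0"
    - provider: crossplane/provider-helm
      version: ">=v0.5.0"
```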
B
Then it says, please update to this other version, so you get some guarantees there. And then the big one, I'd say, when it comes to reconciling these CRDs that we installed, is that Crossplane is going to guarantee that only one operator is actually acting on those CRDs. If you have a lot of operators installed in a cluster, it can be very confusing to associate different API types with those operators, and so when you bring in a new provider, a new configuration package, or something like that...
B
Crossplane is going to say, all right, you are the owner of the types that you install; no one else is going to mess with them. So you can get a guarantee that another operator isn't going to come along and break them. Once again, that's all configurable, but if you stick to the base functionality, you're going to get all of those wins with standardizing on Crossplane there.
A
Now, what about provisioning solutions like Terraform? Is there an intersection between Crossplane and Terraform?
B
Yeah. Besides the differences I mentioned, one of the benefits of having such a robust cloud native ecosystem, and Krish, also feel free to jump in on this one, is that some folks really like HCL and some other people really don't like it; some people like YAML, some people don't; some people want to write their configuration in an actual pro... well, that's probably slanderous... in a more traditional programming language, maybe like TypeScript or something like that. And the nice thing about having this...
B
This wide ecosystem is that there are all these tools to take whatever source you like and essentially compile that to the Kubernetes API, or that's how I like to look at it, kind of like a compiler toolchain. So Terraform, for instance, has resources to actually be able to use HCL to produce Kubernetes objects. Just like I applied YAML here, you could write your HCL to actually create these objects as well, kind of using Terraform as a front end there.
B
Similarly, we work with CDKs from AWS to create a TypeScript front end for doing this type of thing, and then you can get all the benefits of versioning, and whatever distribution that programming system uses, by standardizing on one of those. So that's one of the nice ways I feel like people have an easier path for migrating. But, Krish, have you had any experience with those tools?
C
Yeah, I definitely agree with what you said, but the one thing that I'd chime in with: one of the things I like more about Crossplane, when compared to Terraform or some of the other tooling, is that, since it's so integrated with the Kubernetes resource model and Kubernetes as a whole, you really get to take advantage of the whole body of tooling that is available as a result of being part of Kubernetes. So Dan showed doc.crds.dev before, right?
C
The tool for visualizing CRDs exposed by any repository. And there's also tooling like Helm, and everything that exists in the Kubernetes world, for IAM, for orchestration. That's one of the really big benefits from my perspective.
B
I don't think too much. I would just echo again what Krish was saying earlier about the community. That's a really big part of what open source means to me personally, and I know it's true for a lot of the other Crossplane maintainers.
B
If you have any desire to be involved, or if you're just looking for a place to learn, you're definitely welcome here, no matter your background, experience level or anything like that. So please feel free to join us in Slack, on Twitter, etc.
B
I will say very quickly: there is a live stream where we have Matt, one of the Knative maintainers. So if you want to go and check that out on YouTube, give that a look.