Description
Crossplane is a project that strives to bring cloud infrastructure, services, and applications closer to your Kubernetes cluster in order to create a hybrid control plane. This goal is primarily achieved through the use of providers, which are standalone controllers for a specific API group. The Crossplane project itself manages the lifecycle of these providers, from installation to cleanup. In this briefing, Red Hat's Krish Chowdhary will briefly discuss what providers are and the Crossplane architecture, as well as how we can repackage a provider installation via the Operator Lifecycle Manager (OLM).
A
Hello everyone, welcome back to another OpenShift Commons briefing. Today Krish is back with us. You've seen him before, when we also talked about Crossplane a few months ago, and Crossplane has since applied for graduation from the CNCF, so that's exciting news as well. Krish just had a blog post go out that I will send out for those of you watching on all the OpenShift TV channels, so you can go open up his blog post as well. I will let Krish go ahead and introduce himself and talk about, well, I will just let you go, Krish. Thank you.
B
Thank you, Karina. Hi everyone, my name is Krish. Today we're going to be talking a little bit about Crossplane providers, as well as the recent OLM repackaging effort from the Office of the CTO.
B
So who am I? First things first: I'm a software engineer in Red Hat's Office of the CTO, more specifically on the Emerging Technologies team. I'm also a maintainer on the Crossplane sub-projects provider-aws and provider-in-cluster, and I'm a SIG Storage member and a maintainer on some of its sub-projects.
B
My Twitter is christo underscore, if you want to follow me there, and my GitHub is just krishchow. So, our agenda at a high level: we're basically going to go over the what, where, when, why, and who for Crossplane providers, as well as the how for OLM repackaging. In between, we're going to go over the main components, to help provide some more context for folks who aren't familiar with Crossplane.
B
So let's start with the what: what is Crossplane? Let's quickly go over the project at a high level. The first main pillar for Crossplane is really provisioning: Crossplane allows you to provision cloud resources from within your Kubernetes cluster. It also allows you to manage the entire lifecycle, so not only creating cloud resources, but also updating them and eventually deleting them if you need to.
B
Over time, if something happens to your RDS Postgres instance, you will be notified on the custom resource through events. From the Crossplane docs, we have that providers are packages that enable Crossplane to provision infrastructure on an external service. So again, the main goal of a provider is really to provision things outside of your Kubernetes cluster or your OpenShift cluster.
B
These providers bring CRDs, or managed resources, that map one-to-one with external infrastructure resources, as well as controllers to manage the lifecycle of those resources. Something key to note in that sentence is this notion of mapping one-to-one: providers are really focused on fidelity for these external resources. This is going to be a common theme. But what are providers, really? Definitions are all well and good, but what does that mean?
B
Providers are similar to operators, but the main difference is that for operators we utilize OLM, while for providers we use the Crossplane operator to handle installation and all other management; updating and removing providers is all handled by the Crossplane operator. Providers use a lot of the same tooling as operators, such as Kubebuilder, controller-runtime, and controller-tools.
B
But again, the main difference is that providers are designed to reference some external resource. For a developer, the really nice thing is that you don't have to worry about RBAC, deployment, and all the bootstrapping, since there already exists a lot of tooling for new developers and folks who want to create new providers. So, the where: where are providers located? Crossplane providers are open source, and most are available on GitHub under the crossplane or crossplane-contrib organizations.
B
Some examples: we already talked about provider-aws and provider-in-cluster. There's also provider-sql, which allows you to orchestrate SQL servers by creating users, grants, roles, and so on, all your favorite SQL resources, as well as provider-helm, which allows you to manage and deploy Helm charts using a custom resource. And there are also many others that are not listed here.
B
My personal favorite is provider-pizza, which allows you to order pizza from Domino's using custom resources. They're all available under the crossplane orgs. So, the when: when does it make sense to create a provider, and when should you use some other solution? It really makes sense to create a provider if you're consuming external resources.
B
Similarly, there's a really high emphasis on high fidelity, a really important focus, so you should create a provider if the resources that you want to use are well defined and granular. It doesn't really make sense to create a provider for managing abstractions, since that's not the goal.
B
And lastly, a special feature in Crossplane is this notion of a composition engine, and the best (with an asterisk) way to utilize the composition engine is to create a provider that manages your resources. The reason there's an asterisk here is that it's technically possible to utilize any resource within a composition, but it's really recommended that you only use resources that are exposed by a provider.
B
So, the why: why should you create or contribute to a provider? Well, Crossplane is a CNCF sandbox project and, as Karina mentioned, they've applied for graduation, so hopefully they'll be on to the next steps in the CNCF soon.
B
And all the providers are open source, so anybody can contribute, and this comes with quite a few benefits. Not only is there shared development and maintenance of common resources (if there are ten companies and all of them want the ability to provision resources on Azure or IBM Cloud, they can all share a common code base to achieve that, instead of creating their own bespoke solutions), but cloud vendors can also choose to expose their API in Kubernetes through a common interface.
B
And lastly, there's a very streamlined development and consumption process. Crossplane handles a lot of the messy parts, so cluster administrators and developers can get started on doing the work they need to do. So why might you want to repackage a provider? We've talked a lot about the Crossplane design and structure, but why might we want to take a provider and structure it as a standalone operator?
B
One of the first issues has to do with proxies. We'll go into more detail about this shortly, but something that's really integral to the Crossplane operator is pulling an OCI image: the operator itself pulls an image, and this can cause quite a few issues when a cluster is running behind a proxy.
B
This is not an unfixable issue, but it's definitely one more thing for administrators to note, and it's not immediately clear that this is how Crossplane is designed. A similar problem has to do with credentials: if you're using a private container registry, you'll need to supply credentials separately to Crossplane and to the kubelet.
B
So, for example, I might just want to use provider-aws to create S3 buckets. I can do that with the OLM repackaging, and I don't have to worry about all the other setup for Crossplane, or about whether there are any other issues. So who is actually creating and maintaining these providers?
B
And so, let's get into the components of a provider and an OLM operator. Let's do a quick primer, again on provider-aws.
B
This is one of the most popular Crossplane providers, with support for dozens of AWS resources, and for our purposes it's also a really great example of a provider to dissect. From the OLM side, we're going to be looking at the memcached operator, which is a common operator used in guides and examples like this one, and it's also a great example of the standard structure of an OLM operator.
B
Then we have the controllers directory, which contains all of our controllers and reconciler logic for the aforementioned API resources, and lastly, we have the config and bundle directories. Together, these allow us to describe all the metadata and deployment information for our operator. As for the provider, we'll see the structure is pretty similar.
B
We have the apis directory here which, just like the api directory, contains all of the types that we want to expose, and then we also have our pkg directory, which contains all of our controllers and reconcilers. As you can see, at this point the difference so far is pretty minimal. And lastly, or I guess most importantly, we have a combination of folders.
B
Here we have the package folder, as well as our cluster and build folders, and together these three directories handle everything around deployment and packaging, which we'll get into in a second. At this point, I think most people will have noticed that the difference here is mainly semantic, but that last point about deployment for providers is really key, and we're going to hone in on that a little bit more here.
B
Let's take a quick aside on deployment. As we mentioned before, within the Crossplane model you have to have the Crossplane operator installed already, and the Crossplane operator exposes a few different API groups. Namely, the pkg.crossplane.io group contains a Provider custom resource, and this is one of the most popular ways to install a Crossplane provider.
B
All
you
have
to
do
is
define
the
image
and
tag
that
you
want
to
provision
so
for
this
example,
we're
gonna
use
the
provider
aws
image
with
the
alpha
tag,
so
this
is
straight
from
you
know.
The
crossplane
docks
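As a rough sketch, a Provider custom resource of that shape would look like the following. The apiVersion may differ depending on your Crossplane release, and the image tag here is illustrative:

```yaml
# Install a provider by pointing a Provider custom resource at its package image.
apiVersion: pkg.crossplane.io/v1
kind: Provider
metadata:
  name: provider-aws
spec:
  package: crossplane/provider-aws:alpha   # package image and tag to install
```

Applying this manifest is all it takes; the Crossplane operator watches for Provider resources and handles the rest.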
One of the other ways to install a provider is through the use of a Configuration.
B
You can see within the spec there's a dependsOn array, where we can define the provider as well as the minimal version that we want to install.
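A minimal sketch of that dependsOn shape, with illustrative names and versions, might look like this:

```yaml
# A Configuration meta-package can declare provider dependencies; Crossplane
# then installs the named provider at (at least) the requested version.
apiVersion: meta.pkg.crossplane.io/v1
kind: Configuration
metadata:
  name: my-configuration          # illustrative name
spec:
  dependsOn:
    - provider: crossplane/provider-aws
      version: ">=v0.18.1"        # minimal acceptable version
```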
So what is the delta, then? Essentially, this boils down to: where are the missing components related to deploying the provider?
B
My favorite way to try and figure out something like this is just to start hacking away and breaking things down, so let's actually take a look at what our package contains. With a tool like undocker, you can browse through the contents of our image without digging through the entire build process; I personally would rather look at the end result than walk through every step of the build process and dig through a Makefile.
B
So let's pull our image with the 0.18.1 tag and run undocker, which allows us to unpack the image layer by layer. We can then see that within this directory, the only file is a package.yaml file in the root of the image. This is interesting: there's no controller, there's nothing here that stands out. It's just a YAML file.
B
If we take a look within this YAML file, it starts off with a long list of the CRDs that this provider exposes, and then, all the way at the bottom, in the last 143 lines or so, we'll see there's a meta.pkg.crossplane.io Provider resource. At the bottom of that, within its spec, we have an image reference, which is the provider-aws controller.
B
So this is the actual controller that we'll be using, and everything inside the package.yaml is basically just metadata and CRDs that we have to create.
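The tail of that package.yaml would look roughly like this. The shape below is a sketch of the provider meta-package format rather than a verbatim copy, and the controller image tag is whatever release the package was built for:

```yaml
# Final document in package.yaml: the provider meta-package. Its
# spec.controller.image tells the Crossplane operator which controller
# image to deploy at runtime.
apiVersion: meta.pkg.crossplane.io/v1
kind: Provider
metadata:
  name: provider-aws
spec:
  controller:
    image: crossplane/provider-aws-controller:v0.18.1
```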
B
So what's the rest of the puzzle, then? Based on this, we can deduce that there are two separate steps here: there's our actual metadata image, which is provider-aws, and then there's the actual controller, which is provider-aws-controller. It turns out, if we dig through the Crossplane operator code, that the operator parses the package.yaml, installs all the CRDs, and configures the RBAC for the provider at runtime; it parses all the CRDs and creates our service account.
B
All of that happens at runtime when we try to install the provider. For brevity, we're not going to go over the Crossplane operator code base in detail, but there is a link in these slides if anyone is interested in digging around there. In the end, the Crossplane operator creates our deployment using the tag referenced in spec.controller.image within our package.yaml document, and this is where the issue we talked about a little while ago comes in.
B
There is a project, a repository, that we can use to help out with this process: the olm-repackage repository within the redhat-et GitHub org. If we clone the olm-repackage repository and examine the contents, we'll see quite a few files that we're going to be using. From the top, we have the Dockerfile and Makefile that we're using.
B
The Dockerfile just contains everything required for building the controller itself, and the Makefile has all the targets we need to actually build our operator and build our bundle, which we'll see in a little bit.
B
The manager directory contains our deployment manifest and defines what our CSV, our ClusterServiceVersion, looks like. And lastly, within the rbac directory we have all of our generated RBAC manifests, which relate to the gen-rbac script: this script creates an rbac.go file containing all of the Kubebuilder annotations that are used to generate the RBAC during the build process.
B
We use quite a few different tools during the repackaging process as well. Namely, we use yq, which is a tool for querying different fields within a YAML document; the operator-sdk CLI tool, which is used to handle generation and validation for deployment manifests; and lastly, kustomize and controller-gen. The former two, yq and operator-sdk, must be installed manually prior to repackaging, and the latter two, kustomize and controller-gen, are both automatically installed by targets in our Makefile.
B
Not pictured here are Go and Docker or Podman. Go you need, of course, for actually compiling code and running the different generation steps, and Docker or Podman is needed, of course, to build and push up the image.
B
So what is the process of repackaging, then? Part one is all the setup steps: you have to clone your target repository (in this example, we're going to be using provider-aws), then set up all of our dependencies, which is everything that we saw on the previous slide. We also have to make sure that we have credentials, and access from those credentials, set up for a container registry like quay.io, which we're going to use here.
B
And lastly, you have to clone the olm-repackage repository and then copy the contents of the olm-repackage directory into the root of the provider. If that doesn't make sense, we're going to go over an example now. In this example, we're going to clone provider-aws, as mentioned before, and we're just going to use the master tag so we can get all the most recent changes.
B
So now we just have to set some environment variables: more specifically, our container registry user or organization, as well as the operator image that we're using. Under our org, we're going to use the provider-aws image and the master tag.
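As a rough sketch, that step might look like the following. The variable names here are illustrative assumptions, not necessarily the names the olm-repackage Makefile actually reads, so check the repository before copying this:

```shell
# Variable names below are hypothetical; consult the olm-repackage
# Makefile for the names its targets actually expect.
REGISTRY_ORG="my-org"                                        # your quay.io user or org
OPERATOR_IMAGE="quay.io/${REGISTRY_ORG}/provider-aws:master" # image to build and push
export REGISTRY_ORG OPERATOR_IMAGE
echo "${OPERATOR_IMAGE}"
```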
B
In one second, we're going to generate all of our CRDs. The next step is running the gen-rbac script, which generates our rbac.go. The actual contents of this Go file are basically a bunch of annotations: within this file, we have comments that describe what the RBAC needs to look like. Then we'll just quickly build the image. For me that was all cached, but if you're building this yourself, it might take a second. And so we've gone through the build process for the operator.
B
We've added all of our environment variables, run the code generation, and pushed it up to our container registry. The last step of the build process is actually creating our bundle: although we've created our operator and that's been pushed up, we still have to create the bundle, which contains all of our manifests and metadata, put into a single package that's installed by the Operator Lifecycle Manager, or OLM.
B
Now we can run the gen-project script. As I said before, this just defines our PROJECT file, which contains a bunch of metadata about our operator. Although this isn't strictly required, it helps to provide more detailed information for folks who are trying to install our operator.
B
Yeah, so there we go, that's done. Now we're just going to run a few commands here to template some values into our config directory. Of course, we could go through this manually, but it's easier to just run a command. And now we're going to rename our clusterserviceversion.yaml to include the name of this operator: we chose to name our operator provider-aws, so we're just going to make that change, and now we can actually create our bundle.
B
This again runs all of our code generation for our CRDs, as well as generating our RBAC, and this will take another second. Just to talk about something we did kind of behind the scenes: when we templated all of those values into our config directory, that just updated a bunch of references and metadata. So, our bundle has been created; we get some warnings here, but no issues.
B
Now we can just build our bundle, which is just a Dockerfile, and this should happen pretty quickly, since we're just copying a bunch of files around, and then we can push that up.
B
So these are the main resources required to create a new operator within OpenShift: the OperatorGroup, which defines the namespaces that we can target, as well as the Subscription, which actually requests the operator to be created. And so now we're just waiting on the ClusterServiceVersion to be created: we'll see that it's pending, now it's installing, and it succeeded. Just like that, we now have our operator running, and we can check to see that all of our CRDs are defined here, with all of our AWS resources.
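In OLM terms, the two resources described above look roughly like this. The names, namespace, catalog source, and channel are illustrative assumptions:

```yaml
# OperatorGroup: defines which namespaces the installed operator will target.
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: provider-aws-group     # illustrative name
  namespace: operators
spec:
  targetNamespaces:
    - operators
---
# Subscription: asks OLM to install the operator from a catalog source.
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: provider-aws
  namespace: operators
spec:
  name: provider-aws
  source: my-catalog           # illustrative catalog source name
  sourceNamespace: olm
  channel: alpha               # illustrative channel
```

Once the Subscription is applied, OLM creates the ClusterServiceVersion, which moves through the pending and installing phases described above.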
B
My LinkedIn and Twitter are here if you want to connect or chat, or feel free to ping me on Slack: I'm on the Kubernetes Slack as chris, and on the Crossplane Slack as krish as well, I believe. Are there any questions?
A
Thanks, Krish. Hey Krish, do you see any questions in Twitch?
B
There we go. So there are a few reasons why you might want to repackage a provider, if you're looking to get off the ground as quickly as possible.
B
It might not be the right fit in every case, but from my perspective, one of the main benefits of repackaging a provider is that you can choose to install just the providers you need and directly provision resources from those providers, while building your own abstractions on top of them. You might not necessarily want to utilize the Crossplane operator or composition engine.
B
So that's one reason. Aside from that, for the first two issues here, you can work around them, and I believe there are also fixes in active development to handle these.
B
So, for example, one use case that I can talk about myself: another project that I contribute to is the Container Object Storage Interface (COSI) project under SIG Storage. For that project, we're essentially trying to create drivers, or provisioners, for object storage vendors, so we could use the provider-aws operator as a driver for the COSI project. That makes things easier for folks who want to get started, since they just need to install the provider-aws operator instead of having to install the Crossplane operator and handle everything through that operator.
D
Yeah, and I guess, if you're strictly an OLM-type shop just running operators, you may not want to have another layer of abstraction from Crossplane, when you can just kind of repackage it and plug it in to work with all of your existing OLM workflows that you've already established.
D
So I do have another question, Krish, that I don't know was clear to everyone: will this repackaging, the repackage repo, work with any of the Crossplane providers?
B
Yeah, so not only will the olm-repackage repository work with any of the Crossplane providers out there, there's also an automated version of this process. What I showed today was kind of the manual way to go about it, but under the redhat-et org there is a GitHub Action that can be used to automate every step of this process, repackage a provider for you, and push it to a target registry.
B
That GitHub Action basically goes through the exact same workflow here, except it can be automated and applied to any operator. So let's say you always want to use the latest version of provider-gcp: you could set up that action to run on a schedule, say once a day at 12 a.m., and it can repackage the most recent version of provider-gcp and push it up to your favorite container registry.
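The scheduling described above is a standard GitHub Actions cron trigger. A minimal sketch follows; the workflow name is an assumption, and the actual repackaging step is left as a placeholder rather than guessing the redhat-et action's exact name and inputs:

```yaml
# Run the repackaging workflow once a day at 00:00 UTC.
name: repackage-provider-gcp     # illustrative workflow name
on:
  schedule:
    - cron: "0 0 * * *"
jobs:
  repackage:
    runs-on: ubuntu-latest
    steps:
      # The real step would invoke the redhat-et repackage action here.
      - run: echo "repackage the latest provider-gcp and push it"
```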
C
So, Krish, I have one more question for you, and it's a leading one. This Crossplane operator: have you tested it out with OKD yet?
B
But if you wanted to use OKD directly, or vanilla Kubernetes, all you need to do is install the OLM operator and then you can get started, so you're able to achieve that pretty quickly.
A
And Krish, you've been working mostly with the AWS provisioner, right? And you're the main maintainer for that provisioner, is that correct?
B
I would say I'm one of the maintainers; I wouldn't say I'm the main maintainer. But I've been contributing to the upstream provider-aws for quite a while now.
A
B
Yeah, so I guess, from my experience, the biggest tip that I could have for someone who's looking to get started with contributing to either the Crossplane operator itself or any of the providers is just to reach out and talk to people. The Crossplane community is, I think, one of the best open source communities out there. Folks are always willing to help out and answer questions, and there are community meetings that run, I believe, every Thursday now.
B
Not only is it a good chance to learn the fundamentals of contributing to an open source project, and everything about Kubernetes and Go, it's also a good chance to learn about a cloud provider. You might want to learn about GCP or AWS or Azure, and you can use your experience from writing resources in, say, the GCP provider to learn more in-depth information about Google Cloud itself, in terms of gotchas.
B
I think the biggest thing is just staying up to date. The Crossplane project is more stable now, but when I started contributing, the project was definitely still in its earlier stages, and so there would be breaking changes once in a while, and it was important to stay up to date. But now that the project has matured quite a bit, I think the API is pretty stable these days.
C
I think I'll take that to the OKD working group that's kicking off in about 30 minutes, so yeah, hopefully we can get some testing done from the OKD working group on this and give you some feedback there as well. And you should be able to find a blog post, if we get that done, on okd.io sometime in the not-too-distant future. So yeah, again, really great work.
C
It's wonderful to see the use of OLM and operators to repackage this, and I'm looking forward to getting some feedback on this and watching the journey that Crossplane takes in the CNCF, too, in the not-too-distant future.
B
Yeah, you can get everybody a slice of pizza.
C
It's lunchtime somewhere; here on the west coast we're still drinking coffee, and that's good. So yeah, Scott and Krish, this is really great work that you guys have been doing, and I'm looking forward to seeing a few more providers out there, so we'll get those tested and give you feedback from the OKD and other arenas. And Karina, as always, thank you for organizing this and making this happen.