From YouTube: 2020-10-12 Crossplane Community Meeting
A: All right, the recording has started, and this is the October 12th, 2020 Crossplane community meeting. The big news from last week is that 0.13 was released at the end of the week, on Friday. That was a great effort from the team to get out a release that had a lot of functionality and updates in it. Nick or Dan, do you want to talk through some of the big themes there? I wrote them down here, but you all put a lot of work into it.
B: Sure, I'll give some updates on the packaging part of it, at least. The base part was replacing the existing package manager, which involved a couple of different things: one being that the package format was redesigned to produce much leaner packages, and another being that configuration packages were introduced. So now you can bundle up all of your composition resources and install those as a package, just like you do a provider.
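For reference, declaratively installing one of those configuration packages looked roughly like this in the v0.13 era; the API version is a best-effort recollection of that release and the package image name is made up:

```yaml
# Point Crossplane at an OCI image containing your packaged compositions.
# API version and image reference are illustrative of the v0.13 era.
apiVersion: pkg.crossplane.io/v1alpha1
kind: Configuration
metadata:
  name: acme-platform
spec:
  package: registry.example.org/acme/platform-config:v0.1.0
```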
B: That's mounted on the Crossplane pod. There's also the Crossplane CLI, which was, I guess, reintroduced; we've had a CLI that was some bash scripts doing basic functionality, but now the CLI will actually do things like build packages for you, which are very lean OCI containers under the hood. There's some great documentation on that, which the team worked on last week, that should show you how you can use some of these new features and gives you a little bit of a common path to using them.
B: So, you know, before we had provisioning resources as managed resources directly, then using composition, and then running applications with OAM; now we also have a packaging section on top of that, which shows you how you can package up those compositions you were previously building. The docs are also a lot more streamlined, because we can just install a package as an OCI image instead of creating those resources directly.
C: Yeah, sure. Let me know if my internet's working okay; I'm noticing other people are breaking up a little bit, and I think it might be mine.
C: So, previously in Crossplane we had some opinions about how RBAC roles were managed. We tried to mirror roughly what upstream Kubernetes does with regard to having some cluster roles that you can use (admin, view, and edit) that we give to anyone out of the box. But we created those cluster roles such that we would automatically try to group Crossplane types as appropriate resources under those RBAC roles.
C: So, for instance, if you give someone the Crossplane admin role, they'll get access to everything that Crossplane does: all of the Crossplane types and all the things installed by providers, or by the package manager specifically. Historically, we also go sort of one step further than Kubernetes, and we've done this at the namespace level as well.
C: So we would actually create a series of cluster roles, somewhat oddly, for technical reasons, for each namespace, and we would have admin, view, and edit for those namespaces, so that you could give someone admin within a namespace for Crossplane resources, for example claims or classes or whatever is appropriate, depending on what level it is. We refactored this in the latest release for two reasons.
C: One was because we wanted it to apply to composition, even if you didn't create composite resources via the package manager. Previously all of the RBAC was handled by the package manager, so if you didn't install something as a package, we couldn't handle RBAC for you.
C: That's still true for providers. We do RBAC for two things: providers and composite resources. If you install a provider somehow without using the package manager, if you just go and run the deployment, then you've sort of got to handle RBAC yourself. But if you use the package manager, we'll automatically make sure Crossplane has access to create all of the types the provider installs, and we'll make sure the provider has access to reconcile all the types that it has installed.
C: Those cluster roles can be bound to groups that can do things at the appropriate levels. Two nice properties about this change: one, Crossplane, or the package manager specifically, used to run as cluster-admin to enable all this, and it does not run as cluster-admin anymore. In fact, the package manager doesn't exist as a separate process anymore; it's part of the core Crossplane deployment. The RBAC manager, which only manages RBAC, exists as a separate process, and instead of using cluster-admin it uses RBAC escalation.
C: So it's still a little bit of a security risk, but it's not the same as giving the process cluster-admin: it has access to grant itself, and other people, access that it doesn't already have. And the RBAC manager is optional, which means you can still have package support, you can still have Crossplane support, but turn the RBAC manager off and not deploy it if you don't feel comfortable with it.
C: One final little change with that: we still make a set of roles for every namespace, but, as you would probably expect if you're used to Kubernetes RBAC, we now make namespaced Roles for namespaces rather than making ClusterRoles for them. That's about it.
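As a rough sketch of what that enables, an admin can bind those generated roles with ordinary Kubernetes RBAC objects; the role and group names below are assumptions based on the description above, so check the release docs for the exact names:

```yaml
# Bind an SRE group to the Crossplane admin ClusterRole created by the
# RBAC manager. Role and group names here are illustrative.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: sre-crossplane-admin
subjects:
- kind: Group
  name: sre-team
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: crossplane-admin
  apiGroup: rbac.authorization.k8s.io
```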
A: Cool. And then also, maybe, Nick, do you want to talk about some of the composition experience that was updated? You know, the infrastructure definitions moving to composite resource definitions.
C: Yeah, definitely. There have been a lot of previews of this, and it's actually been in the code base in master for a while, so I wouldn't be surprised if many of you sort of know what I'm going to say already. Composition actually has quite a long design history; we've been thinking about the functionality since late last year, I think.
C: It took a long time to get into the code base and to get to a place that we feel comfortable with. It initially started out in its design back when Crossplane went through a refocusing, where we decided that we really wanted to basically go all in on this model, where you have providers, maybe mostly for infrastructure; most people think of Google, Amazon, Azure, Alibaba, etc. as sort of the providers.
C: Or, in the future, you know, VMware, things like that. But we also have the Helm provider now; we have providers that are doing maybe what you might think of as applications or workloads, more than strict infrastructure in the sense of databases, networks, all that kind of stuff. So previously Crossplane had (in fact, we've still got it in the code base, but it's due for removal in the next release) a bunch of application workload types.
C: So recently we sort of thought, all right: we're going to focus Crossplane on more of this infrastructure type of thing, and we're going to integrate really well with app models like OAM, or even other app models, rather than focus on Crossplane doing applications strictly as a first-class thing itself in core Crossplane, as opposed to OAM.
C: All of that leads up to this: originally we designed the composition functionality with the idea of supporting both applications and infrastructure. So in Crossplane 0.12, or 0.11, I believe, the one way you would create a composite resource was by writing an InfrastructureDefinition, which was so named to distinguish it from writing an ApplicationDefinition, which would be something we planned to add at some point in the future. We scrapped those plans to introduce ApplicationDefinition.
C: So in this release we made a change, after talking to the community a little bit and getting some feedback.
C: We decided that, since we were talking about composite resources all the time, the thing that defines a composite resource should be a CompositeResourceDefinition. There's symmetry here with Kubernetes, where a custom resource is defined by a CustomResourceDefinition. Because the acronyms would otherwise overlap (CRD and CR), and because the "cross" in Crossplane sounds a little bit like an X, we're going with XR and XRD for a composite resource and a composite resource definition.
C: The final part of this puzzle is that previously we had a separate type called InfrastructurePublication. The idea was: if you're an SRE and you've written your composite resource definition to say, hey, there's this new type of composite resource (we explain this in some detail in the docs), those composite resources are cluster-scoped, the idea being that they exist above any namespace. They might be a VPC or something that wants to be shared across multiple namespaces.
C: But then you can basically say: I would actually like to offer this to my application teams.
C: That is, the people who run the applications, who work within a namespace in a cluster rather than working at the cluster scope. We used to say, well, you can publish a requirement to those folks; you could publish a namespaced proxy type for the composite resource, and that namespaced proxy type was always called something-requirement. So if your composite resource was called Foo, your requirement was called FooRequirement.
C: We changed that around: we've reintroduced our original claim terminology to replace it. So we're calling the namespaced thing a composite resource claim, and you can call it whatever you want now. You can take, you know, your Acme Company SQL database, define that as a composite resource called CompositeDatabase, and then define a claim for it that's just called Database if you would like. That allows you, as an SRE, to present a type to your users.
C: One that just seems like a database, not a database-something or a database claim or whatever. They can just logically think of it as "this is the database," so they don't need to think too much about there being a composite resource behind it, or it being made up of Cloud SQL instances or whatever. At least, if things are operating well, they don't need to know those things.
C: So, in summary: InfrastructurePublication is gone and merged into CompositeResourceDefinition. CompositeResourceDefinition is basically the sum of InfrastructureDefinition and InfrastructurePublication; it's all in one thing now. Otherwise it will look very familiar to people who are used to InfrastructureDefinitions, basically the same. And requirements are now called claims, and can be of whatever kind you would like. I think that's about the long version of it.
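To make the renames concrete, here is a minimal sketch of an XRD using the CompositeDatabase/Database example from above. The API version is a best-effort recollection of that release, the group and kind names are illustrative, and the schema section is omitted because its exact layout varied between releases:

```yaml
apiVersion: apiextensions.crossplane.io/v1alpha1  # v0.13-era version, from memory
kind: CompositeResourceDefinition                 # the XRD (was InfrastructureDefinition)
metadata:
  name: compositedatabases.example.org
spec:
  group: example.org
  names:
    kind: CompositeDatabase       # the cluster-scoped composite resource (XR)
    plural: compositedatabases
  claimNames:                     # replaces InfrastructurePublication; optional
    kind: Database                # the namespaced claim app teams create
    plural: databases
  # ...OpenAPI schema omitted; its exact layout varied between releases
```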
A: Cool, thanks, Nick, for those details. And as you were touching on there, those changes in the experience and the types for utilizing composition are what we expect the API going towards 1.0 to be. So we expect that this API will remain somewhat stable, at least, and that we're getting to a place where we can call this v1beta1, or move towards v1, with all the expectations that come along with that.
C: Yeah. Michael, no promises at this point, but I can't currently see, and I don't think any of us can currently see, any features that we have on the roadmap that would require us to make breaking changes.
C: We have a lot of buy-in from all the stakeholders, within Upbound at least, and from the community members that we've talked with, that indicates we don't need to make breaking changes to the API. Our tentative goal is to rev pretty much everything to v1beta1 by the next release, to sort of commit to it. So we've got one more release of, you know, "whoops, maybe something we completely missed needs a breaking change."
C: We don't expect that to happen, and as long as it doesn't, we'll go v1beta1 next release.
A: That sounds like a good segue, then, to talk about the next release. Phil, it looks like you made a couple of updates here, to add the roadmap in and such, for 0.14 or 1.0, etc. Did you want to just have this as an informational link, or did you want to bring it up on your screen and share the roadmap, maybe?
D: I can start sharing, I think. Okay, cool. Yeah, so several of us have been kind of iterating. Is that showing the roadmap update?
D: Okay, great. Yeah, so several of us have been iterating on some draft roadmap things here. If you look at the overall 2020 picture, we're trying to get to, at least for Crossplane core, a 1.0 candidate by end of year, and that includes composition basically getting to at least v1beta1 or higher.
D: For production usage, as well as having the package manager have the new capabilities; a lot of them are already there, and some fine tuning, some robustness enhancements, that type of thing, will probably be coming. But having used it hands-on, it's already pretty solid, so I'm pretty happy with that.
D: The other major thrust is really getting to 90% coverage for all the cloud providers. Casey has been doing a tremendous amount of work on getting the stateless Terraform providers wrapped and creating a codegen pipeline around that, specifically for clouds that don't have a codegen pipeline, and then we're working with both the ACK and the Azure Service Operator teams to adapt their codegen pipelines; they're basically driving that and we're helping them with it.
D: We can touch on that a little bit more here in a minute. And then with OAM in the 0.13 release, as I'm sure Nick mentioned, we basically have a subchart.
D: That's been there for just a little bit, and it's now installable as an option off of the main Crossplane Helm chart. Additional features have been coming in, with health scopes and a bunch of bug fixes, and there was a nice tweet that went out the other day that had the 0.1.1, I believe, release notes in it.
D: So you can see what's been going on there; a lot of good progress on that. We kind of already covered 0.13, but for 0.14, I don't know, Nick, if there's anything on here that you'd add.
D: I mean, I think you covered the XR stuff, yeah. You know, we have a new lint command, I think, that is available, although I haven't actually seen that command surface in the crank tool yet, so maybe that just moved behind the scenes.
C: Yeah. I would just reiterate again, just to make sure people are prepared for it: there were some more deprecations in v0.13, specifically all the workload stuff. As I mentioned, KubernetesApplication, KubernetesTarget, KubernetesApplicationResource, and the core Crossplane Kubernetes provider are all deprecated in v0.13 and are likely to be removed in the next release, v0.14.
C: We also did a small tweak. One of the new package types is actually called Provider; it's how you declaratively install a provider. We used to have (actually, rather, we still do, but it's deprecated) a type in each provider also called Provider, like the Provider in aws.crossplane.io, gcp.crossplane.io, et cetera, that configured the provider. We've replaced those with a type called ProviderConfig now, to make it a little bit clearer what it does, and so that per-provider Provider type is now deprecated and likely to be removed in the next release as well.
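Side by side, the two similarly named types look something like this; API versions are best-effort recollections of that era and the credentials block is elided:

```yaml
# The package-manager type: declaratively installs a provider.
apiVersion: pkg.crossplane.io/v1alpha1
kind: Provider
metadata:
  name: provider-aws
spec:
  package: crossplane/provider-aws:v0.13.0
---
# The renamed configuration type: tells the installed provider how to
# authenticate. Replaces the old per-provider "Provider" kind.
apiVersion: aws.crossplane.io/v1beta1
kind: ProviderConfig
metadata:
  name: default
spec: {}  # credentials configuration elided
```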
C: I would say that, if any of you would be severely put out by those being removed in the next release, please let us know, and we can leave them in for another release or so afterwards.
C: Yeah, exactly. Those are marked deprecated; they were not removed in 0.13, we just announced that we are going to remove them, because we don't want to remove them without notice if possible. And, as I say, we'll take them out in the next release, unless we hear otherwise.
D: Yeah, awesome. Okay, so that's basically 0.13 and kind of what's coming down the pipe: claim update propagation, bi-directional patching for status, composition revisions. Whether or not this actually lands in late October or the following release is still up for a little bit of discussion, just as regards how much is going to be focused on provider acceleration versus some of that. But I think, Nick, you were saying that at least getting the claim update propagation in will be likely.
D: But 0.13 feels pretty good where we're at right now, with the exception of maybe the claim update propagation. Bi-directional patching for status would be really nice, but it's probably not super critical for late October. And then there's provider-helm, which we're using in some of our configuration packages to basically post-configure something provisioned, like an EKS cluster as an example, with some Helm charts to land.
D: You know, the trimmings that you might want on that. And then we're targeting having at least a couple of Crossplane resources be generated by each of the ACK pipeline and the Azure ASO pipeline, so that's kind of work in flight. And then, also in discussion with Ryan: basically taking the OAM APIs to v1beta1.
D: They might look very similar, exactly the same as the alpha ones, and so there were some questions about how to provide an upgrade path. So, I guess, we'll talk about that briefly here in the next item, for 0.15, which is conversion hooks.
D: Those would support installing multiple API versions at the same time, and that's kind of the one caveat there. But if we get that, and then validation webhooks for composition, which is another thing that's been requested, that's looking like it might get us to a 1.0 candidate. So we're not saying for sure that we'll be releasing 1.0 at the end of December.
D: But we just want to reserve the right to transition those to beta1 after they've had a little bit of soak time. So that basically takes us through the end of the year, and then some of the things that we kind of pushed out until, like, a January time frame: custom composition was one.
D: That's still going in the background, and we'll probably keep it alive, but not dedicate a huge amount of time to it; that's for use with CDKs and other types of templating beyond the built-in templating. And then some other package manager enhancements, some crossplane-runtime enhancements for codegen, and then just getting to more coverage, kind of in that January time frame. So yeah, that's roughly it. You know, these are drafts, so they're not finalized yet.
D: But maybe, Nick, you could take us through the 0.14 board and go over what's coming down the pipe for late October.
A: Oh yeah, thanks, Phil, for the roadmap perspective there, especially across the next couple of releases, as we make the final steps towards a 1.0 release. It's good to have a bit of a picture of what the issues are that we feel are the critical ones for getting to that stability and maturity to declare 1.0.
A: Okay, so that was roadmap and themes and focus areas, and we kind of touched on a potential release schedule for 0.14 as well. A late October release would be nice, but it's almost mid-October now, so we might need to be a little realistic there, depending on scope, about functionality and especially fixes: issues or fixes for functionality that was released in 0.13 but isn't quite working.
A: We definitely should consider some patch releases to get that functionality out, so that folks that are depending on it or consuming it can get it, if a 0.14 release isn't quite in the cards in that time frame. So we can kind of play that by ear, and with the things that we're hearing reported, or things that are being demanded or requested, we can make a decision about what vehicle to ship those with.
A: Cool. Anything else on milestones and releases and roadmap?
B: Well, it was recent, and awesome; it was a good show. The CTO of Sourcegraph was on, and he's really insightful, and there's a lot of information in there about Sourcegraph, mostly because I was just super interested in how things work behind the scenes. So if you're interested in how code search works, there's some really cool stuff there.
B: Also, one of the reasons why we had Sourcegraph on the show was because they recommend using Kubernetes as the deployment target, I guess you could say, for Sourcegraph, whether you're deploying the open source project or not.
B: You know, whether you're using their enterprise, proprietary version, which has SSO stuff and things like that. So we looked at that a little bit, and we looked at their deployment manifests. One of the things we covered was that they use just kind of raw Kubernetes manifests, as opposed to using a Helm chart or Kustomize or something like that, and we looked at using new Crossplane features. This was the day before the release, so we were kind of giving a preview of it.
B: We looked at using some of the Crossplane features to package up backing infrastructure as a configuration package. Generally, when you deploy Sourcegraph, it will create a Postgres database and two Redis clusters running in your Kubernetes cluster, so they're just being created as deployments.
B: You can also override that with your own backing infrastructure, if you want to use RDS or Cloud SQL or Cloud Memorystore and that sort of thing. So we showed doing that, but actually deploying those with Crossplane, instead of spinning them up and then manually providing the credentials, and it was pretty straightforward.
B: We basically just created a configuration package that had a Sourcegraph infra XRD, and that created just a Postgres database in this case, but we talked about how it could have the Redis clusters as well. Then we showed how you could use that with a backing composition that was GCP or AWS or Azure, et cetera, and then we talked about potentially expanding that further and putting basically all of Sourcegraph into a configuration package.
B: So you could have those deployments inline and that sort of thing, and it would be especially easy, even right now, if it was a Helm chart, to go ahead and package that up. Overall it was a pretty cool demonstration; you get to see some of the power of the new package manager and that sort of thing throughout it.
B: So yeah, we ended up deploying Sourcegraph and getting a live running instance, and we're hoping to also make that something that's documented in the Sourcegraph documentation, because currently it's a little bit of a stumbling block for folks to deploy it with infrastructure on cloud providers, as opposed to running in cluster. So overall, a really cool conversation.
B: Yep. And I don't think Chris is on this call today, but he's been doing some great work on provider-aws, and he also got the Quay operator (however you say that, I never know exactly), basically Red Hat's OCI registry project, running with Crossplane. It's kind of similar to what I was just saying with Sourcegraph, where it has some backing cloud infrastructure that you can use, or you can deploy in cluster.
B: So we're going to look at the work he did to make that compatible. One of the things we'll highlight is that it was actually a pretty straightforward effort on his part to get that working with Crossplane, so that should be pretty cool, and we might actually make some changes to it over the next week or so, with the new version of Crossplane out, and see if we can package up the Quay operator itself. So it should be good.
A: Nice. It's exciting that Christian will be a guest on the show too, because I know Christian's been doing a ton of good work on the provider, getting buckets towards beta1 and stuff like that, so that's great to have him there. And the Rawkode Live, did that already happen today?
B: That's tomorrow. This is David McKay, who's a developer advocate at what's now Equinix Metal, but previously Packet.
B: He has a live stream show and wanted to have us on to chat about the new things in Crossplane 0.13, as well as to just explain the value of Crossplane to him and that sort of thing. As part of that, we'll be showing off the Packet provider as it's being transitioned to the Equinix Metal provider, so we'll be incorporating that through our presentation.
A: Awesome, Dan, that's exciting, to have another opportunity for you to be a guest this time, and I guess not be in the driver's seat this time. Cool, congratulations, Dan. All right, so then KubeCon North America is coming upon us; I think it's about a month away or so. November 17th, I think, is when it starts.
A: There are two talks I know of related to Crossplane that got accepted. Dan's talk about building an enterprise control plane on Kubernetes. Who is that with, Dan, is that with Steven? Yeah? Yep, nice. Congratulations, Stephen, as well; Steven's on the call. Thanks.
A: Cool, cool, so that would be a great talk. And then Waterflow and I got a talk accepted on, you know, kind of Crossplane and OAM, but ours is actually a tutorial format, which I have never done before. So I need to understand what the requirements of that are, but I think it's more of a practical, hands-on, step-by-step sort of thing, as opposed to conceptual, so that might be interesting. I think it's like an 85-minute thing too.
A: So it's a longer session too, so you can kind of walk people through how to really do something interesting, so that'd be fun. I think the dates to have the talks recorded and published are a little unreasonable this time, because they just announced which talks were accepted less than two weeks ago, and then I think the date they published for the recordings being completed is like a week away or something like that, which is really quite ridiculous.
A: So I think I personally am going to push back a little bit on that, and I'm not going to break my back to come up with an entire talk and have it recorded and everything within a week. Dan, maybe you and Stephen are ahead of it more than I am and you guys are already done, but feel free to push back a little bit too, if you need to as well. I don't think it's quite reasonable this time.
B: If you want to discuss the requirements of that, I'm happy to do it. I mean, I'm probably just getting the same information you are, but we can see what's desired there.
A: Yeah, awesome, that's great. Cool, so we have two tutorial sessions; interesting, nice. Okay, and then, Nick and Harry, I think your talk about composition got waitlisted, is that right, Nick?
A: Yeah, cool, cool. Okay, I didn't hear of any other talks that were accepted or waitlisted, but people can holler if they know of one.
A: Oh, they do some explicit rejections as well. My other talk, with Jan, got completely rejected; it said there's no room for it and it's not happening. So waitlist is a distinct category.
A: Yeah, we'll see; maybe there will be news that it moves into accepted, but then, just like all the other talks, it doesn't leave a ton of room to do preparation and recording and stuff. But we can make it happen if it needs to. Awesome, cool. Then there's the CNCF Project Pavilion (which was not typed correctly there in the notes): we signed up for a booth and the office hours again this time.
A: One of the really good things here is that the booth from KubeCon EU, all the materials and design and everything that we did for it, will be copied over to the booth for the next KubeCon, the NA one. So the setup work will be fairly limited, since a lot of it will already be done, and there's no ramping up on the platform, since we figured it out for last KubeCon, so that should be fairly low touch.
A: I think we can discuss a little bit how best we want to try to engage with the community, since it had somewhat limited results last time. But just having the presence there, having the Crossplane logo and the materials and links and stuff for people that do cruise through to click on, is just more traffic and more eyes on the project, and I think that the cost to us this time is less than it was last time.
A: So I think that's a fair trade-off, to at least have our name up there and out there. There are only five projects that get that for the entire CNCF this time, and we were fast enough to get in there and be accepted, so we'll be one of only five CNCF projects with our logo and stuff in the booth there. That seems like reasonable eyes on it: anybody that goes to the Project Pavilion will only see us and four other CNCF projects. So that sounds good.
A: Okay, cool. So that was... oh, I think I didn't cross off this stuff, Dan. Like, these are the two issues for you; these are old stuff, right? These are copy-paste.
A: Oh, okay, this one is copy-paste from me last time then, sorry. Okay, cool. So were there any PRs or any particular code things that need to be brought up right now, or any other community issues? We've got a number of folks on the call today, so if there's anything related to Crossplane and the community here, you're more than welcome to bring it up.
A: All right, cool. So then that would be everything in the agenda that is for the general audience. We do have a remaining section, an optional section, for us to get into some deeper technical discussions that may wind through some paths that everyone might not be interested in. So anyone who wants to go ahead and hop off the call now, or while we're getting into some of these discussions, is more than welcome to. And with that, Dan...
A: I will give you the floor.
B: Cool, thanks. I think the main thing I wanted to talk about here was configuring providers, and I think a few of the community members who are on the call today I've talked to a little bit about this. Basically, there are a variety of issues open, and this kind of older issue as well, that describe the process of installing a provider and what you can change about it.
B: Basically, the deployment of the controller for that provider. Previously, in the old package manager, we'd package an install.yaml file with the provider package, and it would basically have a deployment manifest in it; we'd take it but then patch over some fields of it and be opinionated about how we want it to look. Or you could run the package manager with a flag that basically said, just let whatever the package says happen for the deployment, so that was kind of an insecure mode.
B: The issue with that is that it usually doesn't make sense for the package author to determine how you want your package to be installed, and there may be a variety of different ways that a single provider controller should be run, depending on the environment. A great example of that would be with AWS: if you're running an EKS cluster, you might want to use IAM Roles for Service Accounts, which involves setting an fsGroup on the deployment.
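For reference, the knob in question is the pod security context: IRSA projects a web identity token into the pod, and a container running as non-root needs an fsGroup set so it can read that token. A fragment, with an arbitrary group ID:

```yaml
# Fragment of a provider controller Deployment using IAM Roles for
# Service Accounts (IRSA) on EKS. The fsGroup value is arbitrary.
spec:
  template:
    spec:
      serviceAccountName: provider-aws  # annotated with the IAM role ARN
      securityContext:
        fsGroup: 2000  # lets a non-root container read the projected token
```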
B: However, if you're running in something like OpenShift, they actually don't allow you to set an fsGroup; they randomly assign one to your deployments. So it actually would fail to run in the versions where we had that install.yaml written that way. So obviously you want to be able to choose what you want for that value based on where you're installing. However, there are also some security boundaries around that, just like we had before. So we don't want you to always be able to say, you know...
B: "Please run this thing with root access to do whatever it wants," or with arbitrary configuration of resource limits and that sort of thing. So we're trying to think about what the appropriate bounds are around configuring providers, and we don't want to introduce a subset of a deployment that continues to grow over time to eventually just be equivalent to a deployment.
B: But we also don't want to provide something that's so minimal it doesn't allow you to do the things you need to do. So, basically, the way that we're approaching it, given those goals in mind, is just seeing what people want to do with provider deployments and seeing if we can develop a model to make that possible safely.
B: So if you have any thoughts around that, please feel free to drop comments on that issue or just reach out in Slack, as we work towards how we're going to configure those. I think the leading idea right now would be to, well...
B: This is probably actually just speaking for me, so I shouldn't say the leading idea from a whole-community perspective, but my forefront thought is that we should probably allow some sort of arbitrary configuration and then have gates on the package manager that control how you're allowed to pass in arbitrary configuration. I'm a little worried that if we don't allow arbitrary configuration, we're just going to keep running into use cases that we didn't consider at first and keep having to modify things.
C: Yeah, just chiming in with my current thinking on this, which is maybe similar to yours, Dan. For context, for the community: part of the reason that we don't necessarily want to (I'm not sure if Dan just said this) have just an entire deployment spec potentially associated with a provider...
C: ...a provider CR, is because there are some situations, Upbound Cloud being a big one, where you could submit a provider to be installed and you're installing it on someone else's infrastructure. Basically, they don't want you to have full control over how the deployment is run and how many resources it has and all these other things.
C: So one way that I could see to do that is that we optimize for the primary open source use case, where potentially you are okay with the folks who are authoring providers, which will often be sort of infrastructure operator or SRE types extending Crossplane, having a say over what nodes the provider controller runs on, or how many resources it has, or what environment variables are injected into it.
C: But then we would need some way to lock that down, and one thing we've spoken about in the past is just having a flag which says, hey, this can be turned on or can't be turned on, which I think would be a step in the right direction, but I worry it wouldn't be granular enough. So my thinking at the moment is something more along the lines of...
B: Yeah, I think that sounds really good. I would bias towards using OPA for sure, because that's becoming a little more ubiquitous and it's fewer resources and controllers for us to manage.
C: That's what I'm thinking, but take this with a huge grain of salt: I have never personally used OPA, so I could be completely wrong about how OPA works. My thinking would be that we would need a sane default, such that if someone just didn't want to tell us all the details about the deployment, you should be able to submit what you can submit today and it will give you a sane default deployment. But I think, to Dan's point, it's probably going to be a losing game...
C: ...if we're like, you know, v0.14 allows you to set env vars, and then someone says, actually, I want to mount a volume, and then v0.15 allows mounting a volume, and then someone says, oh, actually I want to do this other thing. I could imagine eventually just having a deployment spec template or something like that, that you could optionally fill out to have access to everything, but we would need a way to allow the Crossplane administrator to restrict what could be done with that.
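Nothing like this existed at the time; purely as a sketch of the idea being floated, the Provider install resource might grow an optional template that an admission policy could then gate:

```yaml
# Hypothetical field, sketching the "arbitrary but gated" idea only.
apiVersion: pkg.crossplane.io/v1alpha1
kind: Provider
metadata:
  name: provider-aws
spec:
  package: crossplane/provider-aws:v0.13.0
  podSpecTemplate:  # hypothetical optional override, subject to policy
    spec:
      securityContext:
        fsGroup: 2000  # e.g. for IRSA on EKS
      nodeSelector:
        workload: control-plane
```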
E: Yeah, so mechanically, OPA would prevent creation of the Provider kind with details that you don't want them to set, so it would basically prevent the creation of the Provider, instead of the creation of the deployment itself.
E: Would we embed the complete schema for deployment specs inside of our provider spec, or would there be another way to specify the deployment outside of the provider structure?
B: Yeah, no, that seems fine to me. I mean, you could always have a reference to a ConfigMap or something like that; in this case I don't really see a bunch of value in that, because we would just be losing out on upfront validation. It would just be a bag of YAML, kind of.
E: There are some caveats there, with the pod spec template being huge, so we might not need all of the properties. Yeah, that's kind of where I was going with it: if we try to replicate the deployment spec schema into our provider schema, it kind of blows it out and becomes a thing that we have to maintain. So I wanted to think about it anyway; we can obviously design it later, but I like the general approach.
C: Yeah, we could probably also avoid doing the whole deployment spec, I would imagine, and just have a pod spec template that the deployment uses, which is a pretty common pattern these days. A lot of folks do that; I think the Knative people talk about the concept of PodSpecables, which is just a duck type of anything that has a pod spec template in it. And, as Dan says...
C: ...I imagine that we could just pick one up from upstream and put that in there. I think it would definitely blow out our CRD docs and things like that, so anyone looking at the documentation schema of this type would see, like, ten times as much stuff that was all optional, which is less than ideal, but I don't think it would be a huge maintenance burden for us or anything.
B: Cool. Jared, I don't know if you're still here, but... oh yeah, the next one.
B: All right, sounds good. The last one here: I was wondering if Tim, who kind of requested this, was going to be on the call, which was the main reason I added it, but maybe we could discuss it a little bit anyway. Basically, Tim wants to be able to replicate resources across different provider configs. That's a bit of a simplification, I guess, or maybe that's what my interpretation of the implementation would look like, but I think the main use case is he wants to replicate resources across...
B: ...you know, across Kubernetes clusters. So that could be, like, a Helm provider config or something like that, and that could be as simple as literally just having some sort of controller that's completely separate from Crossplane, which is kind of what I've recommended for expediency, to be able to just say...
B: ...you know, when you see the creation of this type of resource, please create one of them for each of this other type of resource. But then there's also the potential to have some sort of composition set or something like that, that allows you to say, when I create an instance of this XRD, or when I create an XR, I guess, create one for each of these provider configs I reference, or something like that.
B: I don't think so; there's no Crossplane on the remote host or anything like that. This is just, literally, everything exists in one cluster, right? So it's not like...
D: Right, so if we just had a single control cluster, and we just wanted to basically put all the things that were needed, for like an EKS cluster plus a database or whatever, into a composition, then I could just create 10 instances of that and get 10 EKS clusters with databases and Helm installs and all that stuff, right? So it sounds like we're looking at, you know, in your git repo...
D: ...you have your configuration package with the composition in there, and what he's saying is that, instead of having 10 files, say cluster one, cluster two, cluster three, all instances of this, he wants to just have one that says, inside of the composition, "I have clusters one through ten," you know.
C: I think I can see the case either way. I mean, you could make the same argument for pods in Kubernetes, or deployments: why have a ReplicaSet when you could just make 10 pods manually, sort of thing. On the other hand, I am not sure if I want to commit to it; I don't think we're going to have much time to, and this isn't in our roadmap.
C: At the moment, I think it would be a really interesting thing to look at as an experimental add-on to Crossplane. Sort of to Dan's point, I don't think we want to build this as a feature of composition itself, as in the current composite resource controllers, but I could see building a layer on top of it. You can imagine that, perhaps, when you create an XRD that defines the Database composite resource, we could have a separate controller...
C: ...that's also watching for XRDs, and that automatically creates the DatabaseSet composite resource type. If you create a DatabaseSet, it basically has the same schema as a Database, or it has a database spec template, same as a pod, and then it has a number of replicas as well, and it just goes and creates those ten things for you. You can change it to nine and it'll kill one of them, or change it up to 11 and it'll add another one.
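Again purely hypothetical, sketching the "set" idea by analogy with pods and ReplicaSets; no such type existed:

```yaml
# Hypothetical DatabaseSet: a replica-set-style layer over an XR.
apiVersion: example.org/v1alpha1
kind: DatabaseSet
metadata:
  name: databases
spec:
  replicas: 10  # scale to 9 or 11 and the controller reconciles
  template:     # same schema as the Database XR, like a pod template
    spec:
      storageGB: 20
```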
C: Sorry, my dog... We could open an issue or something like that, but unless we get multiple customers asking for it, I don't know if it's going to be a priority, with all the other things we've got at the moment. So it would be good to take people's temperature and see if anyone else really wants this. Reading between the lines, basically, this is something that I think Tim wants to use to... sorry, is it Tim?
C: Did I get the name right? Yeah, he wants to use it to replace KubernetesApplication, right? So you could have a Helm chart that gets scheduled out to N clusters or whatever. But you could argue that it could also be, you know, you have 20 Google accounts and you want to create a VPC that looks the same in each Google account, or something like that. There's sort of an abstraction of another thing there, so I think I'm kind of seeing both sides.
D: I mean, in that case, where you want a VPC in each Google account, couldn't you create a composition that was like, here's the content that I want, here's the VPC, here's the CIDR for the subnets, and go and apply this thing consistently everywhere? What are we lacking today that would prevent him from just doing that and applying it consistently? Is it... well, let's say you have...
B: ...that every time, you'd have to go and be like, all right, I'd also like to create an instance of this thing, and of this thing. You know, you could potentially have it be dynamic, where, when you add a new cluster, it automatically gets these things deployed onto it, that sort of thing. I think it's just around automation and that sort of thing. You obviously can create...
B: ...multiple instances of them, but I think the ideal use case here is not one where you're creating the new cluster, or the new account or whatever, with Crossplane; you're bringing it in, and you just would like, maybe, some labels on that provider config to say, "please put Linkerd in here," or whatever. But I mean, you're right that you could totally do it the other way and just deploy it again, you know, yeah.
C: I feel like part of this is also because it's arguably a regression in Crossplane, right? This is ticking some boxes for the folks where we basically supported scheduling workloads before, and for a long time promoted that as a core tenet of Crossplane, and then recently we focused and sort of said that scheduling workloads is not really a priority for us. So I can see some of that; I know the history here with Tim, specifically.
C: Basically, he wanted to use Kubernetes workloads for that, and we said KubernetesApplications. Then we said, sorry, they're deprecated, use Helm, and then he's like, how do I schedule all this to multiple clusters? And I was like, well, you can't. So, to your point, you can just do it manually, but that doesn't necessarily mean that's the best possible user experience, right? What would you do with pods?
B: Right, so you would just have to create a Helm release for everything you wanted; you'd have to create an instance of that every time you wanted that Helm chart to go somewhere. As opposed to just saying (it would be a Helm provider config in this case, and, sorry, Tim actually just messaged me) "I'd like, for this thing, please create this on all clusters," right? As opposed to saying, "let me list all my clusters."
C: ...or randomly pick one cluster based on label selectors. Again, this is something the community's asking for, so I think we should at least consider it.
B: Oh yeah, for sure. He said he's not able to make it, but I said we'd have a recording for him. But he knows what he wants, right? So we've kind of explored our thoughts on it, so hopefully he can give some more context, maybe pop in Slack and let us know.
B: I definitely don't think that this is something that... basically, I agree with Nick that this can be done at higher-level abstractions. So I would not recommend, as we're trying to stabilize APIs, trying to jam this in.
A: All right, well, that's everything that was on the agenda for today. So I think we can go ahead and adjourn, and we'll see everybody online.