From YouTube: 2022-03-10 Crossplane Community Meeting
A: There we go. All right, so the recording is going, and this is the March 10th, 2022 Crossplane community meeting. A first note for tonight: we want to have a broader discussion with folks, in an asynchronous manner, on the proposal to merge the Jet and classic providers together. That is listed at the end here, right here, so you can follow along and get some context from this PR right now, from this link. This is what we'll be spending the back half of the meeting discussing today, while we have a synchronous quorum gathered up here. So we want to be moving through the agenda at a fairly good pace, so we can reserve time at the end for that topic. All right, so let's dive into it, then.
A: We had a recent release a few days ago, and special thanks to Muhammed. I don't think he's on the call today, but Muhammed ran this release, the 1.6.4, to get a couple of patch fixes out. The main contributing factor for getting that patch release out was updating the version of go-containerregistry, which had some important fixes in it. It's really good to see new folks running releases and contributing in that way as well, and it's a nice reminder that there are a lot of ways to get involved, in various capacities, in the community. Definitely grateful to see that, and the release notes are linked here from the agenda document.
A: All right, 1.7 has a lot going on as we're moving towards the release date of the 22nd. We will be hitting the feature freeze and code freeze states fairly soon; somebody can remind me of when that happens, but I think feature freeze is probably Tuesday of next week, and then the following Tuesday would be code freeze, before we then do the release the next Tuesday, something along those lines. Somebody can correct me if I'm wrong on that, but that's the general flow: about a week for feature freeze, about a week for code freeze, and then we go ahead and do the release. March 22nd is the date we're targeting for getting that shipped and out the door.
A: You can read all about it in the pull request that's linked here, 2932; this is an important part of the xpkg package specification. The high-level gist of it is that Crossplane packages can have multiple layers. They're OCI images at their core, so they can have multiple layers in them, and you can annotate the one specific layer that you need Crossplane to look at, the one that contains all of your manifests, compositions, etc. That will be the base layer, and what this enables now is that you can have an arbitrary number of other layers in there, so you can include other types of metadata, information, resources, etc. in your packages.
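[Note: the mechanism here is an annotation on a layer descriptor in the package's OCI image manifest. Below is a minimal sketch of the idea; the manifest is really JSON, the digests and layer contents are placeholders, and only the io.crossplane.xpkg: base annotation is the load-bearing part per the spec.]

  # OCI image manifest of a Crossplane package (shown as YAML; the real format is JSON).
  # Crossplane parses only the layer annotated as the base layer; any number of
  # additional, unannotated layers may carry extra metadata or resources.
  schemaVersion: 2
  mediaType: application/vnd.oci.image.manifest.v1+json
  layers:
    - mediaType: application/vnd.oci.image.layer.v1.tar+gzip
      digest: sha256:aaaa...        # placeholder: the layer with package.yaml (CRDs, compositions, ...)
      annotations:
        io.crossplane.xpkg: base    # marks the one layer Crossplane should read
    - mediaType: application/vnd.oci.image.layer.v1.tar+gzip
      digest: sha256:bbbb...        # placeholder: an arbitrary extra layer (docs, examples, ...)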
The key ask here for the community, since this has been merged (and the xpkg specification, I think, was even merged for 1.6, so that's already been in there), is for a little bit of testing effort.
A
So
if
folks,
you
know
want
to
take
for
a
test
spin
in
their
dev
clusters
or
local
clusters,
the
latest
builds
for
as
we're
coming
up
to
1.7,
so
basically
from
the
main
branch
that
a
little
bit
more
baking
and
testing
on
that
would
be
helpful
and
appreciated,
and
dan
was
requesting
that
all
right,
another
pretty
important
thing
in
1.7
is
external
secret
store
support.
This
was
a
pretty
highly
demanded
feature
and
hassan
worked
on
that
and
is
expecting
and
anticipating
to
have
it
land
hassan.
B: We're planning to ship external secret stores as an alpha feature in 1.7, and initially we will be supporting Kubernetes and Vault as secret stores. Kubernetes already was the secret store for Crossplane, but this time it will also be possible to write to external Kubernetes clusters, in addition to the in-cluster case, which was already supported.

B: Currently there are, I think, two PRs left which need to be merged to have this feature in Crossplane, and then we will start rolling out this feature to providers. Hopefully, in the following weeks, we will have this feature available in the majority of the providers.
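[Note: as a rough illustration of what configuring an external secret store could look like once this ships, here is a hypothetical sketch; the kind, group, and field names are assumptions for illustration, not the final 1.7 API.]

  # Hypothetical StoreConfig pointing Crossplane at Vault as an external
  # secret store; all names and fields here are illustrative only.
  apiVersion: secrets.crossplane.io/v1alpha1
  kind: StoreConfig
  metadata:
    name: vault
  spec:
    type: Vault                       # Kubernetes and Vault are the two initial stores
    defaultScope: crossplane-system   # assumed: default scope for written secrets
    vault:
      server: https://vault.example.org
      mountPath: secret/
      auth:
        method: Token
        tokenSecretRef:               # invented field name
          name: vault-token
          namespace: crossplane-system
          key: token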
A: Yeah, I think that's fantastic, Hasan. I think that enables a lot of enterprise scenarios for adopting Crossplane without storing secrets within the cluster. When I was first reading this I thought it was all about external secret store support, but nope, there are actually other important features that are part of the release and should be landing in 1.7 as well. Muvaffak, do you want to give us a quick update, then, on the webhook support?
C: Yeah, sure. Webhook support is a feature that has been long requested by the community for a lot of use cases. One is immutability support: for things in AWS, for example, even though you can't change some of the fields of an RDS instance, your change on the resource would still go through. With webhook support, the maintainers of provider-aws would be able to put some validation functions there to make sure your kubectl edit command would actually fail and say, hey, this field is immutable. The other thing that is, I believe, really important is conversion webhooks. We have a lot of v1beta1 resources, and one of the reasons that we don't really push them to v1 is the schema changes: we don't have a really great story about automatically converting the schema.
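[Note: conversion webhooks are the stock Kubernetes mechanism being referred to here: the CRD declares a webhook conversion strategy, and the API server calls back into the provider to translate objects between served versions. A minimal sketch, with the resource and service names as placeholders:]

  # Sketch: a CRD serving both v1beta1 and v1, with version conversion
  # delegated to a webhook hosted by the provider.
  apiVersion: apiextensions.k8s.io/v1
  kind: CustomResourceDefinition
  metadata:
    name: rdsinstances.database.example.org   # placeholder
  spec:
    group: database.example.org
    names:
      kind: RDSInstance
      plural: rdsinstances
    scope: Cluster
    versions:
      - name: v1beta1
        served: true
        storage: false
        schema:
          openAPIV3Schema: {type: object}     # schema elided for brevity
      - name: v1
        served: true
        storage: true
        schema:
          openAPIV3Schema: {type: object}     # schema elided for brevity
    conversion:
      strategy: Webhook
      webhook:
        conversionReviewVersions: ["v1"]
        clientConfig:
          service:
            name: provider-webhook            # placeholder
            namespace: crossplane-system
            path: /convert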
A: Yeah, and as Muvaffak was saying, the contribution this makes towards our general upgrade and migration stories, for maturing resources, changes in providers, etc., is quite nice for the community as a whole: being able to absorb changes in a much easier fashion than we've been able to in the past, or than when we've held off from making a change because we couldn't do it without disruption.
A: All right. And then CRD scaling is something that we've been working on for a while, and it's having an impact upstream as well. Alper, if you want to give us a quick update on where we're at with that, and then we'll move on from there.
D: Yeah, hello. So, as you mentioned, we have a recent one-page proposal in Crossplane which discusses issues around CRD scaling.

D: There are some details in the document itself, but basically, with 1.23, kubectl's discovery client is misconfigured, and it will be correctly configured starting from kubectl 1.24.

D: We have also cherry-picked that fix to the 1.23 release branch, which means that, starting with March 16th, we will also have a kubectl version 1.23.5 with the correct parameters, which, for single-provider scenarios, should basically prevent client-side throttling issues.
D: I don't want to go into the details of this document, but one thing that I'd especially like to mention is that, as part of this effort, we would like to agree on some common Crossplane scenarios, and for this purpose we have initially suggested two scenarios. One is being able to install a single provider.
D
You
know
which
is
basically
the
base
scenario,
but
the
other
one
is,
for
example,
being
able
to
install
all
of
the
three
big
terajet-based
providers
onto
the
same
kubernetes
cluster.
So
we
would
like
to
define
you
know
a
set
of
scenarios
like
these
and
then
define
a
set
of
you
know
goals
to
achieve.
D: Performance goals, I mean. Like: if a single provider is installed in the cluster, and you run a kubectl command against this cluster with a cold cache, then it should take at most 10 seconds, for example (assuming no errors occur, of course), to complete the discovery phase and have a result from the kubectl command. And similar goals would be defined for the second scenario of installing all the providers, etc.
D: We'd like to get feedback from the community on the common scenarios they are using. For example, we wonder whether any Crossplane community member installs all three of these providers and tries to use them in production, or plans to, and things like that. Feedback on the issue would be very much appreciated, because using it we will try to define our goals, achieve them, and also make sure that we maintain them.
A: Yeah, I think the work we're doing upstream, driving fixes into the API server and into kubectl, is quite helpful in terms of scaling the control plane's ability to handle what we're doing with it. So feedback is welcomed on that document; the link is right here. Thank you, everybody; that's some pretty good and interesting stuff going into 1.7!
A
Let's
keep
on
going
to
to
make
it
to
the
end
here
where
we
can
have
the
broader
discussion
about
the
the
emerging
proposal.
But
yes,
let's
stay
on
backtrack
here.
So
matthias,
I
think,
did
some
some
work
to
gather
stats
or
gather
information
around
the
rug
provider.
Adoption
in
the
broader
ecosystem
and
folks
are
our
writing
and
building
providers.
A
So,
there's
a
link
here
to
see
matthias's
efforts
on
that
and
what
he's
found
and
there's
there's
no
doubt
about
it
that,
with
you
know
better
code
gin
foundation
now
where
people
are
much
much
more,
people
are
starting
to
build
their
own
providers
for
new
scenarios
and
new
environments
that
we
haven't
really
even
thought
of
as
a
community
ourselves
before.
So
it's
really
cool
to
see
that,
and
you
can
see
some
details
and
matias
is
reaped
up
there.
A: Muvaffak, do you want to give us a quick update on Terrajet, at its core there, before we move on?
C: Yeah, sure, I'll be real quick. So 0.4.2 is released, with a bunch of bug fixes, and the second item is the gRPC performance test.
C: There's a test that we wanted to do about whether we should have the Terraform CLI talk to a single, long-running gRPC provider server that stays up (because that's what the Terraform providers are), or whether we should use it in the normal mode, where for every call it spins up a new one and then shuts it down. With this issue, you can see the results of comparing the two.
C: We are going to implement having a single gRPC provider server instance up and running in the container, which Terraform continuously talks to, because it reduces our time to readiness and the CPU usage, while memory usage is not that affected. The third item (Alper can give more details if he wants to) is about scraping the Terraform registry for metadata extraction, which we use today, for example, for generating example YAMLs.
A: Yeah, thanks for that update there, Muvaffak. We'll have this one available for folks to comment on, and I think we can skip going into the details on this call. There's a lot of progress on all sorts of stuff, which is quite exciting to see. Okay, then: I think there have been a number of other releases of other providers.
A
This
is
just
the
one
that
was
right
off
the
top
my
head,
because
chris
christopher
donated
it
to
the
crossband
contrib
org
just
yesterday.
So
it's
interesting
to
see
you
know
this
is
a
good
example
of
being
able
to
coach
in
a
crossfit
provider
with
fairly
minimal
effort
and
be
able
to
control
other
parts
of
your
pipelines
and
environment.
So
now
you
can
use
crossplane
to
manipulate
or
integrate
with
patriotic
as
well
so
christopher.
A
Thank
you
very
much
for
donating
that
to
the
cross-playing
contribution
organization
and
your
efforts
on
that
man.
A
E: Yeah, we will release the next provider release, 0.25, tomorrow. There is a big change for the SDK upgrade and the ACK bump. I tested a lot of the resources from the code-generator stuff, I think we also have a lot of new resources inside, and we fixed a lot of things in the RDS resource group.
A: Yeah, thanks for the testing effort on that as well, Christopher. The efforts that you go through to make sure that resources, manifests, and things in your environments are still working have essentially been a nice integration suite, in addition to what we already have in the repo. So that's been really helpful.
A: I don't think we have big updates in this community meeting for GCP and Azure, but that's okay, because we'll be talking about the merging of those classic providers and Jet providers, which has a big impact on that. So, with everything else here done, let's keep on moving.
A
There's
a
list
of
of
resources
and
content
that
I
gathered
before
the
meeting
here
of
interesting
blog
posts,
videos-
and
you
know,
people
generally
talking
and
sharing
about
crossplaying
in
things
you
could
things
you
can
do
with
it.
So
probably
one
of
the
most
interesting
and
unexpected
one
was
aaron's
teaching
cross
playing
to
play
poker,
one
which
is
a
treat
to
read
and
get
into
that
and
kind
of
using
that,
as
a
metaphor,
almost
for
understanding
some
of
the
concepts
and
cross
planes.
A
So
this
could
be
kind
of
an
interesting
intro
for
folks
to
understand
a
little
bit
more
about
some
of
the
concepts
and
capabilities
of
the
project.
So
thanks
for
writing
that
one
here
and
thanks
for
keeping
a
secret
too,
that
nobody
really
knew
about
it
until
it
just
dropped
today,.
A: I think another interesting thing that's probably worth mentioning is that Taylor wrote about the VS Code plugin. I think it works for GoLand as well, but I'm not sure if that one was explicitly released. That is improving the story around authoring your own compositions and composite resources and things like that: it'll give you immediate feedback, IntelliSense, errors, and stuff like that as you're authoring, as opposed to having to do that downstream later on, after you've built the package and installed it at runtime. So it's shifting left on catching issues when building your compositions and platform, and it's quite a nice experience to have that now, and there's more to come with it. But you can catch up with what Taylor wrote about that in this link right here. All right: Nick, are you on the call, my friend?
F: Yep. I'd recommend, for anyone who wants to go deep, there is an updated design doc that has a proposal for an API in there, and I've also linked through to it in the notes. Basically, Sergey, who is on the call, has been doing most of the heavy lifting; he's got a couple of proof of concepts that have de-risked the way that we want to do it, and we're moving on to the design doc.
F
I
think
that's
that's
about
it
for
this
context,
but,
as
I
say,
please,
we
have
a.
We
have
a
slack
channel
that
you
can
find
if
you
click
that
sick
custom
compositions
notes.
So
if
anyone's
really
interested
in
this
topic
join
us
on
slack
and
get
us
up
there.
A: Yeah, thanks for that update there, Nick, and thanks, Sergey, as well, for the effort that you've been putting into that and the progress you're contributing to the effort. That's good stuff. And I like what we're doing there, with a special interest group formed around this effort; we're going to keep up to date with its progress in the deep dives and in the notes that we have linked here. So that's really good.
A: I'm definitely excited to see this come out over the next couple of releases.
F: Yeah, 1.8, I think, by the way, is what we're thinking for the release; it's not quite going to make 1.7, but hopefully.
A
Yup,
that
sounds:
that's
that's
really
reasonable:
okay,
sweet!
So
for
kubecon
eu
in
valencia,
spain,
in
the
middle
of
may,
about
two
months
from
now,
quick,
a
couple
updates,
the
our
maintainer
track
session
was
accepted.
So
this
will
be
the
first
time.
I
believe
that
cross
plane
as
an
incubation
project,
is
now
on
the
maintainer
track,
so
we'll
be
able
to
do
a
you
know,
introduction
and
deep
dive
session
that
will
reach
a
broader
audience
which
is
really
cool,
so
that
was
accepted
that
will
be
in
person.
A
In
valencia.
We
also
got
a
virtual
office
hours
accepted
as
well,
so
we'll
be
able
to
run
an
online.
You
know
virtual
session,
which
is
a
lot
of
similar
content,
but
being
able
to
reach
the
audience
that
is
not
able
to
attend
kubecon
in
person.
So
we've
got
a
couple.
Different
means
there
for
ways
that
we'll
be
able
to
connect
the
general
core
of
the
project
to
the
broader
ecosystem
that
will
be
attending
kubecon.
A
I
have
not
yet
heard
back
on
if
we
have
it
at
the
in-person
kiosk
like
booth
as
well.
That's
part
of
the
project
pavilion,
but
we
do
have
the
maintainer
track
and
we
do
have
the
office
hours
virtual
office
hours
already
confirmed
so
I'll.
Let
everybody
know
what
I
hear
about
the
kiosk
and
if
folks
want
to
come,
hang
out
there
and
and
talk
to
people
and
stuff
when
you're
in
valencia,
then
that
sounds
great.
Just
let
me
know
about
that.
A
We
also
there's
a
number
of
other
talks
that
I
heard
were
accepted
from
folks
on.
This
call,
I
think
muaf
has
one
hassan
albert
matthias.
A: Victor also has one. So a whole bunch of talks got accepted, which is really cool; I think there are going to be a whole bunch of Crossplane-related topics spoken about on stage in Valencia. It seems like this community is going to have a big showing, not only of content but of people as well. So I'm still thinking about doing a little get-together of folks from the community here in Valencia, for anybody who is able to make it.
A
All
right,
so
I
think
we're
doing
okay
on
time
here
and
then
so
yeah.
We
can
go
into
this
discussion
as
well,
so
jillian.
I
believe
that
this
was
one
that
you
added
is
that
correct
yep.
G: I can be super quick, because, Nick, I thought that you replied back to this; and thank you, Christopher, for sending it. I know we've had this conversation multiple times, about supporting patching from a ConfigMap or not. Curious about the viability of this PR, or whether we're thinking about doing something like this in a different manner.
F: I'd love to hear feedback on this UX from other folks. To me, personally, patching from an object outside of a composition, as part of a composition's functionality, seems a little off UX-wise. But, you know, I am but one person; if everyone loves this UX, we could potentially proceed with it.
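[Note: to make the UX question concrete, the rough shape under discussion is a composition patch whose source is an object outside the composite, along these lines. This is a purely hypothetical sketch, not the API from the PR; the patch type and field names are invented for illustration.]

  # Hypothetical: patching a composed resource from a ConfigMap that lives
  # outside the composition. Patch type and field names are invented.
  apiVersion: apiextensions.crossplane.io/v1
  kind: Composition
  metadata:
    name: example
  spec:
    compositeTypeRef:
      apiVersion: example.org/v1alpha1
      kind: XBucket
    resources:
      - name: bucket
        base:
          apiVersion: s3.aws.crossplane.io/v1beta1
          kind: Bucket
        patches:
          - type: FromObjectFieldPath         # invented patch type
            objectRef:                        # invented field
              apiVersion: v1
              kind: ConfigMap
              name: platform-defaults
              namespace: crossplane-system
            fromFieldPath: data.region
            toFieldPath: spec.forProvider.locationConstraint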
E: We've been using this for two days now in our environments, because we are in sync with the guys from Accenture and from [inaudible], and we are using this a lot.
F
How
does
it,
how
does
it
work
with
regard
to
our
back?
Wouldn't
this
mean
that
crossplane
would
need
our
back
access
to
to
read,
basically
anything
that
you
know
in
a
conflict
map,
for
example,.
E
We
added
the
arbuck
thing
in
our
githubs
tooling,
in
flux,
for
example,
and
this
is
not
a
problem
for
us
so.
E
A
little
bit
how
we
are
doing
this
in
our
compositions
and
the
airbag
stuff
around.
I
can
add
this
in
the
pr.
F: Yeah, I mean, we really want this feature. This is just one of those awkward things where patching from arbitrary resources, and the potential to have generic resource references, which we've looked into a little bit in the past, all kind of have a lot of interplay with each other. So it's one of those ones where we want to pause and make sure we get the API right before we commit to anything.
F
Otherwise,
we
end
up
with
you
know,
a
jumble
sort
of
mess
of
three
different
ways
to
do
the
same
thing.
So
so
I
do
want
to
proceed
with
this.
I
haven't
had
too
much
time
to
look
at
it.
I
would,
I
would
definitely
love
to
get
like
you
know,
move
off
and
and
other
folks
who
are
pretty
deep
on
composition
to
weigh
in
on
it.
C: But, you know, if you end up concluding, hey, this is not a problem security-wise, because composition writers are almost admins anyway, I would kind of prefer it to be at the managed resource level, so that it is available both for composition users and for managed resource users.
C: So you can think of an array in the spec of the RDS instance, let's say, some kind of "patches" similar to the patches that you have in a composition, so that even if you don't use a composition, you would be able to do the patching from other resources; and then, if you need to, you can change those patch parameters within the composition that you have. But, yeah, my main concern was the access control aspect of it.
F
Mr
mx,
who
raised
the
pull
request,
I
I
had
mentioned
also
a
preference
to
do
this
at
the
managed
resource
level
and
he
sort
of
responded
by
saying
that
one
downside
of
that
is
that
you
need
to
build
it
into
every
provider,
which
is
true.
So
there
is
some
niceness
of
just
sort
of
implementing
into
one
place
in
in
cross
plane.
F
But
you
know
the
the
our
back
thing
as
well
does
does
concern
me
because
you
know
if
we,
if
we're
effectively
saying
any
composition,
even
even
if
you
just
give
an
r
back
access
to
config
maps,
I
presume
I,
I
suppose
you
could
lock
crossblade
down
to
have
config
maps
only
in
any
name
space.
But
at
this
point
like
cosplaying,
presumably.
C
F
You
just
say:
crossblade
has
access
to
read
conflict
maps
in
general,
then
anyone
who
could
write
a
composition
could
just
pull
any
data
out
of
any
config
map
anywhere
in
the
cluster
and
patch
from
it,
which
is
a
bit
of
a
blast
radius.
There
one
thing
we
could
potentially
just
talking
off
the
top
of
my
head.
F: One thing we could potentially do to lock this down would be to introduce a ConfigMap-like type that is Crossplane-specific, effectively to say that at least you won't be able to read ConfigMaps that aren't meant for Crossplane. But, long story short, this seems like something that needs a one-pager to make sure we get alignment on it. I am really glad, though, that Mr. MX (I don't actually know his name) is driving this forward and sparking conversation.
A: Yeah, I can totally agree with that as well. I like that Max is driving this, and, you know, there's a clear need in the community for something along these lines. So: getting the API right, getting the security concerns right. We don't want to rush that and make mistakes that we can't walk back, but I love seeing this moving forward, and the enthusiasm on it. This is fantastic.
A
All
right,
thank
you
for
bringing
that
up,
jillian
all
right,
so
I
think
we
raced
through
and
got
through
most
everything
here
on
the
agenda
of
25
minutes
left.
So
let's
then
go
ahead
and
jump
into
it
here,
since
we
wanted
about
30
and
we
got
25..
A
So
I
and
I
have
a
hard
stop
here
at
the
top
of
the
hour.
So
if
the
conversation
does
continue
and
folks
do
want
to
stay
on
for
it
or
can
stay
on
for
it,
then
I
can
make
somebody
else
the
host
and
and
I'll
take
off,
but
I
think
muafik
you
want
to
start
driving
this
then
or
nick
who
wanted
to
start
the
conversation.
C: Yeah, I can start with a short presentation of the latest status, because I believe not everyone has checked the latest updates.
C
Yeah
yeah
sure
so
yeah.
This
is
this:
is
the
proposal
to
merge
the
three
providers?
You
know
we
have
jet
aws
and
aws
providers,
jgcp
and
gcp
provider.
C
So
the
problem
that
we
have
been
facing
since
the
release
of
these
providers
is
that
people
have
been
asking
us
like
you
know:
hey,
like
you
know
now
you
have
jet
providers
and
also
classic
providers
which
one
we
should
use
and
yeah
and
our
answer
has
been
like
you
know:
hey,
if
you
want,
like
you
know,
maturity
and
classic
provider
meets
your
needs.
You
can
use
that
one,
but
if,
if
there
are
like
a
resources
that
doesn't
meet
your
needs,
you
can
use
jet
provider,
which
has
like
a
lot
of
a
lot
of
resource
coverage.
C: But, for the reasons listed there, we wanted the initial releases to be separate, to see how they're used and how people receive those providers, and to make sure we first see people using them and dropping feedback, so that we could make a new release decision later on if needed.
C: So this proposal is about that discussion. Now that people have used the Jet providers and also the classic providers, they have more ideas about what the downsides and upsides are of using the Jet providers compared to provider-aws and the other classic providers. That is why we want to start this discussion again, and propose this. So here, actually, let me go through the options real quick, from the provider strategy talk.
C: One option is completely separate providers, which is what we have today. The second option, option B, is one provider, but with the Jet-based resources in their own API group. Think of it like this: you've got iam.aws.crossplane.io, and you would have iam.aws.jet.crossplane.io as well, so each CRD would be completely separate, and you could differentiate them that way.
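[Note: in other words, under option B the same kind would exist twice, distinguished only by its API group, roughly as below; the group names follow the pattern just described, and the manifests are otherwise illustrative.]

  # Classic (native) implementation
  apiVersion: iam.aws.crossplane.io/v1beta1
  kind: Role
  metadata:
    name: example-role-native
  ---
  # Terrajet-based implementation of the same kind, in its own API group
  apiVersion: iam.aws.jet.crossplane.io/v1alpha2
  kind: Role
  metadata:
    name: example-role-jet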
C: And option C is one provider where each CRD is simply XRM-compliant by convention, so the implementation does not really matter to the users. These were the options; we went for option A initially, and this proposal is proposing option C. So I will just go back to the goals, so that we're setting the ground for what we want to achieve at the end of the day. What is the end game? For providers, we have three goals.
C: One is that we would like cloud vendors to maintain them, essentially, because (and I believe Nick will share some thoughts on Crossplane's charter) we are thinking that the only scalable way for those providers is to get cloud vendors to maintain them, and this is more possible with Crossplane being a CNCF project, with vendor-neutral governance, compared to other products in the industry.
C: So, with these three goals in mind: here there's a small status summary that we have, with the number of CRDs; and I have used beta as a proxy for maturity. It's not really very accurate, but it still gives you an idea of how many of those are mature enough to be beta. And in the comparison of CRD numbers, there is just a significant difference.
C
The
proposal
is
that
we
are
first
of
all
we
are
making
decisions
only
for
these
three
providers,
all
other
jet
providers
or
classic
providers,
are
free
to,
like
you
know,
go
with
their
with
their
choices,
their
own
choices
so
like
now.
This
is
only
for
these
three
providers
and,
like
you
know,
maintaining
maintainer
goals
of
these
three
providers.
C: We will add the rest of the Jet providers' CRDs to bring it to full coverage, and then there is a guideline here, which I will go through in a minute, for how we would convert one implementation to the other: for example, should we change the VPC to a Terrajet-based implementation, or, say, when do we convert a Jet implementation to a classic one?
C
So
just
to
just
to
make
it
like
a
more
concrete
in
your
mind,
I
will
talk
about,
like
you
know
the
order
of
execution,
for
example,
for
aws,
which
is
which
has
the
most
number
of
resources.
C
So
what
will
happen
is
that
we
will
add
teloget
pipeline
to
the
provider
repository
similar
to
sdk
that
we
have
today
we
will
have
the
only
we
want
alpha
one,
two
resources
from
jet
providers
that
have
the
manual
resource
configurations,
which
are,
I
believe,
around
80
and
the
ones
that
are
only
like
you're,
not
existing
in
the
classic
provider.
For
example.
I
believe
we
have
rds
instance,
both
in
jet
and
classic.
C: For many of those resources there is no classic counterpart, I believe, so it will only be an API group change. But for the ones that we don't move over, like the RDS instance, there will be some schema changes, so we will need to handle those with that guideline. So, coming to the latest section, which I updated very recently.
C
So
there
is
this,
this
section
that
we
are,
I
believe,
getting
getting
on
the
same
pages
that
the
end
goal
is
like
you
know,
in
an
ideal
world,
we
would
like
a
provider
to
have
a
single
implementation,
so
that,
like
you
know,
we
have
the
homogeneity
in
the
code
base
that
is
easier
to
maintain
easier
to
debug
for
users
and
you
there
would
be
a
single
tool
to
use
so
right
now.
C
I
we
believe
that
in
the
proposal
we
believe
that
terajet
is
that
tool
for
today
and
most
of
the
resources
in
an
ideal
world
like
no
hundred
percent
resources
would
be
on
terajet.
However,
tarajit
has
its
own,
like
you
know,
strategic,
let's
say,
like
you
know,
shortcomings
for
example.
C: Excuse me. The API schema changes are really straightforward and easy to handle with conversion webhooks, thanks to the power of the Kubernetes API that we depend on; but the actual controller implementation, what I call the provisioning behavior, can be different in Terraform from the implementation that you would have in the classic provider, and the difference shows up mostly with complex resources. I give an example here: an AWS RDS instance is a complex resource.
C
Where,
like
you
know,
you
affect
the
provisioning
behavior
with,
like
you
know,
some
of
the
values
and
fields
and
conventions
in
the
api,
like
you
know,
instead
of
create
rds,
is
that
you
would
call
restore
rds
from
s3
call
make
that
call,
for
example,
so
tell
if
you
look
at
terraform
code,
you
would
see
like
a
lot
of
these
knobs
being
like
specific
logic
to
it,
and
you
would
see
the
same
thing
in
cross
plane,
code
codebase
as
well
as
com
when
you
compare
that
to
iam
role,
for
example,
it
is
mostly
like
you
know:
hey
what's
in
the
observe,
is
get
roll
what's
in,
create
is
create
role,
so
it's
like
in
a
very
simple
crude
application.
C
Our
goal
is
like
you
know,
to
get
to
a
place
where,
like
you
know,
the
tool
that
is
decided
for
that
specific
provider
has.
The
most
like
you
know
is
the
is
the
prevailing
technology
in
that
provider,
and
I
give
here
an
example
of
like
you
know,
for
example,
a
year
from
now
it
could
be
ws
cloud
control
for
provider
aws
when
they
do
decide
to
come
in
and
maintain.
Let's
say
in
that
case
that
would
be
like
you
know
the
tool
that
we
would
like
to
get
every
resources
implemented.
C
I
mean,
depending
on
on
what
they
want.
They
might
like,
you
know,
want
to
bootstrap
another
provider
or
they
might
like.
You
know
just
come
in
as
maintainers
and
like
you
know,
slowly
migrate
over,
but
the
the
end
goal
is
essentially
there
is
this
specific
technology
for
a
given
provider,
and
that
is
like
you
know
the
end
goal
and
we
are
getting
there
fast.
Maybe
maybe,
like
you
know,
in
a
fast
way,
with
simple
resources,
but
more
slowly
with
the
complex
resources
so
that
our
users
do
not
experience
a
lot
of
they.
C
F: Yeah. Any questions or comments from folks so far? I'm especially curious about folks who, like you, Jillian, haven't been working at Upbound and thinking about this for a while.
G: Loads of questions. So, one: I'm not opposed to combining into one provider. On the conversation of going between Jet and classic: I think we can acknowledge that Jet is more feature-complete against any of the classic resources, right, because Terraform has been around for much longer. What I'd struggle with is, and let's use RDS as an example, right, since it's a more complex resource: we have it in classic, we have it in Jet, and Jet is feature-complete.
G: I'd use the Jet one if it's available, because it has everything we need and I can get adoption much faster. So when we say, hey, we already have RDS in classic, so we're not going to take the Jet one, it kind of hurts a little bit, because I would argue that maybe they could both live; maybe you have two APIs for that in those scenarios, where eventually the idea is to collapse down to one version, once one wins out, right? So maybe classic...
F
Would
you
would
you
just
a
follow-up
question
regarding
your
sort
of
preference
to
keep
the
two
implementations
of
rds,
for
example,
would
you
have
any
preference
for
those
two
implementations
living
in
sort
of
one
provider,
or
would
you
be
fairly
happy
with
the
status
quo
of
just
you
know,
installing.
F
Is
is
it
fine
or
or
like
better
in
any
like?
I
guess
what
I'm
saying
is
where
my
mind
goes
is
if
we,
if
we
want
to
keep
you
know,
the
option
to
install
the
jet
ones
and
the
option
to
install
the
sort
of
classic
or
native
ones
is:
is
there
actually
a
problem
with
with
the
status
quo,
where
we
have
two
separate
ones?
So
this
is
something
we've
had
mixed
feedback
on.
F
Personally,
like
you
know
some
folks,
what
we're
seeing
from
the
community
is
a
lot
of
people
just
by
having
a
choice
that
you
know
the
native,
the
the
natural
thought
rather
is
well
which
one's
better,
which
one
should
I
pick,
whereas
other
people
are
sort
of.
You
know
similar
to
what
I
think
you're
saying
is
like
they're,
just
like
well,
there's
two
choices
and
I'm
happy
to
install
both
that
I
can
mix
and
match
them.
G
Right
so
for
us,
outside
of,
like
the
the
scaling
issues
that
we're
already
hitting
with
all
the
crds
I'd,
have
no
problem
installing
both
providers
versus
just
one,
I
think
from
like
all
of
us
who
manage
the
kubernetes
clusters.
Two
aws
providers
kind
of
hurts
because
they're,
like
that's
weird
versus
one
aws
provider,
with
a
host
of
options
right
and
knowing
what's
under
the
covers,
knowing
that,
if
I
could
choose
a
jet
resource
for
things
that
I
know
that
are
more
feature
complete
over
a
classic
resource.
That
would
be
nice.
G
But
I'm
only
really
going
after
my
own
goals
of
what
we're
trying
to
get
to
production
super
fast
and
a
little
bit
more
reliable.
But
I
like,
but
my
preference
is
always
for
the
classic
over
jet.
It's
just
classic's
not
fully
there.
Yet
on
the
re
on
all
the
resources.
C: That's harder to do; but, for example, in your case, let's say the Terraform one is more featureful. You can open a PR, for example, to convert it to Jet, and show, hey, these are the cases that we tested. Because the end goal is to converge on Terrajet, we would be happy to accept the PR and convert the implementation to a Jet one.
G
Right-
and
I
think
I
would
look
at
you
like,
if
you're
all
your
users
who
are
moving
from
terraform
to
crossplay
there,
that
the
lack
of
functionality,
I
think,
may
be
what
hurts
people
from
fully
onboarding
to
crossplane
as
quickly
as
they
want
to.
I
would
tell
you
that
that's
probably
our
number
one
right
is
that
you
know
if
we
don't
have
all
of
the
feature
functionality,
it's
harder
for
us
to
move
fast
right
to
gain
adoption
and
then
to
truly
spend
more
time
in
cross
plain
functionality
versus
resource
functionality.
Right.
G
That
shouldn't
be
my
biggest
problem,
my
biggest
like
the
fun
things
are
like
the
cross,
plane
infrastructure-
and
you
know
managing
things
to
that
part.
But
we
can't
fully
get
there
because
of
lack
of
provider
functionality.
F
Yeah,
I
I
think
we've
touched
on
this,
but
I
want
to
provide
a
little
bit
more
color
on
our
long-term
thinking.
There
is
a
there
is
a
document
that
exists
only
within
the
steering
committee
at
the
moment
that
is
touching
on
our
what's
in
scope
and
our
goals
for
the
cross
plane
project
more
broadly,
a
little
bit
longer
term.
That's
not
quite
ready
to
share
yet,
but
I
think
one
thing
that
we
all
agree
on
is
what
I've
said
that
we
want.
F
You
know,
pardon
me
our
goal
is
for
for
any
provider,
not
just
the
big
three.
Our
ideal
is
that
the
owner
of
the
api
that
we're
interfacing
with
owns
that
provider,
we.
F
There
for
every
everything,
but
you
know
the
idea
is
that
you
know
we'd
love
aws
to
own
and
maintain
the
aws
provider,
we'd
love
gitlab
to
own
and
maintain
the
gitlab
provider,
et
cetera,
et
cetera,
et
cetera,
and
one
thing
we've
noted.
If
we
look
at
the
big
three
cloud
providers,
if
we
look
at
how
in
some
cases
they
own
their
terraform
providers,
in
some
cases
they
don't
and
they
all
have
sort
of
kubernetes
controllers
effectively.
F
One
thing
we've
noticed
is
that
they
actually
don't
tend
to
lean
towards
what
we
would
consider
a
classic
or
a
native
implementation,
which
honestly
is
my
sort
of
personal
preference
as
a
way
to
build
this.
But
if
you
look
at
terraform,
you
know
it
took
them
seven
years
to
get
to
where
they
are
at
the
moment
with
the
with
the
aws
provider.
F
So
what
we've
seen
is
azure
uses
azure
resource
manager,
which
is
sort
of
you
know
a
standardization
api
that
is
kind
of
like
an
intermediary.
It's
you
know
in
some
ways
like
terraform,
except
it's
hosted
by
them,
ac
amazon.
It
seems
to
be
it's
it's
hard
to
tell,
but,
but
it
seems
like
they're
pushing
their
new
cloud
control
api
and
it's
unclear
how
that's
gonna
relate
to
sdk
going
forward
whether
the
two
projects
will
continue
to
exist
or
whether
whether
sort
of
cloud
control
will
become
the
new
hotness.
F
That's
what
the
official
terraform
provider
is
is
planning
to
move
to
that's
similar
to
arm
a
arm
with
azure,
it's
sort
of
like
an
intermediary,
almost
like
a
cloud
formation,
light
api
and
then
google's
taking
an
approach
of
releasing
this
declarative
configuration
library
which
they're
planning
to
build
all
their
providers
around.
So
one
thing
that
we've
been
thinking
of
is
let's
say
in
a
year
or
two
we're
hoping
to
you
know,
have
enough
traction,
and
you
know:
there's
there's
some
promise
with
the
conversations
we're
having
with
these.
Are
these
big
providers?
F
One
problem
is:
we
just
have
no
signal
that
writing
native
code
is
gonna,
mean
anything
in
fact,
there's
there's
a
there's,
a
good
chance
that
we
could
spend
a
year
porting
everything
to
like
the
classical
limitations,
and
they
could
just
throw
it
all
out
and
say
well,
thank
you,
but
you
know:
we've
decided
to
come
and
know
our
provider
and
we're
gonna
generate
it
using
aws
cloud
control,
so
we've
considered
well,
you
know
we
could
try
and
second
guess
and
say
you
know
it's
probably
he's
going
to
use
cloud
control,
as
he's
probably
going
to
use
on.
F
So
we
could
try
and
build
things
that
way.
But
one
of
the
challenges
is
we
just
have
a
pool
of
like
five
to
seven
people
who
are
maintaining
the
big
three
providers
so
building
three
different
co-generation
pipelines
is
tough.
So
part
of
the
reason
that
we're
thinking
of
going
to
terra
jet
here
or
terraform
is
is
really
just
because,
while
fundamentally
I
do
agree
with
your
assessment,
jillian
that
the
the
the
classic
the
native
implementations
are
better
in
a
lot
of
ways.
F
We
just
we
just
don't
think
we
just
don't,
have
good
signal
for
the
for
this
big
three
that
it's
worth
the
effort
to
write
all
of
those
and
so
then,
given
a
choice
between
you
know
consist
being
consistent
or
not.
My
preference
is
that
we,
we
don't
have
a
mix
of
you,
know
multiple
different
implementations,
where
people
have
to
maintain
the
classic
rds
and
the
daradat
rds
all
within,
like
one
provider,
for
a
significant
amount
of
time.
F
That's
effectively
what
we're
doing
that?
That's
that's,
basically
what
I
what
we're
trying
to
push
for.
I
think
what
we're
trying
to
well
me
personally,
I
shouldn't
speak
for
everyone,
but
I
think
one
thing
we
considered
was:
we
could
just
deprecate
the
provider
aws.
Rather
we
just
say
it's:
it's.
You
know
we're
not
going
to
release
it
anymore.
It's
we
recommend
everyone
moves
to
jet,
but
then
everyone
has
a
migration
to
if
you're,
using
rds.
Today
on
provider
aws,
you
have
to
port,
you
know
over
to
the
slightly
different
apis
so
effectively.
F
What
we're
trying
to
do,
I
think,
is,
is
what
I'm
saying
is.
We
should
merge
them
together
and
then
set
an
aggressive
timeline
to
to
effectively
align
them.
You
know
slowly
remove
the
native
controllers
and
replace
the
patera
jet
ones
and
by
merging
them
together.
That
means
we
could
use
the
webhook
based
approach
the
mova
set
so
that
we
can
sort
of
serve.
We
can
introduce
the
new
api
shape
while
still
supporting
the
old
one,
so
we
can
sort
of
more
transparently
migrate
to
terajet.
F: But then the general idea is that that sort of gets everyone who is a contributor working on the same platform: everyone's contributing to Terrajet, and you get that force-multiplier effect; if you fix a bug, there's a good chance you fix a bug for every controller, right? And there was one more point that I wanted to make that keeps slipping out of my mind... Oh yeah, just reiterating one thing Muvaffak...
F
I've
said
we
do
always
think
that
there's
going
to
be
like
some
small
exceptions,
just
because
some
resources
have
weird
bugs
or
just
don't
work
with
terra
jet.
So
the
way
that
I
the
way
that
I
think
about
it
is
like
you
know.
If
you
have
a
hundred
resources
in
a
provider,
I
would
like
you
know,
97
of
those
resources
to
be
terri
jet
and
the
ones
that
aren't
to
have
like
some
explicit
reason
like
we
couldn't
use
territory
here
because
x
or
y.
So
this
is
like
an
exception.
F
Right
and
you
know
that
the
this
confuses
a
lot
of
folks,
because
there
is
a
lot
of
subtleties
with
terajet.
Like
you
know,
we
we've
used
the
term
temporary
when
talking
about
terraform
a
lot
and
in
in
certain
contexts.
That
is
true
like.
If
you
ask
me
in
five
years,
will
we
be
on
terraform?
F
I
would
like
cross
play
to
have
one
and
terraform
to
not
be
a
thing
anymore
in
five
years
or
ten
years
right,
but
that's
a
long
time
away,
terraform's
not
going
anywhere
anytime
soon,
and
I
think
we
can
cross
that
bridge
when
we
come
to
it,
then,
hopefully,
in
five
years
you
know,
aws
will
have
come
and
like
take
an
ownership
of
their
provider
or
if
they
don't
we'll,
have
a
great
team
of
you
know
10
maintainers,
who
can
like
figure
out
a
better
way
to
just
work
on
providing
aws
every
day.
F
G: Yeah. So, given that, my preference would be for Jet to beat out classic, and then, for the scenarios in which classic solves certain bugs that Jet hasn't, to go with classic there. That would be my preference; and also, personally, to go to one provider.
G
I
think
that'll
also
benefit
you,
because
you
know
you
know
where
cubella
right
can
helps
with
you
know
is
a
compliment
to
crossplane
in
some
aspects,
but
they're
also
the
there's
also
like
the
terraform
controller
through
cubella,
or
that
was
created
by
those
those
folks
not
on
the
same,
but
not
on
the
same,
offering
in
some
level
of
abstraction.
I
think
that,
having
played
with
both,
I
would
prefer
like,
I
would
and
us
using
cubella
as
well.
G
My
preference
would
be
to
have
cross
plane
resources
alongside
cubella
versus
a
mix
of
terraform
controller
crossplay.
F
Yeah,
you
know
you
should
let
the
coup
villa
folks
know
that
we're
friends
with
them
and
will
who
maintains
their
terraform
controller,
also
maintains
the
various
alibaba.
C: From the "hey, Jet is the end goal" point of view, it's just how we get there, and how aggressive we will be about the conversion, that is a bit of a topic of discussion. But I would also like to get some opinions from Christopher as well over time; I would like to get some thoughts.
F
Too,
I
just
want
to
provide
a
tiny
bit
of
context
about
the
timing
as
well
move
off.
I
mentioned
that.
There's
two
there's
two
things
here
right.
I
think
the
aggressiveness
of
porting
over
the
terrafor
terror
jet
one
is
just
the
more
aggressively
we
do
that.
The
more
risk
there
is
for
for
users.
You
know
if
we
just
ported
all
100
resources
in
one
release.
You
know
there's
a
great
chance
for
breaking,
so
we
can
address
that
with
feature
flags,
etc.
F
The
other
challenge
somewhat
selfishly
for
those
of
us
working
outbound,
is
that
you
know
moveoff's
team
of
you
know
four
folks
that
are
bound.
Does
a
lot
of
the
maintenance
of
this,
and
I
don't
want
to
be
like
hey
folks-
spend
the
next
three
months
doing
nothing
but
porting
resources
over
to
to
terror
jet
so
part
of
it
is
you
know?
F
F: Are you still around, Christopher? I'm curious what your thoughts were on all of this. I know, I think I saw you weigh in on the issue, right?
E
So
in
general,
our
thoughts
in
the
company
is
we.
We
had
a
big
issue
at
the
end
with
a
lot
of
crds
in
the
clusters,
because
for
governance
reason
we
we
need
to
limit
what
we
are
doing
on
our
kubernetes
clusters
and
with
the
jet
providers.
We
add
a
lot
of
more
crds.
This
is
our
only
concern
and
if
you
have
so,
for
example,
the
native
ones
and
the
terror
jet
ones
together
and
one
provider.
E
Custom
builds,
so
that
means
that
we
drove
drop
all
crds
out
of
the
jet
providers
and
also
out
of
the
native
providers
we
are
not
using
because
then
we.
Otherwise
we
had
a
lot
of
discussions
with
the
governance
here.
Why
we
had
so
much
cids
there
and
we
need
to
clarify
for
each
crd
while
we're
using
them,
and
then
we
also
need
to
check
if
how
our
im
policies
looks
like,
for
example,
the
only
thing
we
have
in
concern
is:
how
does
it
looks
like
for
the
sdk
version?
C: So, once we merge them, we can then think about it, specific resource by resource: like, hey, the DynamoDB table; when we convert it to a Jet implementation, we will look at the API schema, for example, and at the behavior. If it's the same, then it's easier to convert, regardless of the SDK version; so it will fall into the conversion-risk bucket, essentially.
E
Okay,
yeah,
then
so
so
in
general,
we
have
not
a
big
issue
with
this.
If
you
merge
them
and
if
you
have
time
to
to
decide
in
future,
if
we
in
which
direction
we
go
yeah
because
fair
enough
we're
running
both
providers
today
in
the
clusters.
C
Thanks
yeah
folks,
please
drop
comments
on
the
pr
as
well
about
your
thoughts
and
concerns
and
context
so
that,
like
you
know,
other
people
can
read
them
and,
like
you
know,
have
more
idea
about,
like
you
know,
different
use
cases
that
we
need
to
address.
C
C
So
yeah,
please,
please
feel
free
to,
like
you
know,
leave
more
comments
in
your
in
your
use.
Cases
to
help
the
discussion
go
forward.