From YouTube: 2019-11-26 Crossplane Community Meeting
A: So this is an issue where we were setting owner references across namespaces — a claim object, like a Postgres claim, was being set as the owner of, let's say, a Cloud SQL database. In Kubernetes, owner references cannot cross namespaces, so when garbage collection kicks in, it can unintentionally clean up resources that are still in use. So thank you, Nick — Nick's on the call — thank you, Nick, for putting in the fixes for that and getting the patch release out as well.
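The failure mode described above can be sketched with a small model. This is an illustrative toy, not Crossplane's actual controller code: in real Kubernetes, an `ownerReference` carries no namespace field, so the garbage collector resolves the owner relative to the dependent's own namespace; an owner living in a different namespace looks absent, and the dependent gets collected.

```python
# Toy model of Kubernetes garbage-collection semantics (illustrative only).
# Owner references have no namespace field, so the GC looks the owner up
# in the dependent's own namespace; a cross-namespace owner appears absent.

def find_orphans(objects):
    """Return names of objects whose owner cannot be resolved."""
    # Index objects by (namespace, name); an ownerReference carries only a name.
    by_key = {(o["namespace"], o["name"]): o for o in objects}
    orphans = []
    for o in objects:
        for ref in o.get("ownerReferences", []):
            # The GC resolves the owner in the dependent's OWN namespace.
            if (o["namespace"], ref) not in by_key:
                orphans.append(o["name"])
    return orphans

objects = [
    # A PostgreSQL claim in an application namespace...
    {"namespace": "app-team", "name": "pg-claim"},
    # ...wrongly set as the owner of a managed resource in another namespace.
    {"namespace": "crossplane-system", "name": "cloudsql-db",
     "ownerReferences": ["pg-claim"]},
]

# The managed resource looks orphaned and would be garbage collected.
print(find_orphans(objects))  # ['cloudsql-db']
```

This is exactly why the in-use database could disappear: the owner existed, just not where the collector looks for it.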
A: That applied to all of those stacks during the week of KubeCon, while we were here in San Diego. There is one more part of this: the stack manager itself is affected by a similar issue, where, when namespaced stack installs are starting, those namespaced stack records are set as the owners of the cluster-scoped CRDs that they install.
A: So this really came to light the week before KubeCon, when I was setting up the GitLab Auto DevOps managed-stacks integration demo. I would leave it running for a couple of hours at a time, let's say, and I would come back and sometimes things would be deleted. This reproduces when the controller manager — I believe — has restarted, for whatever reason it may be, and then the controller manager comes back up.
A: You know, with Crossplane managing some instances that are less ephemeral and more permanent or persistent — you don't see this issue when you're testing fixes locally with, you know, minikube or MicroK8s or whatever it may be, because you clean them up and move on your way before the hours pass at which this might hit. So I think this speaks to a definite gap: we're missing a dogfood or long-haul type of scenario that we're running.
A: Awesome. I mean, this is great — this speaks to the investment that we put into our release engineering: we can get a patch out, and then basically anyone on the team can run that as well. Being able to turn around fixes for issues that we find and get them out into the community quickly — it's really amazing that we're able to do that. So the next release that we're working toward is the 0.6 timeframe — our monthly release.
A: I'm going to quickly bring up the 0.6 roadmap. I think we need to take another pass to update it with what we're committing to or planning in 0.6, but I think the broad strokes are largely accurate: we'll be investing in the permissions, roles, and security in the stack manager — Marcus will be focusing on that.
A: Some other scenarios around the stack manager — management of installation, observability of stacks, template stacks — are something that Susskind will continue to be investing in. Let's see what else is here... easy stacks, or resource packs — that's something that Wafak and, I believe, others are going to be tackling. Continuing to move APIs towards beta is something we'll be investing in as well. Are there large efforts here that I'm blanking on, that are not included in this roadmap list and that we should be updating it with, anybody?
E: More specifically — thank you — and then the other thing I was going to mention is that we might want to include something on the roadmap here for the kind of bring-your-own-cluster work, and then any kind of side-effect work that happens with that. Oh, I shouldn't say "side effect," that's confusing — what I mean is the work on Kubernetes cluster provisioning and the claiming stuff that goes along with that. So I wonder if we want to call that out in some manner.
A: Awesome. It's not on the roadmap right now — I know that we have an issue tracking it — but some of the feedback that we received at KubeCon was interest in VM types, and with Packet we have the machine instance abstraction. So maybe we could at least add to the roadmap — whether or not we have an issue tracking it — adding Google VMs, Azure VMs, and EC2 instances. Yeah, good idea.
A: Cool. So KubeCon San Diego was last week. Thank you, everybody, for coming down here to visit me — that was very kind of you; it was good to see your faces down here. I wanted to open the floor for just a few minutes for any high-level trends or observations or discussions that you had with the community and the ecosystem there, because there was a ton.
E: Yeah, I already brought it up briefly, but people are really interested in bringing their own Kubernetes clusters and using them with Crossplane, as well as importing existing resources, because it's likely that many adopters of Crossplane will have some infrastructure that is already set up. One of the big values of Crossplane pays off when you are managing all of your infrastructure with it, and, unfortunately, many organizations still have a fair number of existing infrastructure instances — so being able to support that matters.
E: That is something people are definitely excited about, and I think that is the kind of functionality that would move Crossplane from "this is a cool thing to run some of your infrastructure with" to "this is a way to manage all of the infrastructure in your organization." So I felt like I heard a lot of strong sentiment in that regard.
A: And that was kind of a surprise to me, a little bit, because I figured that people wanting to take Crossplane for a test spin, or to get more comfortable with it, would want to use it on ephemeral test infrastructure and just build more confidence in it — as opposed to having it manage some other existing stuff that is important to them, or that their environments have a dependency on. Yeah.
E: I think it's true that if people are testing it out, they definitely will do that. I think one of the things that deters people from even testing it out is if they're thinking, "I'm not going to be able to manage all this stuff I already have, so what's the point in even trying this right now?" But I think if they were testing, they'd probably not be importing resources as the first thing, because those are likely long-lived resources that they wouldn't want to mess with.
B: That was useful. I thought there were also a lot of integration opportunities, and a lot of people approaching us about creating stacks, and also about using stacks for their internal deployments — so that was really encouraging. And then there were other types of integration opportunities, like places where Crossplane could just be included within an existing tool as an add-on. One of those places is MicroK8s, the Ubuntu Kubernetes distribution. I created a PR on there to suss out whether or not folks would be interested in adding Crossplane, and the maintainer of the project kind of gave it a pre-thumbs-up at KubeCon at the booth. I'm hoping that's the first example we can make of this.
D: I was wondering — sorry to interrupt you, Marcus — I was wondering if you guys came across any others: anything presented at KubeCon that's similar to Crossplane, not necessarily competitive, but from how much you saw there, does it sound like the need for something like Crossplane is felt in the community?
B: The thing that I saw that was most similar was the Open Service Broker's ability to provision resources like MySQL from a service catalog. But for anything that even came close to resembling Crossplane — and really for any tool where one provider offered something and another provider offered something similar — there was always some differentiator, and for Crossplane that differentiator over Open Service Broker was the abstractions that we offer.
B: So that was the main differentiator, but it was encouraging to also communicate with those communities. We had Jonathan — I'm forgetting his last name — he did one of the Open Service Broker talks (not KUDO), and he came to our booth, and we sort of traded position points to understand what the tools do now and where they're different. Also, on the stack side, we've been looking at template stacks, and KUDO has some similar functionality.
B: They have functionality there that we would really want to have in template stacks already, so that was also cool. And — I'm going to blank on his name — a KUDO developer came over and talked to us about where they're headed with that, and how they would appreciate any kind of support or development work that we could do towards it. At the same time, these projects having their own life cycles is probably better for the overall community: just because there's one existing implementation doesn't mean that it's the be-all end-all.
C: Yeah, and also just in general: we had, I think, about 500 fliers that we printed out for Crossplane, and they were all gone by the end of the second day, so we had to print out a whole other ream of flyers. It was just really awesome to see a ton of community interest, and having a more Kubernetes-native way of doing cloud service provisioning really resonated with people — so that was really good. And then there was just a lot of excitement around the continuous deployment projects — seeing the integration with Argo, seeing the integration with GitLab — and then having others come up that wanted to do a similar type of integration, or at least document how they work together. So that was cool.
A: Cool, alright — let's move ahead then.
A: Alright. So, just real quick, I wanted to call out here that we started keeping a little list of all the infrastructure stacks that have been developed or are ready for consumption. That's a little section of the readme now — there are links to all three major cloud providers, the Rook stack, the Packet stack, the Cloudscale stack. That list will continue to keep growing and we will update it — it's great to see that grow. There's a link to it in the agenda document.
E: Yeah, just basically saying that the framework for implementing them is now included in crossplane-runtime. So anyone who is building stacks should be able to get that functionality when they update crossplane-runtime to latest — I'm sure it will be in the next release we cut of crossplane-runtime, obviously — and it makes it pretty easy to execute some table-driven integration tests. So basically what it does is: it either spins up a cluster locally, or it uses whatever cluster is configured.
E: This isn't long-running, but it does help us identify small bugs, and it also helps us verify our examples. Another thing along with this is the generic service examples I've been copying over to the individual stack repos — basically, we can guarantee that those are valid by running integration tests that actually create them and make sure the status is as expected.
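The shape of that harness can be sketched roughly as follows. This is a minimal illustration, not crossplane-runtime's actual API — the function names and the `KUBECONFIG` convention here are assumptions: pick whatever cluster is configured (else create a local one), then run table-driven cases that apply each example and compare the observed status to the expected one.

```python
def pick_cluster(env, create_local_cluster):
    """Use the configured cluster if one is set, else spin up a local one."""
    kubeconfig = env.get("KUBECONFIG")
    if kubeconfig:
        return kubeconfig
    return create_local_cluster()

def run_cases(cases, apply, read_status):
    """Table-driven check: apply each example, then compare observed status."""
    failures = []
    for case in cases:
        apply(case["manifest"])
        if read_status(case["manifest"]) != case["want_status"]:
            failures.append(case["name"])
    return failures

# Hypothetical usage, with stubs standing in for real cluster/apply machinery:
statuses = {}
cases = [{"name": "mysql-example",
          "manifest": "mysql-instance",
          "want_status": "Ready"}]
apply = lambda m: statuses.__setitem__(m, "Ready")  # stub: everything becomes Ready
read_status = lambda m: statuses.get(m)

print(pick_cluster({"KUBECONFIG": "/tmp/kind.kubeconfig"}, lambda: "local-kind"))
print(run_cases(cases, apply, read_status))  # [] -> all examples verified
```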
D: Awesome, Dan. I had a question — I remember we had this discussion at some point. For the integration tests against actual cloud stuff — like if you're doing integration on GCP or AWS, actually going and installing the stack on GCP and then running stuff on it — is that something we're going to see coming in the future, or does this work already contain part of that?
E: So this does work by actually provisioning resources on the actual cloud providers — actually bringing up the infrastructure; by default that's what this will be doing. We probably also want to set up, in the future, some kind of mocked HTTP layer to mock the cloud APIs, so that we can run these on a more frequent basis and get some lighter-weight integration tests that could potentially run on every PR — because we obviously don't want to spin up a ton of infrastructure on every PR.
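As a sketch of that mocked-HTTP idea (an assumption about the approach, not existing Crossplane tooling): stand up a local HTTP server that fakes a cloud provider's API, point the client at it, and assert on behavior without creating any real infrastructure.

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class FakeCloudAPI(BaseHTTPRequestHandler):
    """Fakes a single cloud endpoint: any GET returns a ready database."""
    def do_GET(self):
        body = json.dumps({"name": "db-1", "state": "RUNNABLE"}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep test output quiet
        pass

# Port 0 asks the OS for a free port, so tests never collide.
server = HTTPServer(("127.0.0.1", 0), FakeCloudAPI)
threading.Thread(target=server.serve_forever, daemon=True).start()

url = f"http://127.0.0.1:{server.server_port}/instances/db-1"
with urllib.request.urlopen(url) as resp:
    instance = json.load(resp)
server.shutdown()

print(instance["state"])  # RUNNABLE
```

A controller under test would be pointed at this endpoint instead of the real provider, making the test cheap enough to run on every PR.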
A: You know, they have some interesting philosophies and takes on how to stay productive, how to stay engaged, and how to collaborate as a team. One of the things that they mentioned to me — it was actually Dylan Griffith who mentioned it — is that they have a shared philosophy, a shared value maybe even, that reviewing pull requests — or merge requests, in their land — is a top priority.
A: So we've had a couple of discussions in retrospectives around the length of time that it takes for pull requests to be approved, and the back-and-forth and such, and this shared value of making reviewing pull requests the top priority is, I think, really smart. It can keep work moving through the pipeline, it can keep us productive, and reduce frustrations. And I can personally take on some accountability here, and acknowledge that I have not always prioritized reviewing pull requests as a top work item for my focus. So that's the change that I want to start making for myself, and the goal is to build it into our culture as a community here: reviewing pull requests is a top priority. What does the team think?
A: I think that enabling other people is fantastic for so many reasons — besides just scaling the team's capacity and ability to deliver and execute, it has a lot of other benefits. So I totally think that's part of a culture and a set of values that I loved, that I think is important, and that I'd love to see as part of this particular community as well.
F: A second, follow-up question for you: you acknowledge that you feel like you have some role in this going forward. What are your thoughts on how you want us to bring that up to you in a way that's supportive? Do you have any preferences on that?
A: Well, a couple of things. One thing that the GitLab folks mentioned is that they have clear ownership of merge requests: they use the assignee field to make sure a merge request is always assigned to whoever has the action at that point. If you need review or feedback, it's assigned to the person you want feedback from, and that's an indication to them that it's in their work queue — that they're expected to be putting effort into it and providing that feedback. And then, when they're done and they've given all their feedback, they can assign it back to the PR author to respond to or incorporate the feedback. So having clear ownership there is one good way to do it.
A: Another way to answer that question: if we adopt a philosophy of prioritizing review work — reviewing pull requests — as an important thing, and that shared value and expectation is not being met, then — I can't speak for everybody, but I personally am just fine with being pinged or called out: "Hey, I think this pull request is assigned to you," or "I am expecting feedback from you — can you please take a look?"
A: — you know, to bring some attention to it, let's say. I have no problem with that being done in whatever way — in the comments of the PR, or a Slack message, or even a more public sort of "Jared, you said you would look at these — weren't you looking at these?" I'm okay with that, personally; I can only speak for myself on that.
F: Yeah, because I was getting more at the second part of it. The assignee thing sounds pretty useful, so I'm going to start trying that. But I was getting more at the second part because, even if you feel like you have some responsibility there, if we have a standard that we agree upon as a group and we didn't say anything when we felt like you were doing something below that standard, then it's also on us for not saying anything to you, right?
A: And that's one of those Five Dysfunctions of a Team, right, Daniel — accountability. Yeah, yep. And so I believe in that: if I find myself personally not meeting a shared expectation of the team, then I'd love to be held accountable. Cool, cool. So I put this on the agenda last night, before I responded to the feedback on it — I don't know if Steven has been able to take a look at it and give some new thoughts.
A: Right. I don't see anything else on the agenda for the main topics of the meeting, so if somebody wants to bring up another topic, that's perfectly fine right now. Otherwise we'll move into the optional-time section here to dive into some slightly more technical discussions — people are more than free to go ahead and drop off the call while we get into some boring technical details. But were there any other community topics before we get into the optional technical section?
E: Yeah, so I put this in the optional time because it's — it was obviously a discussion, but it's more of an informational thing that I'm not necessarily requesting feedback on, unless people want to give it. I just want to put it out there, because I feel like this is a pretty big topic and a pretty big design decision.
E: So anyway: we've been talking about — and we got a lot of feedback at KubeCon about — the ability to bring your own cluster, and thinking about how that would be designed. The issue here that Jared has open has some of the discussion around that, and it brought up some other questions about how Kubernetes clusters work in Crossplane at all.
E: If you've seen any of the demos or anything like that: a lot of the time we've created Kubernetes clusters and then deployed an application into them. So at application-provisioning time we're also creating a Kubernetes cluster, which doesn't feel super realistic in an organization or a team — typically you're not creating a Kubernetes cluster for every application you run.
E: Unless you're really big on clusters and all that kind of stuff — but even then you're probably going to reuse them in some form or fashion, or use them for multiple applications. So just because we do that in those demos doesn't mean it's what we should do, or what you have to do. You can obviously statically provision Kubernetes clusters, you could dynamically provision them ahead of time, etc. — there are a lot of different ways to do it.
E: Where the more interesting things come in is how Kubernetes clusters are consumed, how they're different from the other managed resources we support, and how they're created. Really, the three main scenarios for how you'd want a Kubernetes cluster consumed in a Crossplane ecosystem would be: one, a single namespace is consuming it — that's kind of what we're doing when we actually create the Kubernetes cluster and deploy an application into it, as in "this is one cluster for this application."
E: Two, you may want a cluster that basically all of your applications are being deployed into — so across all namespaces — or, three, you may want some subset of the total namespaces. Right now, really only the first one is supported, because of how we schedule a Kubernetes application — which, if you're not familiar (likely everyone on this call is, but I'll do a brief rundown just for the recording): a KubernetesApplication is our way of encapsulating Kubernetes resources that we want deployed into a target cluster.
E: We also have the concept of the Kubernetes provider, which was created when the Rook stack was created, and we thought about using that — as you can see in this issue — to support the ability to import clusters, because all you need is that secret with your kubeconfig information, essentially; you can then use it to create a client and deploy workloads into the cluster. So, right now, the Kubernetes provider is cluster-scoped.
E: Another option is multiple claims being able to claim that managed resource. That changes some things relative to our other managed resources, which, as designed right now with our controllers, do not support multiple claims claiming one managed resource. There is a difference between Kubernetes clusters — managed Kubernetes offerings — and the other managed services that we support.
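The one-claim-per-managed-resource behavior mentioned here can be illustrated with a toy binding function (a simplified model, not the actual crossplane-runtime claim-reconciler logic): once a managed resource records a claim reference, a second claim cannot bind it — which is exactly what would have to change to let many namespaces share one Kubernetes cluster.

```python
def bind(managed, claim):
    """Bind a claim to a managed resource unless it is already claimed."""
    current = managed.get("claimRef")
    if current is not None and current != claim:
        return False  # today's behavior: one claim per managed resource
    managed["claimRef"] = claim
    return True

cluster = {"name": "shared-gke-cluster"}  # hypothetical managed resource
print(bind(cluster, "team-a/app-cluster-claim"))  # True  (first claim wins)
print(bind(cluster, "team-b/app-cluster-claim"))  # False (already claimed)
```

Supporting the shared-cluster scenario would mean relaxing this check for cluster-like resources, or modeling consumption through something other than the claim itself.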
E: For example, if you didn't want non-infrastructure, non-platform-team people to be able to enable that functionality. So anyway, that's kind of a short — or long — brain dump of what we've talked about there. I just want to get it out there in case anyone has strong opinions about how this should look, or anything like that. We're going to be working through a design on this — we want to support that BYO-cluster scenario really soon.
E: So if anyone in the community has strong opinions or thoughts about what they feel it should look like to consume a Kubernetes cluster — from either the application-owner side or the infrastructure-owner side — then now would be a good time to present that; not necessarily here in this meeting, but now in general. So that's it from me.
E: So we're going to put together some sort of design document — a one-pager, whatever. I think Phil has more of a concept of what use cases need to be supported from an end-user perspective, and I have more of the "this is how the API design currently is, this is how the controllers work, and these are the implications of enabling that functionality." So hopefully we can put something together there.
E: That being said, even prior to that, if anyone has strong feelings about how this should work — or strong feelings about how it shouldn't work — then obviously that will help us drive the design. So, anyway.
B: So, a quick question I have about this — during KubeCon I thought I knew how something worked, but it turned out I wasn't so sure. In the comment just above your final comment here: is point number 2, bullet 2, in direct response to what Nick is saying in the previous block?
B: That would be a cool thing, and I think we were talking about whether or not it would be useful, and then I was thinking about how moving it to a provider-of-a-provider dependency might break the possibility of doing that in the future. But I do like the direction of this bullet number two, and I think it would allow us to perhaps investigate that later — that you could just create a KubernetesApplication, not really care what cluster it gets provisioned to, and just use some sort of cluster label selector, or a cluster-class label selector.
A: Marcus, I think there are two different issues there that are being conflated right now. One is — as is my understanding right now, and correct me if I'm wrong — a KubernetesApplication can influence, through a label selector, what cluster it is scheduled to. But you used the word "provision" there, which is the conflation I've seen here: a KubernetesApplication won't cause a cluster to be dynamically provisioned, but it can influence which existing cluster it gets scheduled to.
E: Yeah, I definitely understand what you're saying. To be honest, I think that's probably out of scope for this design, as far as dynamic provisioning with the KubernetesApplication goes. I think it's something that could easily be implemented in the future, depending on how we decide the model works. So if we're saying that, essentially, you have to create the cluster — however you dynamically provision one following this design — you could always do that simultaneously with your KubernetesApplication; it's just slightly less seamless, maybe. So that's probably something we will not address in this design, if I were to guess — but it is a cool thought, though, for sure.