From YouTube: 2019-12-10 Crossplane Community Meeting
A: So thank you for joining. We're going to do our normal check-up on our next milestone for a release, and then we have a lot of PRs here to talk about. It doesn't look like all the authors are currently on this call, so there may be some that we can skip over, and I'm going to let Daniel run that part of the meeting. But we'll start off with a quick check-up on 0.6. All right.
A: If you're not familiar with how we do our planning, we have this roadmap document in the main Crossplane repo, where we keep track of our goals for different releases and link to them as well. Our upcoming release, which we've defined here as targeting next Wednesday, a week from tomorrow, is going to be 0.6, and these are a couple of things that we're focusing on for it.
A: I think this could use some refinement, but essentially the main ideas we're moving towards are a better stack UX, so that stacks can be provisioned without having to write code, and you're able to customize the way you define claims and classes and that sort of thing. We have a lot of work on that, which we'll get to in the PRs. The other thing is the v1beta1 service APIs.

A: Primarily for this release we're focusing on getting GKE to v1beta1. We've previously focused on databases and caches for v1beta1 API quality, but Kubernetes clusters are a little bit more complicated, I guess you would say, in terms of what v1beta1 looks like: how you define masters and nodes and that sort of thing. So we'll take a look at that as well. But yeah, be looking for that release coming out next week, and we're going to aim, I think, for a feature freeze this Friday.
B: Okay, so first, I added a few of the PRs here, the first four, and for these ones I just wanted to call attention to them as probably needing help, since they've all been open for more than three weeks at this point. The first one looks like it's the stack versioning design. I'm just going to pop these up on the screen, and since the authors aren't here, we won't talk about them very long. So, first one: this stack versioning design.
B: I believe this one and the next one are related to some of the permissions work that's been going on. That's about all the context I have for them, though. The next one is also Marcus's, from twenty-seven days ago: using a ClusterRole for namespaced stack deployments. Sounds similar.
A: Absolutely, if you want to open that one up, I'll give a little bit of context on this. As you can see, I've been working on this one with a PR open and trying to document the design decisions I'm making, because with GKE clusters there is kind of an interesting situation: to create a node pool for a GKE cluster, the cluster has to already exist, although you cannot create a cluster with no node pools. And per our API design, which Muvaffak has done a lot of work on...
A: We don't like to have side-effect or inline provisioning of resources that are also managed by Crossplane, so we want to model node pools as a Crossplane resource themselves. The issue that this will fix also specifies v1beta1 GKE node pools, and the API types are defined in this PR; we'll decide whether we want to implement them at the exact same time or not. You'll also see that two of those bullet points are related to supporting both v1alpha3 and v1beta1 of GKECluster.
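For a sense of what "node pools as their own resource" means in practice, here is a minimal sketch; the type and field names are illustrative assumptions, not the API actually defined in the PR.

    package v1beta1

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // NodePoolSpec models a GKE node pool as its own managed resource
    // instead of an inline field on the cluster spec. Hypothetical fields.
    type NodePoolSpec struct {
        // ClusterRef points at the GKECluster this pool belongs to. The
        // node pool controller would wait until the referenced cluster is
        // ready, since GKE requires the cluster to exist before pools can
        // be added to it.
        ClusterRef corev1.ObjectReference `json:"clusterRef"`

        // InitialNodeCount is passed through to the GKE API.
        InitialNodeCount int64 `json:"initialNodeCount,omitempty"`
    }

    // NodePool is the managed resource itself.
    type NodePool struct {
        metav1.TypeMeta   `json:",inline"`
        metav1.ObjectMeta `json:"metadata,omitempty"`
        Spec              NodePoolSpec `json:"spec"`
    }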
A
So
right
now,
essentially,
you
can
only
create
a
GK
cluster
with
the
default
node
pool.
That's
essentially
created
inline
and
no
pools
are
not
a
resource
modeled
by
cross
plant
as
a
custom
resource,
and
it
was
our
initial
thought
that
we'd
like
to
keep
support
for
that
for
the
time
being,
while
we're
supporting
this
new
version,
which
requires
creation
of
the
GK
cluster
and
the
node
pools.
A
However,
it
really
doesn't
work
super
well
with
conversion
functions
since
we're
creating
kind
of
like
one
resource.
When
you
do
conversion
functions
with
custom
resources
in
cross
plane,
I
mean
in
kubernetes.
You
essentially
have
to
preserve
all
information.
So
what
that
means
is
that
there'd
have
to
be
fields
that
kind
of
kept
the
the
inline
node
pools
from
V
1
alpha
3
and
created
actual
node
pool
resources,
since
it's
dividing
into
two.
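To make the round-trip constraint concrete, here is a hedged sketch with stand-in types (not the real Crossplane structs or conversion machinery): converting v1alpha3 to v1beta1 and back must lose nothing, so the inline pools would have to be stashed somewhere, such as an annotation.

    package conversion

    import "encoding/json"

    // Minimal stand-in types; the real structs have many more fields.
    type InlineNodePool struct {
        Name      string `json:"name"`
        NodeCount int64  `json:"nodeCount"`
    }

    type GKEClusterV1alpha3 struct {
        Annotations map[string]string
        NodePools   []InlineNodePool
    }

    type GKEClusterV1beta1 struct {
        Annotations map[string]string
        // No inline node pools in v1beta1; they become separate resources.
    }

    // The annotation key is a placeholder.
    const inlinePoolsAnnotation = "gke.example.org/v1alpha3-inline-node-pools"

    // convertAlphaToBeta shows why losslessness is awkward here: the inline
    // pools must be stashed so a round trip can restore them, while separate
    // NodePool resources would be created from them elsewhere.
    func convertAlphaToBeta(in *GKEClusterV1alpha3, out *GKEClusterV1beta1) error {
        raw, err := json.Marshal(in.NodePools)
        if err != nil {
            return err
        }
        if out.Annotations == nil {
            out.Annotations = map[string]string{}
        }
        out.Annotations[inlinePoolsAnnotation] = string(raw)
        return nil
    }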
A
I
will
probably
be
pushing
for
just
moving
on
to
view
on
beta
one
there,
but
I
have
some
open
questions
there
with
answers
as
I've
kind
of
encountered
them,
as
well
as
comments
below
where
I've
reviewed
the
code
myself
and
I'm
kind
of
like
pointing
out
some
of
the
places
where
I've
made
kind
of
like
subjective
decisions
and
that
sort
of
thing,
and
so
I'd
love
any
input
or
feedback
there
from
anyone.
Especially
if
you
have
a
particular
use
case
for
GK
clusters.
C: Yeah, actually nothing much to talk about. We are bumping the controller-runtime dependency to 0.4, because there's a breaking change in client-go, and newly created stacks are not able to import crossplane-runtime due to that. So we need to bump it, and I just opened this PR and the corresponding PRs in the other repos.
A: Currently, the version of controller-runtime we're using is on Kubernetes 1.14, I believe, and there's an issue within the 1.14 releases of client-go where it is not compatible with all the other 1.14 k8s.io vendored dependencies, so I would be very much in favor of getting these merged and bumping to the newer version of controller-runtime.
C: Yeah, so basically this PR is the main one, and it does only the bumping, but in the other ones, the Crossplane and stack repositories, we have replace statements that use this commit of the PR. Basically, when we merge this, we need to update the other ones to, you know, update the controller-runtime that they depend on as part of that process.
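As an illustration of that flow (module paths, versions, and the commit pseudo-version below are placeholders, not the actual PRs), a consuming repo's go.mod might carry a temporary replace directive like this until the runtime PR merges:

    module github.com/example/stack-example // hypothetical consumer

    require (
        github.com/crossplaneio/crossplane-runtime v0.2.0 // placeholder version
        sigs.k8s.io/controller-runtime v0.4.0
    )

    // Temporary: pin crossplane-runtime to the open PR's commit; drop this
    // replace once the PR merges and a tagged release exists.
    replace github.com/crossplaneio/crossplane-runtime => github.com/example/crossplane-runtime v0.0.0-20191208191931-abcdef123456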
A: So this is a fairly new pull request; Nick just opened it on Sunday night, I believe, and Daniel, I think you actually gave this a review, it looks like. This is basically allowing users or stack owners to more easily define their own resource claims, so this is a pretty big improvement in my opinion. This is the design for it, so I think there'll be a fair amount of discussion here.
A: Yeah, so this is another one that I have here. Essentially, for context: with EKS on AWS, the connection secret for a cluster that you provision actually gets updated over time. I think Muvaffak knows the actual interval; it's something like every hour or so, so it's frequent enough. When you provision a managed resource and it's claimed, the connection secret is propagated to the namespace of the claim.
A: So if that propagation happens just once, your connection information is quickly going to get out of date for something like an EKS cluster. What we have, and Nick implemented this maybe a few weeks ago, is a secret reconciler which basically uses annotations to propagate a secret continuously. You can think of it as watching two different secrets: one of them is the managed resource secret and one of them is the claim secret, and when the managed resource updates its connection secret, that gets propagated to the namespaced secret, the claim secret.
A: Excuse me, they're both namespaced, so that's really great. But right now we only have the ability for one managed resource secret to be propagated to one claim secret, which makes sense in our model right now. The impetus for this, which I think is linked right there in crossplane #1101, is our new Kubernetes target design, which basically allows us to schedule Kubernetes applications in multiple namespaces to the same Kubernetes cluster.
A
So
in
order
to
be
able
to
do
that,
you
need
both
propagate
the
managed
resource
secret
to
multiple
kind
of
claim,
connection
secrets,
they're
not
actually
claimed
explicitly,
but
essentially
just
have
the
ability
in
the
secret
reconciler
to
propagate
one
kind
of
managed
resource
secret
to
multiple
different
users
of
it.
So
this
PR
adds
the
ability
for
the
secrets
to
be
propagated
to
multiple
kind
of
like
namespaces
that
are
using
them,
so
that
enables
the
design
in
the
consuming
plates
gates,
closures,
PR,
they're,.
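A minimal sketch of the fan-out idea, not the actual crossplane-runtime reconciler: the annotation key and its comma-separated list format below are assumptions for illustration.

    package propagate

    import (
        "context"
        "strings"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/types"
        "sigs.k8s.io/controller-runtime/pkg/client"
    )

    // Hypothetical annotation listing every destination secret, so one
    // managed resource secret can feed many consumers.
    // Example value: "ns-a/db-conn,ns-b/db-conn".
    const propagateToAnnotation = "example.org/propagate-to"

    // propagate copies the connection data of `from` to every destination
    // named in its annotation, keeping all consumers in sync as the
    // underlying connection secret rotates.
    func propagate(ctx context.Context, c client.Client, from *corev1.Secret) error {
        for _, ref := range strings.Split(from.Annotations[propagateToAnnotation], ",") {
            parts := strings.SplitN(ref, "/", 2)
            if len(parts) != 2 {
                continue // skip malformed entries
            }
            to := &corev1.Secret{}
            if err := c.Get(ctx, types.NamespacedName{Namespace: parts[0], Name: parts[1]}, to); err != nil {
                return err
            }
            to.Data = from.Data
            if err := c.Update(ctx, to); err != nil {
                return err
            }
        }
        return nil
    }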
A: So for this one, I've probably just been doing the reviews. If you want us to move to that assigning process, I'd be happy to assign it. This is probably going to be just something Nick, who I think has reviewed it once already, reviews and approves and that sort of thing, so we could assign him if you wanted to; he's aware of this. But yeah, just for my own personal context on the assigning: how does assigning differ from reviews?
B
So
that
if
there
are
multiple
like
rounds
of
feedback,
it's
easier
to
tell
whose
cord
the
ball
is
and
okay
cool,
because
if
you
like,
if
Nick
left
feedback
and
then
wanted
feedback
from
you
than
he
could
assign
you
again,
awesome
yeah
we're
still
experimenting.
So,
if
you
think
it's
not
useful,
that's
good
feedback
to
yeah.
A: So this is something that's been open for a while. If you remember, in a meeting a few weeks back we had the initial PR, which added the library to crossplane-runtime to allow us to do integration tests on actual live clusters and that sort of thing. What this addresses is that all of the stacks, when they implement integration tests, whether to test things like dynamic provisioning or even just to register the controllers to run, need to also have the core Crossplane CRDs installed in there.
A: That would be things like KubernetesApplication and then all the different claim types. One way to do that would be to actually copy those CRDs into the repo where the integration tests are defined, but that's not really a great thing to do: having these CRDs that are defined somewhere else, which you have to continuously copy over into a version control system.
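The shape of the problem can be sketched with controller-runtime's envtest, whether or not the runtime's own integration library works exactly this way; the CRD directory paths below are placeholders, and the point is simply that the core Crossplane CRDs have to come from somewhere rather than being copied into each stack repo.

    package integration

    import (
        "testing"

        "sigs.k8s.io/controller-runtime/pkg/envtest"
    )

    func TestDynamicProvisioning(t *testing.T) {
        // envtest spins up an API server and installs CRDs from local
        // paths. The stack's own CRDs live in-repo, but the core Crossplane
        // CRDs (KubernetesApplication, claim types, ...) are defined
        // elsewhere; both paths below are illustrative.
        env := &envtest.Environment{
            CRDDirectoryPaths: []string{
                "../config/crd",                // this stack's CRDs
                "../vendor/crossplane/cluster", // core Crossplane CRDs (placeholder)
            },
        }
        cfg, err := env.Start()
        if err != nil {
            t.Fatal(err)
        }
        defer env.Stop()
        _ = cfg // use cfg to build a client and register controllers here
    }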
A
So
the
integration
tests
have
been
something
that
haven't
really
been
made
a
priority,
but
more
of
just
like
this
is
something
we
want
long-term
and
so
I'd
love
to
be
able
to
push
these
forward,
because
I
think
that
our
iteration
speed
on
developing
is
gonna
be
greatly
enhanced
by
I'm
integration
tests
and
they
seem
to
kind
of
keep
hitting
blockers
or
just
being
like
side
worked
on
on
the
side
and
that
sort
of
thing.
So
this
is
just
kind
of
a
plug
for
those
awesome.
C: ...a minimal GCP resource pack, where you have everything, you know, in the same network and so on. So if you want to develop something like that, what you have to do is basically map the variable references from your CR to the ones that you use in the kustomization YAMLs, and you are pretty much up and running.
C: To inject your custom logic, one way is, you know, this reconciler creates an overlay kustomization file, and you can supply your own functions where you edit that kustomization object: you can add a new name prefix or, you know, some other logic and such. So you see here the defaults: the name prefix, the label propagator, and the reference adder.
C
You
have
different
resources,
and
at
that,
like
step,
you
can
inject
your
custom
logic
to
interoperate
on
the
kubernetes
object
itself,
like
not
customers,
customers
are
but
like
objects
and
themselves
before
the
deployment,
and
after
that
it
just
applies
all
the
resources
that
is,
like.
You
know,
reconcile
conciliation,
which
is
pretty
simple.
Actually
it
just
runs
custom,
easy
and
then
last
the
patches,
and
that
applies
each
and
each
generated.
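A hedged sketch of the pipeline as described, with illustrative names only (the real template-stack code will differ): build an overlay kustomization from the claim, let the stack author's functions mutate it, then run kustomize and apply the output.

    package templates

    import (
        "sigs.k8s.io/kustomize/api/types"
    )

    // Patcher is a hook a stack author supplies to edit the generated
    // overlay kustomization, e.g. adding a namePrefix or extra labels.
    type Patcher func(*types.Kustomization) error

    // renderOverlay builds the overlay kustomization for one claim and runs
    // each patcher over it; the caller would then invoke kustomize and
    // apply the rendered resources.
    func renderOverlay(claimName string, patchers []Patcher) (*types.Kustomization, error) {
        k := &types.Kustomization{
            // Defaults mirroring what the transcript calls the name
            // prefix, label propagator, and reference adder steps.
            NamePrefix:   claimName + "-",
            CommonLabels: map[string]string{"crossplane.io/claim": claimName},
        }
        for _, p := range patchers {
            if err := p(k); err != nil {
                return nil, err
            }
        }
        return k, nil
    }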
C: It somehow overlaps with the template controller. It's not completely overlapping, though: for example, in this case our goal is specifically resource classes and their configurations, whereas a template controller is more "do whatever you want." So here we are optimizing more for that case, but in the end, after template controllers arrive, we could just do the same thing with them; it's just more geared toward specific things.
A: All right, so like I said, this was talked about at the last community meeting, I believe, but the patch release has actually happened, so I just wanted to point out exactly what the change was again. At the last community meeting you'll see that we had the stacks do a patch release, and this was related to custom resources essentially getting cleaned up inadvertently through garbage collection. That fixed it in relation to the individual stacks, but we still needed to fix it in the actual stack manager.
A: The issue that was being experienced, which you can read about in the linked release here, is that ClusterStackInstalls are namespaced but CRDs are cluster-scoped, so there was a basically invalid relationship between them: we had a namespaced resource set as the owner of a cluster-scoped resource, which is kind of like a cross-namespace owner ref. So the CRDs were getting cleaned up through garbage collection, and when you delete a CRD, all the corresponding custom resources go with it.
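To make the invalid relationship concrete, here is an illustrative reconstruction of what the CRD's metadata effectively looked like (the names are made up); Kubernetes garbage collection cannot resolve a namespaced owner from a cluster-scoped dependent, so it treats the dependent as orphaned and collects it, taking every custom resource of that type along.

    package gc

    import (
        apiextv1beta1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1beta1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // badOwnership reconstructs the problematic state: a cluster-scoped CRD
    // whose owner is a namespaced ClusterStackInstall.
    func badOwnership() *apiextv1beta1.CustomResourceDefinition {
        crd := &apiextv1beta1.CustomResourceDefinition{}
        crd.Name = "gkeclusters.compute.gcp.crossplane.io" // cluster-scoped
        crd.OwnerReferences = []metav1.OwnerReference{{
            APIVersion: "stacks.crossplane.io/v1alpha1",
            Kind:       "ClusterStackInstall",
            Name:       "stack-gcp", // lives in a namespace: invalid owner
            UID:        "00000000-0000-0000-0000-000000000000", // placeholder
        }}
        return crd
    }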
A: So this was implemented and we got the 0.5.1 release out. The people who did work on this were Nick, Jared, and Marcus; special thanks to them. I don't think any of them are on the call, but I really appreciate y'all working through that, and it was definitely a big benefit for the project. So thank you for that, and this problem should be alleviated now. Another thing to keep in mind is that a PR we mentioned earlier, the cluster-scoped ClusterStackInstall, could potentially impact this.
A
So
as
we
mentioned
before,
this
issue
was
happening
because
we
had
a
name
space.
Cluster
stack
installed
with
owner
references
on
a
cluster
scope,
CR,
D
and
so
I'm,
not
exactly
sure
how
this
looks
in
relation
to
the
labels.
If
we
want
to
continue
cleaning
up
with
labels
or
not
because
they're,
both
cluster
scoped
now
I
think
we
could
potentially
use
the
owner
references
again.
Don't
take
me
verbatim
on
that
I
I
would
want
to
check
with
Marcus
or
Nick
on
that,
but
so
anyway,
that's
how
we're
doing
it.
For
now
it
works.
A
Well,
it
doesn't
change
the
experience
from
a
user
perspective
at
all
and
you're
not
gonna,
see
random
deletion
of
resources,
so
it
was
really
nice
to
get
that
fixed.
All
right.
Last
thing
here
is
our
optional
time
for
technical
discussion.
Is
there
anyone
on
the
call
who
wanted
to
talk
about
anything
in
particular.
B: I believe part of the original motivation for doing the separate effort was that we thought that by doing a subset of the functionality for resource packs, we might be able to get it out sooner while the fuller template stack stuff was being worked on. I don't think I made that decision directly, though, but we can make this decision.
D
I'm
not
sure
I
haven't
looked
at
the
full
details
of
this,
but
it
does
seem
that
there
is
you.
You
could
also
talk
about
it
from
a
different
perspective
and
say:
okay,
can
we
get
an
early
version
of
template
stacks
right,
I,
don't
running
you
can
make
the
argument
on
both
sides
so
I'm,
and
if,
if
what
we
are
doing,
is
writing
a
controller
that
can
plug
in
variants
into
a
customized.
C: Actually, I think it would be good if we had a chance to sync, because I think there is some overlap, but there are also differences. With template stacks, I don't know how much it exposes to users, so maybe we might want to have one or two different options for developers, so they can just choose their level: either full YAML, or some Go code, or a mix as needed.