From YouTube: 2022-08-11 Crossplane Community Meeting
A
All right, recording has started. This is the August 11th, 2022 Crossplane community meeting. The agenda document is right here on my screen, but I will drop a link to it into the Zoom chat right now, so you have a direct link to it there as well. Feel free to add yourself as an attendee, or to add any agenda items you want to discuss. The agenda is open the entire meeting until we get to the end and adjourn, so feel free to add anything you want to talk about until we get there.

So let's hop on into releases. For 1.10, we are still less than a third of the way through the cycle; 1.9 was the middle of July or so. As a reminder, we made a decision to move to a three-month cadence for getting core Crossplane releases out, so we'll be doing that quarterly, meaning every three months or so. Three months from the middle of July carries us to the middle of October, so I'm proposing a release date of October 18th, which is a Tuesday — I think the week before KubeCon. That fits in nicely: we get a release out and have something to talk about at KubeCon, when we're doing our maintainer track, getting a little PR from the CNCF, and all that sort of stuff.

So I think that's the schedule I propose. The calendar has not been updated yet because I just wrote this in this morning on the community agenda document, so we can follow up to get the release table in the community calendar updated. But I think that schedule would work well given the three-month cadence we're going after now.
A
So we're a little bit less than a third of the way through. Something else to show you all: I took a pass at cleaning up what we're doing with our projects and releases. GitHub has now made what they were calling the beta projects into the default, stable project experience. We had been using milestone-by-milestone project boards — 1.7, 1.8, 1.9 — with the old-style boards, but now that the beta boards are the mainstream way to do it, I've gone ahead and created a new board. Oh, and I need to grant access to it as well, since I'm not logged in here.

All right, let's try this again — okay, sweet, there we go. Now it is publicly accessible. So, a quick primer on these new GitHub project boards and what we're looking at here.
A
Basically, they all pull from one source of issues and tracked items that get added to the project, and then you get a whole set of views and filters on top of it. The filter we have here is all issues that are priorities for the community. In this view we'll have what's in review, what's in progress, what's in design, etc., and then a backlog of issues that we know are important or are going to be worked on. Then we'll have other views, milestone by milestone. So we'll have a 1.10-scoped view that filters down to everything included in the 1.10 milestone, and we'll take pretty much that approach for all the releases — they will essentially be views into this one project.
A
So I think that will work very well, and we can get a little fancy with it too. We just created this, so it's still just a single backlog with a couple of different statuses for where issues are, and we're not filtering or separating out by milestone just yet — but that'll happen pretty darn soon, since we've got it set up and we're ready to use it. If folks have things they want to add to this project board, feel free to ping me and let me know, so I can get them included and we can prioritize and do all that sort of stuff.

So that's how we'll be tracking and giving visibility into what's in the releases. It's all in one board now, as opposed to different boards and different links for each milestone — it's concentrated in one place. Any notes on 1.10 or the efforts there, Bob?
A
You may notice there's a whole bunch of discussion that you've been driving and been a part of, like this issue here — foreground cascading deletion — so I've already included that in the design column, because we're already talking, discussing, and working on the design, right?
A
All right, then we can move on to providers. Is Christopher on the call? No, it doesn't look like Christopher's here today, but within the last six hours or so there's been a release of provider-aws, the 0.30 release. The release notes are here, covering a whole bunch of new services and fixes for existing managed resources as well. I think a bunch of good work went into that release. I don't see anyone's name on the call that I think would want to speak more to it, but if anybody has comments they want to add, feel free to add them now.

We also have a provider-kubernetes release that came out just after the last community meeting — so it's somewhat recent, but it's been almost two weeks now. Hasan, it looks like you ran this release. Do you want to talk a little bit about the functionality that's in there?
C
Sorry — yes, let me quickly check that. So provider-kubernetes didn't have the max-reconcile-rate flag, and in this release we added it, thanks to Bob, I think. More importantly, with this flag and its new default, provider-kubernetes now processes custom resources in 10 parallel threads, which is a performance improvement on its own, beyond just having the flag.
A
And before this change — thanks, Bob, for making it — was it processing resources entirely in a serial manner, one after another?
C
Yeah, actually the default value of this flag has a side effect of setting the max-concurrent-reconciles option of controller-runtime to 10; previously it was one.
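The setting Hasan describes can be pictured as a bounded worker pool: the reconcile queue is drained by at most N workers, and raising the bound from 1 to 10 is what turns serial processing into parallel processing. This is a minimal stdlib sketch of that idea only — it is not Crossplane's or controller-runtime's actual code, and the function names are illustrative.

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// reconcileAll drains the queue with at most maxConcurrent workers,
// mirroring what controller-runtime's max-concurrent-reconciles bound
// does. It returns the peak number of reconciles that ran in parallel.
func reconcileAll(queue []string, maxConcurrent int, reconcile func(string)) int32 {
	var wg sync.WaitGroup
	var inFlight, peak int32
	sem := make(chan struct{}, maxConcurrent) // capacity = parallelism bound

	for _, name := range queue {
		wg.Add(1)
		sem <- struct{}{} // blocks once maxConcurrent workers are busy
		go func(name string) {
			defer wg.Done()
			defer func() { <-sem }()
			cur := atomic.AddInt32(&inFlight, 1)
			for { // record the high-water mark of concurrency
				p := atomic.LoadInt32(&peak)
				if cur <= p || atomic.CompareAndSwapInt32(&peak, p, cur) {
					break
				}
			}
			reconcile(name)
			atomic.AddInt32(&inFlight, -1)
		}(name)
	}
	wg.Wait()
	return peak
}

func main() {
	objects := make([]string, 50)
	for i := range objects {
		objects[i] = fmt.Sprintf("object-%d", i)
	}
	// maxConcurrent=1 behaves like the old serial default;
	// maxConcurrent=10 matches the new provider-kubernetes default.
	peak := reconcileAll(objects, 10, func(string) {})
	fmt.Println("peak parallelism within bound:", peak >= 1 && peak <= 10)
}
```

With a bound of 1, the semaphore admits one worker at a time, which is exactly the "entirely serial" behavior described before the change.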
A
Cool, okay, that's great. That's definitely a performance improvement for sure, doing things in parallel now.
C
And while I'm unmuted: I contacted him, and he said he won't be able to make it today, but regarding the fuzzing, if I understood correctly, they are planning to take the tickets into this sprint already.
A
Awesome, thank you, Hasan, for that update — that sounds great, thanks for checking in. All right, and then there's been a provider-terraform release as well. It looks like Bob ran this one, with a number of fixes from both Bob and Yuri. Do you two want to tell us a little bit more about this release?
B
Yeah, this was basically the same change that went into provider-kubernetes, in that we're now supporting max-reconcile-rate as an input. The difference here is that provider-terraform is running the Terraform CLI, which is heavy if you've got that much workload on it, so I left the default at one, and it's basically up to the user to decide if they want to increase it and whether they want to put any resource limitations on it. So that's really the big change there. I do run it with 10 in our environment and it works great.
B
Oh, there was also a change with the Git credentials: making them available to local Terraform modules as well as remote ones, which I think Louise had requested. So that's in there as well.
D
Thank you, and thanks a lot for all the contributions and discussions all over the Crossplane ecosystem. We're very happy to have you in the community, Bob. Thank you so much.
A
Right on, Yuri — thanks for mentioning that, and I totally second all of it; those are great contributions. All right, so those are the recent provider releases that I'm aware of. I'm sure there are others, as the provider ecosystem continues to expand, so feel free to drop a link to any other provider releases from the ecosystem into the agenda document here and we'll get some coverage of them.
A
All right, moving on to the community topics section. We typically have a bunch of links here for interesting content about, or referencing, Crossplane. There's a couple of interesting links for some upcoming talks. This one here: Aaron Swegerson and Ryan Baker are going to do a talk at an upcoming VMware conference about integrating Crossplane and Tanzu — specifically, I believe, focusing on some supply chain security work. It should be pretty interesting to check out; you can see the link there for information on how to find that talk. And then there's a number of blog posts that folks have been writing recently that are linked here.
A
I would particularly call out Mauricio's talk — sorry, blog post — about integrating vcluster and Crossplane, which is a pretty interesting topic. Feel free to read through these blog posts; there's always interesting content being written in the ecosystem, and I love all of it.

Quick updates on the Linux Foundation mentorship program: we are working with both Rohika and Parul on their projects as we go through the summer term, and we're in the back stretch of it. They're twelve-week terms, I think, and we're in week 9 or 10 now, so we're in the final third — the final stretch — of these mentorship projects with the Linux Foundation.
A
Rohika has started writing the testing logic for detecting breaking changes in CRDs. She's adding the first condition — fields being removed from one version of the CRD to the next — so she's working on that logic and integrating it into the continuous integration system, so the check will run when people open PRs. That's her next step, something she's trying to get working end to end. Then we can add more rules and more logic for things that count as breaking changes in CRDs, but we're starting with one rule and plumbing it through end to end, so the CI system will run it, examine the changes, and report back — essentially causing errors as a status check on pull requests.
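The first rule described here — a field that exists in one version of a CRD but not the next is a breaking change — can be sketched in a few lines. This is a toy illustration, not the mentorship project's actual code: the type and function names are made up, and a real checker would walk the CRD's OpenAPI v3 schema rather than a flat map.

```go
package main

import (
	"fmt"
	"sort"
)

// fieldSet is a flattened view of a CRD version's schema:
// property path -> present.
type fieldSet map[string]bool

// removedFields reports the first breaking-change rule: any field
// present in the old version of the schema but missing from the new one.
func removedFields(oldV, newV fieldSet) []string {
	var removed []string
	for path := range oldV {
		if !newV[path] {
			removed = append(removed, path)
		}
	}
	sort.Strings(removed) // deterministic output for CI logs
	return removed
}

func main() {
	v1 := fieldSet{"spec.region": true, "spec.size": true, "status.id": true}
	v2 := fieldSet{"spec.region": true, "status.id": true}
	if breaking := removedFields(v1, v2); len(breaking) > 0 {
		// In CI this would surface as a failing status check on the PR.
		fmt.Println("breaking change, removed fields:", breaking)
	}
}
```

Note that the rule is deliberately one-directional: fields added in the new version are not flagged, since additions are generally backward compatible.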
A
So that's what Rohika has been up to. Parul has started writing her unit tests for connecting to private registries at runtime, and we're starting with Google's Artifact Registry — whatever it's called; sorry, I flake on the name a little sometimes. She has started writing that unit test, and Dan Mangum has been helping out with creating the private repositories and a CI process to build and push to them, so her unit tests have something to connect to and pull down. She's also made a number of additions to some of our provider and Crossplane upgrade integration end-to-end tests. Those have historically been a little unreliable, so she's added some really helpful logging to identify the cause of failures and troubleshoot them better. She's got a pull request open for that as well. So those are the latest updates for Rohika and Parul.

I would like them to speak for themselves, but I think it's after midnight where they live, so they're not in attendance. That's a really good effort from both of those folks, and I'm enjoying having them as mentors — sorry, mentees — this summer. So, for KubeCon North America 2022 in Detroit, Michigan:
A
The schedule has been released since the last community meeting, and there are eight different talks that are going to feature or mention Crossplane. I didn't quickly figure out a way to link to the search results, so unless somebody finds a nice way to do that: you just type "crossplane" and hit search, and there are all eight talks that will be about Crossplane in some shape or form. I'm really excited about continuing to get interesting content — use cases, scenarios, etc. — shared with the greater ecosystem. KubeCon is one of the bigger venues for our ecosystem to be on stage talking about what we're doing, so I'm really excited to see that, and to see more folks who are not maintainers — more end users, and folks from all across the community, the ecosystem, and other projects — integrating Crossplane and talking about it too. So: really excited for eight different talks in Detroit, and congratulations to everyone who got their talk accepted. Maybe they're on the call, maybe not, but congratulations to everyone either way.
A
We've raised this a couple of times on the agenda, so I'll probably leave it off next time, but Craig is still looking to connect with folks from the Crossplane community about their success stories, their use cases, etc. Feel free to click on this link, add yourself there, and participate in that conversation, and Craig will reach out to you.

This is the fuzz testing item I had asked for more details on — Hasan, thanks for connecting us on that. It hadn't been clear to me, so: the CNCF is collaborating with us to do some security testing, specifically fuzz testing, and we've gotten a handful of issues coming out of that.
A
It's not been immediately clear to me exactly what to fix or what to do with some of those. As Hasan told us, Muvaffak is going to be following up on them over the next couple of weeks, to dig in and see what remedies we would need to put into Crossplane to deal with the potential issues coming out of fuzz testing. One of the things I've had a little trouble with is that the output of some of these tests is kind of hard to parse — they say things like "unreproducible," and I'm not sure whether that means it's a flake, or something that's not an issue. So we need to dig into them a bit more to understand their severity and what they actually mean, which we will be doing. All right.
B
And so this is one of the things we talked about in the last meeting — I know Nick had said he thought this specific issue was probably a bug and was not intentional. The render function in the composite area was basically clobbering any existing owner references and just putting in the Crossplane controller owner reference. This change refactors that so it adds the controller reference to any existing owner references, preserving owner references that were added by other Kubernetes controllers.
A
Yeah, good point, Bob — I remember that conversation from the last community meeting, and the general consensus was that that was probably a good approach to take. Thanks for following up and getting a pull request open with that code change. We'll try to get one of the Crossplane maintainers to take a look at it and provide feedback, or think through that scenario a little further to make sure it's the safe thing to do. Is there any part of that change — it looks fairly straightforward, Bob — that you want to specifically call out for special attention in the review? Or is it pretty much straightforward: the addition here, and you've got some test cases for it too.
B
Yeah, I think it was pretty straightforward — basically changing the override to an append. And the nice thing about the way that code is built is that if the append finds the object already there, it just updates the existing entry, so there are no issues with duplicates or anything. So yeah, like you said, I think it's pretty straightforward.
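The append-instead-of-clobber behavior Bob describes can be sketched like this. This is a simplified stand-in, not Crossplane's actual code: the `OwnerReference` type here is a pared-down stand-in for Kubernetes' `metav1.OwnerReference`, and the function name is illustrative.

```go
package main

import "fmt"

// OwnerReference is a pared-down stand-in for metav1.OwnerReference.
type OwnerReference struct {
	UID        string
	Controller bool
}

// addControllerReference appends ref to the existing list instead of
// replacing the whole list. If an entry with the same UID is already
// present it is updated in place, so no duplicates are introduced —
// the behavior described for the refactored render function.
func addControllerReference(owners []OwnerReference, ref OwnerReference) []OwnerReference {
	for i := range owners {
		if owners[i].UID == ref.UID {
			owners[i] = ref // update the existing entry
			return owners
		}
	}
	return append(owners, ref) // preserve refs added by other controllers
}

func main() {
	// An owner reference previously added by some other controller...
	existing := []OwnerReference{{UID: "some-other-owner", Controller: false}}
	// ...survives when the controller reference is added.
	got := addControllerReference(existing, OwnerReference{UID: "composite-xr", Controller: true})
	fmt.Println(len(got)) // prints 2: both references survive
}
```

The old behavior was the equivalent of `owners = []OwnerReference{ref}`, which silently dropped anything other controllers had recorded.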
A
All right, let's check out the next set of links. So, yes, foreground cascading deletion — I know there's a whole bunch of conversation on that one. I thought this was a bigger issue that had a bunch of discussion; am I thinking of something else?

B
It is a bigger issue, and my initial plan was to try to tackle the whole thing. Then I thought to myself: if somebody came to me and wanted to address all of these things at once, I would probably glaze over and not understand what they were talking about. So I decided to break it into more manageable pieces.
B
The way I think it should be handled — and I think Yuri agrees with me, but other people may have other opinions, which I'd love to hear — is basically this. With foreground cascading deletion, Kubernetes follows the owner reference chain to set up a dependency graph, and it adds a finalizer to everything that is not at the bottom of that graph — everything but the leaf nodes — preventing those things from being deleted until the dependencies themselves are deleted. So you can't delete the Deployment until the Pods are gone; you can't delete the Pods until the PVCs are gone. That whole concept.

Unfortunately, in Crossplane there are two issues. Number one: Crossplane is not setting the blockOwnerDeletion flag in its controller references, which I don't think is a big deal — it's a one-line code change, and it only affects foreground cascading deletion. It has literally no impact on anything else.
B
So
if
you're
not
doing
foreground
cascading
duration,
you
won't
care.
The
second
part
is
a
little
bit
more
invasive.
A
little
bit
more
complicated
crossplane
is
ignoring
finalizers
that
are
not
its
own,
so
if,
as
soon
as
an
object
gets
mark
deleted,
even
if
there's
another
finalizer
from
foreground
deletion
or
from
some
other
controller
crossplane
doesn't
care,
it
just
executes
the
delete
functionality
and
the
remote
resource
in
a
case
of
a
managed
resource.
The
remote
resource
goes
away
in
the
case
of
a
composite
resource.
B
They
want
me
to
stick
around
until
their
dependency
is
cleared
and
once
that
dependency
is
cleared,
they'll
remove
the
finalizer
and
I
can
go
away
and
I
think
it
would
be
good
for
crossplane
to
honor.
That-
and
you
know
it's
a
little
bit
more
complicated.
Obviously
because
crossplane
is
is
representing
other
things.
It's
not.
You
know
the
resource
itself,
it's
the
representation
of
other
resources,
so
it's
a
little
bit
more
complicated,
but
the
more
I
thought
about
it,
the
more
logic
you
know
more
logical.
It
seemed
to
be
so.
B
I
would
love
to
get
other
folks
opinion
on
how
that
should
work,
or
you
know,
if
there's
other
things
that
I'm
not
considering
it
is,
it
is
coordinated.
Changes
between
cross,
plane
and
cross
plane
run
time,
which
obviously
means
we'd
have
to
rebuild
some
of
the
providers
with
the
updated
cross
plane
runtime.
B
E
Can I say something? That's awesome. If we can honor other finalizers, we can also add custom logic for deletion. For example, where I work, I want custom deletion logic for repositories in GitHub: I don't want to delete them, I want to archive them instead of deleting them. So I could add a custom finalizer to the repository resource — for Crossplane, the deletion policy would be orphan, but I would have a controller that handles this archive option on the repository.
B
Yeah, that's a great example. That's kind of what I was thinking as I was going through this: it makes Crossplane conform to the way Kubernetes expects things to work, which lets people do things outside of Crossplane that Crossplane then doesn't need to worry about. There's been this whole issue around dependency management and other adjacent features that Crossplane really doesn't want to pull into core, and yet they're really important. Maybe, if we can do something like this, you can leverage Kyverno or other projects to do some of those features in the Kubernetes space, and Crossplane doesn't need to worry about it — Crossplane just benefits from being a part of that and supporting those interfaces.
A
Yeah, I think those are really good points, Bob and Gabriel. The more conformant an operating model is with the rest of the ecosystem, the more tooling and integrations can be built around it, because the behavior is as expected. So that's a good point, and Bob, thanks for taking the time to write this up and bring more clarity to your thoughts here. You were talking about a couple of phases — did you also capture what the next phases would be, the way you see the sequence of this playing out?
B
So I do have the next proposal sketched out, and basically it builds on top of this. If you're familiar with the dependency management that was added to provider-kubernetes: it has a depends-on reference type that will reach out to a dependency — the depended-on object — and add a finalizer to it. So if I say that my provider-kubernetes object depends on that thing over there, provider-kubernetes will reach out to that thing and put a finalizer on it, so it can't go away until the provider-kubernetes object is gone. Which is great — except it won't work with Crossplane resources, because of exactly this issue. It really lets you start managing dependencies between applications and infrastructure, or whatever other dependencies you need to manage, without Crossplane itself worrying about it — because I know Crossplane doesn't want to worry about dependencies.
B
So the next proposal is going to be cascading finalizers during reconciliation, because it's not enough for me to put a finalizer on a composite object and expect foreground cascading deletion to take care of it — it won't. Foreground cascading deletion does not put finalizers on the leaf objects of the tree, so the deletion will start even though that finalizer has been put there. What I think we need to do — and what the proposal will be — is to cascade finalizers down through the resource tree during reconciliation, including to the leaf nodes, so that nothing can go away until that root-level finalizer has been removed by the dependent object. Then, obviously, reconciliation would remove the cascaded finalizers and the deletion can proceed the way it normally does.
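The cascade Bob proposes — every node in the composite-to-managed-resource tree carries the finalizer, including the leaves that foreground deletion would otherwise skip — can be sketched as a tree walk. This is a toy model, not the proposal's actual design: the `resource` type and the finalizer name are illustrative stand-ins.

```go
package main

import "fmt"

// resource is a toy stand-in for an object in the composite ->
// composed resource tree.
type resource struct {
	name       string
	finalizers []string
	children   []*resource
}

// cascadeFinalizer walks the tree depth-first and ensures the given
// finalizer is present on every node, including the leaf nodes — the
// step foreground cascading deletion does not do on its own.
func cascadeFinalizer(r *resource, finalizer string) {
	if !has(r.finalizers, finalizer) {
		r.finalizers = append(r.finalizers, finalizer)
	}
	for _, c := range r.children {
		cascadeFinalizer(c, finalizer)
	}
}

func has(list []string, s string) bool {
	for _, v := range list {
		if v == s {
			return true
		}
	}
	return false
}

func main() {
	leaf := &resource{name: "managed-bucket"}
	xr := &resource{name: "composite", children: []*resource{leaf}}
	// "example.org/depends-on" is a hypothetical finalizer name.
	cascadeFinalizer(xr, "example.org/depends-on")
	fmt.Println(has(leaf.finalizers, "example.org/depends-on")) // prints true
}
```

Because the walk is idempotent (it checks before appending), running it on every reconcile, as the proposal suggests, would not pile up duplicate finalizers.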
B
I
need
this.
One
is
definitely
going
to
need
pictures
and
examples,
and
you
know
much
more.
I
think
description
of
how
it
how
it
should
quote
unquote
work,
but
I
think
that
will
get
us
closer
toward
to
where
we
can
actually
do
dependency
management.
B
You
know-
and
I
think
the
complexity
lies
in
the
fact
that
a
typical
kubernetes
dependency
is
a
one-to-one
thing
right,
I'm
dependent,
I'm
dependent
on
a
volume
or
I'm
dependent
on
pod
or
whatever
and
crossplane
you
declaring
the
dependency
on
a
composite
is
not
enough:
you're,
really
you're.
What
you
mean
is
that
you
are
dependent
on
all
of
the
resources
that
that
composite
is
composed
of,
and
so
we
need
to
preserve
all
of
those
resources
until
the
dependency
is
removed.
A
That's good — awesome, Bob, this is great. I think there will definitely be some thinking through how this model interacts with assumptions in the composition machinery, so it'll be good to get more opinions on how this integrates with that, and whether it would break any of those assumptions or change anything there — Hasan or Yuri, et cetera. Do you have other comments you want to add right now, or should we think about it and add comments asynchronously as we digest it more?

D
Yeah, maybe quickly, on the finalizer behavior: what kind of model do you have in mind for when to actually react to deletion timestamps — only when the core Crossplane finalizer is the one left on the resource? Is that correct?
B
Yeah — right now there's one and only one finalizer, because Crossplane has added it, presumably. So the trivial check is that the length of the list of finalizers is one; at that point the only finalizer left is the Crossplane finalizer, and you can execute the deletion code. I'm not sure that's sufficient, but that would be the trivial check.
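The "trivial check" Bob sketches — only run external-resource deletion once the object is marked for deletion and Crossplane's finalizer is the only one remaining — looks like this. A minimal sketch under those stated assumptions; the function and finalizer names are illustrative, not Crossplane's actual API.

```go
package main

import "fmt"

// readyToDelete reports whether the deletion code may run: the object
// must be marked for deletion, and every finalizer other than ours must
// already have been removed — i.e. all other controllers have released
// their hold on the object.
func readyToDelete(deletionTimestampSet bool, finalizers []string, ours string) bool {
	if !deletionTimestampSet {
		return false
	}
	return len(finalizers) == 1 && finalizers[0] == ours
}

func main() {
	ours := "finalizer.crossplane.io" // illustrative finalizer name
	// Still blocked: foreground deletion's finalizer is present.
	fmt.Println(readyToDelete(true, []string{"foregroundDeletion", ours}, ours)) // prints false
	// Safe: only our finalizer remains.
	fmt.Println(readyToDelete(true, []string{ours}, ours)) // prints true
}
```

As Bob notes, this may not be sufficient on its own — it only captures the length-one case, not, for example, ordering guarantees between controllers — but it is the smallest change that makes Crossplane wait for other finalizers.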
D
Right. And what about owner references, specifically in provider-kubernetes — they're still required for proper foreground cascading deletion, is that correct?

B
Exactly: foreground cascading deletion relies on the owner references to do the graph generation, which is how it knows where to put the foreground-deletion finalizer. It strikes me that, with the owner reference change in provider-kubernetes, there may now be some overlap between those two features: the finalizer that provider-kubernetes adds may not be necessary if you're doing foreground cascading deletion. But maybe you're not doing foreground cascading deletion and you want that finalizer to be there, so I don't think it's a big deal to have both.
C
This is definitely on my to-do list, and I'd like to thank you, Bob, and also Yuri, for driving this, because it's something I have personally hit a couple of times and currently resolve with a workaround — putting managed resources inside some extra composites, etc. So I think this will definitely be helpful. I have a couple of questions or concerns with the discussion I've heard so far in this meeting, but I feel I should comment on the tickets, and we can discuss there. Listening to all this context definitely helped me a lot, so I can jump on that issue more quickly now — thanks, both.
A
Awesome, thanks for sharing that as well. Okay, cool — so thanks again for driving this, getting it well organized, thinking it all through, and having it well fleshed out. This is great; we can keep on driving it. Let's see — that's everything that was on the agenda.
D
We got Christopher in the meantime, so he can share some details on the provider release. All right.
F
So yeah — sorry I was a little late. I cut the release and brought in a lot of things from open PRs, so there are a few new resources available, like VPC flow logs and security group rules. That came out of my company, because we needed it in our EKS setup to fulfill our security requirements, so we brought it up. And then, also out of the community, there were a few things for the managed Prometheus service — rule group namespaces, Alertmanager things — that are also available now. The rest, I guess, was a few bug fixes in S3 buckets and topics. Yeah, that's it, I guess. Sorry this was a little bit late — I had not so much time in the last two weeks.
A
Definitely very cool — okay, awesome. Thanks for sharing that, Christopher. All right, then — I think that's everything for the agenda, and we can go ahead and wrap it up. Good to see everybody this week; we'll continue the discussions on GitHub and Slack. Good to see everybody.