From YouTube: 2022-07-28 Crossplane Community Meeting
A
All right, the recording is started, and this is the July 28th, almost to the end of July, Crossplane community meeting. So let me add my name here too, because I am present. All right, so let's jump into recent releases and the milestone checkup here since the last community meeting. Obviously the big news is that we released version 1.9, so that is out and available for folks. I think we were in the middle of this release during the last community meeting.
A
So now this is out there and available, and I think there's maybe some ongoing work here as well that will potentially end up in patches too, but this release is out there and I think folks have already upgraded to it. That is the big news since the last community meeting. So after getting the 1.9 release out, obviously the next release was going to be v1.10.
A
Now, a release date on that one: let's talk about that for a bit, because we've had some conversations. Let's open this issue here. So we've had some conversations over the past year, taking a look at this: we've typically been on an eight-week release cycle, which has been the typical cadence so far.
A
There hasn't been a strong need, necessarily, given some of the features that are going in, to continue at that pace. Right now, upstream Kubernetes does not release at that pace either. So I think there is some good reasoning for reducing the pace of the official Crossplane releases to a quarterly release: instead of basically every two months, going to something more like every three months.
A
So I think that probably strikes a good balance between the velocity and frequency of the releases, and balancing the time to get bigger features in and to have meaningful releases going out as well. That's something that the steering committee met on and went ahead and made a decision on: to move to a quarterly release.
A
So that is the latest update on that: the decision has been made, but it has not necessarily been put into practice yet. The release documentation, etc., that talks through our release cycles and dates would need to be updated to reflect it, but I think it probably makes sense to do that as part of this 1.10 effort. So I'm thinking to just roll this out now and do the quarterly, three-month release sort of thing.
A
This would put the 1.10 release at about mid-October: from mid-July for 1.9 to mid-October for 1.10. Interestingly, that does coincide, I think, with the next KubeCon, KubeCon Detroit, so it's nice to be able to get a release out that will hopefully have some meaningful things in it right around KubeCon time.
A
Obviously we're not going to use that as a driver for subsequent releases, but it's nice that, potentially, the first release on a quarterly cadence does line up that way. Nick, anything else you wanted to add to that, or does that all make sense to you as well, from your perspective?
A
Okay. So we just dropped 1.9 between the last community meetings, so there is not yet a 1.10 project board. I think the roadmap board here is still fairly...
A
It reflects fairly well some of the highly demanded or highly sought-after features and functionality. Some of the ones I can think of immediately are composition functions, custom compositions, and observe-only resources, so those are reflected on the roadmap and would be considered for 1.10 planning as well. But we just started the first week of the 1.10 cycle, so I have an action item to create the 1.10 project board, get that going, and start getting some prioritization in there.
A
So folks are more than welcome to add some feedback as we get the 1.10 board going, for anything that we might want to consider for prioritization there. Obviously there's always going to be the reality of constraints around engineering resources, for folks wanting to, or able to, take on some of those features. But we're just at the beginning of the 1.10 release planning, so ideas, suggestions, prioritization desires, etc., are open for folks to contribute.
A
So I'll take an action item to get the 1.10 board up, and we will have that before the next community meeting. Anything for folks to add on core Crossplane and Crossplane runtime planning and 1.10 efforts?
A
Alrighty, okay. So let's move on down to providers as well. I didn't get a whole bunch of time to fill in stuff here, so if there's news and updates, etc., that we want to do on the fly, then we can do that right now. Obviously, one of the biggest things here is that a long, long awaited release was put out for provider-jet-aws.
A
A note I want to make on that: within the community we had a bit of a backlog on some PRs, some contributions, and on that release itself, right. So we did find some time recently to go ahead and get that release out. But I'm aware that there are a couple of PRs that folks would like to see merged in and taken to the finish line as well.
A
So there are some engineering resources becoming available now from the folks at Upbound that had not been available in the recent past. There will be some more time from the folks that are maintainers there to review some things and get some things in that have been blocking the community, so we should see the velocity pick up a little bit in those areas as folks become more available than they had been.
A
So we can see that there, when we get some more PRs merged in; there will be some velocity that hadn't been there in the recent past. Christopher, anything that you can think of, and Bob too, and other folks as well: anything you all can think of in the provider world that we want to add here as status updates and things to make note of?
D
Yeah, I don't have anything on provider-terraform, but on provider-kubernetes they just merged in some changes to bring it up to the latest crossplane-runtime and add the max reconcile rate support. So now the default, I think, is 10 concurrent reconciliations, which has really helped performance in the provider.
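As an aside, the cap on concurrent reconciliations described here is typically implemented with a simple semaphore (controller-runtime exposes it as MaxConcurrentReconciles, and crossplane-runtime based providers surface it as a max-reconcile-rate setting). A minimal stdlib sketch of the capping idea, with the limit of 10 and the job count chosen purely for illustration:

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// runCapped launches jobs goroutines but lets at most limit of them run
// their critical section at once, using a buffered channel as a semaphore.
// It returns the peak observed concurrency, which can never exceed limit.
func runCapped(jobs, limit int) int64 {
	sem := make(chan struct{}, limit)
	var inFlight, peak int64
	var wg sync.WaitGroup
	for i := 0; i < jobs; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			sem <- struct{}{}        // acquire a slot (blocks once limit are held)
			defer func() { <-sem }() // release the slot

			// Record the concurrency we observed inside the guarded section.
			n := atomic.AddInt64(&inFlight, 1)
			for {
				p := atomic.LoadInt64(&peak)
				if n <= p || atomic.CompareAndSwapInt64(&peak, p, n) {
					break
				}
			}
			atomic.AddInt64(&inFlight, -1)
		}()
	}
	wg.Wait()
	return peak
}

func main() {
	fmt.Println("peak concurrency within limit:", runCapped(100, 10) <= 10)
}
```

The reconcile work itself would run inside the guarded section; the point is only that the buffered channel bounds how many reconciles run at once.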
A
And Christopher, is there already a provider-aws tracking issue for getting the release out?

C
Yes, we...
A
Well, let me see if I can find that real quick, unless you already know. Oh, here it is, I found it: 1379. All right, let's just put that there for tracking purposes.
A
And I'm trying to drag Zoom windows out of the way, so you probably just see my mouse moving around and nothing really happening.
A
Thank you, Zoom. Okay, sweet. Any other provider announcements or things to add as status updates?
A
So, right: community topics. In terms of recent content, there was a super interesting live stream this morning with Victor, who is obviously a DevRel dev advocate in the Crossplane space (we've seen him many, many times). He was teaming up with some of the folks at VMware to do a live stream today and introduce Crossplane and talk about some of the concepts there. So I think that was a really cool experience, and the YouTube link is right here.
A
So if you missed that live stream from just a couple of hours ago, you can follow up on that and check it out as well. I believe Victor is also doing his DevOps Toolkit live stream this morning as well; it might be right now or it might have just passed (I wasn't able to attend it), and Mauricio Salatino was joining him.
A
So I think there's probably interesting conversation there too, and we'll get a link to that as well when we have it. A couple of other interesting blog posts from folks in the community: essentially two different ones, about using Flux for setting up GitOps and bridging resources, and about delivering your software with the GitOps approach, are both available here, so the links are available there.
A
One of them was on the CNCF blog and the other one was on someone's personal blog, so those are available to peruse and learn from, and folks can check those out. For any other things that folks want to add as new content to this section, feel free to add them as a suggestion in the agenda doc and I'll go ahead and make sure they get merged in. Let's see if any of our amazing LFX mentorship folks are online today. I don't think they are, and this isn't a super convenient time for them anyway, since it's late at night for them, but here are quick updates on both of the mentorship projects for this summer.
A
So we just started week 8 of 12, so we're about two-thirds of the way through the mentorship project term. Of the two projects there, Rico is working on catching breaking changes in CRDs.
A
She has gotten almost to the finish line on one of her PRs for adding a managed resource to provider-gcp, adding a DNS policy, and the next step there is to purposely introduce, in draft form, some breaking changes to that CRD (not release them, but have them in a draft form), so she can start testing and working on her automation logic: a pull-request bot to detect those breaking changes and surface them on the PR as well.
A
So that will probably be the remainder of her time over the next month or so in the mentorship program: getting out the first stage, essentially, of that PR bot to detect breaking changes in CRDs and add a failed status check, comments, etc., to the PR. Then we can use it across all providers as a quality check during our pull requests when we're updating resources (not adding new resources, but updating them), to make sure we're very aware of any breaking changes going in there.
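The core of such a breaking-change check can be sketched in a few lines. This is not the mentorship project's actual code; the flattened schema representation and the field names are illustrative assumptions (a real check would walk the CRD's OpenAPI schema):

```go
package main

import "fmt"

// schema maps field paths to their declared types, a flattened stand-in
// for a CRD version's OpenAPI schema.
type schema map[string]string

// breakingChanges reports fields that were removed or re-typed between two
// versions of a schema: both changes can break already-stored objects.
func breakingChanges(prev, next schema) []string {
	var out []string
	for path, typ := range prev {
		nextTyp, ok := next[path]
		switch {
		case !ok:
			out = append(out, fmt.Sprintf("removed: %s", path))
		case nextTyp != typ:
			out = append(out, fmt.Sprintf("retyped: %s (%s -> %s)", path, typ, nextTyp))
		}
	}
	return out
}

func main() {
	v1 := schema{"spec.forProvider.region": "string", "spec.forProvider.ttl": "integer"}
	v2 := schema{"spec.forProvider.region": "string", "spec.forProvider.ttl": "string"}
	for _, c := range breakingChanges(v1, v2) {
		fmt.Println(c) // the ttl re-type is flagged; region is unchanged
	}
}
```

A PR bot would run a comparison like this between the base branch's CRDs and the PR's, then post the findings as a failed status check.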
A
So I think that's going to be a really exciting project, and she'll continue making progress on it over the next month. And then Rule has been working on unit tests and integration tests, to make sure that private registry pulls are working, so she has updated a number of the existing end-to-end tests.
A
Those are the tests we have around provider updates and Crossplane version updates, and she's adding some diagnostics into them to help, over the long term, make sure those tests stay healthy, and, if they're not, that we have the observability information to troubleshoot why. Then she's just getting started on building out the integration test structure for doing pull operations of packages that are in private registries.
A
So she's got a month left on that project as well, and the main focus for her will be delivering the private package testing flow, now that we've gotten where we are with the progress on the other tests. Both of them are doing awesome work, and we're really happy to have them around the community this summer while they're working on those. All right, next issue here.
A
I think this was on the agenda for the last meeting, but I wanted to leave it on there to get a little bit more awareness on it: Craig is wanting to hear from and connect with folks in the Crossplane community about some of the experiences folks have had, some of the success stories. We would want to share these on the Crossplane blog, getting more exposure for people's experiences as part of the Crossplane ecosystem and community, in the blog, etc., that we publish. So if folks want to participate in that, there's a direct link to the discussion on it.
A
So add some of your experience, or your desire to participate, there, and Craig will reach out to you.
A
I don't think Muvaffak is here, and I wanted to sync with him on this because I think he might have the most context. But the CNCF has been doing some work to improve, and invest in, the security posture of the Crossplane project, and one of those aspects is fuzz testing.
A
We had kicked off that effort over the past couple of months, and it looks like one of the first issues has been found from the fuzz testing that some of the folks from the CNCF have been adding. So there's a link here to, essentially, an out-of-memory panic that was found from the fuzz testing, and I was hoping to...
A
I guess I don't have a ton of context myself, but I was hoping to get a little context from folks that might know about it, to see if we understand the origin, the root cause, of that issue. Muvaffak, I know, did open an issue about it, and he'll be tracking...
A
...some effort towards a resolution there, so we'll make sure to share some of those results: what the fix is and how we go about it, and then maybe whether there are some general lessons here as well from the fuzz testing, and whether there are other patterns throughout the code base that would need to be updated too.
A
But it's exciting to see that we can make some improvements from some of the effort from the CNCF to do some security probing here.
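The meeting doesn't go into the root cause of the panic, but as an illustration of the class of bug fuzzing tends to surface, here is a hedged sketch: a decoder that trusts a length field from the input can be driven to panic or over-allocate by random bytes, while a hardened version validates before slicing. The format and function here are invented for illustration; Go's native fuzzing (a FuzzXxx target run with go test -fuzz) generates inputs like these automatically.

```go
package main

import "fmt"

// decodeLengthPrefixed parses a length-prefixed payload: the first byte is
// the declared length, followed by that many bytes. A naive version that
// trusts the declared length can panic (slice out of range) on hostile
// input; this hardened version validates the length before slicing.
func decodeLengthPrefixed(in []byte) ([]byte, error) {
	if len(in) == 0 {
		return nil, fmt.Errorf("empty input")
	}
	n := int(in[0])
	if n > len(in)-1 {
		return nil, fmt.Errorf("declared length %d exceeds payload %d", n, len(in)-1)
	}
	return in[1 : 1+n], nil
}

func main() {
	// Inputs a fuzzer might generate: valid, truncated, and empty.
	for _, in := range [][]byte{{3, 'a', 'b', 'c'}, {200, 'x'}, {}} {
		out, err := decodeLengthPrefixed(in)
		fmt.Printf("%q %v\n", out, err)
	}
}
```

The fuzzer's job is exactly to find inputs, like the truncated one above, where the unvalidated path would have crashed.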
A
Okay, so we can move on to some other community topics. It looks like there have been a couple of things added as pull requests and discussions, so let's go through them and see what each one is. This is the first one in the list here, so let's click on that. Yeah, Christopher, do you want to give us some context?
C
On this one, yes. This is from our side: we're seeing, for example, in provider-jet-aws, that if you're running the provider in debug mode, the provider is leaking the security credentials from AWS. You can see the access key and secret key in the debug block at the end, and if you're using static credentials, you have the problem that these log messages will go through your EFK stacks, whatever, and then you've leaked your security credentials completely.
C
The question is whether we can disable this in the debug log. I guess it happens because in the debug log we have the whole main.tf file included completely, and part of it is the security credentials, the access key and secret keys.
A
It's an interesting point, Christopher, and thanks for bringing that up. Just to confirm, though: this is something that only happens when debug logging, debug mode, is on? Yeah, okay, that's good to know. And so it looks like... because I know that there is a body of work that understands sensitive fields, right, and stores those appropriately and obfuscates them, or, what is the right word...
A
I'm looking for... does not print them out to the various means of output, right. And so it looks like in this debug thing here, when we're just throwing out the entire state file, they're included there, and they probably should not be. So this is something interesting to look into, and if you've done...
A
Some
look
in
looks
already
christopher
into
like
where
this
is
coming
from,
or
maybe
some
strategies
around
approaching
it
like
do
feel
free
to
share
those
as
comments.
So
we
can
kind
of
get
some
any
sort
of
your
insights.
You
already
have
into
it
to
kind
of
better
inform,
better,
informing.
C
...the sort of approach here. So from the provider perspective, we found nothing that can disable it if you enable the debug flag, so at the moment we're adding an annotation to the pod: if we enable debug logging, we don't ship the logs to our central log servers. That's the only thing we can do at the moment, I guess.
B
I guess this is built in somewhere in Terrajet. Logging the entire Terraform state file seems a little much to me, to be honest, even for debug logs. As Jared mentioned, under the hood this should be using the zap logger; I'm guessing that's probably got some logic for filtering out sensitive fields, but because this is a subfield of a giant JSON blob, that's going to be even harder.
B
I would potentially raise on Terrajet how valuable it is to be logging this entire state file in the debug logs; maybe we just don't do that anymore.
B
Yeah, I also wonder whether, a lot of the time, when we emit a debug log we also emit an event to the Kubernetes API server. So it would be worth checking whether we're putting this in events even when debug isn't turned on, because that is pretty bad: it would mean anyone who can get events from the API server can get those credentials.
A
Okay, yeah, it's a good point, Nick, and that speaks to understanding where this is coming from within the runtime. So we could verify whether it's something that's going to an event also, or just a debug statement that's only going to the logging stream. That's a really good point, Nick. So folks in this conversation here, feel free to add...
A
...these observations as comments on this issue, so we keep that context there and can follow up. And Bob, one thing I was noticing is that when you start speaking, your voice sounds a lot like Aaron Eaton's voice, so until I look and see who's talking, I think that Aaron is talking on this call.
A
That's really funny! Sorry. Aaron's awesome, we love Aaron, so it's not a problem; it's definitely funny, it's playing tricks on my mind. Okay, great, yeah. So let's get some comments on that, so we can continue following up and understanding the scope of that, and potential mitigations for it as well.
A
All right, here is another discussion; I don't think I am familiar with this one. Yeah?
C
So if you scroll a little bit up: we patched the metadata label crossplane.io/claim-name to the external name, and the issue here is that for KMS keys, AWS decides the external name for the keys. We were patching the external name in every loop from Crossplane, and then in every loop a new key was created in our AWS accounts, and accidentally we created these in many, many accounts of our organization. In the end we came up with a Kyverno cluster policy that disables, for KMS keys in this case, overwriting the external name.
C
...000 keys, because every key costs one dollar. And yes, it's more that folks should be aware of this, because by default you can create 100,000 KMS keys in an AWS account.
A
So now, to help me understand this a little bit better: is this something that would not happen if the particular resource being patched here had a consistent, predictable external name that you set, and it uses it, and that's it?
A
Yeah, interesting. Sorry you ran into this, Christopher, but thank you for sharing your experience with the community, and also this policy as well. Now, is this policy something that could be made applicable to resources in general, or is it something that's specific to, you know, the...
C
We have a match on this resource at the moment, kms.aws.crossplane.io, and you can disable the match and have it for all resources if you want.
C
We did this for the keys because we saw the issue with the keys: we needed this to stop the generation of the keys in production, to start removing the keys, fixing the composition things, and rolling all the things out. So we started by matching only the keys, but you can do it for other resources as well.
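The effect of that policy can be expressed as a simple immutability rule: once the external-name annotation has been set, an update may not change it. A sketch in plain Go (in a cluster this check would live in a validating admission webhook, or in a Kyverno policy as Christopher did); the annotation key matches Crossplane's crossplane.io/external-name convention:

```go
package main

import "fmt"

const externalNameAnno = "crossplane.io/external-name"

// validateExternalName rejects updates that change the external-name
// annotation once it has been set. Setting it for the first time is fine;
// rewriting it (e.g. by a composition patch firing every reconcile loop)
// is what caused the runaway KMS key creation.
func validateExternalName(prev, next map[string]string) error {
	old, wasSet := prev[externalNameAnno]
	if wasSet && old != "" && next[externalNameAnno] != old {
		return fmt.Errorf("%s is immutable once set (was %q, got %q)",
			externalNameAnno, old, next[externalNameAnno])
	}
	return nil
}

func main() {
	prev := map[string]string{externalNameAnno: "key-1234"}
	fmt.Println(validateExternalName(prev, map[string]string{externalNameAnno: "key-1234"})) // unchanged: allowed
	fmt.Println(validateExternalName(prev, map[string]string{externalNameAnno: "key-9999"})) // changed: rejected
}
```

This is only a sketch of the policy's intent; as the discussion below notes, there are resources where rewriting the external name may be legitimate, so enforcing it globally is a design question.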
A
Got it, got it. And Nick, from your perspective: is there something fundamental in crossplane-runtime that should be enforcing this type of behavior, i.e., the external name not changing once it's set, keeping it uneditable? Does that make sense, or is that not in scope, or are there scenarios where you wouldn't want to enforce that behavior in the runtime?
B
I don't think it's possible for a crossplane-runtime controller to enforce this, so we would presumably need an admission control webhook or something like that, which is effectively what Kyverno is doing here. So the question then is more: do we want to build our own webhook, or...
A
...work with the community to raise awareness of it and help people at least understand this behavior and have a solution for it. Awesome, Chris.
A
Okay, and then I think this is the last one in the set of issues that folks want to talk about. Let's see... I know this has been talked about a little bit before; let's just see real quick what the latest is on this one. Bob, I think you've been driving this one, so do you want to give a little bit more context and get us all caught up to speed on it?
D
Yeah, I mean, as you said, there's been discussion of this in various different places. I know provider-kubernetes and provider-terraform have both had issues here, and the issue, basically, is that when you're deploying everything in one big composition, infrastructure plus the application workload, when you delete, the infrastructure just goes away and there's no opportunity for the application workload to do any of its normal deletion handling, right? Your Helm charts can't run their pre- and post-delete hooks.
D
Your Kubernetes objects can't even connect to the cluster anymore to do the deletion, because the cluster's gone. So I was trying to collect all of that in one place, hoping to get a little bit more discussion. I know Yuri was looking at this a little bit as well in the provider-kubernetes context, but what I've been thinking, and hoping to get input from...
D
...other folks on, is whether we could do something with the cascaded foreground deletion that Kubernetes already provides, to kind of synchronize, or get a little bit of control over, the deletion handling. There's a whole lot of discussion there; I'm not sure how much it really provides clarification, but I think there are kind of two different scenarios. There's a scenario where you've got all your infrastructure...
D
...in one composition and all your application workload stuff in another composition, and then those two compositions are combined together, and you really want the deletion of the application composition to happen first, delaying the deletion of the infrastructure until the application is gone cleanly. And there are a couple of things I think we could do inside Crossplane to help facilitate that. One would be to delay the delete handling of the Crossplane resources until all the other finalizers are gone, right?
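That "delay until the other finalizers are gone" idea reduces to a small predicate. A sketch with a hypothetical finalizer name; in practice the ordering logic would live in the managed-resource reconciler, which today does not wait like this:

```go
package main

import "fmt"

// ourFinalizer is a stand-in for the finalizer Crossplane itself manages.
const ourFinalizer = "finalizer.managedresource.crossplane.io"

// readyToDelete reports whether the external delete could proceed under the
// proposed rule: only once every finalizer other than Crossplane's own has
// been removed, i.e. once all other interested parties finished cleanup.
func readyToDelete(finalizers []string) bool {
	for _, f := range finalizers {
		if f != ourFinalizer {
			return false // someone else still has deletion work pending
		}
	}
	return true
}

func main() {
	// A dependent (e.g. a Helm release) still holds a finalizer: wait.
	fmt.Println(readyToDelete([]string{ourFinalizer, "release.helm.crossplane.io"}))
	// Only our own finalizer remains: safe to delete the external resource.
	fmt.Println(readyToDelete([]string{ourFinalizer}))
}
```

Combined with foreground cascading deletion, dependents would strip their finalizers first, and this check would hold the infrastructure's external delete until then.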
D
You know: can we delay calling the delete, the external delete and the composite delete methods, until all those other finalizers are gone and we know that anybody else that was interested in this resource is done, and now Crossplane can do its thing and delete? And that kind of combines with the owner reference issue: right now, today, Crossplane enforces that it is the only owner reference that's allowed, which is understandable, right? I mean, Crossplane is the controller; it's the owner.
D
It shouldn't allow anybody else to be the controller, but I'm wondering if we can ease that a little bit and just say: nobody else can be the controller, but if somebody else wants to be on here as an owner reference, to provide some of that dependency context, that would be allowed.
B
I think that is the case. So in Kubernetes a controller reference is a special kind of owner reference, and there explicitly can only be one controller reference; that's a constraint Kubernetes enforces. The controller should always be the object that is the parent of a thing, so if we create a thing, we make ourselves the controller reference. But I'm not aware of us stopping other things from being a normal owner reference; if we are, that's not intentional.
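The distinction Nick describes can be sketched with a plain struct mirroring the relevant fields of Kubernetes' metav1.OwnerReference: any number of owner references is allowed, but at most one may be marked as the controller. The names below are illustrative:

```go
package main

import "fmt"

// ownerRef mirrors the two fields that matter here from the Kubernetes
// OwnerReference type: who the owner is, and whether it is the controller.
type ownerRef struct {
	Name       string
	Controller bool
}

// validateOwners enforces the Kubernetes rule: reject a second *controller*
// reference, but permit extra non-controller owners, which only express a
// dependency for garbage collection.
func validateOwners(refs []ownerRef) error {
	controllers := 0
	for _, r := range refs {
		if r.Controller {
			controllers++
		}
	}
	if controllers > 1 {
		return fmt.Errorf("%d controller references; at most one allowed", controllers)
	}
	return nil
}

func main() {
	fmt.Println(validateOwners([]ownerRef{
		{Name: "my-xr", Controller: true},       // Crossplane stays the controller
		{Name: "my-cluster", Controller: false}, // extra owner for dependency context
	}))
}
```

Under Bob's proposal, the second entry is the kind of non-controller owner reference a dependent would add so that cascading deletion knows about the relationship.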
D
Okay, yeah. So I mean, I can open an issue on that; I'm just basically looking for some guidance here. Do you want me to open an issue with a proposal and say, "this is how I think it should work"? Obviously we can address the owner reference issue as a separate thing, and that would be awesome.
D
I'm just thinking that if we can get some of these things figured out, then it really opens up the possibility of doing things like Kyverno policies that can automatically figure out your dependencies and apply them for you, right? I'd say it'd be pretty easy to set up a Kyverno policy that says: for this Kubernetes object, go find the cluster that it's being deployed into and set up that dependency, right?
D
And then, if we combine all of that with Kubernetes foreground cascaded deletion, we can get some control over this without actually having to do a lot of manual work, right? Because we don't want to manually track these dependencies any more than we have to. I don't mind manually tracking the application composition's dependency on the cluster, but I just think we could maybe get the owner reference stuff fixed and take a look at the finalizer handling.
B
Yeah, I think the answer is yes: totally open to opening a one-pager design and starting to look at this. I would say the reason that we haven't historically had much forward progress on this is that, while I agree that, especially for cascading deletion and whatnot, we want that to work...
B
...we have built some special use cases where, for instance, we have all the provider config usage things that basically make sure you can't delete a ProviderConfig that's still in use, because we have such a big mesh of things referencing each other. You have your claim and your XR, and then your composed resources, and then the composed resources can all depend on other things, as cross-resource references and whatnot.
B
It's idiomatic in Kubernetes to just be eventually consistent, so wherever possible we would like to just delete all this bundle of stuff and let the cloud providers work it out, sort of thing. But then, right, as you point out, if you've got (the quintessential example) a bunch of Helm charts deployed to a Kubernetes cluster, and you delete them all, the Kubernetes cluster goes away, and then you can't delete the Helm charts because the cluster's not there anymore. So definitely, you know...
B
Definitely we should have a discussion about fixing this, and I definitely would love to see a one-pager. I think it's just going to be a case of: can we find an elegant solution to it, sort of thing, that doesn't have unwanted side effects on Crossplane?
D
Yeah, I mean, from one perspective, deleting the Helm charts and even the Kubernetes objects and the Terraform workspaces, right, I mean, who cares? This stuff is gone, right? But there's just no way for the provider to know that. So yeah, I think if we can get some of the references in and let Kubernetes do its thing, that would take a lot of the onus off of Crossplane and just let Kubernetes figure it out.
B
I had thought, in the past, didn't we make a change that basically said: if the ProviderConfig is gone, like if you delete the cluster and the ProviderConfig that points to that cluster goes away (the ProviderConfig for Helm or Kubernetes), I thought we had logic that was just like: oh, there's no ProviderConfig anymore, and you asked me to delete it, so I'll just assume that it's gone.
B
Yeah, good point, right. So yeah, I could also imagine attempting to solve this through some kind of clever logic that figured out the likelihood that the thing the ProviderConfig points to can't be connected to in the underlying cloud anymore, sort of thing: if you get a 404 or something like that, like "this doesn't exist", as opposed to an authentication issue or something like that, and you're asking for the thing to be deleted...
B
...then maybe we could just assume there's nothing we can do and let the resources get deleted, because, as you say, the external resources, the things that were running on the cluster, in that case have already been deleted. But that's certainly only kind of the Helm and provider-kubernetes use case; I haven't looked into the provider-terraform one, for example.
A
Yeah, Bob, that's great to separate them out that way: if there's an owner reference bug, track that as its own issue, and maybe that can be the more targeted thing, and then this longer-term, more challenging, bigger-scoped item there is the one-pager. That's a good way to separate those two, yeah.
A
All right, okay, let's see here. Looks like...
A
Let me just accept this first and then we can adjust a little. This is a brand new topic, Christopher?
C
Yeah, I saw this in our setup a few times, and today I had a discussion in the community with Andrei Koslowski, I guess, and he found the same issue in the Jet provider. So that's why we're creating the issue together.
A
Okay, there we go, I'm back. I couldn't do anything else until the formatting was fixed. So yeah, okay, let's open this up and check it out. All right, let's go.
C
Okay, I can give a short overview of this issue. We found out about two resources at the moment, but I guess there are more. So, for example, if you create a security group rule or a Fargate profile, directly after the creation it is created in AWS, and the first observe says: okay, it is creating and the resource is ready.
C
True, everything is great. But in the second observe loop you see that the provider says something like: oh sorry, but there is a duplicate security group rule, or an already existing Fargate profile. And we checked a little bit in the provider how it looks: in the temp directory the Terraform things are created under the hood, and if you scroll a little bit, we found out that the terraform.tfstate looks empty and only the terraform.tfstate.backup has the resource available.
C
And if you are in this case, you never bring the resource to ready again. So you need to... I don't know what you can do. We tested removing the resource from the provider and setting it up again, and you see it every time, and you cannot remove everything from it. So you need to click in your AWS console and remove the things by hand, and that's everything you can do.
A
Does this reproduce 100% of the time, Christopher, or is...
A
Okay, okay, yeah, interesting. I don't know if I've seen this particular flavor of...
C
...the provider sees that the resource is there, so we have no idea why we get the issue that it's a duplicate or already existing, because, yeah, that would be the true case: the resource is already existing. That's...
C
...a little bit under the hood, because if you see a truly ready resource, then the tfstate file is not empty like this one we see here for these resources.
A
Yeah, exactly. It looks like something is going awry with the bookkeeping and management of the state of the resources locally.
A
Yep, yeah, that's fair. All right, cool, yeah, thanks for bringing that up too, Christopher. All right, so that's everything that's been added to the agenda document. Are there any other topics that folks want to bring up, then, before we adjourn?
A
All right then, cool. Well, good to see everybody, thanks for joining and thanks for all the collaboration and participation here. So yeah, good to see everybody, and we'll see you all on Slack until two weeks from now.