From YouTube: 2021-06-03 Crossplane Community Meeting
A: There we go, the recording has started, and this is the June 3rd, 2021 Crossplane community meeting. Feel free to add yourself as an attendee here in the list, or folks can kind of crowdsource that effort there. Let's jump into milestones and releases; I've got a number of things going on there. Just yesterday we did a series of patches for 1.0, 1.1, and 1.2.

A: They are all linked directly here in the agenda document. Dan, you drove those releases yesterday; do you want to give us a quick overview or a summary of what was in each of those patch releases to those back versions?
B: Awesome, thank you. I'm on mobile, so I might be cutting out here; we have an internet outage. But yeah, in those patch releases, basically all that was included was an update to how the lock resource is managed, for folks who have tried to uninstall Crossplane or potentially upgrade from one version to another.
B: Basically, we created that singleton lock resource, which kept track of package dependencies, and when you tried to uninstall Crossplane we put a finalizer on there that we never take off, which meant this resource would basically hang around forever. So if you uninstalled Crossplane and then tried to reinstall it, for instance, especially on the 1.0 release branch and its patches, you'd have issues because the resource would already exist and Crossplane couldn't take ownership of it, and that sort of thing. So the changes here basically made it such that if there are no packages installed, we release the finalizer on that singleton resource, which means that if you uninstall all packages and then uninstall Crossplane, you get a totally clean uninstall; that obviously helps upgrades as well.
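Dan's fix reduces to one rule: keep the finalizer on the singleton lock only while packages remain installed. A rough Python sketch of that rule (the finalizer and field names here are illustrative; the real implementation is the Go reconciler in Crossplane's package manager):

```python
# Sketch of the patched Lock behaviour: the finalizer protecting the
# singleton Lock is released once no packages remain, so uninstalling
# Crossplane afterwards leaves nothing behind.
FINALIZER = "lock.pkg.crossplane.io"  # illustrative name

def reconcile_lock(lock: dict) -> dict:
    """Add or remove the Lock's finalizer based on tracked packages."""
    finalizers = set(lock.get("finalizers", []))
    if lock.get("packages"):           # dependencies still tracked
        finalizers.add(FINALIZER)      # keep protecting the Lock
    else:
        finalizers.discard(FINALIZER)  # nothing installed: allow deletion
    lock["finalizers"] = sorted(finalizers)
    return lock
```

With this behaviour, uninstalling all packages and then uninstalling Crossplane leaves no orphaned Lock behind, so a later reinstall can take ownership cleanly.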
B: The one other PR that was included, or backported to these branches, was around the ControllerConfig. Basically, we override values from a ControllerConfig when we're creating the controller deployment, and there was just one value that was not being considered there, so this went ahead and made sure that it was. Not anything super major, but super helpful for folks that want to consume that functionality.
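The ControllerConfig mechanism described here is essentially a field-by-field override of the deployment that the package manager generates. A minimal sketch, with made-up defaults rather than the provider's actual schema (the bug was simply that one such field was skipped during this copy):

```python
# Sketch: explicit ControllerConfig values override the defaults used
# when building the provider controller's Deployment spec.
DEFAULTS = {"replicas": 1, "image": None, "serviceAccountName": "crossplane"}

def build_deployment_spec(controller_config: dict) -> dict:
    """Merge ControllerConfig overrides onto the default Deployment spec."""
    spec = dict(DEFAULTS)
    for key, value in controller_config.items():
        if key in spec and value is not None:
            spec[key] = value  # explicit override wins over the default
    return spec
```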
A: Welcome back, and yeah, thanks for driving those patches, backporting them to a number of previous releases as well, and getting those released yesterday.
A: So let's talk, then, about the upcoming release, 1.3. We are still in the active development cycle for the 1.3 release; the expected release date is at the end of this month, June 29th. So we still have a few more weeks until the feature freeze / code freeze type of thing, where we'll stop development on active features and go into more of a bug-fix mode, so we still have time to get things in and keep making progress here.
A: As is typical for this meeting, we'll talk about the investments and work that's going into the 1.3 release on the board here, but this is also a forum for the entire community to speak about what they're working on, what they would like to see, what's a priority for them, etc., so this can definitely be an open dialogue. As usual, we did take a pass through the 1.3 board since the last community meeting.
A: So this should more or less reflect the work that's going on right now. One PR that we have open here: I think that Nick, and maybe Muvaffak and Dan as well, were meeting with Ben to talk about this PR. Or was that on the agenda for later on today, about the multiple paths in the composition? Yeah.
C: It's going to be in, like, the technical discussion section.
A: But yeah, so we've got some closed PRs that are going to be included in the 1.3 time frame. 1.3 is the Crossplane-specific release, and providers are not tied to that same release cycle, so you may see some things on this board that are from the providers, but they're not necessarily tied to the same release cycle. One particular merged item was completed recently.
A: Of note, and there are many of them: the Lambda support for AWS that Scott merged in recently, and so that will be included in the next version of provider-aws; it's in the done column. Beyond that, nearing initial completion as well is some integration testing that Rahul has been working on as part of the LFX mentorship program.
A: We'll talk more about an update and progress on that, as the mentorship program draws to a close, later on in the agenda, but that's also something that will be included in 1.3. Dan, your face is on a couple of these as well. Do you want to give a high-level summary of some of the important items that progress is being made on, the general themes for 1.3, for yourself?
B: Yeah, sure. A number of them are just kind of, like, smaller bug fixes, and I have one in particular that's open that I'll leave for a discussion later on; but I'm trying to make sure I'm in the right place with you all here.
B: The second one there in the in-progress column, unhealthy configuration revision: Ben and I were actually discussing that this morning, and it will also be deferred to the technical discussion portion. And then the only other ones here that I'd say would be good to talk about now are these two in the accepted column, around supporting package registries and package pull credentials.
B: These are things that could easily have support added, but we may want some discussion around which users are desiring this functionality before we move forward. That's why we haven't made progress there yet, so I'll probably be following up in the general channel and that sort of thing around that.
A: Awesome, Dan, thanks a lot for that update. And then it looks like we now have in progress, too, support for patching from common data sources; I thought that was an interesting ticket. Do you want to give a quick status update on that, and on the impacts or benefits of that issue as well?
C: Yeah, sure. So there's this problem: in composition, some users want to use values from common data sources, like the AWS account ID, or other values that do not exist on the composite resource or on the composition, or that vary, you know, between instances, or between the namespaces that the claim is created in. So we wanted to have a way for users to specify a config map, or a secret, or some resource, to be used as the source for such values.
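Since the design is still a draft at this point in the call, the following is only a guess at the shape of the feature: look a value up in a referenced ConfigMap-like object by a dotted field path, optionally transforming it, before it is patched onto a composed resource. The function name and path syntax are hypothetical.

```python
# Hypothetical sketch of patching from a common data source: a value
# such as an AWS account ID lives in a ConfigMap rather than on the
# composite resource, and a patch pulls it from there.
def resolve_patch(source: dict, from_field: str, transform=None):
    """Walk a dotted path into the source object; optionally transform."""
    value = source
    for part in from_field.split("."):
        value = value[part]
    return transform(value) if transform else value
```

For example, an account ID stored once in a ConfigMap could be resolved and then transformed (say, into an ARN prefix) inside a composition patch.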
C: You can use this field; you will be able to use this field even if you're just creating managed resources, and by reusing that in composition we will also achieve the patching-from-common-data-source feature. I have a draft design doc now, and the implementation will be following that at the same time, but I haven't opened the PR yet.
A: Oh, interesting, Muvaffak. So are you saying, then, that we'll be tackling the full general problem of 1770 here as part of this? Yes, exactly.
A: That's awesome. I don't know if I had heard that, but I'm super excited about it, because I think that cross-resource referencing was a really, really powerful feature and it opens up a lot of scenarios, in addition to, you know, the more constrained thinking of being able to patch from, like, config maps and other resources, etc. So that's super interesting; I'm really excited to hear that, yeah.
B: Muvaffak, right: are you thinking this will get into 1.3?
B: Yeah, so I guess it would be... we have two full weeks after this one, I think, until code freeze. That being said, none of this actually requires, at least from my understanding... I imagine this would go into crossplane-runtime and then filter out to the providers, because this would be implemented on the managed resources. So we may not be subject to those constraints, and it might just roll out as the providers have their release cadence. Does that sound right to you?
A: Yeah, and maybe we could talk about it later on, because it might be getting into too-deep technical details right now. But I'm curious; I don't think I have it fully in my mind how the generic cross-resource references solve some of the patching things here. Like, for instance, if you wanted to patch from a config map and then also do a transform on it at the same time. I'm not sure I fully get that one yet, so we could...
A
Well,
I'm
super
excited
about
this.
Like
prosecution
of
cross
resource
references
is
really
really
cool,
so
that's
exciting.
Okay,
so
then
that
would
attacking
that
would
take
one
of
the
you
know,
high
items
that
came
that
we
have
been
in
demand
from
the
community
since
1.2
as
well
too
or
before
that.
A
Actually,
if
we're
being
honest
here
about
how
how
long
we've
been
talked
about
this
one,
so
that's
super
super,
exciting
anything
else
on
the
board
here
or
issues
of
importance
to
the
community
or
anything
that
people
want
to
bring
up
in
the
context
of
the
1.3
milestone.
That's
coming
up
in
within
the
next
month,.
C: I think, not directly related to core Crossplane, but we have two PRs in ACK, and I've been, you know, making them ready to merge, for generating more stuff in AWS. With these two PRs merged we will have late-initialize and also the up-to-date check implemented, which means we will probably get update functionality for free for a lot of generated resources. That's going to be nice.
A: Nice, and there's most certainly been a lot more resources that have been added in, by maintainers but also from the community as well, for resources they need in AWS, taking advantage of the code generation. And the more we flesh out the functionality of the generated resources, the more capabilities the community itself gets in the new resources they're generating. So it's fantastic to have progress on that as well, too.
A: Is there anything blocking upstream in the SDK, Muvaffak, to get these in, or is it all on our side or the ACK side to finish these?
C: Yeah, it was blocked; the PRs were open for a long time now, like one or two months, but it was, you know, both waiting for review, and also I hadn't added tests in the meantime. But now, after all that business, I was able to add the tests and make sure all the resources are correctly generated. So now they're ready; right now the only blocking thing is getting a review from Jay.
A: Fantastic. And Nick, yeah, anything on your mind for 1.3 as well?
D: I have nothing that I'm personally leading. There's a couple of things that I think we already have planned to talk about, like the many-to-one patching and things like that, but no, I haven't directly touched Crossplane.
A: Awesome, yeah. So, you know, the floor is still open for folks that want to bring up issues or features for 1.3, and also, there's a number of contributors on the call here that I know have active PRs. So if there's work that you're doing right now, either building or reviewing, feel free to add it to this board, or send me a DM to add issues to this board here.
E: Hi, Sean here. I think you mentioned earlier the issue that Jan opened; it's a little further down. Maybe that's worth discussing. So the background is that we upgrade... well.
E: We provision Kubernetes clusters, and recently we started rolling out, or using, Kubernetes version 1.19, and with version 1.19 AWS stopped automatically tagging the associated subnets with the shared tag. That means that if you spin up a load balancer within Kubernetes, it ends up in the wrong subnet, or isn't properly provisioned at all, which is basically an issue down the line.
E: So we have to tag the subnets manually, and then we had a few discussions in our team, because the AWS API provides methods for creating tags and deleting tags on various resources. You could do it specifically for subnets; you could...
E: I believe you can do it for all kinds of EC2-related resources. But really, I wanted to get your opinion on how that best fits into Crossplane right now, because we didn't want to just start implementing something, since tags can be added through multiple pathways throughout the API. For instance, if you create a subnet yourself, you can automatically pass a few tags, but you could also do it through something separate from this...
E: ...the CreateTags method, yeah. But that also raises a few issues: apparently you can only have 30 or 50 tags on the subnet, or on the resource, and...
C: And I think it's just worth having a look at it; maybe you have an opinion on that. Yeah, so these actions, CreateTags and DeleteTags: they are frequently used in a lot of AWS resources where they're available, and in some resources the tags are just part of the original API.
C: In a few places, like RDS and some others, we do things like creating a map, checking whether the tags exist on the remote, creating them, or deleting the ones that no longer exist locally, and such. So instead of having tags as a separate managed resource, we kind of opted for having them on the managed resource itself, per resource, and calling these APIs when we have to; in a lot of cases we don't have to, it's...
C: It's originally part of the API. But in other cases, like the EC2 ones, we handle that in the code, basically without having a separate managed resource.
E: Okay, so basically that also means we wouldn't run into any race condition where different resources are trying to manage the tags of the same resource, right?
C: Well, the tags field on the managed resource, for example: it belongs to the specific subnet. For example, on the Subnet CR you would have forProvider tags, and your controller would just, you know, reconcile that tags array, so it would be one entity trying to add or delete tags.
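The single-entity reconciliation described above comes down to diffing the tags declared under spec.forProvider against the tags observed on the remote resource. A rough sketch of that diff (provider-aws implements this per resource, in Go):

```python
def diff_tags(desired: dict, observed: dict):
    """Return (tags to create or update, tag keys to delete) so the
    remote resource converges on exactly the declared tags."""
    add = {k: v for k, v in desired.items() if observed.get(k) != v}
    remove = [k for k in observed if k not in desired]
    return add, remove
```

The controller would then call the AWS CreateTags API with `add` and DeleteTags with `remove`, and only when either set is non-empty.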
E: Okay, so basically everything I have in that resource: like, if I have three tags in there, the provider's controller would only make sure that those three tags are there, or no longer there if I delete them from that array, right?
E: So, basically, returning to the issue: that would mean you would rather recommend not implementing a separate managed resource? Or how would you go about the problem with an EKS cluster resource, where, basically, the subnets aren't attached automatically anymore: should that be part of the existing managed resource of the EKS cluster, or separate?
C: And so the tags: are they, like, you know, dynamic? Well, if there is a place where we can get the values, you can, you know, patch the subnet you create in the composition to get those values into the tags array. I haven't checked, but if Subnet doesn't have a tags field, we can add that. So you would actually put that automation in your custom resources.
A: Yeah, that sounds reasonable at least, and if there's anything we can, you know, follow up on once we kind of look into that a little more, see if that addresses your use case, Sean, then we can keep that conversation going.
A: Yeah, thanks for bringing that up, Sean. Cool, all right. So then maybe we can go ahead and move on to the community topics section, now that we have wrapped up the 1.3 and milestones section, I think.
A: All right, Dan; I don't know if this is stale here, but were there any livestream updates recently? I think at the last community meeting we had a couple of them that you had done. Were there other ones? There's a podcast that came out today, I think, yeah.
B: That's correct; we don't have any partner ones. Well, there's probably some with Equinix coming up, but I don't have the exact information on those, so I'll make sure to drop that in Slack when those dates are available. And then, yeah, the podcast from today, but nothing on the schedule other than that, for now.
A: And if, Dan, you get a chance, or anybody else on the call, because I don't have it right in front of me: if you can throw a link to that Day Two Cloud podcast recording in here, that would be awesome. It's at the top of my list of things to listen to, for my Crossplane media consumption. All right, a quick update on the incubation proposal: we are on the final end-user interview by the Technical Oversight Committee.
A: I don't know, maybe like six or seven of them, I think, and there's one more that is planned to be done later on this evening. So that will get finished, and I believe that's all. Once that end-user interview is completed, then our two TOC sponsors can bring it to the rest of the Technical Oversight Committee, and we can go ahead and move on to the next phase.
A: We've been stuck on this phase for a little while, as we've been organizing schedules and finding the right folks, the right adopters, that are willing to be interviewed by the TOC. So thank you to everyone, on this call and not on the call, that has agreed to be interviewed by the TOC; it's been super, super helpful in moving us forward there.
A: So thank you so much for that, and I'm definitely hoping for a quick vote once we're into that phase. But, things...
D: Can I throw out a quick related topic, real quick? This is Nick. I hadn't thought about this very much, it just occurred to me right now, but part of the incubation vote is getting the Crossplane conformance program up and running, which allows distributions of Crossplane, and Crossplane providers, to sort of be certified as conformant; as in, they pass, basically, some integration tests to say that they conform with Crossplane best practices.
D: We've got a couple of those open for the GCP provider and the Helm provider, for example, as well as UXP, Upbound's distribution of Crossplane. We'd really like to get one open for the AWS provider, but part of running the conformance tests involves making sure that every possible resource in the provider works and can become ready, which is quite labor-intensive. This is probably not the most exciting thing to help out with, but if anyone is interested in helping...
D: ...to make sure that they can become ready for us, that would be super valuable. I understand it's very boring, but if anyone is interested in helping with that, feel free to DM me on Crossplane Slack (I'm negz: n, e, g, zed), and I would very much appreciate it.
A: Nick, a quick question on that: is it possible to batch or parallelize the efforts of the conformance testing? Like, you know, one person runs the conformance tests with the resources in the database group, and then another person runs the conformance tests for EC2, and they can kind of be, you know, map-reduced back together into a cohesive result?
D: That's not how they're designed to run, no. So maybe... I guess it depends. I wrote the review process, so maybe I can update it to say that that's something that's allowed to happen. But my thinking is that if we can just validate that all of the resources work, and maybe get example YAMLs for all the resources that are not working, then once we know that's the case, it should be a little bit easier for one person to take all of those...
D: ...oh, I see, other tests. But if we say, you know, "I'm going to go through the tens of resources in provider-aws, and potentially hit every bug that's there, and try and get all of them working and figure them all out," it can take a long time, so splitting that up would definitely help there.
C: Yeah, yeah. For what it's worth, the YAMLs in the examples folder are supposed to work; we validated them before merging. But yeah, you never know; sometimes we do see cases where an example YAML is just not working for some reason. And also, the last time...
C: ...I did that, with the example YAMLs, I tried to have them refer to each other in cases where there has to be a source; for example, a subnet has to have a VPC. So it's likely that you might be able to get away with just, like, kubectl apply and checking readiness for most of the resources, I would say, but some just take time, like EKS clusters and stuff.
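The "kubectl apply and check readiness" approach hinges on one predicate: whether a managed resource reports the Ready condition as True in its status. A sketch of that check, against a resource parsed from YAML or JSON (the condition shape is the standard one on Crossplane managed resources):

```python
def is_ready(resource: dict) -> bool:
    """True when the resource has a Ready=True condition in its status."""
    conditions = resource.get("status", {}).get("conditions", [])
    return any(
        c.get("type") == "Ready" and c.get("status") == "True"
        for c in conditions
    )
```

A conformance helper could apply each example YAML and poll this predicate with a generous timeout, since resources like EKS clusters take many minutes to become ready.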
D: Nice, yeah. I haven't even tried to look into AWS yet. I know that Hasan started looking at provider-azure and actually found one or two places where that provider is not conformant, which is great; it means we can go fix that now, sort of thing. So it could be that provider-aws goes smoothly, because it definitely gets a lot more, well, a bit more attention than most other providers, being very popular.
A: Well, Nick, yeah, thanks for all the energy and focus you've put into defining the conformance program and getting that in place. So, as Nick was mentioning, this certification or conformance program for both distributions and providers is live, and, you know, a couple of PRs are open for that, with conformance test results for GCP, Helm, and UXP, and we want to continue building that out.
A: I know that the IBM Cloud folks are working on one as well, and then Equinix Metal would be another one, but, you know, the more the merrier over there for getting providers certified and conformant; building out that community there as well is great.
A: So I moved this agenda item up a little bit in the doc, and then we can go ahead and move down to the LFX mentorship section here. So, for the Linux Foundation's mentorship program, this is the final week of Rahul's internship here with us. It's been an awesome time having Rahul working on the project, and we've all learned a lot; it's been a really good experience there. The final evaluation will be done this week. And, Rahul...
E: Yeah, sure. So the second segment of the mentorship, after the first evaluation, was mainly based on testing the composition engine. We have got that provider ready now, and the second step was to add composition engine test cases. So I added some basic test cases, which you can see in this PR; there are some comments by Dan, and I will just update it based on those. But yes, the tests mainly cover the two cases which I mentioned, and yeah.
A: Awesome, yeah, it's been great having your contributions, Rahul; you've definitely been a meaningful part of the community as you've been working on the project this semester. Cool. Dan, thanks for doing a review here as well, too. Sorry, Nick, were you saying something?
D: I was just going to say I'm excited to hear that the sort-of-official provider-nop is ready. I was using an ancient, less-featureful one that I wrote in the conformance tests, so I'd like to change those over to use Rahul's. Awesome.
E: I opened a PR for that, as you might have seen.
A: All the better. All right, Dan, do you want to give us an update on the provider-gcp changes to use the v1 APIs, most notably for GKE?
B: Yeah, absolutely. I apologize, I can't share my screen since I'm on mobile, but essentially the GCP provider has support for, I don't know, probably 20 to 30 resources right now, I believe (that might be high). Essentially all of them were relying on a GCP API SDK package, and within that package, and within Google's APIs, there are different levels of maturity, and they do not...
B: They use the same vernacular as Kubernetes APIs, and in turn Crossplane APIs, in terms of v1alpha1, v1beta1, etc., up to v1, but they don't maintain the same backwards compatibility at the v1beta1 level. So GKE cluster and node pool were relying on v1beta1 APIs, a decision that was made at the time because there were extra features; because that was a less mature API, it had additional things there.
B: However, when some of those additional features became deprecated or were removed by GCP, we were unable to upgrade to new versions of the SDK, which in turn meant that other resources that were dependent on that package were not able to receive new fields as they were added to the API types. So this was a pretty big blocker, and it was blocking the implementation of a number of resources and a number of use cases.
B: So it was kind of just a necessity to move the GKE cluster and node pool to their v1 APIs; that's the GCP v1 APIs, not Crossplane APIs. But because of that, as I mentioned, we did drop some fields, and there were some changes on the GKE cluster and the node pool, and in this PR I have a dropdown where you can see the exact changes.
B: I will also have those in the release notes as well, given that some folks rely on beta features for GCP.
B: We have also started provider-gcp-beta, which will maintain the older functionality that we're removing in this PR, and will also, you know, ostensibly in the future as folks need it, support v1beta1 APIs for other resource types as well. And we'll have the policy in that package that those API types will never reach v1, because if GCP isn't making a commitment to compatibility, then we also cannot do so; those will stay at the alpha level, most likely.
B: So the idea is that for the next release of provider-gcp, we will simultaneously have the first release of provider-gcp-beta, so that folks who currently consume the features that are being removed have the ability to keep doing so, but we'll also get some new features in provider-gcp. As far as kind of the protocol for going through a large breaking change like this: this is a break to our API guarantee for beta resources, which we certainly don't want to do.
B: I will note that each of the providers right now is still, you know, pre-1.0, so there's some versioning leeway there, but this is something where we're trying to provide a great experience for folks, and it's going to block us in the future, so better to make this change earlier rather than later. And to notify folks, we've had an RFC open for over two months now leading up to this, and we've also announced it in Slack multiple times.
B: That being said, if you have concerns, please reach out. We'll also be providing manual upgrade steps for consuming this new version. So yeah, I think that's all of it.
C: I was going to ask: you were talking about introducing a newly named Cluster resource in provider-gcp and dropping GKECluster, so that the CRs won't be deleted and you will have time to move to the Cluster resource while the provider is running. Are we still doing that?
B: Yep, and that PR has been updated to move GKECluster to Cluster, so I'm glad you brought that up, Muvaffak, because this is something that's certainly relevant in terms of, well, first, emphasizing some of the Crossplane features, and also trying to make this the least dangerous upgrade process for folks possible.
B: So if you have the current latest version of provider-gcp installed, and you have GKECluster instances that exist when you upgrade: we've actually renamed the type, as well as changed the API version, to just Cluster, which more closely matches the GCP API. What that means is that the new revision that gets created for your provider-gcp package is not going to touch the old GKECluster API type; it's going to install a new Cluster type, and you can start to consume that.
B: But if you have existing GKECluster resources, those will basically sit there untouched; they just won't be reconciled, because the older revision's controllers will be stopped. So that makes it such that if you, for some reason, are not on this call, or haven't seen the communication, or didn't see the release notes before upgrading...
B: ...you should have the opportunity to see that you've gotten into an unhealthy state, and you can roll back to consuming the old functionality without breaking anything, which is definitely a nice feature that we get kind of out of the box with the package manager.
A: And, Dan, are the upgrade, or migration, I suppose, steps already documented, or will we merge this PR and then have those docs written up as well?
B: It's my plan to merge this PR and potentially recruit some folks who are interested in this functionality to also kind of, like, test it out, since it is a large diff. We've obviously done the requisite testing that we typically do on any merges, but since this is a large breaking change, we'd like to have some bake time on the master branch with it. So the plan is to merge it and then add some of this documentation.
A: That sounds great, yeah. I definitely appreciate the attention we've been paying to (a) circulating this idea for feedback, (b) announcing it and kind of getting some awareness about it, and then (c), possibly most importantly, the migration steps and how to transition folks to the new versions that have this. So I definitely appreciate that effort, man.
A: Okay, so yeah, we already talked about the tags-for-AWS issue here too, that Sean brought up, so then we can move on, I believe, to the PRs discussion here as well. Or actually, before we do that: we're about to start getting into some of the more technical conversations that are optional for attendees here. So before we kind of move into more of that stuff, does anybody have any other community high-level agenda items that they would like to bring up?
A: Okay, all right. Well, let's move on here, then, to this PR here, Dan, that you wanted to bring up.
B: Yep. So this one, I think, is relatively non-controversial, in my opinion. You'll see I have a pretty long write-up there in the commit message, because this touches a few other areas of functionality that I think are important to highlight here. But essentially, what this change is: if you roll forward to a new revision, so if you upgrade a package, this is changing inactive revisions to not recreate resources if they don't exist.
B: This is primarily based around a situation where someone had installed a provider-aws version and had used (or even if you didn't actually use it) a v1alpha1, or any version, CRD, or an instance of a CRD, and it was dropped in the next version. You'd find yourself in this place where you could potentially... well.
B: First, you'd have trouble upgrading, because your new revision, which dropped support for that alpha version and potentially added beta support for that resource, would be trying to update a CRD, and basically the version exchange there doesn't work very well. So typically, what folks have done in that situation is just delete the alpha CRD, and then the beta one just gets recreated in its place. And the way that active versus inactive revisions manage resources is: active...
B
revisions become controllers, so they set a controller reference on their resources, and inactive revisions become owners. That being said, both of them will recreate a resource if it doesn't exist. So that could get you into a situation where you had dropped support for an alpha CRD type, and so you deleted it to allow the upgrade to proceed successfully.
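The controller-versus-owner distinction above is plain Kubernetes ownerReferences metadata. As a rough sketch of what that can look like on a packaged CRD (all names and UIDs here are made up for illustration, not taken from a real cluster):

```yaml
# Hypothetical CRD metadata: the active revision holds the controller
# reference; the inactive revision is a plain owner.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: buckets.s3.example.org      # illustrative name
  ownerReferences:
  - apiVersion: pkg.crossplane.io/v1
    kind: ProviderRevision
    name: provider-aws-1234567      # active revision: controller
    controller: true
    blockOwnerDeletion: true
    uid: "aaaaaaaa-0000-0000-0000-000000000000"
  - apiVersion: pkg.crossplane.io/v1
    kind: ProviderRevision
    name: provider-aws-7654321      # inactive revision: owner only
    controller: false
    uid: "bbbbbbbb-0000-0000-0000-000000000000"
```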
B
But the active and inactive revisions were battling over recreating that resource, basically, and every time the inactive revision won, you would basically be unable to update. And it's not super important that inactive revisions have their resource types existing.
B
If you know they're inactive, for instance; and the only way you would actually find yourself in this position is if you manually deleted that resource type anyway. So anyway, this is just updating to say: if an inactive revision has a missing resource type, don't try to recreate it; just proceed with making sure your owner references are present on all the resources that you install in your package that are present in the cluster. And like I said, there's a write-up there,
B
if you want to look more at why this is important, or look at the original issue. The one other note I wanted to make about this is that it's related to discussions we've had around not installing alpha resources by default, and that could be implemented either at the provider package level or at the package manager level, where we're just saying: oh, this has, you know, an
B
alpha version, so we're not going to install it unless you provide a specific flag on the package install. That being said, I view this as somewhat isolated, because in that case someone could still find themselves in that situation if they opted in to alpha resources.
B
So anyway, this has been open for a bit, but I think it's pretty non-controversial. If folks want to give that a peek: I definitely want this included in 1.3.
A
Sounds reasonable, Dan, and I appreciate the thoroughness of thinking it through there, like the complexity between active and inactive revisions and all that sort of stuff.
A
Yes, so the PR is linked here in the agenda doc if folks want to take a look at that and add comments or feedback.
C
Yeah, yeah, I can actually take the screen.
A
You can take the screen; let me stop sharing now. We have 10 minutes left in this call, so let's do our best to try to fit it all in, but we may need to break out and have another discussion if we go over.
D
I just want to provide a quick bit of background here. Muvaf, Dan, and I met to try and look at some of these PRs that are altering the composition API, or adding features to the composition API, sort of holistically, and see if there were any interactions between them or anything like that. We did realize afterwards that, just because we're all co-workers, we had gotten together to chat about this, and we really wished that we had had the folks involved.
D
So we're totally open to, and would like to, in fact, have follow-ups with the folks that are working on these, which I think is Ben, Pratush, and Stephen, to make sure that we hear everyone's take. With that, I'll hand it over to Muvaf.
C
Yeah, thanks, Nic. So yeah, this was the original document that we talked over. So this is, you know, an example composition with all the changes that we have as open PRs.
C
This includes the from-many-composite-field-paths one, with, you know, multiple sources and constructing a string; and there is this from-constant-value one; and also the one that generates a random string with the character set given here. So I think the biggest change is this first one.
C
So there are a couple of things that we might be able to do, a couple of things that might enable us to develop more features on top of this in the future without breaking the API, essentially. So I think one of the things that I wanted to change was having these as strings directly, which kind of limits our ability to add,
C
you know, more details about them. For example, one thing was: when you have two strings, you can use %s and %s, and if you want to use metadata.name twice, you have to include it twice here in the list as well. And also, the name is kind of... well, this is kind of nitpicky, but, you know, maybe we could do,
C
you know, a better job, because with from-many-composite-field-paths there's also a "to" direction, and that one is not "many": you don't patch to many targets. So the reverse direction's name is not really that clear. So I wanted to show the draft that we talked about and get your opinions and ideas about it.
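The string-array draft being described might look roughly like this (a sketch reconstructed from the discussion, not the PR's exact field names); note how reusing metadata.name for a second %s means listing it twice:

```yaml
# Hypothetical draft shape: sources are bare field-path strings, so
# reusing metadata.name in the format string means repeating it.
- type: FromManyCompositeFieldPaths   # name under discussion
  fromFieldPaths:
  - metadata.name
  - metadata.name                     # repeated just to fill the second %s
  - spec.parameters.region
  fmt: "%s-%s-%s"
  toFieldPath: spec.forProvider.tags.name
```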
C
So maybe we could do something like: instead of a string array, we could say these are structs. For now there is only a fieldPath, which is essentially the same, but we could actually, you know, have more fields here; for example, a name field that would allow us to reuse a value without adding it twice when we need it, and also a strategy, to say, like, hey:
C
this is a string-type combination. Essentially there's also a group of suggestions for the name, like CombineFromComposite, and the reverse direction would be combine-to-composite field paths. So "combine" is actually the action that we take here, and only this section is different from other patches.
C
ToFieldPath is always a single string. So that's kind of what we thought about drafting up, and we want to see what folks, especially Ben, who worked on the original PR, think about it.
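Pulling those suggestions together, the struct-based draft might look something like the following sketch (field names such as `variables` and the `CombineFromComposite` type were still proposals at this point in the discussion):

```yaml
# Hypothetical struct-based form: each source is a struct rather than a
# bare string, leaving room for extra fields later (a reusable name,
# etc.), and the combine strategy is explicit.
- type: CombineFromComposite
  combine:
    variables:
    - fromFieldPath: metadata.name
    - fromFieldPath: spec.parameters.region
    strategy: string
    string:
      fmt: "%s-%s"
  toFieldPath: spec.forProvider.tags.name
```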
E
Yeah. My reservations about the original PR were always around the API design and getting that right, because after writing it I did realize that it adds quite a bit of complexity and isn't necessarily the easiest thing to understand straight away. So actually, I like your suggested changes. I don't know exactly how that would change the implementation right off the top of my head, but I like that as a plan.
E
The only question that I have right now is: how do you see the transforms applying alongside the combine-from patch? Is a transform something that happens after the combine-from is applied?
C
Yeah, so the output of this combine-from is actually a single string, which is, you know, compliant with the current situation: right now, when you do a FromFieldPath, you get a single string and you pass it through transforms.
C
So in this case, assume this is the FromFieldPath step: it does some operations, gathers the values, but as a result there is always one single string, and then the transforms don't need to change, because it's always one string that is passed through the transforms.
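So the contract stays: one patch produces one value, and only that value enters the transform pipeline. A hedged sketch of how that could compose (whether combine patches accept a `transforms` list at all is an assumption here, not something the PR necessarily settled):

```yaml
# Hypothetical: the combine first collapses its sources into one string,
# e.g. "mycluster-us-east-1", and the existing single-value transforms
# then apply to that one string, unchanged from today.
- type: CombineFromComposite
  combine:
    variables:
    - fromFieldPath: metadata.name
    - fromFieldPath: spec.parameters.region
    strategy: string
    string:
      fmt: "%s-%s"
  transforms:
  - type: string
    string:
      fmt: "prefix-%s"       # runs after the combine, on one value
  toFieldPath: spec.forProvider.name
```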
D
Yeah, one thing we were thinking about was: it's a nice idea to try and reuse the transforms, but (you probably ran into this, Ben) not all transforms are always appropriate for combining things. So what we were thinking was, it might make sense for us to have one specific sort of field that's just "here's how you take all these things and combine them into one value", effectively, whether it be a string or an int or whatever, and then feed that
D
into the transform pipeline, if you need to do something with it. We were a little on the fence about whether there are any use cases where you might need to do multiple steps to combine values, but we couldn't immediately convince ourselves that that was the case.
E
Yeah... go ahead, then. I was just going to say, it was always in the back of my mind that there might be a situation where you wanted to, like, map a value and then format it into a string, but the reality is that I've used Crossplane relatively heavily over the last couple of months and I haven't run into a situation like that myself yet. So yeah, this makes a lot of sense to me. I definitely found that with the transforms it was
E
kind of tricky in some situations to work out what was being done by the patch and what was being done by the transform pipeline on the patch, sometimes. So if we're treating those two as completely separate things, where transforms only ever work on one value (take one value in, produce one value out) and any sort of combination of values is done by a dedicated patch type for that, I think that's a solid way of approaching it.
D
Yes. As we've said, one of the things that we kept coming back to here was: do we want to just sort of go hog wild and allow something like Go templates? That opens up a lot of potential power, like, you know, loops and conditions and things like that, all within the template. And I am always a big fan of keeping things as simple as possible until we know we need to do more.
D
I think realistically, as far as I'm aware (and I'd love to hear folks chime in on the issue if they haven't already; I haven't read the issue in a little while, so I could have just forgotten), most people basically want this to work with IAM, where they have to, I think, make an
D
IAM policy document, if I recall correctly. I'm not aware of too many different use cases, but what we were thinking was that, if possible, we'd prefer to go with a simple, limited-functionality string-format thing like we have today, but taking multiple sources, which is what we've been designing for, and to design it in such a way that, if that turns out to be too limiting, we could extend it in place to add more powerful templating support without having to, you know, add yet another kind of patch.
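For the IAM-style case mentioned, a plain multi-source format string already goes a long way before templating is needed. A hypothetical sketch using the draft shape from the discussion (all field paths and values invented for illustration):

```yaml
# Hypothetical: combine an account ID and role name into an ARN,
# with no loops or conditionals required.
- type: CombineFromComposite
  combine:
    variables:
    - fromFieldPath: spec.parameters.accountID
    - fromFieldPath: spec.parameters.roleName
    strategy: string
    string:
      fmt: "arn:aws:iam::%s:role/%s"
  toFieldPath: spec.forProvider.roleArn
```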
C
Yeah, one additional thing I was thinking about in recent days: whether we need this strategy to be "string" or just "fmt", because it's always a string, even if it's a template. So maybe we could say the strategy is "fmt", and then under "fmt" give the string; and if we end up with templates, we could have "template" here and then give it a template.
D
I don't think it necessarily has to be the type. Maybe it is... I could imagine, potentially... yeah, I don't know; actually, maybe that is what it is, but I need to give it more thought. But I think, for example, if you had several values that were strings, or even several values that were a mix of strings and ints or something, you could pass them in as options to, like, a string-format thing.
D
So we could change that "strategy" to "format" like you said, and then have strategy "template", strategy "math" or whatever as those different options, rather than strategy "string". But I guess I feel like it is what the intent of the field is, basically: how do you combine all these variables? So you could call that the type of the combination, or you could call it the strategy for the combination, but I don't think it necessarily means the type of the output value.
C
Cool, okay. So this seems to be the final API that we are all on the same page about for this specific change.
A
And a quick question from me; I think Ben brought it up, but maybe I didn't get it. So out of the combine-from, the output of that is one string, and then that's passed along to the transform pipeline there; and then line 43 there has placeholders for two strings. Is that a typo, or do I just not understand?
D
A
Got it, okay. That makes more sense to me now; I thought I was starting to misunderstand something fundamental here. Awesome, so it's just left over from when we were editing it. Got it, got it. So, quick note: we are past the one hour allotted for the meeting here. This optional section can continue to keep rolling on.
D
So, real quick: the other two things there were the random-value one and the from-constant-value one. We didn't get as deep into talking about those as we would have liked to, but basically I'd like to set up a separate chat, if folks have time, with at least Pratush and Stephen, to discuss them. These ones are kind of interesting for a couple of reasons.
D
One is that, because we didn't end up going with a distinct "direction" argument to say which way a patch is going, most patches end up needing to go both ways, basically. But in these cases, where it's a constant value or a random value, there's not really much point in patching that back to the composite resource.
D
If you are wanting to patch a constant value back to the composite resource, it's probably not a value that's associated with any one particular resource; I would imagine it's just a constant value, constant over the scope of the whole composition. So some of the stuff that we kind of flirted with and floated around was: what if you just had, like, an array of variables, or something like that, in the composition that you could then reference in patches, or that you could somehow patch back to the composite?
C
Yeah, and additional to that, I think the random-string type may actually not be compatible with how composition patching works, because it always assumes that, you know, you set the value on the managed resource, but you don't read it back in the composite reconciler. So, you know, maybe we could possibly have an exception there, like: hey, if there's already a generated random value on the resource, do not generate a new one. But that is not immediately compatible with what we have.
C
So we were also questioning whether, you know, we still need this functionality, because it would be really hard to implement in the composite reconciler.
C
Okay, so it seems like we have been running over time for five minutes. I will set up a different meeting for these two; I was hoping we could, you know, get through them, but it seems like that's not the case. So yeah, hit me up on Crossplane Slack if you want to be invited to that meeting.
D
So for the 181 that we just discussed, Muvaf, do you want to maybe link or put a completed version of that on Ben's PR? And then, Ben, do you still feel good about taking the action to get that done? I want to balance it: I'm happy for one of us to go do it, because you've been waiting for so long and I don't want to, you know, overload you, but I also want you to have the satisfaction of merging that PR.
E
That's a good point. I'll have a quick look at it, and if I get stuck I'll shout out next week sometime, but yeah, I'm happy to continue.
A
Awesome, thanks for driving that, and thanks, Ben, for all the, you know, contributions and further finalization of the really compelling functionality of being able to combine multiple sources into a single patch. So that's fantastic!
A
Let me just bring up the agenda doc real quick on my screen again; I think that's this one here. And so, I think a number of us have a hard stop in seven minutes or so, but, Dan, is there something... did you want to bring this up as well here, too, just to close this off for the meeting today?
B
My kind of thought process is that we're just not going to make any changes here, but if folks feel passionately... this is kind of a question of Crossplane's responsibilities in different situations, from the package manager perspective.
A
Okay, that makes sense. Yeah, if folks want to comment on that or continue the discussion, you can certainly pick that up.
A
Yeah, and then, more like: when you're getting folks together for some of the other API changes here, or design elements, if you could post that on Slack as well... you might have said that already, but if you want to post that on Slack to get the word out about it, or to encourage or invite other people to join, then that would be great.
A
All right; lots of good discussions, a pretty packed community meeting. So thanks, everybody, for joining in and discussing everything. You know, we'll keep on working towards the end of the month here to do the 1.3 release; we'll see each other on Slack and on GitHub to keep working on these issues and designs and keep moving the project forward. So thanks, everybody, for joining today. It's good to see everybody. Thanks!