From YouTube: 2023-06-01 Crossplane Community Meeting
A: All right, the recording has started, and this is the June 1st, 2023 Crossplane community meeting. I will drop a link in the Zoom chat to the agenda document right now, so if you're not already in there, you now have a direct link to it. Everyone is more than welcome to add suggestions of topics they want to discuss in any of the relevant sections, so feel free to jump into the doc and add things as you wish.
A: There have not been any specific patch releases for crossplane-runtime or core Crossplane since the last community meeting.
A: We are planning, though, or at least starting to talk about, doing a series of patch releases, probably early next week. I think there have been a number of fixes and small things accumulating in the release branches, the 1.12 release branch for sure and maybe 1.11 as well, so we'd like to do a regular maintenance patch release to get all of those fixes out and available to the community.
A: I don't think I have a tracking list set up for that yet, but the intent is, let's just say, to do a series of patch releases early next week, probably. If folks have anything they specifically mean to include in those patch releases, do let me know; I think most everything has been merged.
A: The long pole is one of the PRs I'm working on for our very confusing deprecation message on the ControllerConfig type, clearing that up for the community. I think that's maybe the last one we're waiting to get in and done. But yeah, let me know if anybody wants to include anything else in those patches.
A: Let's take a look at the high-level roadmap and go over everything that's in progress there. I know there's a whole bunch of stuff going on in 1.13, so let's see what sort of things we want to highlight here. Let's see, so Hassan, is Hassan on the call, by the way? There he is. Hassan, do you want to give us a quick update? Because I haven't been able to keep up with it myself, and I know it's pretty impactful.
B: Yeah, sure. First of all, I have some concerns and thoughts about merging these two APIs, or these two use cases, into a single API. I left a comment in that regard, so we can continue the discussion on that specific design PR, and we can discuss whether it makes sense to combine deletion ordering and cross-resource referencing, or patching, whatever. That's one thing worth mentioning.
B: The other thing is, on the proposal issue for deletion ordering, I left a couple of comments, and we are having a discussion with Bob, who is providing quite valuable feedback. Basically, I attempted a POC, but I forgot about the XR use case, especially with the deletion policy Foreground, which kind of invalidates, or makes it hard to solve, the problem with finalizer-based approaches.
B: So currently I am in the phase of evaluating a webhook-based solution. I'm doing some POCs locally, but it looks like, at least currently, the most feasible approach is implementing a webhook-based solution, so that we can prevent the delete calls to the XRs. Yeah, that's basically where I'm at right now.
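For listeners unfamiliar with the mechanism Hassan is describing: a validating admission webhook receives an AdmissionReview from the API server and can deny a DELETE before any finalizer logic ever runs. A minimal sketch of just that decision logic follows; this is an illustration, not Crossplane's actual implementation, and the `blocked_xrs` lookup is a hypothetical stand-in for whatever ordering logic decides an XR's dependents still exist.

```python
import json

def review_delete(admission_review: dict, blocked_xrs: set) -> dict:
    """Build an AdmissionReview response that denies DELETE requests for
    composite resources (XRs) that should not be deleted yet.

    `blocked_xrs` stands in for whatever deletion-ordering logic decides
    that an XR's dependents are still around (purely hypothetical here).
    """
    req = admission_review["request"]
    allowed = not (req["operation"] == "DELETE" and req["name"] in blocked_xrs)
    response = {"uid": req["uid"], "allowed": allowed}
    if not allowed:
        response["status"] = {
            "message": f"XR {req['name']} must wait for its dependents"
        }
    return {
        "apiVersion": "admission.k8s.io/v1",
        "kind": "AdmissionReview",
        "response": response,
    }

# The API server asks whether it may delete the XR named "my-db".
review = {"request": {"uid": "abc-123", "operation": "DELETE", "name": "my-db"}}
print(json.dumps(review_delete(review, blocked_xrs={"my-db"})))
```

A real webhook would serve this over HTTPS and base its decision on the live state of composed resources; the point is only that, unlike a finalizer, this path can reject the delete call outright, which matters for the Foreground deletion case mentioned above.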
A: Thanks for that update, because I know this is a fairly large-scale area to be investing in, and we know it has direct impact on attempting to solve at least a couple of high-level things, and then potential impact as well that is yet to be determined. And you're potentially saying, too, that it might not be the right way to go about unifying a couple of other concepts within the Crossplane model.
A: Hassan, did you need any specific unblocking or any specific feedback in the short term on some of these questions? Or are you kind of just proceeding with the POC and trying to get a more formed opinion in your mind while you're working on that?
B: Yeah, I cannot say that I am blocked at any point right now. I'm mostly focusing right now on the implementation part of the deletion ordering problem, and which API should realize that is the next iteration of, you know, my journey. So currently I cannot say I'm blocked, but in the midterm I will need more feedback, and I have plans to discuss this in person with Nick, because he was the one who proposed merging these two APIs together.
A: Yeah, that sounds great then, Hassan, and thanks for your progress on this so far and for sharing the update as well. We're going to dive into integration testing thoroughly later on in the meeting, so we'll go ahead and skip that stuff. Next is the area of observe-only resources.
A: That was included in the 1.12 release, and we want to continue maturing it and investing in the API and stability and all that sort of stuff for it to make it to beta as well. We kind of have a few high-level major features in Crossplane right now that are all fairly new and early in their lifecycle, and they're all quite impactful and highly demanded.
A: Those are all things that we're very explicitly focusing on investing in in the 1.13 timeframe, such that they can get closer to being ready for production and stable APIs. I think there are no specific updates besides that; everything we're doing is captured in these.
C: We're mostly completing phase two, so validating patches and field paths inside patches. Phase three will probably come later; we still have to discuss that and take a decision on it. But yeah, there is only one item left, which is an open PR waiting for feedback.
A: Excellent usage of task lists here, by the way; the Crossplane GitHub org is in the beta program for using task lists. So this is really nice to see, with the phases explicitly called out and separated, each with their own individual task lists that we work through to completion as well. Very cool to see that, Felipe, in addition to almost finishing phase two.
A: All right, that's rad. Okay, so that's the 1.13 high-level stuff there. I know we might want to take a quick look into some of the more granular stuff as well, but I have a feeling the high-level roadmap items cover a lot of these individual items that are in progress now. Oh yeah, one thing I did want to check in on. Hassan, this is for you again, my friend, or Dan.
A: Also, if you're staying here today too, but I'm not sure; I can't see the list right now. Could you give us a quick update on progress for the improved revision resource ownership transition stuff? That has a big impact on the reliability of upgrades, you know, transitioning from one version of a package to another. Is there a new update you can share with the community on that too, Hassan?
B: Yeah, I can't say that there is much progress on that. Basically the last status was that there was a draft PR that I had opened, but it looks like we are not on the same page with Dan yet, so I need to push that forward, probably by finding some time to discuss this further with Dan. So currently it's kind of waiting for me to get back to it again, but right now there is a draft PR with a possible solution; we just need alignment on that.
A: All right, let's see, and then, wait, so Dan was not on the call, is that correct? I wanted to see if he had anything to share about the provider runtime interface efforts.
A: Awesome, thanks for confirming, Sean. All right, sweet. So I think that's the high-level roadmap areas, and then there are some more specific, granular issues being worked on as well. Let's see, the release date is going to be around the end of July; July 25th is what we have there.
A: So we do still have a fairly significant amount of time left; we're only one third of the way through the release milestone period right now. So yeah, do folks have other questions, concerns, or things that are important to them in the 1.13 milestone that we want to bring up today to chat about?
A: All right then, we will keep plugging on. There is a ton of stuff in progress and a lot of highly impactful features that we're continuing to invest in, so I foresee 1.13 continuing to carry the momentum from some of the other recent releases, 1.11 and 1.12 specifically, where there will be a lot of activity and a lot of progress. So I'm happy to see all of that; there's a lot of stuff going on.
A: Let's see, what was the one that was really driving that one? Oh, I think it was being able to derive readiness conditions from Kubernetes objects from inside of provider-kubernetes. So the 0.9 release was just last week, and that is available for folks to pick up. I know a number of folks were waiting on that fix, and it is available now. But yeah.
A: So let's jump into an update then from John. I'll go ahead and ask you for this, John: what we're doing for CRD scaling, and what are the next steps?
D: Yeah, sure. So our initial testing, and the feedback we got from some community members as well as our customers, showed that the new, smaller providers are being received well.
D: People are seeing positive impact on their clusters from a performance point of view, and they're not reaching the kind of scaling tipping point that they're used to. At this stage everything is going well; we'll stick to our planned release date of the 13th of June, and we're just in the process of creating some tooling to help people automate the process of converting from the monolithic provider to the new provider family.
D: It will take care of converting all of the configurations for you, and there's some documentation and stuff, so we're busy with the final polishing stages before we release that.
A: John, have there been any showstopper bugs or anything that have come out of the community testing? Or has that all been, you know, feedback's good, things are looking good, nothing major that's come up to fix?
D: No, nothing major has come up. Mostly some clarification we needed to provide, making instructions simpler, and we've changed some of our approaches to optimize how we go about migrating from the monolithic packages to the new ones. So it was mostly around polishing edges and understanding where there might be some confusion and struggling points.
A: Awesome, that's really good to hear, being able to streamline that and make it more likely to succeed in adopting these new providers if you already have live deployments. Amazing. All right, so the 13th of June; it looks like that would be before the next community meeting. So then, if all goes as planned, at the next community meeting, which, plus 14 days, would be June 15th, we will be able to celebrate a little bit.
A
These
providers
are
out
and
people
are
starting
to
feel
relief
from
getting
their
clusters
clusters
steamrolled
by
a
massive,
a
number
of
crds.
A: Sweet. All right, okay, sweet! So let's keep on rolling here, and it looks like, Predrag, we will have plenty of time still for the end-to-end testing stuff. So, just talking about some recent pieces of content and updates and stuff from around the community: one of the ones that I think is particularly high quality is that Nick did a full report, like a 20-something-page report, on what Crossplane is, for O'Reilly, and that is available. Nick?
A: Do you want to just give a quick overview of that and some of your thinking on the content there?
E: Sure. It is a report about Crossplane, it's called... sorry, Jared, I was not prepared for this and just locked up in my head.
E: It's not a full O'Reilly book. Full O'Reilly books tend to be really focused on, like, a guide for how to use the technology. It's not that; it's more background. It walks through roughly what we think a good cloud control plane is, how Crossplane can be set up to be a good cloud control plane, and more of an exploration of use cases for it. I would recommend folks take a look, regardless of their level of experience with Crossplane.
A: Yep. So I've put you on the spot, Nick, but I think that this is such a high-quality piece of content that really goes into a lot of detail and is written very, very well. Nick worked for a number of months on it with the O'Reilly team and a team of editors and all that sort of stuff. So it's ridiculously high quality and it's really quite compelling. I really wanted to share that, because I think people will get a lot of value out of it.
A
There
is
a
number
of
other
cool
blog
posts
and
and
YouTube
videos
and
stuff
that
are
pretty
interesting,
so
you'll
click
feel
free
to
click
through
all
these
links
and
check
out
all
that
cool
stuff
that
everybody
everywhere
else
around
the
community
is,
is
publishing
all
right,
so
yeah
we're
about
20
minutes
in
so
we
definitely
have
plenty
of
time
to
jump
into
intend
testing
stuff.
I've
asked
the
project
and
I.
A: I think Lovro is having a dinner party at his house, so I think it's going to be mostly driven by Predrag, but the idea is to give us an introduction to the thinking and the proposal for the end-to-end integration testing, and then walk us through a little bit of the experience that's proposed there as well. I think that the main goal here for me is to share this thinking.
A
You
know
the
rest
of
the
community
here
on
the
meeting
and
watching
the
recording
later
that
haven't
necessarily
been
following
very
closely
on
the
pr
or
maybe
even
looked
at
the
pr
yet
but
giving
an
overall
sense
of
hey.
This
is
what
the
proposal
is,
so
more
people
can
understand
it,
and
then
we
will
have
time
to
discuss
some
of
the
nuances
of
it
as
well,
but
I
really
want
to
kind
of
share
what
this
thinking
is
first
and
do
a
little
demo
and
stuff.
H: Great, yes, okay. So, just, oh well, as Jared mentioned, I would also like to give a couple more details. So last week we submitted the proposal which improves the end-to-end testing, or, like, brings end-to-end testing back to the project. As pointed out in one of the issues, which was kind of the trigger for this proposal, Crossplane as a project is lacking a number of end-to-end tests.
H
And
you
know
we
would
like
to
improve
that
situation
in
particular,
so
that
we
can
test
all
the
user-facing
functionalities
or
like
most
of
them,
on
each
full
request
and
to
kind
of
increase.
The
confidence
in
the
quality
of
the
software
across
brain
as
a
project
is
delivering.
H: So to set that up, the proposal started with analyzing a couple of frameworks. We believe that, contrary to unit tests, end-to-end tests typically require more complex setups, fixtures, and whatnot, so in order to make writing tests fun for developers, we need to approach this with different abstraction levels, to hide all those details and make them reusable across the many tests. So, what do we want to propose?
H
So
there
are
a
couple
of
goals
we
wanted
to
to
achieve
with
these
proposals,
and
we
want
to
to
select
framework
approach,
testing
approach
which
will
enable
us
to
do
declarative
testing
so
I
SEO,
Euro
or
whatever,
no
to
the
kubernetes.
What
declarative
means
so,
instead
of
like
really
through
imperative
logic
of
defining
the
the
necessary
steps
towards
when
it's
an
interactive
system,
we
would
like
to
describe
test
as
a
set
of
defined
or
like
wish
States.
H
If
the
system
is
responding
as
expected,
having
that,
we
would
also
like
to
really
deliver
tests
or
like
having
a
framework
which
will
enable
us
to
to
reach
really
good
readability
of
of
tests
itself,
as
as
any
any
kind
of
code
well
similar
to
production
code
tests
are
more
red
and
written,
and
it's
a
session
for
I
mean
for
all
the
parties
which
are
involved
in
the
project
to
have
like
a
good
understanding
how
the
system
behaves
and
what
tests
are
all
about.
H: So the idea is to test the system as a black box, basically from the user's point of view, using the tools, or the clients, which are available to the users. In that sense we can also validate that what we are advocating, or exposing in documentation, about how a user should interact with the system is actually possible. Because if we are using some internal APIs or internal knowledge to test the system, the user is not aware of that...
H
So
it's
kind
of
we
are
violating
that
kind
of
concept
for
end-to-end
testing.
So
we
would
like
to
really
interact
with
the
system
from
the
user
user
point
of
view.
So
using
some
some
tooling,
and
in
particular
whatever
I
mean
here,
is
like
standard
set
of
clients
for
the
kubernetes
that
keep
cutting
and
then
initially
the
the
number
of
tests
score
small,
but
over
the
time
and
I
I
would
hope.
Really
very
soon.
H
Then
those
tests
should
be
really
run
against
arbitrary
system
deployment.
So
typical
developers
will
do
that
against
the.
H
Cube
cluster,
on
on
the
pull
request
on
the
GitHub.
Maybe
we're
gonna
also
start
running
those
tests
against,
like
a
matrix
of
different
cluster
deployed
on
different
Cloud
providers
and.
H: Like production code, the test code should be reusable, and the joy of writing those tests should increase over time. Especially at the beginning, maybe we need to invest more time to write the contributing pieces, but the intention here is that, as time passes, we would like to invest less time into writing a new test. Basically, all the needed pieces should already be there, and we just combine them, you know, like Lego blocks.
H: Not really a goal of the proposal, but something that should definitely come after the proposal gets accepted, is to define the set of use cases to really write and test Crossplane against. There are two basic, or two main, sets of use cases we recognize: one is related to the package manager itself...
H
Another
one
is
related
to
the
composition
so,
and
this
is
definitely
it's
not
goal
of
this
project-
to
identify
all
these
use
cases,
but
definitely
something
which
needs
to
come
after
this.
And
the
second
thing
is
that
once
the
the
tests
are
there
and
the
the
info
framework
is
in
place,
we
will
definitely
need
to
to
set
up
CI
and
the
GitHub
actions
to
start
running
this
test.
But
again,
this
does
also
the
scope
of
the
or
the
approach
of
this
proposal.
H
So
some
of
you
probably
know
so.
We
have
visited
the
full
request,
but
in
essence,
what
we
are
proposing
here
after
investigating
and
developing
a
couple
of
Frameworks
I
would
like
to
propose
to
the
crosswind
community
to
adapt
the
a
methodology
which
is
called
specification
by
example
in
in
essence,
it's
it's
based
in
the
cucumber,
is
the
framework
and
and
the
gold
buy
next
for
it.
So
the
cucumber
is
a
very
well
known
framework
for
typical
for
executing
and
running
end
and
tests,
and
it's
used
across
the
industries.
H
I
have
participated
in
one
a
couple
of
project
which
at
least
here
like
we
were
successfully
tested,
serious
binding
operator
using
cucumbers
approach.
There
are
also
like
other
others
which
are
running
around
in
elastic
elastic,
is
running
end-to-end
tests
using
cucumber,
then
the
conformance
test
for
service
binder
specification,
obviously,
as
you
can
brand
and
in
general
godog
bindings,
are
very
popular
GitHub.
They
have
like
more
than
I
would
say
close
to
two
thousand
stars
and
it's
really
used
by
1.2k.
H
There's
the
statistic
on
the
GitHub,
which
says
that
it's
used
by
1.2k
GitHub
users
or
projects
and
I'm
not
sure
so
what
how
they're
Counting.
So
what
is
this
all
about?
I
will
I
will
not
really
go
into
details
about
what
the
the
cucumber
is
is
and
what
is
the
gerking
language
behind,
so
in
The
Proposal
we
we
have.
We
have
shared
the
link
to
the
the
further
documentation,
but
in
essence
for
us
for
the
for
the
Cross
Point
project.
H
We
can
see
the
this
kind
of
cucumber
feature
files
as
a
sort
of.
H
So,
yes,
you
can
use,
you
can
write
configuration
using
yaml,
syntax
or
Json
syntax
or
something
else,
but
cucumber
feature.
Files
allows
me
to
to
write
very
easy,
readable
tests
which
very
close
to
the
English
Pros
I'll
show
you
so
how
how
a
feature
file
looks.
H
Together
with
the
pr
two
feet,
which
basically
replacing
the
the
end-to-end
test,
which
are
part
of
the
repo
already,
so
what
is
what
is,
for
example,
this
one
right.
So
this
is
something
which
we
are
speaking
What.
We
would
like
to
test
composition
and
so
there's
a
feature
which
is
called
composition
and
in
that
net
feature
file.
H
You
have
very,
very,
very
kind
of
loose
syntax,
so
you
need
to
specify
a
couple
of
tags
like
a
feature,
and
then
you
have
like
a
free
phone
text
which
basically
describes
whatever
you
want
to
to
to
do
to
the
community.
So
what
is
the
future
about
about?
And
what
is
how
it
should
actually
work.
Then,
and
then
you
have
a
number
of
scenarios
inside
so
scenario
is
also
like
identified
by
a
tag
where
it's
basically
specify
the
scenario
name
and
the
scenario
name
is
actually
something
which
is
equal
to
the
test
name.
H
So
you
you
describe
in
the
free
from
text.
What's
the
scenario
about,
and
then
you
have
a
list
of
steps.
So
did
we
just
sequentially.
H
So
you
basically
list
the
steps
which
are
going
to
be
execute.
All
these
steps
and
and
the
vocabulary
which
is
present
here
is
defined
by
us,
so
it's
not
imposed
by
by
a
cucumber
or
a
girl.
So
these
kind
of
things
are
really
tailored
to
exactly
what
we
would
like
to
be
seen
in
those
feature.
Files
and.
H: Then we also want to have a step which will basically assert, and if that's not the case, install, a given provider in the cluster, because in order to run some compositions, you need to have some providers, some CRDs, inside the cluster. After that we would like to define some composite resource definitions; this step is about that. You can see that we are basically embedding into the step itself the full content of those resources in YAML form.
H
We
could
do
that
differently
because
again,
so
we
are
controlling
how
how
this
step
look
like.
We
could
simply
say
here
instead
of
doing
that,
we
could
refer
some
choose
some
file
name
or
some
naming
a
file
which
exists
in
the
lock
on
the
file
system
and
then
our
step
implementation
step
binding,
will
read
from
that
file.
H
Believe
that,
having
that
embedded
inside
of
this
step
directly
makes
the
things
way
more
readable
because
you
don't
need
to
switch
the
context
between
a
test
and
and
the
inputs.
So
everything
is
present
here
and
it
looks
like
a
piece
of
documentation
basic
voice.
So
you
you
can
see
immediately
exactly
what
what
is
applied
on
the
cluster
and
that
that's
the
full
Yama
thing
then.
H
Only
difference
is
basically
what
we
are
basically
you're
saying
what
what
kind
of
resources
apply
to
the
cluster.
We
have
similar
thing
with
the
composition,
so
we're
gonna
apply.
We
find
a
composition
to
the
cluster
and
then
we're
gonna
finally
deploy
a
claim.
Again.
We
are
providing
here
the
full
full
content
of
the
claim,
and
so
we
are
saying
here
with
when
we're
making
difference.
H
We
want
to
emphasize
that
what
is
basically
under
the
test-
and
so
another
test
is
applying
your
claim
to
the
cluster
as
a
result
of
that
some
managed
resources
are
going
to
be
created
and
some
status
is
going
to
be
changed
or
updated,
and
after
that,
it's
coming
down
below
here.
So
we
are
saying:
okay,
so
I
have
to
claim
it
got
got
it
deployed.
We
would
like
we.
H
We
expect
that
the
claim
is
going
to
become
synchronized
in
Revenue
and
same
goes
with
the
composite
research
is
going
to
become
synchronous
in
ready
and
eventually
manage
resources
as
well,
and
that's
all
about
it.
That's
single
scenario
that
single
test.
A: A quick question on that. At the very, very top there was like a feature-level definition; is any of that interpreted by...
H: No, that can be whatever you want. The only thing which is important, which is an improvement, is the background. The background is something like: if you have a common setup or fixture which you share between scenarios, you typically put it here in the background, so it's going to be executed only once. Yeah, so you make the scenarios more readable.
E: Something that I know we've been discussing in the design doc, but I still don't fully understand: if you can scroll up a little bit, I'm looking for one of the particular pieces of, not a code comment...
E: So yeah: "given a provider x package.o blah blah is running in a cluster". I like that this is quite readable, but one thing that you and I have been talking about on the design is how writable it is. My understanding is that what happens in the background is that there's a regular expression, and the regular expression is written so that it can extract the string "x package.o", basically identify the provider from within this English language, and then take that and make it an argument to a Go function. One of the things I'm concerned about is that if I wanted to add another test here, obviously there's copy-and-paste, cargo-cult style. But if I want to add another test here, as I understand it, I can't write a provider...
H: So I can tell you one easy thing here we can do first. This framework, godog, also gives the ability to list the step definitions.
H
So
I'll
just
quickly
demonstrate
this
and
that's
probably
not
the
the
final
thing
we
would
like
to
go
to
get,
but
we
can
go
go
towards
towards
even
better
solution.
So
you
you
basically
invoke
this
test
using
using
the
standard,
go
test
test,
feature
and
and
command,
and
so
then
you
could,
you
can
also
provide.
There
is
a
number
of
parameters
you
can.
You
can
provide
to
just
go
down
and
one
thing
you
can
see
here:
I
mean
there
are
plenty
of
Standards
Flags,
but.
H
Here
you
you,
basically,
you
can
get
so
just
yeah
here,
the
finishing,
so
you
can.
You
can
list
definitions,
and
so
these
definitions
I'll
show
you
later
on,
but
you
can
treat
these
step
names
as
it's
just
a
more
readable
function,
names
which
are
eventually
a
go
functioning
which
eventually
called.
But
if
we
say
now
here,
okay,
the
definitions,
you
will
get
here,
those
regular
Expressions,
you
mentioned
just
just
a
few
minutes
back
and
even
like
pointing
to
the
and
the
pointers
to
the
function
which
are
going
to
be
involved.
H: Now, the regular expressions are there just because, as you rightly said, there is a way that you can parameterize, that you can make parameters embeddable inside of the step. But typically you are not going to go wild there, so you're going to have a parameter or argument or two, and what you see here is basically matching the whole sentence, beginning to end. The only thing, so what we did, as with the providers...
H
So
here
that
this
there
is
a
step
which
basically
has
inside
in
the
in
the
in
the
middle
parameter,
which
is
in
our
case,
what
we
we
did
crafted
for.
The
PSC
is
the
image
reference.
We
could
do
plenty
of
other
things
in
document,
basically
in
some
document,
the
number
of
steps
and
what
is
the
synthesis
behind
it?
H
It's
up
to
us
how
we
would
like
to
make
it
understandable
or
not.
So
we
we
did
that
just
because
it
easy
quick
and
there
was
a
convention
and
we
established
a
kind
of
convention
that
the
basically
the
last
part
of
the
image
reference
becomes.
Also,
the
provider
provider
name
which
is
gets
deployed
and
so
on
by.
We
could
fully
make
this
completely
in
different
way
in
a
way
how
we
would
like
how
we
feel
and
think
that
it's
more
readable
for
developers
and
for
the
others.
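To make that binding concrete: each registered step is a regular expression, and the capture groups become arguments to the bound function. godog does this in Go; the following is a toy re-implementation of the same idea in Python, with the step wording and the image-reference naming convention from the call reproduced purely as assumptions:

```python
import re

# Step registry: (compiled pattern, handler) pairs, the core idea behind
# godog's step registration.
STEPS = []

def step(pattern):
    """Decorator that registers a handler for sentences matching `pattern`."""
    def register(fn):
        STEPS.append((re.compile(pattern), fn))
        return fn
    return register

@step(r'^provider "([^"]+)" is running in cluster$')
def provider_running(image_ref):
    # Convention mentioned on the call: the last path segment of the
    # image reference doubles as the name of the deployed provider.
    name = image_ref.split("/")[-1].split(":")[0]
    return f"deploy {name} from {image_ref}"

def run_step(sentence):
    """Call the first registered handler whose pattern matches, passing
    the regex capture groups as positional arguments."""
    for pattern, fn in STEPS:
        match = pattern.match(sentence)
        if match:
            return fn(*match.groups())
    raise LookupError(f"no step definition matches: {sentence!r}")

print(run_step('provider "example.org/provider-example:v0.1.0" is running in cluster'))
# -> deploy provider-example from example.org/provider-example:v0.1.0
```

A sentence that drifts even slightly from the registered wording fails to match, which is exactly the writability concern Nick raises about this approach.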
E: But some of the others, the ones that don't effectively extract match groups out of a regular expression, so "claim becomes ready and synchronized", or "synchronized and ready": in the example feature file you showed, the claim that should become synchronized and ready is in an inline document, and that gets passed to the Go function, "claim becomes ready and synchronized" or whatever the Go function is right there, as some kind of argument as well. Is there any way to know...
E: I don't find this to be very writable, personally. Even with this utility that prints out all of the mappings, or the regular expressions and whatnot, it makes me feel that I need to know Gherkin, I need to know the regular expressions, I need to know the mapping and all that kind of thing to be able to make sure, as a writer, that it is valid. I know you and I have discussed it at length on the document, though; I don't want to derail this session.
E
But
that
was
something
I
wanted
to
try
and
clarify
to
make
sure
I
was
understanding
correctly
how
it
worked.
I.
H: I mean, there are projects, for example, which, I mean, this list is just a start. We can apply some kind of generation tooling around this and basically put some documentation in. For example, and I'm speaking just about what's possible: putting some documentation here, and with `go generate` maybe even generating...
H
You
know
markdown
document
listing
all
the
steps
and
you
know
giving
you
detailed
explanation
how
to
use
them
and
so
on,
so
that
would
that
would
be
doing
if,
if
that's
gonna,
improve
user
I
mean
developer
experience
it
further,
that's
definitely
what
we
will
do
so.
H
Again
here
should
be
fairly
simple,
so
we
should
not
go
while
there
and
you
know,
have
the
many
groups
and
many
parameters
so
on
and
therefore
maybe
from
experience
my
experience,
it's
better
to
have
like
more
individual
steps,
which
are
clearly
describing
the
only
human
three
forms.
So
what's
this
about
them,
trying
to
you
know,
save
some
some
space
to
encode
with
having
like
work
level,
regular
expression,
then,
of
course,
with
any
regular
profession,
you're,
not
sure.
H
H: ...when it is going to be matched and when it should not be matched, and so on. So yeah, in that sense. I can show you, if the group wants to see how this godog works in practice; I can run some tests for you.
H: It's a regular `go test`, so we have just one entry point, and then, as part of configuring this, we are calling this method, or rather godog will call it for us, an InitializeScenario method. In it we are registering each step description with the proper function that's going to be executed, and we also have hooks to do some actions before each scenario, after a scenario, before the features, and so on, so that you can do some cleanups.
H
You
can
do
some
some
other
common
work,
like
you
know,
setting
up
some
namespaces
or
or
cleaning
up
some
Nations
inside
yeah
Nick.
You
raise
the
hand.
E: Yeah, I just want to make sure I'm making my point. I think that this Gherkin language, with Go code, with this binding between them, is a layer of abstraction, right? I think the layer of abstraction is inarguably valuable for people reading this code, right? If I don't know Go and I want to learn how our tests work, what I like about Gherkin is I can just read that feature file...
E
File
and
I
really
need
to
know
the
syntax
and
just
read
like
regular
English,
which
is
which
is
great,
but
if
I
think
about
writing
it,
I
put
people
in
kind
of
one
or
two
buckets
right.
People
are
pretty
comfortable
with
go
and
and
writing
go
tests,
and
then
people
who
are
not
and
I
don't
think
this
helps
people
who
are
not
because
they
need
their
regular
Expressions.
They
need
to
be
able
to
open
up
this
go
code.
E
They need to be able to look at this map. And for people who are familiar with Go code, well, they could just go write the Go code in the first place, without this layer of abstraction. So that's kind of what I'm trying to get at here: do we think that this is something that is helping people who don't know Go to write tests, or only to read tests?
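For readers who have not seen the format, a small sketch of what such a `.feature` file looks like; the scenario and step wording here are invented for illustration, not taken from the actual PR:

```gherkin
Feature: Package manager
  Scenario: Installing a configuration resolves its dependencies
    Given Crossplane is installed in the cluster
    When I apply the configuration "platform-ref-example"
    Then all provider dependencies become healthy
```

Each of those lines is bound, via a regular expression, to a Go function that implements it.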
H
My personal opinion here is that, even for people who know how to write Go tests, this particular abstraction allows them to write end-to-end tests even quicker. I mean, maybe Laura can tell more about what we did for the PR.
C
E
H
Features: one is about testing the composition; another one replaced the two tests which were testing some aspects of the package manager, installing configurations with providers and checking what happens if you're specifying skip dependency resolution. So this is what is currently part of the PR, and yesterday Laura added a couple more tests in a different feature file. Yeah, Laura, maybe you should share your experience with that, but in essence, Laura just added an additional step or two.
J
The reason why we are doing this: I tried this yesterday, because previously Frederick created those tests, and I can say I needed only to create one test in one feature file and one step in another feature file, and it was pretty simple because I could reuse what was already there.
J
You know, it took a short amount of time, compared to either creating your own framework or going into something like an end-to-end framework, which would have a difficult setup. Maybe I'm wrong, but in this case it seems to me that it's easy for people to get in and start writing tests; it doesn't take long to get into this way of thinking in Gherkin.
H
J
I just added the last one. Yeah, so this one actually uses some regexp magic; it's "magic" because it checks that the field is set to something, but it can be okay.
H
J
Yeah, it goes to the upper one, because the lower one has a hack because of how the provider dependency works.
A
I wanted to do a quick time check as well, because we have less than 10 minutes left in the community meeting, and there are a couple more agenda items that got added.
A
By Bob and Christina, so yeah, let's finish on this topic here and then wrap it up, and then we can continue discussing on the PR, yeah.
J
So it was not a big deal to add this step; I spent more time on creating the YAML and so on for the test itself, and adding it.
E
Is it fair to say that the reason this is pretty quick is the general pattern of having test fixtures that are supported by reusable functions?
E
J
Yeah, yeah, we'll discuss this for sure. Furthermore, we just wanted to show this and maybe invite others: if they have some feedback, please add it to the design document. It's, as we all know, a bit subjective how you like to write your tests, but feel free to add your subjective opinion as well.
C
H
You see basically the repetition of the feature file here, displayed on the console. At the end, you see the stats of how many scenarios passed, how many steps, and so on, and the timings. And you can use the regular go test mechanism to specify individual tests, by providing the test name and so on. So, even better, you can integrate and run individual tests from any IDE or editor which can invoke go tests right now.
H
A
Yeah, all right, okay, cool. Let me start sharing again real quick so we can get to the last things and try to finish this off. Okay, I think I'm sharing again here. Bob, if you are still with us, I think you added this topic about composition selection and nested composites.
I
Yeah. So we tend to nest pretty deep, you know, six or seven layers deep in our composite trees, and we're finding that it would be nice to be able to do composition selection at the lower levels, based on an input from the claim. And the way we've been doing
I
that now is adding inputs to all the intermediate composites, so that by the time we get down to the lower levels, we can do label selection on the composition. That is less than optimal, for hopefully obvious reasons, and I'd like to find a way to somehow pass information down from the claim level that would allow me to do composition selection at the lower levels.
I
mean
the
immediate
solution
that
comes
to
mind
is
to
just
put
a
generic
input
parameter
on
every
composite
that
you
can
pass
label
selectors
down,
and
you
know
you
can
do
patching.
That
is
basically
a
no
op
if
there's
nothing
there
or
if
there
is
something
there,
it
can
pull
the
labels
in
and
do
composition,
selection
that
doesn't
seem
to
be
a
real
optimal
solution,
either.
I
I guess another possibility might be to use environment config for something similar to that. I just didn't know if this was anything that anybody had put any thought into, or if anybody else had run into the same scenario, or if there's something out there that I'm just not aware of that, you know, makes this possible, if not trivial.
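For reference, the environment config alternative mentioned here is, roughly, a cluster-scoped bag of values that a Composition can patch from instead of threading every value through all the nested layers. A minimal sketch (the name and data keys below are invented):

```yaml
apiVersion: apiextensions.crossplane.io/v1alpha1
kind: EnvironmentConfig
metadata:
  name: team-a-env
data:
  # Illustrative value a nested composition could read via an
  # environment patch instead of a claim-level input.
  dbTier: production
```

A Composition can then patch fields from this environment rather than requiring each intermediate composite to expose the input.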
A
Definitely a good question, Bob. Do you feel like... is this more general, beyond just composition selection at nested levels? Is it more general, like: hey, there's just some value that I want from the user, the person who's deploying this claim; I want them to be able to influence it, but I don't want to have to pass it down through all the nested layers. Is that the same problem domain here, or...?
I
Yeah, I mean, I think it could be generalized. I mean, there are certainly times when, you know, there's a deeply nested composite that has an input that's not exposed all the way up, and it would be nice to be able to affect that input that's two or three or ten levels deep, you know, without exposing it all the way up. And I realize that some of this goes against the "create a simplified API" reasoning for Crossplane, I guess.
I
You know, in my world anyway, I'm not dealing with an external set of users who I'm trying to simplify APIs for, right? I'm creating APIs for myself that simplify... you know, creating Lego blocks for myself, to build things easily. And it makes sense for me to want to be able to tweak that thing that's two or three or ten levels deep, where in a different scenario it may not make sense to want to do that.
E
I do think copying claim labels to the composite probably makes some sense, and then potentially you could just patch those labels down as needed. I'm not sure I like the idea of... not specifically for this use case, but I am a little wary of the number of Crossplane knobs that...
I
It's almost like the composite needs some way of identifying labels that are important to it, or that it knows about, versus labels that it doesn't know about. Because, obviously, if I just randomly patch labels into a match-label selector, it's not going to find anything for all the labels that it doesn't know. So that's why I was kind of encapsulating everything in an object where I could select by composite type, and know that the labels under that composite type in my object are valid for this composite.
I
E
A
G
No worries. So this is actually a Terraform issue, not per se an AWS provider issue, but: I am creating a Lambda with an image, like a container image, and then the Terraform provider every time hits the API and says, hey, because of the source hash, this is a new container. And every time, every 10 minutes on the reconciliation, it recreates my Lambda function with the "new" image, which is the existing one.
G
Long story short, I was trying to go through the late initializer and remove that source hash. But we need the source hash if it's an S3 zip file; we don't need it if it's a container image. And there is a bug open with Terraform on this, but I don't know when it's gonna get picked up. In short, I don't see a way to conditionally do that in the late initializer, so I was wondering, like, should I request
G
another... should I open another issue for it? Like, how do I go about it?
A
Yeah, I think that this issue right here, Christina, is probably good enough for tracking purposes. And yeah, I think that we can take a follow-up to poke folks, to see if anybody has an idea about ways you could influence this behavior at the object level, or what next steps could be done for that. That's probably the only thing at least I could think of immediately, unless somebody else has another idea.
F
Yeah, just wanted to jump in on this: with the observe-only update, and I think the partial one coming in 1.13, you could basically use the lifecycle on some of the fields of the CRD. Would this resolve the situation?
G
Lifecycle? What... so this is not an observe-only resource; I am creating the resource. I'm not sure how that would resolve it.
C
J
There is also a new proposal for ignore changes, which basically morphed into something more like a granular management policy, where you can control if you want late initialization to happen on your resource or not. So if you create a resource with late initialization turned off, you shouldn't get this update on the field. But you won't be able to choose, like, it's done when
J
this field is like this, or it's not done when the field is out there, because I didn't actually read what exactly are the fields that you mentioned. But yeah, I think this new feature should allow you to ignore the late initialization, yeah.
G
J
Go ahead. Yeah, I'm not sure: at what point are you creating the... like, you just said this happens while creating the managed resource?
G
Yes, so we're creating a managed resource which is a Lambda function, and Lambda has two options to create a Lambda: you can either put your code in an S3 bucket in a zip file, or you can give it a container image.
G
So when you give it an S3 bucket and a key, you need the source hash, because you need to know if it updated, right? So then the source hash is useful. But if it's an image, then you don't need the source hash, and right now there's a bug with Terraform where, even though it shouldn't, it still gets the source hash.
C
J
When you create the managed resource with the image, you should turn off the late initialization, and when you create it with... exactly.
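Sketched as a manifest, the suggestion would look something like this once the granular management policy proposal lands; the kind, API group, and field values below are illustrative, and the feature is still a proposal at this point in the discussion:

```yaml
apiVersion: lambda.aws.upbound.io/v1beta1
kind: Function
metadata:
  name: image-based-fn
spec:
  # Proposed granular management policies: everything except
  # LateInitialize, so the provider would never late-initialize the
  # source hash for an image-based Lambda function.
  managementPolicies: ["Create", "Update", "Delete", "Observe"]
  forProvider:
    region: us-east-1
    packageType: Image
    imageUri: 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-fn:latest
```

A zip-based function would keep the default (full) policy set so the source hash continues to be late-initialized.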
A
Nice, yeah, thanks for bringing that up, Christina, and thanks, Clementon, for connecting those dots to that ignore changes proposal potentially being a solution, and thanks, Laura, for your expertise there as well. All right, sweet, that's everything in the agenda. We're a bit over time here, but thanks for everybody's contributions and participation today. It's good to see everybody, and thanks for all the time spent together here. So thanks, take care, everybody.