From YouTube: 2023-01-26 Crossplane Community Meeting
A: All right, recording has started, and this is the January 26th, 2023 Crossplane community meeting, one day before my birthday, so happy birthday to me. Let's jump on into it. So, milestone checkup: we reviewed 1.11, which is the big thing on everyone's mind right now, but we also did a patch release just recently, 1.10.2, that fixed a problem that had a lot of impact.

A: So, actually, let me do something real quick with the chat. I'm going to drop a direct link to the agenda doc into the chat, if folks don't already have it. That will take you into the agenda doc, and then you can add suggestions for any topics that you want to talk about. All right, let's hop into the release. Dan drove this release, and the fix as well, so Dan, give us an update on this one if you can.
B: Sure. So this was a fix that was very similar to the one that went into 1.10.1. For context, kind of setting the background for this: in v1.10.0, some folks, Nick and some others, worked on making the establishment of resources in a package concurrent. So we spin off some goroutines and establish those, instead of doing it serially, which is awesome. One of the things that did, though, is it changed the ordering of references. So if you take a provider revision or configuration revision, it has references to all of the resources that it owns in its status, and then on the resources...
B: ...the active revision has a controller reference, and the inactive one has an owner reference. So that's great, but doing the concurrent establishment changed the ordering of the collection of those resource references, because it was non-deterministic, and it was non-deterministic between reconciles, whereas before it was reflective of the order they were included in the package.
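The scheduling effect described here can be sketched in a few lines of Go. This is an illustrative toy, not Crossplane's actual package manager code, and the names are made up:

```go
package main

import (
	"fmt"
	"sync"
)

// establish simulates establishing a package's resources concurrently.
// Each goroutine appends its reference when it finishes, so the order
// of refs depends on goroutine scheduling and can differ between runs,
// which is the non-determinism described above.
func establish(names []string) []string {
	var (
		mu   sync.Mutex
		wg   sync.WaitGroup
		refs []string
	)
	for _, n := range names {
		wg.Add(1)
		go func(name string) {
			defer wg.Done()
			mu.Lock()
			defer mu.Unlock()
			refs = append(refs, name)
		}(n)
	}
	wg.Wait()
	return refs
}

func main() {
	// Run this a few times: the contents are always the same,
	// but the order can change from run to run.
	fmt.Println(establish([]string{"crd-a", "crd-b", "crd-c"}))
}
```

When that slice is written verbatim into a revision's status, every reorder shows up as a status change, which is what kicked off the reconcile churn described next.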
B: So, anyway, the references were basically jumping around in the status, and so when we updated the status of the revision, it would trigger the parent controller (for the provider or configuration), which would then trigger the revision controller. Basically every 10 seconds or so you'd have a reconcile loop, which is pretty expensive, because these reconcilers are actually establishing potentially a thousand CRDs or something like that. So, not a great situation. In v1.10.1 we fixed it, but on the inactive revisions, because of the way establishment works, it's basically just putting its owner reference on there. It doesn't have the UID of the resource, the CRD or Composition or XRD, and for those references we were using the UID as the sort key to ensure we had deterministic ordering, but that was not present on the inactive ones.
B: So the inactive ones were doing what we noticed on the active ones before. So, anyway, we moved to using GVK instead as the sort key (actually GVK plus name, which is important because some things are types and some things are instances) to do that reference ordering, which also has the nice property of giving you just what you'd expect in the order there. So, anyway, it got us out of that jumping around and changing of the status.
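The fix described here amounts to sorting by a key built from GVK plus name instead of UID. The `Reference` type and key format below are an illustrative sketch, not the exact types used in Crossplane:

```go
package main

import (
	"fmt"
	"sort"
)

// Reference is a minimal stand-in for an owned-resource reference;
// the real type carries more fields (including a UID that inactive
// revisions may not have set, which is why UID made a bad sort key).
type Reference struct {
	APIVersion string
	Kind       string
	Name       string
}

// sortKey builds a deterministic key from GVK plus name. Name matters
// because a revision can own both types (e.g. CRDs) and instances
// (e.g. Compositions) that share a kind.
func sortKey(r Reference) string {
	return r.APIVersion + "/" + r.Kind + "/" + r.Name
}

func sortRefs(refs []Reference) {
	sort.Slice(refs, func(i, j int) bool {
		return sortKey(refs[i]) < sortKey(refs[j])
	})
}

func main() {
	refs := []Reference{
		{"apiextensions.k8s.io/v1", "CustomResourceDefinition", "b.example.org"},
		{"apiextensions.k8s.io/v1", "CustomResourceDefinition", "a.example.org"},
	}
	sortRefs(refs)
	fmt.Println(refs[0].Name) // always a.example.org, regardless of input order
}
```

With a key that exists on every reference, the status list is stable between reconciles no matter what order concurrent establishment finished in.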
B: You'll still have a pretty big spike there when you first install a package, and that's really nice in some respects, because you get the availability of those types faster: you install provider-aws and you have all the AWS types. But there's a follow-up issue, which will not go into this next release but is something we want to explore with the community, around making that tunable.

A: Awesome, Dan. It's great to get that confirmation from folks that were specifically affected by this issue as well, and to be able to provide that graph here too. So thank you very much to Dan Ports for following up and confirming this fix there too. And, you know, to my naive eyes this line is a lot lower than this line, so it looks like a good fix to me. All right, so that was the 1.10.2 release, and that went out last week.
A
So
since
the
last
community
meeting
and
let's
go
ahead
and
get
into
the
big
1.11
release,
that's
going
on
so
currently
we
are
in
code
freeze
now,
so
we
are
finalizing
a
lot
of
things
for
the
release
and
doing
a
lot
of
testing
right
now,
basically,
is
where
we're
at.
A
We
have
a
specific
ask
for
testing
later
on
that
we'll
get
into
in
more
details
within
the
agenda,
but
let's
take
a
quickie
look
here
at
the
release
board
to
see
where
things
are
winding
down,
so
I
think
maybe
so
these
are
things
we
identified
as
things.
A: ...we want to continue spending time on in the 1.11 time frame, not necessarily things explicitly blocking the release. The one that I think is under the most scrutiny and effort right now is this last bump to all of the dependencies for core Crossplane. Nick went through a lot of effort to get the Kubernetes client dependencies and package dependencies updated in crossplane-runtime, and so updating those dependencies as well in core Crossplane is something that would be ideal to get done before the 1.11 release goes out the door. Hasan has been making a lot of progress on some exponential backoff scenario...
A: ...investigations. There's not going to be a fix for that in 1.11, but there are some proposals for how to approach it. And then we'll talk about the ControllerConfig deprecation as well in just a little bit, so we don't have to get into that now.
A: So then, the other one I wanted to bring up is the RBAC permissions issue on OpenShift. Nick reviewed this fairly thoroughly and I don't think there's been... oh, where's the PR? That thing should be linked up from the top right here.

A: So I don't think there's been a response yet from the PR author to Nick's feedback from a few days ago, so it looks like this one would probably not be included in 1.11 either. This may be something we can consider for a patch release, but there are some changes that are needed on this PR.
A: So those are the things that are on my radar for 1.11 as we're winding things down. 3659 is probably going to be the biggest remaining thing to handle.

A: The big docs design redesign and new-look stuff has a specific topic later on as well.
A: Sweet, sounds good, bud. All right, so then the release calendar, the community calendar, was updated as well. We did push the 1.11 release back two weeks, and we're going to stay and remain on our quarterly cadence, so releases will go out every three months. In about three months, the fourth Tuesday of April I think it is, will be the 1.12 release. So the community calendar is up to date now with all of that stuff.

A: And then, for the next community meeting, once we've got 1.11 out the door, we could have a nice big planning discussion together, I think, as we're getting 1.12 kicked off, and kind of make sure that we're all aligned on priorities, and that we're getting folks that can contribute to things assigned to issues that we want to continue making progress on. So we can talk about that in the next community meeting.

A: I think we'll focus on getting 1.11 shipped right now, but if there is a pressing topic that folks want to bring up for after 1.11, the floor is open for that right now too, if somebody wants to get into a specific topic.
A
All
right,
then,
let's
move
on
to
the
providers,
so
there's
definitely
been
some
cool
releases
coming
out
recently
and
so
Christopher
I
see
that
you're
on
the
call
and
you've
got
the
first
two
I
think
they're.
Both
new
providers
right
an
open
search
and
zscaler
once
and
we'll
talk
about
those.
E: Yeah, in general, from my company: we open sourced the provider for OpenSearch in our dkv bank Git organization, thanks to Daniel for driving the Marketplace things to onboard it. And the same for the Zscaler one: we're working on it with our partners from the bank side, together with Zscaler, so that we bring it also into the Zscaler organization and get professional support on these providers, and it's also publicly available for other folks in the community. It's open source.

E: And both providers are generated with the Upjet approach from the provider perspective, so thanks to Upbound for open sourcing that so that we can use it.
A: Thank you, awesome, dude. Yeah, I thought that was really cool to see two brand new providers when I was going through recent releases, so excellent work driving all those, Chris, and then Dan as well for helping out getting those published and everything. And then also, Christopher, I think for the community AWS provider there was a 0.36.0 and a 0.36.1 released since the last community meeting, perhaps. Is that correct?
E: Yeah, that's correct. So in general, in the one release we tried to fix the Kafka service on the AWS side, and we found out that there's a problem in the observed state. That's why we also published a patch release, because it was not possible to observe existing Kafka resources.

A: Awesome, great. Yeah, so thanks for getting that patch out in response to folks.
E: I guess in the normal release, the biggest thing was that we also changed the logger to the ISO 8601 time encoder, because a lot of folks had problems getting their logging solutions to ingest all the logging timestamps and so on. We did it like, I guess, provider-kubernetes and provider-helm do, so then you get the same timestamps in the logs, and not something like what Go, or Zap, is doing per default.
A: Oh, actually, I have a vague memory of Nick opening a similar issue, perhaps in crossplane-runtime recently, about the timestamp encoding for debug logging. So maybe this is related to that, I think, yeah.

E: It is very difficult otherwise to find corresponding logs between a lot of things in the Kubernetes world if you're not using the correct time encoder. We had some troubles in the past; that's why we're also adding this to provider-aws, to find things out across everything we're running in Kubernetes.
G: Just a heads up, I think that will actually be fixed in the latest... it's fixed in the latest controller-runtime, so I bumped crossplane-runtime to use that. So if you update those, you'll get this fix automatically. The JSON output is supposed to be machine readable, it's all structured JSON data, but it definitely used RFC 3339 in debug mode, where it's supposed to be read by humans, so I think it's going to use RFC 3339 all the time now, is my understanding.
A: Awesome, Nick. And then, Nick, were you present when we were talking earlier about the v1.11 release board and the remaining potential work on bumping the Kubernetes dependencies and the crossplane-runtime dependencies?
G: At the risk of potentially repeating you: I think the only blockers on the PR that updates the dependencies are, first, there's one type to do with secret stores that is losing a bit of metadata, for reasons that a couple of us have been talking through and can't quite get to the bottom of. But the other thing is, we recently merged a change that adds a bunch of fuzz testing from the OSS-Fuzz and CNCF folks to Crossplane, and for some reason the fuzz tester fails on this PR for a bunch of Crossplane functionality, despite the fact that that functionality didn't change, mainly just its dependencies. So I need to figure out what's going on there. I think the fuzz tester was actually set to just pull the GitHub Action to use from master every time, or something, which potentially means a different version of the fuzz tests is running on this PR. There are a few little things to work through there that are, you know, annoyingly taking a long time, maybe relative to the value that they're adding, but we want to try and get through them today.
A: Awesome, Nick. And yeah, that did not repeat me at all; I did not have that specificity about what's going on with the effort to update this dependency, so thank you for adding that. Cool. Okay, and then there is a set of three updates there for Upbound's official providers for the big three cloud providers.

A: So those are available, and you can click the links to follow them through to the Marketplace. And then, oh, I guess there was a provider-terraform release since the last community meeting as well, 0.4.0. So lots of provider activity and new releases to try out and upgrade to.
H: It's available now, yes. Just quickly on provider-azure and provider-gcp: they actually went through a big update in terms of the underlying Terraform provider, which was six months out of date. So for folks who upgrade: we did a lot of testing to make sure that your resources still work, but it was a big, big gap between the previous version of the Terraform provider and the new ones we updated to.

H: So just keep an eye on the release notes; there are some small casing changes in service names and stuff like that.
A: Yeah, thank you for calling that out, John; that's a really good point. And then, John, do you have any further commentary on the cadence for updating the underlying Terraform providers going forward, now that we've made this investment and have some tooling around it?
H: Yeah, so we've built some tooling to detect schema changes, as well as changes between the Terraform plan and state, and we're looking to automate that; we did this first run by hand because it was such a big time frame. So I would expect, you know, the next one we need to upgrade is AWS, which is a big one as well, and after that we'll be investing in some tooling to make sure that we automate the process of testing when we upgrade the native Terraform provider, and that will be at a regular cadence where we're not out of sync for such a big period.
A: Right, that's great, John. Thanks for clarifying that, and for the investment there, so that we can keep up to date on a regular cadence; that's a really good improvement. All right, so yeah, I think we've got a fair amount of things on the agenda, and Nick has also confirmed that he'll be able to give a little demo for us and walk through the composition functions functionality. The rest of the agenda is pretty packed, so let's keep on moving here.

A: Let's move into the vendor updates section. This section is for any company or organization doing things around Crossplane to share what they're doing with us, so it's open for anybody to add what their company is doing with Crossplane. So, Sean, do you want to take it away here for what Upbound is doing and investing in the near future?
H: Yeah, I'll keep it very brief; I've kind of put down the status there. Hasan is getting back to the observe-only resources design proposal and trying to get through the feedback to get alignment.

H: Ezgi made great progress on the pluggable secret stores, and we should see some PRs appear in the repos next week; we're targeting the 1.12 release for that. And we weren't able to make a lot of progress on validating compositions via webhook. Philippe is working on a proposal for that, but he's been kind of tied down with some of the CVE work and the Renovate dependency updates and so on, which we felt was necessary to focus on first, so he'll be getting back onto that.

H: So hopefully we'll get a proposal out next week for some feedback as well. That's it.
A: Fantastic. All right, so let's move down into the community topics section here. A couple of cool blog posts that folks have been writing recently. I think Dan wrote one (if I see "package" I assume it was Dan): a blog post about Crossplane and its package-based approach, which was on The New Stack blog, so be sure to check that out.

A: Craig wrote a good blog post about troubleshooting Crossplane: a couple of different steps and approaches that you can use for debugging, observability and things like that. And then also Dan, and this isn't specifically Crossplane related, but Dan wrote quite an interesting and deeply technical dive into the storage interface for Kubernetes, essentially how things get stored in etcd. Very, very informative if you want to dive into that, so I just thought I'd call that out too. All right.
A: So, 1.11: we've been talking about it a lot today, and we've moved to a phase where we're in code freeze and focusing on testing. Nick has some specific asks around testing that are available in this discussion item right here, and basically one of the big things here is that there are a lot of new features.

A: So obviously we'd want people to try those out and make sure that they're working, and that they're of the quality that we'd want to go out with in a release. But it's also important here, with a fair amount of new features, some of them hidden behind alpha flags such as composition functions, that mainline, non-alpha functionality does not regress, and that things still work smoothly there for your existing scenarios.

A: So if folks want to dive into the release candidate builds for 1.11 and either try out the new features or make sure that your existing scenarios continue to work and there are no regressions, that would be especially helpful as we're driving to the release on Tuesday. Nick, anything to add to that, my friend?
G: Not really, just reiterating the rationale: the new features are mostly alpha, and we can fast-follow if there are bugs in those. It's a bigger deal if we break existing workflows for people who are not opting into an alpha feature flag. So I know it's a little boring just to test that everything works the same, but you'll probably value catching it now rather than catching it after we've released 1.11.
A: Yep, that sounds good then. And so, yes, this issue here talks through what to test, and kind of where we want to be focusing our attention, so you can follow through on that; it's also available in the announcements channel on Slack. All right, so Pete has done an enormous amount of work on reorganizing the documentation, rewriting some content, and adding new articles as well, like an introduction to Crossplane.

A: So this, hopefully, will be more friendly towards folks at all areas of the spectrum of Crossplane experience. There's an open PR for it here, 271, and a preview for the site. Pete, do you want to share your screen and dive through something specifically, or do you want me to drive? (Yeah, I'll share.)

A: Let me stop sharing real quick, then. First, big red button... there we go. All right, Pete, all yours, I think.
C: All right. So the biggest thing that I think will catch everybody by surprise is we're now going to have a landing page, and we're going to break up some of our content. So if you go to the docs site, you'll now be greeted with these three columns. User documentation is kind of most of what used to be the documentation. The knowledge base is pulling a lot of that information out into more of a static repository...

C: ...that's not versioned (you know, guides or integrations, and we'll talk about that in a second), and then a contributing guide specifically for documentation; for the software project, everything's in the CONTRIBUTING.md file. A lot of the knowledge base stuff is still a little bit of a work in progress, so I think you'll see a bunch of commits after release to keep getting it into shape. The knowledge base will be independent of releases going forward.
C: Anything that is more generic about Crossplane, that's not about a version, that's not about a Crossplane theory of operation, is likely a good candidate for the knowledge base: anything that's an integration (you know, configuring Crossplane with Argo CD), any special considerations about upgrading or downgrading or whatever that, again, are not about going from, say, version 1.10, just generic Crossplane. And then for the documentation itself: we've got a new little marketing-type blurb of what Crossplane is and why you should care, and a lengthy introduction that tries to introduce each of these concepts to folks who are new to Crossplane. And I'm in the process of finishing at least the AWS quickstart.
C: That will be a multi-part process of connecting to AWS, creating managed resources to prove that it connects to AWS, compositions, claims and claim patching. So, really, what are all those core components of Crossplane, and how do we do it in a quick, simple and hopefully obvious way? Then we'll repeat that process across AWS... I'm sorry, across Azure and GCP shortly thereafter. We welcome quickstarts for any provider, kind of following that same methodology. So if anybody wants to write a quickstart guide for their provider, please let me know; I'll be happy to help. Again, some of this is still being organized. We have install, uninstall and upgrade guides that have been rewritten, and then our concepts section is merging some of the stuff that was there under concepts before, to try to keep it a little tighter...
C: ...around, basically, what every high-level concept within Crossplane is, so we have a definitive resource on where to look for that. And as always, I love feedback: please let me know in the PR, or hit me up directly. I think another thing with the docs is taking a very continuous-delivery kind of approach, so we will push these big changes live with the release on Tuesday, but, you know, there's still a lot of work to be done, and we'll just keep making those minor changes as we move forward.
A
And
so
p,
is
there
a
specific
or
like
a
set
of
pages,
that
you
want
the
most
eyes
on
or
like
highest
priority,
to
get
some
visibility
on
yeah.
C
I
think
this
introduction
would
be
super
helpful,
and
that
would
be
one
and
then,
after
later
this
afternoon,
I
will
the
first
part
of
the
quick
start
is
written
that
walks
through
basically
installing
cross-plane
and
connecting
and
creating
a
single
Mr
part.
Two
will
be
completed
this
afternoon.
C
Or
will
we
pushed
this
afternoon?
I
just
finished
it
before
this
meeting
and
so
that
actually
walks
through
creating
a
composition,
creating
a
CR
or
an
xrd
and
all
that
stuff.
So
I
think
those
two
would
be
very
helpful.
A
Awesome
sounds
good
thanks
for
calling
that
out
and
thanks
for
this
big
effort
to
to
write
and
build
all
the
documentation
as
and
especially
I
mean
especially,
but
additionally,
all
the
grades,
like
you
know,
functionality
for,
like
you
know,
note
boxes
and
like
copy
paste
for
code
blocks
and
like
all
that
sort
of
stuff,
it's
like
moving
it
along
pretty
far
I
like
it
a
lot
yeah.
C
I'm
gonna,
actually,
let
me
show
you
one
other
thing
that
I'm
actually
pretty
excited
about.
So
this
is
this
is
the
draft
that
has
been
pushed.
This
is
part
two,
so
I've
done
a
little
bit
of
work
compared
to
the
other
one,
but
one
of
the
things
we've
done
is
we've
added
these.
This
ability
to
highlight
when
you
hover
over
some
of
the
commands,
and
especially
when
we're
looking
at
things
like
a
composite
resource
definition
and
how
that
relates
to
say
a
claim.
C
You
know
this
becomes
really
useful
because
you
can
see
the
xrd
kind
here
mapped
to
the
kind
in
that
claim.
The
spec
in
that
API
is
there,
and
you
know,
custom
API
maps
to
the
claim
name
so,
hopefully
making
a
little
bit
easier.
I
think
this
is
one
of
the
challenges
that
I've
had
as
a
new
person
into
cross-plane.
A: And a paragraph down below... I think that's a fantastic feature. And Jason had a comment in the chat around an Upjet provider tutorial section here. I know that there is, at least within the Upjet repo, a guide for adding new resources, but probably something a little bit more accessible for new folks, with a nice touch to it, seems like a great idea later on too, yeah.
C
I
would
say
if
you
want
to
do
it
yourself,
of
course,
I'd
love
to
help
you
otherwise
feel
free
to
open
an
issue
against
the
dox,
repo
and
I'm
working
on
kind
of
building
a
a
bit
of
a
docs
road
map
for
the
next
few
months
to
prioritize
some
things
and
I'll
I'll
get
it
in
the
list.
A: So let's keep on moving down the agenda. We did want to specifically call out the deprecation of ControllerConfig in the 1.11 time frame, and I'll have Dan talk about that a little bit more thoroughly, what it means and where we're going. But I do want to stress that this is simply an early notification to the ecosystem that this type will be deprecated: it's not being removed in this milestone, and it will not be removed until there's a viable replacement for it, plus guidance about how to migrate and all that sort of stuff.

A: So I wanted to make that very clear. And then, Dan, if you want to hit some highlights on this deprecation here, that would be very helpful.
B
Yeah
so
controller
config
was
introduced
v0.14
so
a
long
time
ago
and
at
the
time
I
think
the
biggest
motivator
was
a
feature
that
a
lot
of
folks
use
with
AWS
IM
rules
for
service
accounts.
So
basically,
when
we
created
the
deployments
you
needed
to
be
able
to,
you
know,
set
some
annotations
and
things
like
that
on
it.
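For readers who haven't used the type being deprecated, the IRSA-style usage described here looks roughly like the sketch below. The `ControllerConfig` kind and the `controllerConfigRef` field are real, but the names, role ARN, and package version are placeholder values:

```yaml
apiVersion: pkg.crossplane.io/v1alpha1
kind: ControllerConfig
metadata:
  name: aws-irsa
  annotations:
    # Placeholder ARN: propagated so the provider pod's service
    # account can assume an IAM role via IRSA on EKS.
    eks.amazonaws.com/role-arn: arn:aws:iam::123456789012:role/my-provider-role
---
apiVersion: pkg.crossplane.io/v1
kind: Provider
metadata:
  name: provider-aws
spec:
  package: crossplane-contrib/provider-aws:v0.36.1
  controllerConfigRef:
    name: aws-irsa
```

It is exactly this kind of "reach into the generated Deployment" customization that, as Dan describes next, grew until the type resembled a partial Deployment spec.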
B: When we introduced it, it was a little bit of a "this is an alpha feature, and it gets the job done, but it's probably not what we want long term." It's kind of continued to expand over time, to where it's become almost akin to just a Deployment spec, but it's not actually fully compliant, and it can be a little bit confusing how to use it.

B: So we've had an issue open tracking promoting it to beta, and, as you'll see in our feature policy, which was already referenced a little bit earlier, we kind of have an up-or-out policy when it comes to APIs, and this has just been stagnant, so we need to make a decision on it. At the same time, I know this is a feature that a lot of folks have taken a dependency on, despite it being alpha.
B: It was also introduced before the feature-flagging mechanism that we have now, so folks have been using this without explicitly setting a flag saying "hey, enable controller configs," and so we want to be extra careful and make sure that the community has a good transition period here. So, as Jared said, this is not a removal; it'll continue to work the exact same way for now, but we are embarking on finding a better solution.

B: See the runtime interface issue that's linked from there, and if you have any concerns just about, you know, this process and how we go through our feature life cycle, feel free to comment on it; I'm more than happy to chat with anyone about more details about what this means.
A
Thanks
for
those
details,
Dan
all
right
so
yeah,
so
we
wanted
to
make
sure
that
folks
are
aware
of
that.
And
then
you
know
we'll
continue
figuring
out
like
what
is
the
right
replacement
to
make
sure
that
that
is
a
smooth
story
and
you
know
solves
the
use
cases
that
are
where
you
know
the
community
actually
has
so
Philippe
is
a
new
contributor
to
cross
planes
somewhat
recently
and
he's
been
working
on
some
pretty
useful
stuff
immediately
out
of
the
gates.
A
One
of
them
that
I
wanted
to
call
out
is
that
we
have
adopted
renovate
to
manage
all
of
the
project
dependencies
and
you
know
stay
on
top
of.
You
know,
security,
advisories
and
you
know
updated
kubernetes
versions
and
all
that
sort
of
stuff
I'm
not
super
familiar
with
renovate
myself.
Yet
so
I've
been
seeing
a
lot
of
PRS
and
a
lot
of
activity
and
kind
of
following
along,
but
you
know
I'm
not
an
expert
with
renovate,
but
I
do
like
the
automation
that
it
provides.
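For folks curious what adopting Renovate involves, a minimal configuration is just a `renovate.json` at the repo root. This is an illustrative sketch, not necessarily the exact config the project adopted:

```json
{
  "$schema": "https://docs.renovatebot.com/renovate-schema.json",
  "extends": ["config:base"],
  "postUpdateOptions": ["gomodTidy"]
}
```

Here `config:base` pulls in Renovate's recommended defaults, and `gomodTidy` runs `go mod tidy` after Go module updates so the update PRs stay clean; Renovate then opens PRs (and the "Dependency Dashboard" issue mentioned below) automatically.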
A
So
it's
I
don't
know
if
it
will
always
keep
this
issue.
If
this
is
a
permanent
place
to
look
at
it,
but
I
do
like
that,
it
gives
you
a
status
of
all
the
dependencies
that
it
wants
to
update
and
where
it
wants
to
update
them.
So.
A
You
can
see
that
you
know
like
all
the
different
packages
that
we
have,
what
it
might
want
to
update
there.
You
know
there's
not
much
left
at
all
in
master,
because
we've
been
pushing
on
that
for
the
before
we
cut
the
release,
Branch
review
1.11,
but
this
will,
you
know,
help
us
stay
on
top
of
making
sure
that
you
know.
First
of
all,
the
most
important
thing
I
think
is
you
know
in
their
security
vulnerabilities
in
our
dependencies
that
we're
getting
those
updated,
but
then
we're
also
staying
up
to
date.
A
You
know
over
time
with
major
dependencies
such
as
the
kubernetes
clients
and
Packages
Etc.
So
you
know
we'll
continue
using
this
and
I
think
you
know
Nick
and
Philippe
are
pretty
on
top
of
all
of
that.
So
far,
maybe
we'll
do
a
little
bit
of
a
knowledge
sharing
on
that
one.
So
more
people
can
come
a
little
bit
more
proficient
with
it,
but
it's
definitely
been
helpful
to
kind
of
keep
the
project
momentum
and
moving
things
forward
and
staying
on
top
of
things.
A
So
great
work
for
Philippe
to
be
implementing
that
and
getting
that
integrated
into
the
project.
I.
Also
wanted
to
call
out
that,
along
with
that
somewhat
related
to
that
in
our
first
testing
effort
and
our
graduation
effort,
we
have
a
proposal
for
the
security
disclosure
process
for
for
cross
plain,
so
we're
discussing
it
on
this
PR
and
you
know
folks
want
to
provide
a
little
bit
of
feedback
on
it.
A
That's
more
than
welcome,
but
you
know,
essentially,
this
defines
the
policy
through
which,
if
vulnerabilities
are
discovered
with
any
possibly
how
they
can
be
responsibly
disclosed.
You
know
to
project
maintainers,
so
we
can
get
a
fix
in
there
and
then
you
know
get
the
get
get
that
fix
published
and
you
know
security
advisors
out
Etc.
A
So
this
is
super
helpful
in
terms
of
continuing
to
mature
the
Pol,
the
project
and
continuing
to
improve
our
security
posture
which
we're
focusing
on
this
year.
So
this
is
also
another
great
contribution
from
affiliate
that
I
wanted
to
call
out
to
and
I
think
that's
PR
yeah.
The
pr
is
linked
here
in
the
agenda
doc.
So
if
folks
want
to
read
that
and
get
an
idea
of
where
we're
going
with
this
provide
some
feedback,
that's
more
than
welcome.
A
So
then
the
final
note
I
had
before
we
turn
it
over
to
Nick
for
a
composition,
functions.
Demo
walkthrough.
Is
that
there's
the
opportunity
for
to
participate
or
to
apply
to
participate
in
the
Country
Fest
at
kubecon
Europe
at
Amsterdam
this
year
we
are,
we
have
already
applied
to
do
the
maintainer
track
session.
We
do
intro
and
deep
dive,
that's
already
applied
to
and
then
I'm
going
to
today
apply
it
to
this
contribute
best
opportunity.
A
Hopefully
we
get
it,
but
the
idea
is
to
get
people
together
in
a
room,
so
we
can
do
some
synchronous
contributions
and
efforts
towards
cross-plane.
We
don't
have
a
specific
issue
or
you
know
design
that
we'll
be
working
through
right
now,
but
the
idea
is
to
either
identify
one
that
you
know
we'll
get
in
the
room.
We'll
talk
through
design,
we'll
you
know
talk
through
a
PR.
A
Whatever
do
some
synchronous
work
together,
which
is
super
exciting
to
kind
of
have
you
know
the
contributor
Community
here
actually
in
a
room
and
hanging
out
and
doing
stuff,
so
I'm
super
excited
about
that.
If
there's
not
a
particular
big
issue
that
we're
working
through
our
big
design
that
we're
working
through,
then
the
idea
is
that
we
would
lead
some
sort
of
contributor
enablement
session.
So
you
know,
folks
that
are
newer
to
be
to
being
a
contributor
on
the
project.
A
We would do some education, some training, some presentations, some hands-on stuff, so that folks that are interested in writing code and contributing to Crossplane can get that education and that hands-on experience. Either way it should be fun. I don't know exactly what the agenda is going to be, but it should be a pretty exciting session.
A
I don't think so, Jason. I think it's not a public thing: we applied to it, and then the program selection committee selects on the back end. I don't think it's a transparent process at all, but I like the idea, I like the spirit of that, Jason.
A
There
you
go,
that'll,
move
it
along
nice,
all
right,
so
yeah,
so
Nick
I
will
go
ahead
and
turn
it
over
to
you.
Then
my
friend
and
you
can
walk
us
through
some
exciting
stuff.
With
the
composition
functions.
G
All right, I'm sharing my entire desktop here, and if you see me looking over there, that's because that's where I've put all of you, so I am looking at you despite it looking otherwise. All right. For anyone who's not aware, composition functions is an alpha feature that we're landing in v1.11 that will allow you to compose resources in Crossplane using tools or languages of your choice.
G
In an existing composition you will see this array of resources; I'm sort of retroactively naming these "patch and transform" compositions. And then we've added the ability to run functions to teach Crossplane how to compose resources. So when you create a claim, or you create an XR, Crossplane needs to know what
G
managed resources or other composite resources to go and make. Historically you've done that using this array of resources with a bunch of patches and potentially transforms. You can still do that in 1.11; we don't expect that's going to go away, and it's definitely not deprecated or anything like that. But now, instead of doing that, or in addition to doing that, you can use what we're calling composition functions. Composition functions work effectively by taking your XR, sending it to a Docker image, and saying: hey, Docker image,
G
what do you want me to do with this? And then the Docker image is expected to return back the composed resources that it would like. We think this is beneficial because it allows you to use more advanced logic without us having to build that into our patch-and-transform DSL, or YAML document, and because you can use general-purpose programming languages, there are already really great testing tools, great linting tools, all kinds of stuff for that out there.
G
It's also kind of neat because you can just test each function one by one, which is something I'll show later. So to kick things off, I'll just install a version of Crossplane that has composition function support. What you can see here in my terminal is that I just looked for the latest master build of Crossplane,
G
exported that as the version, a 1.11.0 release candidate, and these are the arguments I'm going to give to Helm. I'm going to install it in the upbound-system namespace, just because I'm going to use an Upbound configuration to test this out and it's easier to use that namespace. I'm going to turn on debug logging for the Crossplane pod, and I'm going to turn on a feature flag to enable composition functions. This feature flag has to be on or composition functions will not work.
G
That's because it's alpha, and we reserve the right to just delete it or change the API fundamentally, so it's not recommended for production use. I'm also going to set xfn.enabled to true. xfn is a sidecar container that runs next to Crossplane, and it's the container that Crossplane uses to actually run composition functions.
G
And if we describe this Crossplane pod, you'll see that, in addition to the normal Crossplane container, we have this crossplane-xfn container next to it. We've designed composition functions so that the runner that actually runs the OCI images is pluggable in Crossplane, which means there could be different implementations of it. We did this because we couldn't think of one perfect way to run functions that would work well for everyone.
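That pluggable-runner idea can be pictured as a small interface that any runner implementation satisfies. This is an illustrative sketch in Python, not Crossplane's actual (Go) runner interface; all names here are made up for illustration:

```python
from typing import Protocol

# Illustrative sketch of a pluggable function runner: Crossplane hands a
# FunctionIO-like document to whichever runner is configured and gets a
# document back. Names are hypothetical, not Crossplane's real API.

class FunctionRunner(Protocol):
    def run(self, image: str, function_io: dict) -> dict: ...

class InProcessRunner:
    """A toy runner that 'runs' an image by calling a registered callable."""

    def __init__(self) -> None:
        self._images = {}

    def register(self, image: str, fn) -> None:
        self._images[image] = fn

    def run(self, image: str, function_io: dict) -> dict:
        # A real runner would launch the OCI image; this just dispatches.
        return self._images[image](function_io)
```

The reference implementation described in the meeting is one such runner (the xfn sidecar); others could dispatch elsewhere entirely.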
G
There were just a lot of trade-offs in every direction. The reference implementation runs as a sidecar and uses what we call rootless containers to run functions; indeed, it runs containers inside itself. The containers that are run for composition functions are typically a special script, or a really simple Python script like I'll show today, something really small. All right, so let me cut over to this xfn functions directory; in this directory I've created a composition function.
G
I was joking the other day that I was a full-time professional Python distributed-systems developer for eight years, and then I didn't do it for five years, and now I write really bad Python code. The general idea here, though, is what we're going to pass in.
G
The way Crossplane passes things to these containers is on standard in, so the process gets standard input as a file, effectively, and it encodes it using this custom-resource-looking document. This is not a Kubernetes custom resource; you don't create a FunctionIO in your API server. It just looks like a Kubernetes custom resource.
G
This
is
passed
to
standard
in
to
your
function
and
in
this
case
it's
saying,
hey
first
launched
in
the
pipeline.
There
are
no
composed
resources.
The
only
thing
that
I
have
observed
is
a
composite
resource,
so
the
composite
resource
looks
like
this
and
then
it's
going
to
expect
the
function
to
give
back
some
desired
resources.
It's
going
to
affect
me.
G
It's going to expect the function to set an object called desired and include an array of composed resources there, as well as a desired composite resource. I'm telling myself that Jean is not laughing at me while he giggles away there.
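As a rough sketch of that contract, the function receives the observed state and hands back a desired object. The field names below follow the alpha FunctionIO document as described in the demo, but the annotation key is hypothetical, and a real function would read and write YAML on stdin/stdout rather than operate on a dict:

```python
# Minimal sketch of the FunctionIO contract described above: copy each
# observed composed resource into `desired`, annotating it on the way.
# The annotation key is hypothetical; a real function speaks YAML on
# stdin/stdout instead of working on an in-memory dict.

def annotate_with_quote(function_io: dict, quote: str) -> dict:
    observed = function_io.get("observed", {}).get("resources", [])
    desired = function_io.setdefault("desired", {})
    desired["resources"] = []
    for entry in observed:
        resource = dict(entry.get("resource", {}))
        annotations = resource.setdefault("metadata", {}).setdefault("annotations", {})
        annotations["quotable.io/quote"] = quote  # hypothetical annotation key
        desired["resources"].append({"name": entry.get("name"), "resource": resource})
    return function_io
```

This mirrors the toy demo function: whatever is observed comes back as desired, plus one annotation.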
G
All right, so let's build this function. At the moment of the alpha launch there's not going to be any tooling to help you build functions outside of what you get in the Docker ecosystem, but the nice thing is that they are effectively just a Docker image. So I can just do docker build. I'll show you really quick my Dockerfile.
G
Basically, it's just using a build container to make a virtual environment for Python to build this up, and then it's putting that into the distroless container, which has a Python executable. I should also clarify what we expect this function to do. Let's see this again, oops. This function hits an API called Quotable, which is just an API endpoint
G
that I found someone put on GitHub, that gives you a random quote. All that this function does, and it's a complete toy, just for demos, is take any existing composed resources and annotate them with a quote.
G
Not that it's very useful in real life, but it's something that's pretty quick to demo. Hasan has a much more interesting composition function that he started working on that will actually use Go templates to render out composed resources. So this is kind of part of the spirit of composition functions: we know that different people have different preferences. Some people love YAML, some people hate YAML; some people love HCL, some people hate HCL; some people love Helm and some people hate Helm.
G
By giving you an open-ended way to do composition, you can pick and choose the tools and the practices that fit for you. So in this case, let's pretend I work in Python every day, so Python is what I'm most comfortable with, and that's how I would like to provide my composition logic. That's kind of what I'm doing here.
G
It's probably going to be in the repository. All right, so now I'm going to cut over to this platform-ref-gcp directory.
G
I haven't actually prepared an actual configuration for Crossplane, but I do have this one checked out. All right, so I'm going to, hmm, I've already forgotten what my Docker image was called. Where are you? Oh, sorry, wrong window.
G
So this is a platform-ref composition; it's something that Upbound puts together that just demonstrates using Crossplane with GCP. I'm going to take it basically as it is and update
G
it to run the one composition function that we just built. So in addition to all of the patch-and-transform stuff that's already going on here, it's going to run our simple Quotable composition function, and I would expect that means everything that comes out the other end of this composition will have a quote on it.
A
Nick, you said that the functions are run in order, but after any patch-and-transform stuff is applied?
G
Yes, yes, exactly, yeah.
G
Behind the scenes, what actually happens when you enable this feature flag is that Crossplane completely changes its composition reconciler to one that's aware of both patch-and-transform and composition functions, because they kind of need to be able to work together. If you use both, the rough flow is: Crossplane grabs the existing composed resources, and if there is patch-and-transform configured, it will use those to render both new and updated existing composed resources. Then, before it creates or applies any of those changes, it will pass them to the composition function pipeline.
G
Then, finally, once all that's run, it'll say: okay, I've built up all my state, now I'm going to go apply that to the cluster. So that means that if you don't have functions, just leave the functions out and it'll go straight from patch-and-transform to apply. If you don't want to use patch-and-transform, just skip that; it'll go straight to the functions, and the functions will have to produce all the resources that they want.
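The ordering just described can be sketched as a tiny pipeline. The names here are illustrative, not Crossplane's internal API:

```python
# Illustrative sketch of the rendering order described above:
# patch-and-transform output first (if configured), then each
# composition function in turn, then the caller applies the result.

def render(xr: dict, pt_resources: list, functions: list) -> list:
    desired = [r.copy() for r in pt_resources]  # patch-and-transform output
    for fn in functions:                        # each function maps desired -> desired
        desired = fn(xr, desired)
    return desired                              # Crossplane would then apply these
```

With an empty `pt_resources` list the functions produce everything; with an empty `functions` list the patch-and-transform output goes straight through, matching what was said above.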
G
I think I have done everything I need. I am going to need to install a provider, though.
G
It's just a little script that brute-forces a bunch of provider configs for all the testing that I do, but what it's effectively going to do is create a provider config for GCP with an account that we use regularly for Crossplane testing.
A
And
quick
time
check
Nick
we
have
it's
like
three
or
four
minutes
left
in
the
in
the
hour.
Yep.
G
And let's have a look at the Crossplane xfn pod. You can basically see there's not a lot of debugging going on in the xfn pod at the moment, but it's basically telling you it is running these functions. Interestingly, I see it's running two functions, which is not what I expected. Who knows what's actually going on; I must have messed something up.
A
The
the
function
was
applied
at
the
composition
level.
Nick
is
that
right,
so
then,
all
of
the
composed
resources
there
would
would
we
expect
to
have
this
annotation.
G
Yes,
but
that
is
not
fundamental
to
composition.
Function!
Oh
right
right,
so
you
always
put
your
function
in
a
composition
and
the
function
can
then
apply
to
the
XR
that
uses
that
composition
and
therefore
the
claim
or
all
the
composed
resources.
It's
not
what's
the
word.
It's
it's
it's!
It's
kind
of
it's
not
like.
It
only
applies
to
composed
resources
right
right,
okay,
good,
good
clarification.
All
right
there
you
go
woohoo
demo
complete!
G
You
can
see
that
that
composition,
function
has
run,
it
has
gone
and
hit
quotable.io
and
it
has
updated
the
annotations
on
all
of
the
both
resources
to
have
this
quote.
Obviously,
this
is
a
little
bit
of
a
toy
right,
but
you
know
not
something
you
would
really
want
to
do,
but
these
composition
functions.
G
could instead be actually creating and describing these entire composed resources, or you could use them to enforce policy if you wanted to, all kinds of stuff there. All right, let me see if I can really quickly look at the new messages.
A
And I can help you out with that, Nick. Carlos is asking: why not a webhook, i.e. HTTP/JSON, as an interface for input and output, instead of using standard in and standard out?
G
Oh, the main reason I like this interface rather than a webhook is because with this interface you build an OCI container: you've got a small amount of logic, usually, that you want to run, and you run that as a Docker image. If you instead want to hit a webhook, you have to go and deploy that webhook somewhere. I know there's Lambda and there are some options for this, but typically, I think, going and deploying and maintaining and running and building a web server
G
I guess I'm a little lost, sorry. If you just go back to the fundamentals of what we want to do: we have some code that we want to run, and Crossplane wants to say, run this code. When I hear "webhook", what I hear is that the code is hosted by some external service.
I
Oh no, no, I was referring to: you run that container, and that container runs a daemon on port 8080, and then you just send the data in and you get the data out.
G
I mean, I don't know; it seems like much of a muchness to me. I guess the trade-off is that then you have to have web server capability, and you've got to listen on a port, and you've got to make sure that port is locked down, and all that kind of stuff. I don't think it's fundamentally a worse approach than what we're doing, but I don't think it's a fundamentally better approach either, off the top of my head. But maybe I'm missing something.
I
Okay, yeah, we can discuss more on Slack. It's just, my background is, yeah, I've done this before with other interfaces where, for some people, telling them "hey, run a container and listen on port 8080" is like bread and butter, versus "write a container that has something inside that takes standard in in this format and standard out in this other format": there are four sentences there, and you've lost that person, whereas the other person just follows right along.
A
Yeah, thanks for bringing that up, Carlos. And then Jesse had a question as well; Jesse has his hand up.
J
But I'll go really quickly. Okay.
A
Yeah, just to make a quick announcement: the regular time for the meeting is over, so any folks that want to drop off and get to other things, feel free to. Otherwise, though, yeah, Jesse, happy to hear from you too.
J
So it's a comment and a question, an open-ended question. The comment is: we've been investigating how to allow folks to use compositions and develop them in a fashion that's distributed amongst multiple teams and multiple time zones, and not necessarily through a single funnel. And one of the issues that we've had is: how do we
J
test effectively across all those new compositions that are coming in, and give the teams the ability to do that testing in isolation from each other? And I think one of the questions I have about compositions generally, and the use of composition functions, but just generally about compositions, is: how do you test that YAML?
J
Amongst, you know, the integration between compositions, and compositions of compositions, and so on and so forth. And I think what I like seeing with composition functions, and I think you mentioned it a little bit, Nick, is that now you have the atomicity to test them individually. You can test at the function level, and the inputs and outputs there become something where you can prove consistency through automated testing.
G
Yeah, I'll just cut over here and share again real quick, but it should be possible to, like...
G
There we go, yeah. So you can just do that: I just ran that function locally and sent it some test data, a FunctionIO that I prepared earlier that's on the local file system, and it validated that the logic works the way I want without even having to be running Crossplane or anything like that.
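That local test loop can be sketched with plain subprocess plumbing. JSON stands in for the YAML FunctionIO here to keep the sketch stdlib-only, and the echo command below is a stand-in for a real `docker run -i <image>` invocation:

```python
import json
import subprocess
import sys

# Sketch of exercising a function's stdin -> stdout contract locally,
# with no Crossplane involved. JSON stands in for the YAML FunctionIO.

def invoke(function_cmd: list, function_io: dict) -> dict:
    """Feed a FunctionIO-like document on stdin; parse what comes back."""
    proc = subprocess.run(function_cmd, input=json.dumps(function_io),
                          capture_output=True, text=True, check=True)
    return json.loads(proc.stdout)

# A trivial identity "function" standing in for a real function image.
echo_fn = [sys.executable, "-c", "import sys; sys.stdout.write(sys.stdin.read())"]
```

In practice you would point `invoke` at your container runtime instead of a local interpreter, which is essentially what was done in the demo.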
F
I have a question around the function runner. So you don't have to use the default function runner, right? You could just have, like, a gRPC endpoint somewhere, and behind that you can do whatever you want, pretty much. (Yep.) Okay, so for that, you just pass the xfn, what is it called, xfn.enabled flag, and set that to false, and then it will still enable composition functions, but it's not going to run the default runner. (Yeah.)
G
So there is an option there. I didn't show the full spec of where you say you'd like to run a function: in the composition, when you're running a function, you can specify quite a lot of things, like whether it has network access or not, what compute resources it should be limited to, and one of the things you can specify is basically the endpoint of the runner that should run this function.
G
By default it uses an abstract Unix domain socket, which is how we communicate with our sidecar pod without going over the network as such. In the code that's there at the moment, in alpha, we don't use encryption or authentication or anything like that on the gRPC stuff.
G
So you probably don't want to go run gRPC to something over the actual network, over TCP or HTTP/2, but the intention is to support that at some point in the future. Also, when we initially designed this, I thought running it as a sidecar would be a great way to do it, and it occurred to me that the function runner we have at the moment is stateless, so there's sort of no reason
G
We
couldn't
deploy
a
lot
more
of
them,
but
you
can
also
Imagine
a
an
implementation
that
went
and
just
like
sent
things
to
the
Kubler
or
something
like
that
as
well
in
the
initial
design.
But
it's
more
again
there's
a
ton
of
trade-offs
here
like
if
you
go
talk
to
the
Kubler,
it
becomes
kind
of
a
pain
to
like
pass
the
standard
input,
get
standard
output,
all
that
kind
of
stuff.
This
will
potentially
like.
We
really
want
these
functions
to
spin
up
instantaneously.
G
you don't want to wait five minutes while it brings a node online, which is not something you want blocking your composition. So, long story short: yes, you can just turn off the xfn runner and point it at something else. You might need to write some code to make doing that secure. Okay, cool.
A
Thank you, awesome. Love all the feedback here, and it's going to be even better once we get this out with 1.11 and people get their hands on it, try it out, and start writing their own functions. And there will be reusable functions that people can use to make their compositions simpler without having to write code, too. So there's a whole bunch of stuff that this opens up, and I'm super excited about it.
A
So, Nick, thanks for driving it, and thanks for showing it off today as well. This is kind of a bit of a game changer, I think, for what Crossplane can do, so I'm super pumped for this. But we're over time now, and I think other folks have got some responsibilities to get to here, so let's go ahead and wrap it up for the day. Great to see everybody. By the time we meet next time, we will have the v1.11 release out, so thanks, everybody, for your contributions to that.
A
If you can test it, that would be great as well. Great to see everybody today. Bye, everybody.