From YouTube: 2020-11-23 Crossplane Community Meeting
A
All right, the recording has started, and this is the November 23rd, 2020 Crossplane community meeting. Folks are more than welcome to add themselves as attendees on the attendee list here in the agenda doc. Otherwise I can follow up and do that as well, so you don't have to get into the agenda doc.
A
The biggest thing that we're working towards right now is the v1.0 release. Let me admit Christian to the meeting here. So, v1.0: we have updated the roadmap. Phil updated the roadmap here, so we have all of our areas of focus there, and we have those listed in the doc here. But jumping real quick down to the schedule: we're targeting mid-December, around December 15th, for this release.
A
So there are around three weeks left, let's say, until this 1.0 release, so I think it would be pretty wise today to check in on all of the focus areas, see what the status is, and see what some of the risks may be in terms of being able to deliver on the functionality that we are hoping to for 1.0.
A
So let me bring up the board real quick. I'm not sure... that may not be the most effective view.
A
So,
let's
yeah,
let's
take
a
look
at
the
roadmap
and
I
don't
think
it's
that
we
need
to
like
go
through
the
summary
here
of
everything,
but
let's
kind
of
touch
on
the
areas
and
talk
about
sort
of
the
progress
or
the
status
or
some
of
the
risks
that
we
have
there
that
are
in
each
one
of
those
those
focused
areas.
A
Let's see, so let's start with composition. Some of the functionality there... is Nick on the line?
A
Does
anyone
have
an
update
with
some
of
these,
like
the
composition,
related
functional
functionality
like
bi-directional,
patching
and
update
propagation
from
acclaim.
B
So that's one thing, bi-directional patching. I don't think there is a PR for that yet; we're still working out the API style there, together with the new patch types.
A
Yeah, and I think the webhooks are going to be scoped out of 1.0 as well, yeah. Okay, cool. Did you want to get us updated on the package manager?
C
Yeah, for sure. At the end of last week, and during last week, we gave some demos of the package manager dependency resolution, and I also added an item to the end of the agenda today to do one of those for the general community. There is a PR open that's a little messy and still needs some cleanup, but I would love to show folks on this call a little bit about what it looks like right now. It's moving right along and should land fine for 1.0.
A
Awesome, that's fantastic, cool. And then the general area of providers: there's been a ton of progress recently with the Amazon and Azure providers and Terraform code generation. Casey, do you want to give us a quick update on all the progress on that front?
D
Yeah, I will let Muvaffak give the quick update on the ACK side, because he's been doing the work there. On Azure, let's see: he's going to be out for, I think, the month of December, so I don't think we're going to get a lot of new progress on the ASO generation before he goes.
D
I do want to get a doc started with him to figure out what that looks like, but I don't think we get beyond showing what the types look like and having a plan for generation. I think once he gets back in January, though, we should be able to move pretty quickly.
D
I don't think there's going to be a functional Azure provider for 1.0. And then on the Terraform side, there are a few areas of code generation. Now that we have the types and have them generating, it's a matter of generating the serialization and deserialization code and the comparison code; otherwise, things are pretty well generalized out into other layers, already written for the prototype GCP provider. So what I'm focusing on right now...
D
This week it's the marshalling and unmarshalling code, and I have, I think, a good plan for that. So sometime early next week that should be working, and once I get to that point, I want to try to put all the pieces in place. I think that's enough to be able to create and delete, once we have unmarshalling and marshalling working.
D
So I think I'm about a week away from seeing how everything looks together, just for the create and delete case. Then there's another type of code gen for doing comparison in order to generate updates; that would be the next step after that.
D
So my goal is to get this GCP provider to a place where we can at least start comparing it to the ACK provider and figure out what additional work we need to do to get those two to parity, which I think we'll probably get to for 1.0. But it's not going to be a full-featured provider.
A
Yeah, that makes sense, Casey; not having it be a full-featured provider makes a lot of sense. I think, in my mind, the code generation and provider coverage acceleration effort is the least clear in terms of what the specific deliverable is, or what's going to be available along with the 1.0 release. But I think there's also a lot of leeway there; that's just the nature of the features. There isn't really a "this is exactly what's going to be there," and it doesn't necessarily lend itself to that.
A
Perhaps. But it's good to see all the progress going in there, and to see how we're getting closer to having end-to-end resources able to be reconciled and functional. So that's really good.
D
Yeah, I think there's a goal as well just to get more people involved with it, and that's where I want to wind up by 1.0 too: having something that I'm not working on by myself, where people can get going reviewing and understanding what's happening, so that more people can help.
A
And Muvaffak, do you want to also give an update on the ACK and Amazon-related stuff, too?
B
My target is basically, before December 15th, getting all available services to be generated for Crossplane, meaning the ones that are available in ACK. There are issues corresponding to each of them: ECR, SNS, SFN, and, I believe, SQS. Then, after getting all the ACK services available in Crossplane, we will start expanding.
A
I mean, yeah, that's totally reasonable: the expectation is parity with ACK itself, right? We don't necessarily have to be generating things that ACK doesn't even support. So that's totally reasonable, right on. And Phil, did you have additional comments for any of these functionality areas or efforts towards 1.0?
E
No, I think that basically covers it. 1.0 is going to be coming out in just a few weeks, so we don't have a ton of room for anything additional, but we have started brainstorming about post-1.0 and have that down below under the "under consideration" section. So take a look over that.
E
If
there's
any
questions
on
that,
you
know
happy
to
walk
through
that,
and
I
will
note
that
for
that
top
bullet
there,
the
first
class
multi-language
support,
that's
something
that
I'm
actively
prototyping
with
a
lot
and
I
move
off
because
working
on
custom
compositions
to
support
running
like
cdk's
programs,
behind
the
api
line
in
custom
compositions,
and
so
there's
kind
of
multiple
aspects
of
that
they're
all
coming
together
and
so
super
excited
about
that.
I'm
happy
to
talk
about
that
in
more
detail.
If
anybody's
interested.
E
And I'll drop the two issues; that's 1955, if I recall correctly.
A
Cool, all right. So I added a link to the "under consideration" section of the roadmap to the agenda doc here. And then, reinforcing that: we are definitely very happy to listen to and take in suggestions, feedback, comments, etc. on the areas of functionality and scenarios that the community at large would like to see addressed. There we go, and make that a link.
A
It looks like... there you go: spacebar, link. Okay, cool. And then, oh, I've seen some work on the IBM provider recently, too. Paulo, you're on the call; do you want to give us an update on some of the work that's been done for provider-ibm recently?
F
Oh yeah, sure. Can you hear me? (Yes, I can hear you great.) Okay, yes. So we've been making good progress.
F
So the next step is that I'm just going to merge a new PR to manage credentials for connecting to that service, and then I think at that point we're going to have an initially functioning provider that will allow us to at least implement some of the examples, similar to the ones you already have, for some of the other services, for example the Postgres managed service. So that would be somewhat on par with some of the other providers that you have, at least from that kind of perspective.
F
At some point we would also like to make things in such a way that we can automatically generate the IBM providers as well. We're not there today yet, but I think that's the long-term goal we have.
F
Yeah, because we think that, long term, the only way to make this sustainable is to get to that kind of model. Otherwise it will be very hard, because we have the same problem with Terraform: we actually have a provider for Terraform, but the problem is that whenever a service team makes changes to the API, it takes some time, weeks or even months, before the update propagates to the provider.
D
Yeah, one thing we've kicked around is the idea of building something to do code generation from OpenAPI. So if you have OpenAPI descriptions of your APIs, that's something we should talk about.
F
Exactly, that's what we have, actually. We already have a project that, starting from the OpenAPI, generates all the SDKs for our APIs. There is already a team in IBM working on that, and most of our APIs are generating SDKs, even in different languages: not just Go, but also Java and others. They also generate things like usage examples and documentation; they do pretty much everything. At some point, maybe that will be interesting.
D
Yeah, and there could be a more interim step that we could work on, where we generate the Go types and some of the boilerplate pieces for a provider from OpenAPI before going further.
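The interim step described here, generating Go type definitions from an OpenAPI description, might look roughly like this toy sketch. The schema fragment, the type mapping, and the `go_struct` helper are all invented for illustration; this is not IBM's or Crossplane's actual generator.

```python
# Toy sketch: render a minimal Go struct from an OpenAPI object schema.
# The type mapping and pointer-for-optional convention are illustrative
# assumptions, not the behavior of any real generator.

GO_TYPES = {"string": "string", "integer": "int64", "boolean": "bool", "number": "float64"}

def go_struct(name: str, schema: dict) -> str:
    """Render a Go struct declaration for an OpenAPI object schema."""
    lines = [f"type {name} struct {{"]
    required = set(schema.get("required", []))
    for prop, spec in schema.get("properties", {}).items():
        go_type = GO_TYPES.get(spec.get("type", "string"), "string")
        if prop not in required:
            go_type = "*" + go_type  # represent optional fields as pointers
        field = prop[0].upper() + prop[1:]  # export the field name
        lines.append(f'\t{field} {go_type} `json:"{prop}"`')
    lines.append("}")
    return "\n".join(lines)

# Hypothetical schema fragment for a managed resource's parameters.
schema = {
    "type": "object",
    "required": ["name"],
    "properties": {"name": {"type": "string"}, "replicas": {"type": "integer"}},
}
print(go_struct("BucketParameters", schema))
```

The custom annotations mentioned later in the discussion would slot in as extra keys on each property, steering decisions this naive mapping cannot make on its own.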
F
Yeah, exactly. The way our team is actually solving some of these problems is to also add some custom annotations to the OpenAPI, because, of course, just taking the OpenAPI is not enough to know what we're going to generate, and there may be some special cases to deal with; so we have these special annotations.
D
Well, I'd love to follow up and see what those annotations are, because that's definitely something we're interested in, if there's any chance of standardizing it; I think there are probably common patterns, too, where all providers would need certain classes of annotations. So that would be great, yes.
A
Awesome, yeah. I appreciate the foresight here too, Paulo, thinking about the longer-term vision and how to make it sustainable. That's really good, yeah.
F
Yeah. By the way, I just want to mention: at some point, probably in December, we would also like to make some kind of announcement to say that IBM has been working with the Crossplane community on this IBM provider, maybe tagging along with some of the announcements you're going to make on version 1.0. It would be good if we can at least show that there is already some initial prototype of this provider and say: okay, you can basically use it.
A
Yeah, I think that's great. We'd love to make some noise about that as well; I think that's great for everybody. So I'd love to be involved in that and make some announcements in a coordinated fashion there.
A
All right, cool. So let's hop on down here to the community topics section. Dan, you've got the first few bullet points here; do you want to drive for a little bit now?
C
Sure, so yeah. We haven't had a TBS episode in a while, but we are definitely planning on doing the Red Hat episode with Krish, so I'm looking forward to that. He did some work to get some things into provider-aws, which is going to make that a lot easier, hopefully also ahead of the 1.0 release. That'll be an opportunity to show off some of the dependency resolution, as we'll obviously need some backing cloud providers for that.
C
Also,
in
the
same
vein,
want
to
welcome
krish
as
a
new
maintainer
of
provider.
Aws
he's
been
doing
a
lot
of
work
on
that
and
some
really
solid
work
and
he's
been
super
responsive
to
issues
and
basically
just
embodying
all
the
things
that
we
have
in
our
governance
talk
about
what
a
maintainer
should
be.
So
it's
an
easy
decision
and
we
are
excited
to
have
krish
committed
to
crossplane
for
the
long
haul.
C
Next up, there are a few new crossplane-contrib projects. This is an org where we can spin up experimental things, keep them going, and allow folks to move at their own pace without having to ensure some level of reliability; the goal is for things to eventually mature into the crossplane org.
C
So we have a few new repos there. provider-sql, which Nick had bootstrapped and some folks showed interest in, is now in crossplane-contrib, and some of the folks from Vision are maintaining it, so thanks to them for stepping up; they've been really receptive to folks in Slack, and we appreciate that. The new one I just created today is provider-secret.
C
This
is
kind
of
an
experimental
one
to
work
on,
while
we're
still
trying
to
kind
of
work
through
how
we
want
to
store
things
in
places
other
than
kubernetes
secrets.
The
goal
there
is
that
you
know
it
kind
of
is
a
intermediary
where
it
basically
watches
for
secrets
and
syncs
them
to
external
stores
such
as
vault
or
or
you
know,
secrets
manager
on
aws,
etc,
so
kind
of
a
place
for
us
to
experiment
with
some
of
these
different
ideas
that
folks
have
for
storing
credentials.
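The intermediary behavior described here, watching secret writes and fanning them out to external stores, can be sketched in a few lines. Everything in this sketch is hypothetical: `InMemoryStore` and `sync_secret` are invented stand-ins for the real provider's Vault or AWS Secrets Manager clients and its Kubernetes watch.

```python
# Minimal sketch of a secret-sync intermediary. In the real provider the
# stores would be Vault, AWS Secrets Manager, etc., and the input would come
# from a Kubernetes watch rather than a direct function call.

class InMemoryStore:
    """Stand-in for an external secret store such as Vault."""
    def __init__(self):
        self.data = {}

    def write(self, key, value):
        self.data[key] = value

def sync_secret(name: str, payload: dict, stores: list) -> None:
    """Fan one Kubernetes-style secret out to every configured external store."""
    for store in stores:
        for key, value in payload.items():
            store.write(f"{name}/{key}", value)

vault, asm = InMemoryStore(), InMemoryStore()
sync_secret("db-conn", {"username": "admin", "port": "5432"}, [vault, asm])
```

The design choice worth noting is that the sync is one-way: the cluster secret stays the source of truth, and external stores are treated as mirrors.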
C
Some of that stuff may end up trickling back into core Crossplane or crossplane-runtime and that sort of thing, but it should be a nice way to experiment. Publius from Ripcord is going to be leading some effort there, and anyone else in the community is welcome to help out with that as well. Next is crisscross, which is something I posted about a little while back.
C
It revolves around separating out the external actions that a managed reconciler takes and putting those in functions, or basically any endpoint that you can serve. So the idea is that instead of writing a full controller and installing it, you can write an extremely simple webhook, deploy it to something like a Knative service, create one of these registrations, and it will just send you the events at that endpoint, where you can implement your stuff.
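The model described here, a reconciler calling out to a simple endpoint instead of a compiled-in controller, can be sketched with a plain function standing in for the webhook. The event shape and field names below are invented for illustration; the actual contract lives in the crisscross repo.

```python
# Sketch of a crisscross-style handler: the reconciler POSTs observe/create/
# update/delete events and the endpoint answers with what it needs to know.
# Here a plain function and an in-memory dict stand in for the webhook and
# the external system; the payload shape is a made-up example.

EXTERNAL = {}  # pretend external system, keyed by resource name

def handle_event(event: dict) -> dict:
    """Handle one reconciler callout carrying an action and a resource."""
    action, resource = event["action"], event["resource"]
    name = resource["name"]
    if action == "observe":
        return {
            "exists": name in EXTERNAL,
            "upToDate": EXTERNAL.get(name) == resource.get("spec"),
        }
    if action in ("create", "update"):
        EXTERNAL[name] = resource.get("spec")
        return {"ok": True}
    if action == "delete":
        EXTERNAL.pop(name, None)
        return {"ok": True}
    raise ValueError(f"unknown action {action!r}")

# A reconciler would send this over HTTP; here we call the handler directly.
handle_event({"action": "create", "resource": {"name": "a", "spec": {"size": 1}}})
```

The appeal is that the hard parts (requeueing, status updates, watches) stay in the shared reconciler, and the endpoint only answers "does it exist, is it up to date, make it so."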
C
So
definitely
we'll
be
adding
some
more
there,
and
I
think
some
of
the
folks
from
red
hat
may
be
jumping
in
on
that
some
so
excited
to
have
them
working
there
as
well.
A
Dan, we don't have to go down too big of a rabbit hole too quickly here, but something I was curious about: is there already an auth story for this? You give it an endpoint here; how can you be sure about who's sending you requests to do creates and updates and things like that? Or is that not quite there yet?
C
Yeah,
no
definitely
not
there
yet
definitely
want
to
have
certificate
management
and
that
sort
of
thing
and
one
of
the
motivations
for
that
would
be
potentially
bringing
something
similar
to
this
into
crossfit
core
at
some
point
in
the
future.
C
That
could
allow
it
to
basically
have
a
separate
package
type
that
manage
these
kind
of
functions
and
set
up
authentication
between
the
reconciler
and
the
you
know
the
web
hooks
that
you
established
that's
definitely
farther
down
the
line
and
we
can
have
a
little
more
flexibility
while
it's
as
a
standalone
provider,
but
yeah
absolutely
right
now
do
not
use
this
in
a
production
scenario,
do
not
accept
arbitrary
requests,
etc.
C
The
demo
that
you
can
run
with
the
documentation
here,
it's
just
running
within
a
cluster,
so
you're,
not
exposing
anything
of
the
public
internet
or
anything
like
that.
Still,
please
do
not
run
this
in
a
production
scenario.
It
is
not
well
tested
but
excited
to
see
some
of
the
direction
we
can
go
with
it.
A
I know that you basically brought this up over a weekend or something like that, so I wasn't anticipating a full security story or anything. The idea here, the concept, is really pretty interesting, so I'm really happy.
A
They
do
stars
are
live
all
right,
awesome
great!
Yes,
I'm
really
cool.
I
I
love
how
we're
using
crossband
contributing
we're
getting
like
a
lot
of
like
neat
projects
to
incubate
in
there,
and
you
know
kind
of
getting
some
good
innovation
going
and
some
collaboration
with
the
community
too,
before
they
kind
of
become.
You
know,
really
fully
maintained
kind
of
longer
term
position
in
the
road
map
status.
So
I
think
that
that's
really
awesome.
Look
at
the
cross
make
a
trip.
Org
is
having
some
cool
stuff
incubating
there
all
right.
A
I
wanted
to
share
a
quick
reminder
with
everybody,
as
we
got
some
recent
feedback
from
the
cncf
around
the
messaging
and
branding
of
our
status
with
cncf.
A
So
when
referring
to
crossplane
as
being
a
cncf,
a
cloud
native
computing
foundation
project,
it's
also
important
to
refer
to
its
status,
which
is
a
sandbox
project,
the
earliest
phase
of
the
life
cycle
of
the
cncf
project,
the
sandbox
incubating
graduated,
and
so
you
know,
if
you
call
crossplaying
a
cncf
project,
the
appropriate
way
to
do.
That
is
saying
that
it's
a
cncf
sandbox
project
and
so
just
a
quick
reminder
that
I
wanted
to
share
with
everybody
from
feedback.
A
We
got
from
the
cncf,
so
kubecon
north
america,
that
was
last
week
super
busy.
We
spent
some
hours
there.
I
wanted
to
open
the
floor
to
hear
from
folks
on
at
first
the
general
experiences
like
if
anybody
went
and
saw
some
cool
talks
or
anything
like
really
interesting
in
the
keynotes
or
you
know
the
experience
you
know
virtually
and
all
that
sort
of
stuff
did
anybody
have
any
thoughts
or
anything
interesting
to
share
from
last
week
being
at
kubecon.
E
I
mean,
I
think
there
was
a
ton
of
energy
and
stuff,
and
you
know
just
kind
of
not
necessarily
at
the
booth
but
kind
of
in
you
know
slack
and
on
twitter,
and
all
that
you
know
there's
just
kind
of
a
lot
of
good
kind
of
vibes
floating
around,
and
you
know
nice
to
see
kind
of
all
the
the
progress
that
that
we've
made.
I
think
you
know
as
a
project
so
kind
of
good
to
see
that
coming
together
and
a
couple
talks.
E
I guess the talks that Jared and others did were super cool. And I know Dan did a talk with Stephen as well.
G
Yeah, I liked the Meet the Maintainers track in the office hours. I mentioned online afterwards that it would have been really nice if there was a bit more to those interactive sessions, if people were able to hop on the video. I don't know if that was because they literally weren't allowed to, or because it was difficult to figure out how, but still.
A
Yeah, that was a well-run session as well. Nick, thanks for stepping up to do the Meet the Maintainer session, present on Crossplane, and give a demo; that was really cool. And yeah, the way the CNCF had it set up, they did not allow it: it was technically not even enabled for attendees to share camera or audio at all. The Q&A box was the only way they set up any interaction.
A
Yeah
that
was
awesome.
What
did
we
think
about
like
the
booth
experience
having
the
booth?
There
did
we
feel
like
we
got
some
good
value
out
of
it.
Are
there
things
that
we
should
change,
and
I
asked
that
also
because
they
sent
a
survey
out
to
the
booth
coordinators,
which
I
was
the
coordinator
for
our
booth
here.
So
I
have
a
little
survey
link
to
fill
out
on.
You
know
what
we,
what
we
thought
of
the
the
booth
and
the
you
know
experience
and
what
we
would
change
and
stuff
like
that.
A
Thoughts? You know, please do share right now.
D
I'll
say
that
the
like
user
experience
of
paying
people
to
chat
is
was,
I
don't
think
that
I
think
needs
to
be
worked
on.
The
it's
like
people
would
kind
of.
It
would
show
them
as
being
in
the
booth,
but
then
most
people
would
kind
of
just
like
decline.
G
Yeah,
I'm
not
sure
if
it
was
just
the
cross-plain
booth
being
slightly
harder
to
discover,
maybe
than
the
actual
sponsor
booths,
with
it
being
sort
of
only
accessible
by
the
drop-down
menu.
But
I
mean
I
I
I
and
I
think
I
was
I
had.
I
had
three
four
hour
shifts
and
I
would
say
that
I
was
there
for
like
60
of
those
shifts,
maybe
75
with
distractions
and
meetings,
and
things
like
that,
and
I
saw
literally
no
one
coming
to
our
zoo,
the
link,
the
whole
time
sort
of
thing.
A
Yeah, I see it like that; that's similar to my thoughts, Nick. I'm not seeing it as a great opportunity in terms of quantity, the number of engaging conversations you can have. I think we did have some of those, like when people joined the Zoom; that was really the main way we got to talk to people, but it was a small number. So it's not a large quantity of good, engaging conversations.
A
Nowhere
near
the
number
that
you
have
with
the
real
physical
booth,
so
the
value
I'm
thinking
you
mainly
get
is
like
the
presence
there
of
having
the
booth
there
being
like
just
one
of
five
for
the
cncf
projects.
You
know
seeing
the
logo
going
there
having
access
to
all
of
our
docs,
our
you
know
our
links
to
slack
our
you
know.
A
...and our recent blog posts. All that sort of stuff was super useful for people to self-serve and get access to, and to get some awareness out. But I would definitely not staff it with two people sitting there the entire time; one person could handle it, and we could even consider having a more automated thing: "here's access to all our content."
G
Yeah, I had the same thought, and again, not having seen the morning shifts, but from my perspective I would not have anyone on those booths. I would have it just be like: "hey, this is unstaffed; we're on Slack over here, we'd love to chat with you."
A
Yeah, the awareness aspect, especially because it's a free thing from the CNCF: just having the logo there, getting access to our content, and having people be able to discover us, find us, and come interact with the community, that's super worthwhile, absolutely. I'll definitely have my finger on the booth button for EU also, but we should reconsider our staffing.
A
Cool, all right. And then another reminder here that the call for proposals deadline for the next KubeCon is less than a month away: December 13th, which is about three weeks from now, is the deadline for submitting a talk, registering to be a speaker, and sharing your ideas with the greater ecosystem at KubeCon EU, which will also be virtual; it is definitely not an in-person event.
A
All right, so that's KubeCon. I don't see any links to PRs that need discussion this time, so that would be the end of the full section of content and agenda topics before moving on to the optional time. So I wanted to pause for a second before we go to the optional deeper technical discussions phase of the meeting, during which people are more than welcome to break off and sign off.
E
Yeah, totally. So, Community Day: we did one back in May, and we're doing them roughly twice a year, so this will be the second Crossplane Community Day. It's where we get people together from all over the community, especially those focused on managing all the different things via Kubernetes, which Crossplane does a particularly good job of.
E
So, when you think about managing cloud APIs, or managing apps, and having that Kubernetes API machinery there: historically we've had various folks from the Kubernetes ecosystem, a lot of whom have been on The Binding Status (TBS). That has also included folks like the CDK folks from AWS, who are basically writing language bindings on top. We had Kelsey...
E
...Hightower give the keynote there, and he's going to be coming back along with a handful of others, including Joe Beda and Brendan Burns, and many of the folks that we're collaborating closely with in the community, like Jay Pipes and others. So I think Dan's going to be kicking it off.
E
Bassam's going to do a little keynote, and then we're just going to dive into the panel discussion; I think Kelsey's going to be moderating that, and yeah, it's going to be a really great crowd. We have a bunch of lightning talks as well, from various folks who are using Crossplane in various forms to enable self-service for developers, running it in production, and just sharing their stories.
F
Sure. I mean, as Paulo said, there's some interest on the IBM side to make a little bit of noise and formally announce that we've been working with the Crossplane community and so on, and lining up with the Community Day and the v1.0 release sounds like a good opportunity to do that.
E
Yeah, absolutely. So, is this Paulo?
E
Got it, okay. Yeah, definitely; I'll pencil it all in and then just reach out on Crossplane Slack. Sure, sweet.
C
I'm appropriately listed last, so I thought...
A
See
if
you
can
drag
and
drop
here,
just
drag
dan
over
here
there
we
go.
Look
I'm
hacking,
I'm
hacking!
Oh,
it
didn't
work
right
on
cool
man,
okay,
yeah,
thanks
for
bringing
that
up.
Chris
yeah
great
question
all
right,
so
I
think
then
dan
we
can
hop
on
over
to
you
wanted
to
do
a
demo
of
dependency
resolution
and
just
a
reminder
too
that
this
we
are
now
moving
into
the
optional
time
section.
Not
the
dance
demo
isn't
amazing,
but
this
is
an
optional
time
section
here.
A
So,
if
folks
want
to
take
off
for
the
remainder
of
the
block
here,
that's
they're
more
than
welcome
to
but
I'll
stay
on
here
and
watch
the
stand.
So
let
me
give
you
a
share
I'll
stop
sharing
my
screen.
C
A fair number of folks have already seen this demo, but I would say it's a fairly important feature, how this works in Crossplane. So I wanted to do it, number one, for the folks on the call, and also so that we have a recorded presentation of what it looks like, so we can point folks to it if they'd like to see it. So, as you can see here... let me zoom in a bit.
C
Maybe
I
have
a
dependency
tree
here,
which
basically
this
feature
is
that
you're
able
to
declare
your
dependencies
in
your
crossplan.yaml
for
either
a
configuration
or
provider
or
any
type
of
future
package.
We
support
and
you're
able
to
supply
dependency
ranges
basically
on
any
number
of
other
packages.
You
can
get
arbitrarily
complex
with
how
you
declare
your
dependencies.
So
in
this
case
we
have
a
top
level
configuration
which
it'd
probably
be
helpful.
C
...if I went ahead and showed an example of what these configurations look like. I basically just have all these configuration no-ops, which means they install but don't really do anything. And so the top-level one...
C
...A has a dependency on B and C, basically saying any version is fine; then B has a dependency on D and E, C has a dependency on just E, et cetera; and then D and E have dependencies on providers and configurations, respectively. And you can see some of these version ranges here. We'll definitely have some updated documentation about the syntax for semantic versions, but basically we have a very expressive syntax that lets you define any sort of range.
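A crossplane.yaml along these lines is what's being described. This is an illustrative sketch only: the API group and version, the registry paths, and the exact range syntax shown here are assumptions based on the discussion, not taken from the demo itself, so check the package manager documentation for the current schema.

```yaml
# Illustrative sketch of declaring dependencies with version ranges in a
# configuration's crossplane.yaml. API version and package paths are
# assumptions for the example, not copied from the demo.
apiVersion: meta.pkg.crossplane.io/v1alpha1
kind: Configuration
metadata:
  name: config-nop-a
spec:
  dependsOn:
    - configuration: registry.example.org/acct/config-nop-b
      version: ">=0.0.1"   # any version at or above 0.0.1
    - provider: registry.example.org/acct/provider-example
      version: "0.0.1"     # pin an exact version
```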
C
But
basically,
what
we
want
to
see
is
when
you
install
a
that
you're
able
to
get
all
these
configurations
and
providers
installed
with
appropriate
versions.
That's
a
difficult
problem,
because
these
are
packaged
up
as
oci
images
and
the
contents
of
the
oci
image
which
have
the
crossplan.yaml
in
them.
Declare
what
the
you
know
the
transitive
dependencies
will
be.
So
when
you
actually
install
provider
aws
errors.
Excuse
me
configuration
a
here
when
we
look
and
see
that
there
is
a
dependency
on
b
and
c
and
we're
allowing
any
version.
C
We don't know what dependencies each of those versions has until we actually download and unpack them, and we don't want to download every version, especially with a constraint like this one, where we're saying any version is fine; we don't want to check the contents of every version and look through them. We are fine with listing the available versions and picking one that fits the constraint.
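The selection strategy being described, list the available tags and take the highest one satisfying the constraint without downloading any package contents, can be sketched in a few lines. This is a toy illustration: the real package manager uses a full semver-range library, while this sketch only understands `>=X.Y.Z` and exact pins, and `parse`, `satisfies`, and `select` are invented names.

```python
# Toy sketch of constraint-based version selection over a list of registry
# tags. Only ">=X.Y.Z" and exact pins are supported here; a real resolver
# handles the full semver range grammar.

def parse(v: str) -> tuple:
    """Turn '0.2.3' (or 'v0.2.3') into a comparable (0, 2, 3) tuple."""
    return tuple(int(p) for p in v.lstrip("v").split("."))

def satisfies(version: str, constraint: str) -> bool:
    if constraint.startswith(">="):
        return parse(version) >= parse(constraint[2:])
    return parse(version) == parse(constraint)  # treat anything else as a pin

def select(available: list, constraint: str) -> str:
    """Pick the highest available version that fits the constraint."""
    matches = [v for v in available if satisfies(v, constraint)]
    if not matches:
        raise LookupError(f"no version satisfies {constraint!r}")
    return max(matches, key=parse)

tags = ["0.0.1", "0.1.0", "0.2.3"]  # tags listed from the registry
print(select(tags, ">=0.0.1"))  # prints 0.2.3
```

Only after a version is selected does the resolver need to fetch that one package and read its own crossplane.yaml to continue down the tree.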
C
So, for instance, with this constraint we should get the latest version of each of these installed; and by "latest", note that we enforce semantic versioning in that regard with Crossplane. That is a layer on top of OCI images: obviously, your tagging strategy for OCI images does not have to use semantic versions.
C
So if you want to opt out of that, you're going to need to declare explicit tags. All right: I already have Crossplane running here, so I want to show what happens when we install A and see how the rest of the dependencies get resolved. Basically, we are going to first resolve B and C, and then each of their dependencies, and so on.
C
Then this dependency tree would become invalid, and we would discover that once we got down to that level of resolution. So, to visualize what's happening here, I'm going to give myself a little more space, and I will watch our configurations and then go ahead and install our top-level configuration.
C
And I've got this in the Upbound package registry under my personal account; this is config-nop-a, and I've pushed just one version, 0.0.1. All of these config-nops that I showed in this dependency tree have only one version pushed, which is 0.0.1, so we should see that used for all of them.
C
All right, so we see config-nop-a come in, and then B, and then C, and then D, and then E; and once all those are installed, we now see AWS and Helm come through. They're all going through their installation process here: creating their resources, starting their controllers, etc.
C
We
see
that
there
is
a
bit
of
cycle
here,
one
of
the
things
we're
going
to
look
at
cleaning
up
after
getting
this
dependency
resolution
in
is,
hopefully
less
cycling
on.
You
know,
health
of
things
that
are
installed
just
because
it's
a
better
user
experience
nazi,
you
know
true
false
true
we'd
like
to
be
able
to
minimize
that,
if
possible,
but
what
you're
seeing
is
eventually
all
of
these
are
becoming
true
and
then
our
top
level,
a
also
became
true.
So
if
we
go
ahead
and
exit
this
watch.
C
We should see that we have all installed and healthy versions here, and they've all been installed, basically, by this one command with config knob A. You can imagine that if you have a large platform with a lot of different components, it can be really valuable to have this robust dependency resolution. However, you could also choose your level of granularity in packaging; you could just say, you know, take all the things in these different configurations.
C
Let's just put them in one big bundle, and then all you have to worry about is the providers it relies upon, and you can be a little more safe about your versioning. Whereas if you have all these different packages, you have to worry about versioning conflicts when it gets down to the lowest level.
C
One of the things I do want to mention is that if we push a new version of provider-aws, or, let's say, we actually pushed a new version of B and said it was 0.0.2, and then in config knob A we said it requires 0.0.2: when you roll forward to the new version of config knob A, we're not going to automatically go and update config knob B.
C
The reason for that is that it would be relatively complex, and you don't really know who the top-level package is. If we had another configuration here, a completely separate dependency tree that just had a reliance on B, it could say: you can't upgrade to the newest version, that violates my constraints. So even if A says it needs 0.0.2, when we roll forward we're not going to automatically go to 0.0.2 on that dependency, even if it would be okay for the other's range.
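The scenario being described, where two independent package trees constrain the same dependency, can be modeled simply: an upgrade of a shared dependency is only safe if the new version satisfies every installed consumer's constraint. A minimal sketch of that check, again an illustration rather than the real resolver, with an assumed tiny constraint syntax:

```python
# Sketch: upgrading a shared dependency is only valid if the new version
# satisfies the constraints of *every* package that depends on it.

def parse(v):
    return tuple(int(p) for p in v.split("."))

def satisfies(version, constraint):
    # Tiny assumed constraint language: '>=x.y.z', '<x.y.z', or exact match.
    if constraint.startswith(">="):
        return parse(version) >= parse(constraint[2:])
    if constraint.startswith("<"):
        return parse(version) < parse(constraint[1:])
    return parse(version) == parse(constraint)

def upgrade_ok(new_version, constraints):
    """constraints: mapping of consumer package -> its constraint on the dep."""
    return all(satisfies(new_version, c) for c in constraints.values())

# config-knob-a now wants B at 0.0.2, but a separate tree pins B below 0.0.2:
constraints = {"config-knob-a": ">=0.0.2", "other-config": "<0.0.2"}
print(upgrade_ok("0.0.2", constraints))  # False: 'other-config' forbids 0.0.2
```

This is why, as described above, treating an installed dependency as a top-level package and letting the user roll it forward explicitly is the simpler short-term behavior.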
C
The reason for that is that it can be somewhat unexpected, and it's just a little easier to say: hey, this new version of A is not healthy; you can roll back to a healthy state that you knew was good, or you can go and update config knob B, which is the one that's not fitting in the version range.
C
So, you know, in the future we could say: all right, we will intelligently determine that this fits within the version constraints for all your different packages. But for the short term, we think it's a little easier to say that once something is installed as a dependency, we treat it as a top-level thing, so you update it and roll it forward and backward yourself.
C
But anyway, like I said, we imagine a lot of these are just going to have a single level of dependencies that says: hey, I need AWS, GCP, and Azure; or I need AWS and Helm; et cetera. But we do want to provide a robust option for folks that need it. The last thing I want to mention, and I know this has gone on a little while now, is how this is working behind the scenes: we have a singleton resource.
C
So basically, we've introduced a new type. Right now it's called the Lock; we may change it to Blueprint or some other name, but basically it's kind of like a go.sum file, if you're familiar with that. So let me just get the Lock, and I'll show you a little bit of what the output looks like. All right, so, that is not what I meant to do, but it works nonetheless.
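To give a rough idea of the go.sum analogy, the Lock's contents might look something like the sketch below. The field names, API version, and layout here are assumptions for illustration only; as noted, even the resource's name was still in flux at the time:

```yaml
# Hypothetical shape of the Lock singleton; schema details are illustrative.
apiVersion: pkg.crossplane.io/v1alpha1
kind: Lock
metadata:
  name: lock
packages:
  - name: config-knob-e
    type: Configuration
    source: registry.upbound.io/example/config-knob-e
    version: 0.0.1
    dependencies:
      - package: registry.upbound.io/example/provider-aws
        constraints: ">=0.0.1"
```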
C
But if we look at something like config knob E, you can see all the constraints that it has on its different dependencies. You can think about this as a reflection of the graph, the edges that exist between the nodes. And so you can imagine, since we have this kind of snapshot of the cluster, at a future date it may make sense to be able to save this snapshot and restore a cluster, in terms of its packages, by just creating this Lock and having it auto-install the missing packages.
C
So anyway, that's kind of how it's working right now. Hopefully this is a demo that some folks can reference back to in the future, but as this is something we're definitely moving towards merging, I wanted to go ahead and give a demo for folks.
A
Right on, Dan, thanks for sharing that with us. I think it's great functionality too, especially when you can author a package and then the consumer of that package doesn't have to care about installing everything else. Like: oh, why is my package broken and things aren't working? I traced down this error message; what do I have to install? So it's nice to be able to add automation wherever we can to streamline the experience and minimize the potential for failures. So that's great, man. Cool, and that was everything we had on the full agenda now.
A
So if there are no other things that people want to bring up, then we can go ahead and adjourn, then.
E
I just wanted to introduce myself. I saw Daniel's, and I think it was Steve's, presentation at KubeCon on Friday and was super blown away. I want to get into this as much as possible, so I'm going to try showing up and getting involved. I'm going to get caught up so I can speak intelligently about what you guys are doing, but I'm a noob here and just wanted to say hello.
E
Hopefully you'll see my name pop up more and more, and you can put a face with the name.
A
Right on, Douglas, or Doug, either one. Welcome to the community, Doug; it's great to see you here. Thanks for joining, man. Cool, and we'll see you on Slack too, for anything you want to chat about as well. The Crossplane Slack is a great place to go any time in between these community meetings, which are just once every two weeks, so we're always active on there and we can get into conversations any time.