From YouTube: 2020-03-03 Crossplane Community Meeting
A: So that will take us to the end of the sprint, which is Friday, March 13th. One possibility I'm proposing is that we finish the sprint and have that be the feature freeze day, then do some quality assurance and stabilization, and then sometime that afternoon, or maybe on Monday, March 16th, we could run the 0.9 release. That's open for discussion and is simply a suggestion or a proposal; we can certainly move things around as needed, but that's basically the middle of the month.
We do have a 0.9 milestone open, and I've been trying to add issues to it. We've already got some that are fixed and some that are still open, but I've been adding them, at least in the core Crossplane repo, to the 0.9 milestone. And then Phil has updated the roadmap, and that has been merged. Phil, do you want to talk through a couple of the big roadmap items?
B: Sure, yeah. So we're continuing to work on incorporating the versioning and upgrade design feedback that Marcus is working on as part of 1116, so if you have a chance, give that a look. Oh, and we did finish the GitHub rename work, so thanks to Ashton and Jared for the merges and approvals and stuff on that. So now you can get it at github.com/crossplane instead of crossplaneio.
So that's super exciting. We also have some really great stuff coming down the pipe, with some experimental support for OAM API types in Crossplane. It would basically add just a small handful of API types for app configs, including, you know, workloads and traits and those types of things. So that's coming down the pipe. And we now have our templating engine support for both Kustomize and Helm added in, so you can easily create...
...going to go in and make sure it gets set up with the right provider credentials and things like that. So that's super exciting. We also actually ported over the WordPress application so it can deploy on top of these minimal stacks, and the 0.8 blog post actually walks through that from the CLI, for WordPress on top of minimal GCP. The AWS and Azure ones got added in in this current release, and then the other awesome thing that we're... yeah.
So it just kind of walks you through, step by step, the commands you need to run, some of the YAML, and also the resultant resources that get created, and then we actually have WordPress up and running on top of that in no time flat. So it's super exciting to see all that come together and get additional cloud support for it in 0.9; and of course, installing things from, you know, public Docker Hub is great.
A: Cool, thank you, Phil, appreciate that. Yeah, so, you know, if anybody has feedback on the release, or suggestions or a proposal for the timing, we can take that offline and discuss it in Slack, or get more community feedback on it. So that's the 0.9 stuff. We can go ahead and proceed on to the community topics section here. The first one we have here is, as always, an update on the latest TBS episode: TBS 11 ran last week. Dan, do you want to talk about that? Pretty cool episode.
C: Sure. So we had one of the maintainers from Cluster API on, Jason, and basically what we did was use Cluster API to spin up a Kubernetes cluster on AWS; so, not using a managed Kubernetes service, but bootstrapping your own using kubeadm. It was pretty cool because we were actually the first demonstration and documentation of the v1alpha3 release of their API types, which had some pretty major changes and brought a lot of new functionality.
So it's cool to see that. Essentially, what we did was spin up a new cluster, then create a Kubernetes target with the kubeconfig information for that cluster, and then schedule resources to that cluster using Crossplane. In the control cluster we had Crossplane installed and Cluster API installed, and so you basically get, out of the box, that kind of manual Kubernetes offering as well as, you know, the managed service offerings that Crossplane provides.
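
For context, here is a minimal sketch of that flow in Go, using illustrative type and field names rather than Crossplane's exact schema: a target object points at a secret holding the cluster's kubeconfig, and a workload is matched to a target by labels.

```go
package main

import "fmt"

// Hypothetical, simplified shapes for illustration only; the real
// Crossplane KubernetesTarget API differs in detail.
type SecretRef struct {
	Namespace string
	Name      string
}

// KubernetesTarget points at a secret that holds the kubeconfig
// for a cluster that workloads can be scheduled to.
type KubernetesTarget struct {
	Name          string
	Labels        map[string]string
	ConnectionRef SecretRef // secret containing the kubeconfig
}

// scheduleTo picks the first target whose labels match the selector,
// mirroring the label-based scheduling described in the meeting.
func scheduleTo(targets []KubernetesTarget, selector map[string]string) *KubernetesTarget {
	for i, t := range targets {
		match := true
		for k, v := range selector {
			if t.Labels[k] != v {
				match = false
				break
			}
		}
		if match {
			return &targets[i]
		}
	}
	return nil
}

func main() {
	targets := []KubernetesTarget{{
		Name:          "capi-aws-cluster",
		Labels:        map[string]string{"provider": "aws"},
		ConnectionRef: SecretRef{Namespace: "crossplane-system", Name: "capi-aws-kubeconfig"},
	}}
	if t := scheduleTo(targets, map[string]string{"provider": "aws"}); t != nil {
		fmt.Printf("schedule workload via kubeconfig secret %s/%s\n",
			t.ConnectionRef.Namespace, t.ConnectionRef.Name)
	}
}
```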
So you don't have to, you know, spin up EC2 instances and then bootstrap them with Kubernetes yourself; although you could do that, they kind of package it all up into some nice API types. So especially if you're already using that, it's really useful to be able to bootstrap new clusters with it. There's a lot of cool work going on over there, and it was nice to kind of collaborate with some of the upstream Kubernetes folks and show that off.
A
Right,
fair
enough,
we
will
wait
for
to
see
what
the
next
exciting
episode
brings
us
cool
right.
There
we
had
a
couple
updates
about
the
open
application
model
and
integration
with
crossplane
Phil
already
touched
on
some
of
those
in
the
roadmap
update
that
she
gave,
but
also
of
note
here
is
that
in
the
upstream
home
spec
the
Nix
proposal
to
make
the
experience
in
the
API
a
little
bit
more,
an
inline
with
kubernetes
practices
has
been
merged
and
accepted
upstream.
So
that's
pretty
pretty
good.
They
have
seen
that
progress
there,
like
as
a
community.
Then, just last night, Nick opened up a pull request in Crossplane that extends Crossplane with knowledge and understanding of some of the OAM core types, like ApplicationConfiguration and traits and workloads and stuff like that, and he also wrote a controller that will be able to reconcile an ApplicationConfiguration object from the OAM spec into various workloads and traits and things like that. That could be further...
C: Sure. So the work that Nick has done basically takes the most end-user, or highest-level, user of Crossplane and OAM, which would be, like, an application owner, or maybe even someone less technical than that, and basically allows them to use the catalog of different components that have been defined and that sort of thing; and then that gets translated down the line into a couple of different resources. So that ApplicationConfiguration controller implements, right now, the core workload types, which is only one: the ContainerizedWorkload, which is pretty similar to a Kubernetes Deployment.
And then it also allows you to use the core traits defined there. So a trait basically modifies the way a workload is created, or, you know, adds something to it. So on a ContainerizedWorkload or something like that, it may add, you know, an ingress to it, or a number of replicas on the Deployment that it's translated into, or something like that.
So all of these are kind of more abstract concepts that eventually get translated down into core Kubernetes types. The main part I'm working on, along with some folks over at Alibaba, who I don't think are on the call right now: Ryan Zhang is working on the local Kubernetes story and I'm working on the remote Kubernetes story. So in the local case you can translate directly to Kubernetes objects; so, you know, a Deployment in this case.
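
As a rough illustration of that local translation, here is a minimal sketch in Go with simplified, illustrative types (the real OAM and Kubernetes APIs carry much more structure): a ContainerizedWorkload-style object is rendered into a Deployment-style object, and a manual-scaler-style trait then modifies the result by setting replicas.

```go
package main

import "fmt"

// Simplified, illustrative shapes; not the real OAM or Kubernetes APIs.
type ContainerizedWorkload struct {
	Name  string
	Image string
}

type ManualScalerTrait struct {
	ReplicaCount int32
}

type Deployment struct {
	Name     string
	Image    string
	Replicas int32
}

// translate renders the abstract workload into a core Kubernetes-style type.
func translate(w ContainerizedWorkload) Deployment {
	return Deployment{Name: w.Name, Image: w.Image, Replicas: 1}
}

// applyTrait modifies the rendered workload, as traits do in OAM.
func applyTrait(d *Deployment, t ManualScalerTrait) {
	d.Replicas = t.ReplicaCount
}

func main() {
	d := translate(ContainerizedWorkload{Name: "my-app", Image: "nginx:1.17"})
	applyTrait(&d, ManualScalerTrait{ReplicaCount: 3})
	fmt.Printf("deployment %s: image=%s replicas=%d\n", d.Name, d.Image, d.Replicas)
}
```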
A: Awesome. And just for my own edification, real quick: between these Kubernetes-local and Kubernetes-remote implementations, is the idea that for the Kubernetes-remote one, it's essentially the same exact artifacts that would be created by the Kubernetes-local implementation, just bundled up in a KubernetesApplication so they can get scheduled remotely?
C: That's right, and there are a couple of different things you have to do with that, if you think about, you know, injecting secrets and that sort of thing; so there is some kind of custom logic around doing that. But yes, essentially you end up with the same Kubernetes-native types in one location or the other.
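
Sketching the remote case in the same spirit, again with illustrative rather than exact Crossplane field names: the locally translated objects get bundled into a KubernetesApplication-style wrapper whose target selector decides which remote cluster receives them.

```go
package main

import "fmt"

// Illustrative shapes only; the real KubernetesApplication API differs.
type Deployment struct {
	Name     string
	Image    string
	Replicas int32
}

// KubernetesApplication bundles resource templates so they can be
// scheduled to a remote cluster chosen by the target's labels.
type KubernetesApplication struct {
	Name              string
	TargetSelector    map[string]string
	ResourceTemplates []Deployment // real templates hold arbitrary objects
}

// wrapForRemote packages locally translated objects for remote scheduling.
func wrapForRemote(name string, selector map[string]string, ds ...Deployment) KubernetesApplication {
	return KubernetesApplication{Name: name, TargetSelector: selector, ResourceTemplates: ds}
}

func main() {
	app := wrapForRemote("my-app",
		map[string]string{"provider": "aws"},
		Deployment{Name: "my-app", Image: "nginx:1.17", Replicas: 3})
	fmt.Printf("app %s targets clusters with labels %v and carries %d template(s)\n",
		app.Name, app.TargetSelector, len(app.ResourceTemplates))
}
```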
A: Cool, awesome, that's good to know; there's not as much divergence in the implementations as you might necessarily expect. Awesome, cool; so, good updates on the integration and collaboration work there. Then we also had a community question I'm going to post to the community: we are planning on investing some more in the supported services for the different cloud provider stacks, so, like, AWS is one that we want to invest in as well.
Yeah, I'd seen this one. Dan, do you want to tell us a little bit more about it, especially in reference to the testing that has been done on it to make sure that everything continues to be parsed by any different implementations of the, you know, stack parser that may be out there?
C: Yep, for sure. So the reason why this came up is that we have two different types of the same kind, but they're in different groups inside GCP; those would be the GKE clusters. Right now, what's happening is that they all go into the same directory, so we're getting the same resource manifest, icons, etc., applied to both the compute version and the container version, which, you know, are pretty much the same thing. It's not a huge issue, except that the parsing system is then going to categorize the container one into the compute one, because that's the category on the resource manifest. So essentially what this does is just modify the bash command a little bit to copy into specific directories. And, you know, it's possible this is a very small case, but it's possible that in the future we could have more conflicts, so we'd probably want to move towards doing this across the board, as long as you parse it in the same way that the stack manager does. So I've actually created this and made sure that the annotations on the CRDs that are created match the resources that are in the directory with that CRD, and you can test it that way. So as long as any parsing implementation follows that same logic, then it should work fine.
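
A minimal sketch of the matching rule being described, with a hypothetical directory layout and group names modeled on the GCP stack's compute and container groups: keying each kind's directory by its API group keeps two same-named kinds from colliding, and any parser that resolves resources relative to those paths stays consistent with the stack manager.

```go
package main

import (
	"fmt"
	"path/filepath"
)

// Illustrative model of a CRD for this sketch; not the real API.
type CRD struct {
	Kind        string
	Group       string            // e.g. "container.gcp.crossplane.io"
	Annotations map[string]string // e.g. icons, resource metadata
}

// installDir returns a per-group directory, so two kinds with the same
// name in different groups (compute vs. container GKECluster) no longer
// share a single directory and pick up each other's manifests.
func installDir(root string, c CRD) string {
	return filepath.Join(root, c.Group, c.Kind)
}

func main() {
	compute := CRD{Kind: "GKECluster", Group: "compute.gcp.crossplane.io",
		Annotations: map[string]string{"icon": "compute.svg"}}
	container := CRD{Kind: "GKECluster", Group: "container.gcp.crossplane.io",
		Annotations: map[string]string{"icon": "container.svg"}}

	// Same kind, different groups, distinct directories: the annotations
	// on each CRD can now be checked against the resources that live
	// alongside it in its own directory.
	fmt.Println(installDir("resources", compute))
	fmt.Println(installDir("resources", container))
}
```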
C: Very similar; the only difference is that the group category is actually defined in the resource YAML. So if you parsed the container.gcp.crossplane.io CRD, it was showing up in the compute group. They were both showing up as CRDs that were installed, and they were installed successfully, but the container one was showing up under compute. So once again, a very, very minor issue in this case, but it's nice to fix it.
All right, so this is a PR from Sahil. I just wanted to surface this one; there's some really great work here, and I think it's useful for any new contributors, because Sahil is kind of coming to this without previous knowledge, just using the docs and that sort of thing, and then we've had some good back-and-forth in the reviews and comments that I think could be useful across the board. And he did a great job, actually, of getting this implemented before even the first review.
So there weren't that many changes, but the small changes are probably the ones that most new contributors are going to run into. So I just wanted to add it to the notes and reference it for anyone who is looking to get started on this kind of work, of either updating resources or implementing new ones; I mean, kind of seeing some of the gotchas that can get you while you're doing so. But yeah, also just to give Sahil a shout-out, because he's picked this up very quickly, something that's taken,
you know, months of doing this every day for us to kind of grok and understand. It was also some nice affirmation of some of our documentation, so we're going to continue to refine that and build on it from experiences like this; but pointing to actual implementations is a lot of the time more useful for new contributors.
A
They
did
is
this,
so
it's
it's
nice
that
we
had
this
feedback
and
kind
of
you
know
movie
that
suffered
along
with
the
new
contributor
and
fixing
some
of
those
first-time
common
mistakes.
Do
we
also
have
those
well
captured
in
the
developer,
a
contributor
it
guides
so
that
you
know
people
like
it's
in
a
more
permanent
fixture
that
people
would
go
to
as
opposed
to
just
in
an
old
PR
yep.
C: Yep. So Muvaffak and I have been working through that; there's actually an open PR right now in Crossplane for a v1beta1 checklist, which is meant to kind of synthesize some of the work that he did in the managed resource API standards that we have. Just having it in checklist form makes it a lot easier to both review and kind of make sure you've done everything, because the API doc is more of a paragraph form and a little bit more conceptual.
C: Yeah, I guess I'm kind of hogging the PR section of the session, but this is an update that Muvaffak actually brought to my attention last week. So, since the implementation of the KubernetesApplication as a kind of packaging method for deploying resources to target clusters, it originally had a cluster ref, and that was in the status; and then, when it was changed to a target ref when KubernetesTarget was implemented, that was also in the status.
So the issue with that is that the value gets set based on scheduling, and if we then lose the target ref, or we lose the status, we could have issues with the app being scheduled somewhere else, which we don't want, based just on the labels, if there are multiple targets that have those labels. Also, these controllers are a little bit older, and the API types weren't using the common resource status, which was actually helping us not lose the target ref, because the spec and status were being stored together rather than separately.
But this kind of updates those controllers to a bit more modern of a pattern; it also starts using the common resource status and moves the target ref into the spec. As a byproduct of that, you can also now directly reference a KubernetesTarget that you want to schedule to. So previously you could only schedule by labels, and now you can, you know, actually put in a direct reference to a KubernetesTarget by name and say: go to that cluster.
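
To make the change concrete, here is a minimal sketch with illustrative field names (the actual KubernetesApplication schema is richer): the target reference lives in the spec, set either directly by the user or once by the scheduler, rather than in the status where it can be lost.

```go
package main

import "fmt"

// Illustrative shapes only; the real Crossplane types differ in detail.
type TargetRef struct {
	Name string
}

type KubernetesApplicationSpec struct {
	// TargetSelector schedules by labels when TargetRef is unset.
	TargetSelector map[string]string
	// TargetRef pins the app to one KubernetesTarget; because it is in
	// the spec rather than the status, it survives status loss, so the
	// app cannot silently be rescheduled to a different matching target.
	TargetRef *TargetRef
}

func describe(spec KubernetesApplicationSpec) string {
	if spec.TargetRef != nil {
		return fmt.Sprintf("pinned to target %q", spec.TargetRef.Name)
	}
	return fmt.Sprintf("scheduled by labels %v", spec.TargetSelector)
}

func main() {
	byLabels := KubernetesApplicationSpec{TargetSelector: map[string]string{"provider": "aws"}}
	direct := KubernetesApplicationSpec{TargetRef: &TargetRef{Name: "capi-aws-cluster"}}
	fmt.Println(describe(byLabels)) // scheduled by labels map[provider:aws]
	fmt.Println(describe(direct))   // pinned to target "capi-aws-cluster"
}
```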
A: Yeah, I like these updates to the usability and the standards of this; that's awesome, for sure. Sweet, okay, I think that's all the agenda items we had here, so I'll open the floor for any other topics that anybody may want to bring up now. But if there are no other topics, then we can adjourn early here and get 30 minutes back.