From YouTube: 2020-03-17 Crossplane Community Meeting
Description
No description was provided for this meeting.
A
Okay, there we go, now the recording is started. So this is the March 17th, 2020 Crossplane community meeting, and we are headed towards our next release, 0.9, right now. The tentative schedule we are proposing would be a feature freeze at the end of day Monday, March 23rd, and then we will be releasing on the morning of March 25th, which is a Wednesday, with the feature freeze before it.
A
We want to have all features in master so they can be integration tested and vetted, and so we can find any issues and clean those up. But if there are any important things still coming in on the 24th, we can do a triage to assess their importance and the risk of introducing them as well, and make decisions about what would be included in the release. But we will be targeting the feature freeze on Monday the 23rd; that is six days from now.
A
So next Monday, right after the weekend here, is when we'll be freezing. Let me bring up the roadmap quickly. In the 0.9 roadmap, things like the template stacks, we've got those pretty much on lockdown there, I think. The only thing that's still left is code coverage; I was just talking to Muvaffak about that. There's good coverage for the templating controller, which is the reusable logic that does the template rendering, so that's in good shape.
A
You know, Muvaffak can take on a pass through the stack integration tests to see if there are any big testing gaps there, but all in all, Dan Suskin and Muvaffak did a fantastic job of getting all that functionality for template stacks together, fleshed out, and polished as well. They seem to be working really well in all of our usage of them, with the minimal GCP and AWS stacks and WordPress.
A
The WordPress application is a template stack as well, so everything seems to be going really well there, and that's going to be in this milestone and things are good, yeah. We already updated the GitHub org from crossplaneio to crossplane, so you can just remove crossplane-io from your collective memory and never think about it again. We have a little update on that in a bit, so we can focus on it then, but that's going well as well, and that'll be included in the 0.9 milestone.
A
You know, with that process I think there are a couple of things we might want to follow up on still, to improve our documentation and some of our approach there, but they're not massive things. So yeah, that's most of the stuff for 0.9. I think the big outstanding things are probably around finishing up our planned support for OAM here.
A
Okay, so Muvaffak added some notes here to the agenda; let me go ahead and approve those. Alright, so we can move on to the community topics section here. The first quick note is that the crossplane.io site has been updated. The messaging on there, focused on explaining the value of Crossplane, its benefits to users, and what it does, has been scrubbed, cleaned up, and made a lot more streamlined and understandable: with Crossplane, you manage your application's infrastructure the Kubernetes way.
A
I like that messaging, and the whole site has been updated with that type of explanation of the value of Crossplane, so feel free to take a look at that. For TBS episodes, the latest episode was last week on Thursday, and that was a really good episode. Dan, do you want to give us some of the high-level highlights from the Linkerd episode? Sure.
B
So Thomas Rampelberg from Linkerd joined us. He's been around the Kubernetes community and has been working on Linkerd for, I think, about two years now. Probably the most interesting part of the episode is that he's working on their multi-cluster approach. They're basically developing something called service mirror, where their strategy for networking across different clusters and different regions and that sort of thing is to basically create gateways and mirror all services. So every service has a representation in every cluster, and then they basically proxy that to other clusters.
B
So he had some really cool insight on that. One of the best parts of the episode for sure was when we talked about a blog post where he was talking about architecting for multi-cluster Kubernetes. We talked a little bit about a specific section on having a single control plane, where it was basically advocating for not having a single control plane, and that seems in conflict, I guess, with some of Crossplane's messaging.
B
But throughout our conversation we actually agreed that Crossplane handling the movement of data between clusters, so configuration data, is actually the right model. He referred to it as a hierarchical control plane, which I thought was a great term for it, and I think everybody was kind of excited about the possibility of using something like Linkerd for a data plane, with Crossplane kind of orchestrating the components of that.
B
So a lot of cool stuff happening there. Both he and the Buoyant CTO, who is one of the founders of Linkerd, are excited to kind of circle back and work on some collaboration with Crossplane. Overall a good episode, and the linked show notes have the guide for deploying Linkerd into remote clusters using Crossplane, so yeah, it's pretty fun.
A
You know, on the single control plane being a bad idea, one of the little observations I had is that what's interesting is I've seen this with Rook before: Rook is like an orchestration or control plane layer for storage, with a data plane underneath, and you could turn the operators off. In the same way, you can sort of turn Crossplane's controllers off, and the functionality that they set up, manage, and configured is still working.
A
They still have, you know, a bit of a control plane to run the databases, the caches, the buckets, and all those things that you set up, so they're kind of independent. There's this distinction between the control plane and then control planes at a lower level for the services that are running. So it's kind of like a hierarchy, with Crossplane providing oversight over everything and managing it.
A
And then, you know, individual control planes in each cluster continue running the services, so it's not like a single point of failure at all, actually. But it was really interesting to bring that up; I hadn't thought about that before. Did you have any insight into what the next episode might be?
B
As with last week, yes, but I'm not gonna say, just in case the plans fall through. We actually had a different plan for the Linkerd episode and ended up kind of getting that together last minute, which turned out really great. Hopefully that doesn't happen this week, but I'll hold onto it until the beginning of next week.
A
Fair enough. At one of these community meetings I'll get you to speak about the plans ahead of time, that's okay. All right, so, yes sir, then some open application model (OAM) updates. The application configuration controller, for one of the top-level entities in the OAM spec, Nic has merged into core Crossplane's master branch, so you can take an ApplicationConfiguration and then render and reconcile that into a number of workloads and traits and things like that.
A
So that's available, and then there's the next layer of processing in the spec, from abstraction to concrete types. Dan has been working on an add-on that will take those workloads, the abstractions in OAM terms, into KubernetesApplications and concrete types that can actually run those resources and services and so on in remote clusters, and there's also some ongoing work for doing that for the local cluster as well.
A
So there's a lot of shared commonality between these implementations, but at the end of the day, in the 0.9 release you would be able to take a top-level ApplicationConfiguration and get it reconciled and rendered down to running components, actually having services and apps and everything up and running and functional. So that's super exciting to have all that stuff included in 0.9. Muvaffak, do you want to talk a little bit about the resource tagging that you added to the agenda? Yeah.
C
So this feature is one we had been thinking about for a while, and then Remy, you know, the guy that Dan had made a TBS episode with, opened an issue about it, like, hey, I want to be able to filter down my resources by provider, class, and all that stuff. So we accelerated the effort, and now it has landed in crossplane-runtime, and I'm implementing it over the providers. This has been especially frustrating for me, for example, when I lose my minikube or kind cluster.
C
For some reason, all the VPCs get leaked in AWS, but this way I can just go and find the VPCs, because we can't really name the VPCs; they have their own IDs. So you can go into the AWS console and search for your namespace, or maybe the resource name or provider name, and then you can get everything, and you can see how much they cost you and stuff, so I think that's pretty cool.
A
Yeah, so this is implemented in crossplane-runtime. Is there further integration work for each provider or stack or whatever to incorporate that, or does it come for free because it's implemented in crossplane-runtime?
C
They're key-value maps; in most of the cases we have right now it's name, namespace if it exists, provider, and class. So basically you could say, hey, how much does this cost, like the EKS cost that I happen to have in my Crossplane, how many EKS clusters are provisioned via this class, and stuff like that, on the AWS console.
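For illustration, here is a minimal sketch of the kind of key/value tagging being described: deriving a tag map from a managed resource's name, namespace, provider, and class so the external resource can be found (and its cost attributed) in the cloud console. The helper and tag keys are hypothetical, not the exact crossplane-runtime API.

```go
package main

import "fmt"

// externalTags builds the key/value pairs applied to the external cloud
// resource so it can be searched for in the provider's console.
// The key names here are illustrative only.
func externalTags(name, namespace, provider, class string) map[string]string {
	tags := map[string]string{
		"crossplane-name":     name,
		"crossplane-provider": provider,
		"crossplane-class":    class,
	}
	if namespace != "" {
		tags["crossplane-namespace"] = namespace
	}
	return tags
}

func main() {
	// For example, tag a VPC managed resource so it shows up when
	// searching the AWS console by namespace or provider name.
	fmt.Println(externalTags("prod-vpc", "infra", "aws-provider", "standard-vpc"))
}
```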
A
Cool, awesome, perfect. And this is something else I thought about adding a section on here: the terminology changes. Recently we've changed some of the repositories to reflect some easier terminology. Phil, would you be able to talk through some of the thinking around changing from calling everything a stack to calling them providers and stacks and applications? Yeah.
D
I'm thinking of things like cloud provider stacks, or just, you know, providers, and they have one of the custom resources in them, which is the provider resource itself. And then people are pretty familiar with applications, so we're just moving to a more refined set of terminology where we have providers, which basically model out the providers that we have today. So anything that has a provider resource in it is basically called a provider, and that can either be a cloud provider or a smaller managed service provider.
D
So a cloud provider like GCP or Azure, AWS, Alibaba, etc., has lots of cloud services that are available, or that are made available in Crossplane through the Kubernetes API, so you can provision them from within Kubernetes. Those are basically, you know, providers.
D
Those cloud services, by the way, can also be on-prem, right? So if you're using VMware or other on-prem services with Crossplane, you can also have those be providers. And so the app teams come in and they're like, okay, great, I want to deploy onto GCP, but there's not a curated set of services, like a catalog, that an ops or infrastructure team would want to create, and so they can go and then create a stack.
D
A stack populates a catalog of available services that can be provisioned through the Kubernetes API, with all of the best-practice configs and security and so forth. App teams can then just go and consume that infrastructure; they're automatically compliant with the best practices that the infra and ops teams put together for it, they can self-service, and that works great. So basically, the three things are providers, stacks, and applications. So yeah, cool.
A
Phil, thank you for updating us on that. I think most of, or all of, the provider repos have been updated for the naming change there. There may be a few more outstanding ones, like maybe the WordPress example needs to be renamed to an application, but the major cloud provider repositories have been updated now. Yep.
B
I'd say it is a little bit hard to say things like "cloud provider providers", like the cloud provider's provider, kind of. I wonder if we can, and this is a discussion for another time that I'll bring up now anyway, I wonder if the term cloud provider could be adapted to a more general term, because, for instance, our GitHub stack, which is brand new, you might not think of that as a cloud provider, but there's a provider for it, and it would also remove that kind of stuttering of the name "cloud provider provider" anyway.
A
We need to update that every time we have a new import, or it seems like it might be a little fragile, perhaps, in terms of the maintenance of it over time. So I wanted to open the floor up to anybody who may have experience with one of these types of dev containers here, for any recommendations on how useful this will be broadly, or if there's major feedback on ways to improve it.
B
While I was thinking about this, I was talking to the contributor about it the other day, and he was wanting to add it just to make the local dev experience better. One of the things we've run into with people a few times is having the wrong Go version, basically, and sometimes, well, I guess all the time, it's not super straightforward to switch between Go versions on your local machine, especially if you're a new contributor or something like that. So I think it's useful in that regard.
B
I wonder if we could incorporate, maybe not even the whole collaboration sort of thing with VS Code, but at least the actual container manifest, into the build submodule to keep it more in lockstep with that, because we don't want developers to get out of sync and end up developing locally with something that's not gonna build. So anyway, that's kind of my two cents on it.
A
Yeah, okay, I think that's kind of the line of thinking that I had: there's the maintenance of this thing, like you have to keep updating it or you're gonna run into issues where it isn't in agreement with other dependencies that flow in. And it was interesting, like bringing in a Spanner dependency here. Where did that even come from? Why does core Crossplane now need to depend on Spanner?
A
It goes up the chain, yeah, okay. So it looks like there's maybe some discussion to have on the PR here, because, you know, personally I've never run into issues or needed to do this and I don't understand how it's used, but that does not mean the community at large wouldn't benefit from it. So just because the old dog over here doesn't want to learn new tricks doesn't mean that we shouldn't introduce this for people that are hip and with it.
A
Okay, cool, let's leave some feedback on that one. And also it's awesome that there are contributors jumping in and adding new stuff that could be beneficial to the community. So thank you very much to the contributor; sure, he's not on the call, but a YouTube shout-out. Okay, and then Dan, you brought this one up here, 1341.
B
Not too much to discuss here; I just wanted to make people aware of it. We've actually, for quite some time, been creating remote resources successfully, but not actually updating them when changes are made to either the KubernetesApplication or directly to the KubernetesApplicationResource template. The issue there is how we were applying the change.
B
In the reconcile, we were getting the object that's being reconciled, then checking the remote state of that object, and then applying the patch with the remote state of the object, which is obviously a no-op every time, because you're getting the current state and then just applying it back. So as part of that, this updates it to use the actual desired template instead to perform the patch. It also starts to move towards using some more of the crossplane-runtime machinery.
B
These controllers are pretty old, and there are a lot of newer standards that are not all implemented here, so we're making slow, incremental improvements on them as things come up. Here we move to using RawExtension, which is kind of preferable to having unstructured templates, and the reason for that is that it makes it harder to accidentally modify the object itself, because you're passing around bytes that then have to be unmarshalled into an unstructured object or, if you know the object type, into that struct itself.
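As an aside, here is a minimal sketch of what handling a RawExtension template involves: the raw bytes have to be explicitly unmarshalled into an unstructured object (or a typed struct) before they can be read or changed, and marshalled back afterwards. The flow is illustrative only, not the actual controller code.

```go
package main

import (
	"encoding/json"
	"fmt"

	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime"
)

func main() {
	// A template held as raw bytes, as in a RawExtension field.
	tmpl := runtime.RawExtension{
		Raw: []byte(`{"apiVersion":"apps/v1","kind":"Deployment","metadata":{"name":"web"}}`),
	}

	// The bytes must be unmarshalled into an unstructured object before modification.
	u := &unstructured.Unstructured{}
	if err := json.Unmarshal(tmpl.Raw, u); err != nil {
		panic(err)
	}
	u.SetLabels(map[string]string{"app": "web"})

	// ...and marshalled back into the RawExtension afterwards.
	raw, err := json.Marshal(u)
	if err != nil {
		panic(err)
	}
	tmpl.Raw = raw
	fmt.Println(string(tmpl.Raw))
}
```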
B
So that's kind of a pattern that we're trying to update things to. There aren't actually too many places where this even happened, but it does update it to follow that pattern. We've also done things like adding some better logging here, and in the future this will also be improved with eventing and that sort of thing. So it's making some small improvements on the KubernetesApplications and KubernetesApplicationResources, as they've kind of been dormant for a bit.
B
No, it doesn't. There's actually, which I guess I could have put on here as well, a separate PR on crossplane-runtime that was merged that allows you to pass arguments to the Apply function, which does the merge patch. So for context, the issue that Muvaffak is referencing here is that, with the OAM work, we are basically rendering to KubernetesApplications. When we want the output of an OAM workload to be scheduled remotely, we get the translation of it and basically package it up in a KubernetesApplication so it can be scheduled.
B
The issue is that when you do a merge patch, array elements get overwritten, and you can imagine why that would be the case, because you don't know whether it should update an element or replace it. If it's a struct in place, what's the criteria for saying this is similar enough that I just want to change this one field, versus replacing the whole thing? That's normally not an issue when we're updating resources, because we'd want it to have that behavior.
B
But if we have something like a Deployment, which in its spec has things like selectors and the pod template, and it has replicas, which is specifically the field we were interested in in this case, those are not array values, right? So we want to either remove those or keep them as they exist when we're not passing a value for them. What was happening is that we are modifying the KubernetesApplication that has the Deployment nested as an array element.
B
It was actually overriding that full element, so something like replicas, which normally wouldn't be overwritten when you patch a Deployment itself, was getting overwritten, because it's replacing the whole Deployment as an array element in the KubernetesApplication. So basically, what we're doing now instead is passing a function to the patch/apply functionality we have in crossplane-runtime that, before it patches them, essentially moves them into separate objects, patches them individually, and then puts them back into the KubernetesApplication and then does a patch with that.
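Here is a small sketch of the array behavior being described, using RFC 7386 merge-patch semantics (the evanphx/json-patch library is used purely for illustration; it is not necessarily what the apply code uses). Because the Deployment sits inside the templates array, a patch that omits replicas replaces the whole array element and drops the field.

```go
package main

import (
	"fmt"

	jsonpatch "github.com/evanphx/json-patch"
)

func main() {
	// Current KubernetesApplication-style object: one Deployment template with replicas set.
	current := []byte(`{"templates":[{"kind":"Deployment","spec":{"replicas":3,"selector":{"app":"web"}}}]}`)

	// Desired state omits replicas; we are not trying to change it.
	desired := []byte(`{"templates":[{"kind":"Deployment","spec":{"selector":{"app":"web"}}}]}`)

	// An RFC 7386 merge patch replaces arrays wholesale, so the nested Deployment
	// loses its replicas field instead of keeping the existing value of 3.
	merged, err := jsonpatch.MergePatch(current, desired)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(merged)) // templates now contains only the fields from desired
}
```

Patching each nested template as its own object, as described above, sidesteps this by letting the normal merge semantics apply to the Deployment's own fields.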