From YouTube: 2019-02-19 Crossplane Community Meeting
A: Okay, the recording has started, and this is the February 19th, 2019 Crossplane community meeting. We'll go ahead and get started by looking at the current status of the current milestone, which is v0.2. We are actively targeting roughly a quarterly release cadence, which would put the 0.2 release around the middle-to-late March timeframe, so there's about another month left in it.
A: I think a good view here is also the roadmap, to show the general themes for the 0.2 milestone and remind ourselves of them. For 0.2, a big focus is the initial implementation of the workload scheduler, and before Ilya went on his vacation to India he was able to finish that and get it into master. So we now have a scheduler that is aware of some scheduling primitives, like affinity and some placement settings like that.
A: Those can influence where workloads end up, but that's an initial first stab at it. The overall design — being able to optimize for certain features like, for instance, geographic location or cost — is something that comes in further iterations of the workload scheduler, but the core component itself, the scheduler piece, is now available in master, so I'm happy to get that into the 0.2 milestone.
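As a rough illustration of those primitives, steering a workload toward clusters carrying particular labels could look something like the sketch below. The Workload field names (clusterSelector especially) are assumptions based on the early Crossplane API, not confirmed against what landed in master:

    kubectl apply -f - <<EOF
    # Hypothetical sketch: express placement affinity via cluster
    # labels; field names are assumptions, not the settled API.
    apiVersion: compute.crossplane.io/v1alpha1
    kind: Workload
    metadata:
      name: demo-workload
    spec:
      # Ask the scheduler for a target cluster whose labels match.
      clusterSelector:
        matchLabels:
          region: us-west
    EOF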
A: It won't be too much longer before the Redis support across all cloud providers is available for full review from the community, before it goes into master. We also have #276 here — a pull request with the full implementation for that as well. It has gotten some feedback from Nick, and I'm actively integrating that feedback.
A: Now, there's an item in the community meeting agenda that I want to discuss a little bit: basically, the work to refactor our resource controllers to remove a lot of the duplicated code they had across them and use a single, unified, general resource controller. That improves maintainability, since we won't have duplicated code and a change will affect all of them; it makes testability easier, and all those lovely engineering properties. That will soon be ready to go into master.
A: So there are a couple of smaller items left for the 0.2 milestone, and we're making pretty good progress, I would say, with a month left. We have some smaller issues about improving the user experience, like being able to have default resource classes. That functionality is available today in upstream Kubernetes storage classes.
A: If you want a persistent volume and you create a persistent volume claim that doesn't reference a storage class, there's a concept of having a default class for that cluster, which would then be used to provision your volume dynamically for you.
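For reference, upstream Kubernetes marks the cluster's default StorageClass with a well-known annotation; a minimal sketch:

    kubectl apply -f - <<EOF
    # A StorageClass marked as the cluster default. PVCs that omit a
    # storage class are provisioned dynamically from this class.
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: standard
      annotations:
        storageclass.kubernetes.io/is-default-class: "true"
    provisioner: kubernetes.io/gce-pd
    EOF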
So we want to have that same type of functionality: if you say you want a Postgres database and you don't specify a specific resource class, then a default one would be used for you, and you'd get that Postgres database.
B: Hey Jared — does that mean that, in the kind of QuickStart thing or any of the "deploy a database" walkthroughs I can do on the website, where you create a resource class and then a resource claim, you could essentially just create the resource claim and it would use the default class, rather than defining one separately to reference?
A: Exactly — that's exactly what the experience would be like. It does rely on the cluster administrator, or someone in an administrator context, having set up some resource class, because if there are zero resource classes defined, I don't think we're going to get in the business of making one up on the fly.
A: I think that's pretty much the way it would work. In upstream Kubernetes with storage classes, the way it works is, I think, with an annotation — though that may be an older beta implementation, and it may do something different now — but it's basically still setting a label, annotation, or some sort of property on the resource class that you want to be the default.
A: Something like is-default: true, and that's what determines whether it's the default or not.

B: Gotcha, thanks.

A: No problem.
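Applied to Crossplane, the experience being described might look like the sketch below: an administrator marks one resource class as the default, and a claim that omits a class reference binds through it. The ResourceClass shape follows the early Crossplane examples, but the is-default annotation name and the exact claim fields are assumptions for illustration, not the settled design:

    kubectl apply -f - <<EOF
    # Hypothetical: an admin-defined class marked as the default
    # (the annotation name below is a guess, not a confirmed API).
    apiVersion: core.crossplane.io/v1alpha1
    kind: ResourceClass
    metadata:
      name: standard-postgres
      namespace: crossplane-system
      annotations:
        resourceclass.crossplane.io/is-default-class: "true"
    parameters:
      class: db.t2.small
      size: "20"
    provisioner: rdsinstance.database.aws.crossplane.io/v1alpha1
    providerRef:
      name: aws-provider
    ---
    # A claim with no class reference; with default classes it would
    # bind via the class marked above instead of failing to match.
    apiVersion: database.crossplane.io/v1alpha1
    kind: PostgreSQLInstance
    metadata:
      name: app-postgresql
      namespace: default
    spec:
      engineVersion: "9.6"
    EOF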
A: And then there's this item still that we've done a little bit of work on, but it needs some more fleshing out. Right now we have fairly broad support across cloud providers for managed services. So, let's say you want a Postgres database: you can get it easily, in a portable way, across AWS, Google, and Azure. But there's also a scenario for local clusters.
A: If you want to get a Postgres service running inside of a local cluster — maybe on-premises or bare metal, without access to a nice cloud provider managed service — we demonstrated that functionality in a proof-of-concept way at KubeCon in Seattle during one of our talks, but we haven't really fleshed it out further. So there's some work to do here in terms of defining what that interface looks like, to be able to have provisioners that work for local clusters as well.
A: So, yeah, I'd say that means some write-up about how to do that, and then maybe taking that Postgres and CockroachDB demo that we've done a little bit further. There's work to do there — I wouldn't say it's a large risk right now for the milestone, but it is the highest risk, I'd say, of the items that we've declared for 0.2.
A: Just taking a look at the milestone here, there's still a fair amount of tickets that don't have owners associated with them. I think a little bit of ticket maintenance needs to be done, because I'm seeing a few that are not on the 0.2 roadmap but are in the milestone here in GitHub.
A: All right. Also looking ahead to the 0.3 milestone: a lot of this work is moving toward being able to support GitLab — a portable deployment of GitLab using Crossplane. The GitLab engineers have started investigating that fairly thoroughly. Jason Plum has been working on it, testing the scenarios that GitLab needs — like object storage buckets, databases, etc. — across all the cloud providers, and finding some of the shortfalls where the Crossplane design doesn't yet meet what GitLab's needs would be.
So it's really helpful that Jason took that approach with his GitLab expertise and was able to provide feedback to help us keep going in the right direction. When Nick is done with Redis support, we will have all of the components supported that GitLab needs — Postgres, Redis, object storage — and Nick will move on to fully working on the design and implementation for supporting GitLab in full.
A: So it's good that we have a dedicated resource who will be focusing on that. All right, let's move ahead then to the community topics. I was hoping Nick would be here for this one, the testability approach, and it's not essential to figure it out right now — Ilya would probably be a good resource on that as well. So we can probably skip that one for today and move ahead; I'll put it on the agenda for next time.
A: So let's take a quick example here: a pull request that I have open right now. Let me make this bigger... there we go. The pull request I have now is for the bucket and cluster controllers to use the general resource controller and remove a lot of that duplicated code. Nick had some good feedback here about the way that we track binding between a resource claim and a concrete resource.
A: The pull request refactors some of our controllers to use a general controller, and there's definitely some debate about whether changing the way we track binding states is in scope for that or not. I'd be inclined to say it's out of scope, because it brings along some associated risk with it, I think.
A: That's one dimension we probably want to be cognizant of here: this kind of change affects the interface — the way resource objects are stored inside the Kubernetes API and its backing etcd server, and the way they're serialized. So there's some inherent risk of regressions or breaking some functionality. And then, if you look at the pull request later on, you'd see a bunch of additions of this code; it touches all of our resource types and kind of dilutes the original vision of the pull request, which was changing our resource controllers to use the general resource controller. So I'd be inclined to think it's kind of out of scope and should be tracked for later. Or are there other opinions about, in general, what some good guidelines are for figuring out whether feedback on a pull request should be included in that pull request or done sometime in the future?
B: I don't know — if we want to wait, we could just move all those sorts of things to a more discussion-friendly forum. But it also was helpful to look through and see how it ties more closely to the code when we're talking about it in an actual pull request. So I don't know what the best way is — that's just something that comes to mind right off.
A: Yeah, that makes sense, Daniel. Another thing I'm thinking about is the cost of implementing things. When you're adding a new feature or changing something in the codebase, there's an associated cost with performing that change, but then also with testing it — kind of running through some manual testing scenarios, especially since we don't have good integration coverage.
A: We have pretty decent unit test coverage, but not great integration-level coverage, so there's that cost of testing things before you merge to master. One argument is that you can amortize that cost across more features: if there's some feedback and you go ahead and include it in the same pull request while you're already having to perform that testing, then that testing cost covers more features — as opposed to coming back later in another pull request, starting the implementation of that feedback, and having to go through a full testing cycle again. That's maybe one pro to doing it in the same pull request, but it does really dilute the pull request itself and its scope, and it's no longer a constrained, specific piece of functionality that you can look at with a clean history later on.
A: That's a great point. I think where I probably fall on this is: on the cost of testing stuff, first, we should be making strides to incorporate integration testing and have very robust, automated verification against regressions. That should not be the responsibility of individual developers manually testing scenarios before things get merged or releases go out.
A: So the first thing is that we definitely need to improve our integration test coverage. But I'd be inclined to say the cost of doing integration tests manually right now is not that high, and I would not make that the strongest argument for throwing everything possible into one pull request. I'd probably err on the side of a clean history in the codebase, keeping things that are logically grouped together able to stay together without getting diluted.
A: All right, let's go ahead and move on to some of the open pull requests we have right now. So, Daniel, this is your pull request to use the cached Helm tool for Minikube and local clusters. Did you need anything else from Bassam in terms of feedback, or have you incorporated everything that was needed here?
A: I think that would need to be done such that when this goes to master, all the code that you had updated in the build submodule is in line, or coupled, with these changes here, right, to pick up those variables. I think you would need to do that.

B: Okay.
A: It uses the normal git submodule flow; there's nothing specific to Crossplane about it. That being said, submodules are always kind of confusing. They're a great feature that allows you to pull other repositories of code into your repository so it looks like a single repository, but a lot of people get very confused by them, and I still, to this day, personally get a little confused by them as well.
A: Yep, so I think it would be something along the lines of, from the root of the crossplane repo, you do a git submodule update or git submodule sync or something like that — I'd have to look up what the commands are, but there's nothing specific to Crossplane you'd have to do. It's just a git command, and then, in your staged changes, it should show up as a modification to the build directory.

B: Awesome, cool.
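Since the exact commands weren't recalled here, a typical flow for a standard git submodule update looks like this (using the build submodule path from the discussion above):

    # From the root of the crossplane repo: sync submodule URLs and
    # check out the recorded commit of each submodule.
    git submodule sync
    git submodule update --init

    # To point the repo at a newer commit of the build submodule:
    cd build
    git fetch origin
    git checkout origin/master
    cd ..

    # The moved pointer appears as a staged modification to "build".
    git add build
    git commit -m "Update build submodule"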
A: Thank you, Daniel. All right, and then the last one we had here — it would be good to have some other opinions on it, but we can bring it up quickly. So, Daniel's pull request — which is great, by the way, Daniel; I was very happy to see that work get implemented — adds support for Azure resource groups as a managed resource. That was really cool. There's the general question that resource groups have a different lifecycle than a lot of other resources.
A: Unlike databases, which you may bring up and spin down, or buckets — they're not incredibly transient, but they're definitely more transient than a resource group is. From my understanding of how people use resource groups, they're something you tend to create when you first get your Azure account going, and then you don't really ever delete them.
A: You certainly can — it's a good way to track a group of resources, maybe if you're using them for a temporary project, and for billing as well. I think that's a good reason to use resource groups, but they're not something you willy-nilly create and delete. So, you brought this up yourself, Daniel: with that different type of lifecycle, is there a different approach we would need to take in terms of the reconciler or controller that manages it?
A: Is it inefficient to sit there and try to sync with the external Azure API every minute when the thing is probably not going to be deleted for months or years? I don't have a good answer for that right now myself — it's something to think about. Do you have any more thoughts on whether there are efficiencies or improvements that could be made there?
B: I don't really know about the cost of this, but I kind of mentioned it there in the to-do section. At least with the sync right now, every time it's running create-or-update, and that's fine, I guess, since it's not really changing anything — it's just reconciling, so the spec isn't changing or anything. It's not creating a new one and there are no updates to make, but it's still running that call. There's also a check-existence function that exists in the Azure SDK.
B: That would probably be a little bit cleaner to run in the regular reconciliation. So that's about the only thought I have on that. As far as the actual question of whether we should be syncing it regularly, it just seems like a design decision. I feel like, for consistency with other resources, it would make sense — which I kind of mentioned in some of those comments — and it's not a super expensive thing to do, but it is a little bit wasteful, I guess.
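For comparison, the same lightweight existence check is exposed through the Azure CLI, which is a handy way to see the behavior by hand (the group name below is just a placeholder):

    # Prints "true" or "false" without mutating the group, unlike a
    # create-or-update call issued on every reconcile pass.
    az group exists --name my-resource-group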
A: Is there a known set of properties on resource groups that can change over time? Can you change the name of it, or any other properties on it?
B: The only things that are required are the location and the name, which is kind of interesting. If you just change the name or something like that, I believe it would change — I don't know, I'd have to look in the Azure portal whether it even gives you the option to edit the name.
B: But judging by the CLI and the SDK, I'm guessing that you can't, which is sort of interesting. You can, however, migrate resources from one resource group to another. So I think that's generally how you'd go about it if you wanted a different name for your resource group: you'd create a new one and then migrate all your existing resources to it.
A: Interesting. Since you mentioned that — resource groups in Azure have a location field. So if your resource group is in, let's say, the U.S. west coast, can you create resources that are in different locations or regions and assign them to that resource group, or do all resources have to live in the same region that the resource group itself lives in?
A: I guess that would help a little bit. In general, understanding what properties are mutable — which properties you can change inside of a resource — helps make determinations about what a reconcile loop should be doing: is it just a simple existence check, or do we need to be calling update when something changes?
A: Awesome, man. Okay, let's see — yes, I think that was the last item I had on the agenda today. Is there anything else anybody wants to bring up before we adjourn for the week?
A: Not me? All righty, cool. Okay, well, thank you for attending today, folks. We'll continue working towards the 0.2 milestone, and the 0.3 milestone for KubeCon Barcelona, where it will be very exciting to show off all the hard work that we're doing as a community on stage — hopefully one of our talks gets accepted, so it'll be great to show off all of this hard work. So, have a great week, everybody.