From YouTube: 2019-08-06 Crossplane Community Meeting
A
Alright, the recording is started, and this is the August 6, 2019 Crossplane community meeting. One of the things that we've spent a lot of time focusing on recently is figuring out an updated roadmap of epics and features that we want to address in this project over the next few milestones. The current milestone is 0.3, and we have pretty much a complete plan figured out, with the epics opened and features associated with those epics, and the roadmap is updated in a pull request as well.
A
So let's take a quick look at that, just to get on the same page. Here is the roadmap, which Phil has been driving. Let's look at the roadmap specifically: we have the previous milestones in there first, and then we have the 0.3 milestones. Phil, do you want to briefly talk through this updated roadmap that you're driving?
B
So, as many of you have seen, you know, we demoed deploying the GitLab stack to multiple clouds, to both GCP and AWS. And so we're moving now to provide a set of resource class enhancements around having default classes in an environment, which increases the portability of the claims, because otherwise you have to specify the class reference there. It's an optional choice that lets you make your claims more portable. And then also allowing those resource classes to have stronger OpenAPI validation on each of the respective CRDs.
B
So that's in flight right now, and also supporting annotations for that, and install-time annotations, kind of coming in the next section. The big thrust of the 0.3 release is moving these infrastructure stacks out of tree so that, basically, they can be maintained more independently, and it's easier for anybody to come and say: oh, I want to create an infrastructure stack for a cloud provider.
B
So there's a set of functionality in here around enhancements to the stack manager, which is basically a renamed extension manager: adding in the notion of apps versus infrastructure stacks, namespace isolation and allocation, support for doing the annotation overlay at install time, moving the infrastructure stacks into separate repos for the big three, and then upgrading to Kubebuilder 2 as part of that.
B
So, you know, getting these infrastructure stacks split out of tree is really great, but we want to make it super easy for people to enable a couple of key scenarios. One is just, you know: hey, there's an existing infrastructure stack like GCP or AWS, and I want to add an additional cloud service to that, so tell me what I need to do to accomplish that. And then the other one is: hey, I want to...
B
...updating those 0.3 docs to reflect all the enhancements from above, which is a ton of work, so super excited about that, but also to provide kind of a more seamless onboarding experience, just making it easier to get started as a user and as a contributor. And then, lastly, providing some examples for DevOps appliances like Jenkins and GitLab, and then maybe a GitOps flow using both GCP and AWS native stacks, and probably Azure as well. Awesome.
A
Phil, thank you for sharing what we've converged on for a near-term roadmap in 0.3, and there are also some of the items that we see in the future as well. As we refine some of the upcoming milestones here, 0.4 and 0.5 plus, we will keep this roadmap up to date along with that as well.
A
I am definitely excited about the docs and examples getting updated: taking a complete pass through that to match our new, or, you know, refined and more mature user experience, and making sure that everything's accurate, easy to follow, etc. I'm definitely looking forward to that. I believe that the majority of our documentation is still from the night before our first release, so I'm surprised that it's remained to this day, and so there are many improvements we can make.
A
Alright, so in addition to the roadmap, as I mentioned, we opened a set of epics that match one-to-one what we just went over in the roadmap. So we're now using an epic label here that captures higher-level themes of any large-scale issues, features, and desires for the project. They can all be found under the epic label now, and if you drill down into any of these particular epics, you will see, beyond explaining what the epic is about, how we would validate it, and what we're trying to accomplish with the epic, that we also have the composite set of features and issues that make up that epic all linked here as well. So you can follow along with the high-level theme and the set of stories, issues, features, tasks, etc., that we need to execute on in order to deliver on the high-level epic theme.
A
Okay, and to try to wrap this up here: in addition to the roadmap and the epics that we've opened, we took a pass and updated the 0.3 milestone as well. The 0.3 milestone now has all of the epics and features that we have identified as part of the 0.3 milestone in the roadmap, and they're all captured here. We have moved out any of the older issues or features that are no longer in scope for the milestone, so the milestone should be completely up to date.
A
Now, as well as the project board, which is kind of mapping the phase or state that each issue is in; I believe this is up to date with everything that I know is in progress or in review, and then the to-do column as well. The number of issues in the milestone increased recently because we're tracking the epics and then opening up issues for each of the subtasks or features that make up that epic.
A
Those are all included in the milestone, so the number of issues did increase, but we are targeting a beginning-of-September timeframe, so around a month or so from now, give or take, to release 0.3, and we have pretty much all contributors assigned and executing on all these issues. So we are making rapid progress through those, and we will keep an eye on the progress and the burndown as we go through the rest of the milestone timeframe, through the month of August.
A
But it looks like this plan is pretty well fleshed out with the milestone, the roadmap, the project board, etc., so we are pretty up to date on what we're hoping to do in the remaining time for the 0.3 milestone. Alright, so that's all of the milestone and roadmap sort of stuff, talking about what we're going to be working on. Thank you, Phil, for going through that whole roadmap update there, and just a reminder to comment or provide feedback on the pull request with the updated roadmap.
A
So I'm really happy with Mark's effort here to, you know, find the issue from using Crossplane in his own scenarios, find the root cause of it, fix it, and then provide adequate testing to verify the fix and prevent future regressions. So thank you very much, Mark Fisher.
A
Alright, the next topic I wanted to talk about (I don't remember if we had talked about this very broadly, but it's definitely worth bringing up) is that, in terms of areas of the Crossplane project and the way that particular contributors are organizing, we've decided, at least in an early phase, to adopt a special interest group type of organization. You see this in the Kubernetes upstream project, which is where I am most familiar with it.
A
You know, you have a SIG, a special interest group, for storage, and for the cloud providers, and the API, and documentation; all sorts of SIGs there. Those all have very explicit charters, and they have governance around how they operate. We're not at that phase yet; we don't have, you know, charters and governance, etc., around the special interest groups, but we do have a high-level sense of what their purpose is, what they are focusing on, and how contributors organize in them to focus and deliver on a shared set of functionality.
A
So the two special interest groups that we have so far are for services and stacks, as we're calling them. The services special interest group will be the set of contributors focusing on, you know, converging on the Kubernetes API for a consistent, normalized, universal way of managing infrastructure, and that involves modeling and writing controllers, CRDs, APIs, etc.
A
...for all these different platform services and infrastructure cloud providers, all that sort of stuff, and then also creating a layer of abstractions and portability across those various services, so that workloads are able to become portable and deploy to any particular cloud. We have a link here in the meeting agenda for the YouTube playlists: on the Crossplane YouTube channel we have a set of playlists now, and we have two new playlists added, one for the services SIG and also one for the other special interest group, the stacks SIG.
B
And just a note on that, Jared: we're also going to be posting a larger set of videos, which are just more impromptu, to the Crossplane Slack. It does take about, you know, 20 to 30 minutes to produce one of the videos for YouTube, and so also kind of tune into the Crossplane Slack and look for some of the replays there. We're using Zoom, and those also have...
A
That is fantastic functionality, to be able to jump around. If I'd like to remember when we were talking about, say, portability or whatever, then you can search for the keyword "portability" and it'll go to the various times in the video where you said it. That is amazing; yeah, I love that. Wow. So it didn't turn out to be a horrible, you know, poorly transcribed mess of words; it was actually fairly high fidelity to the real conversation.
A
Well, it's better than I expected. I set my bar low and I am pleasantly surprised; that's a good way to approach life, by the way: set your bar low. Okay, so we have a couple of pull requests that are open as well. The first one we have in the list here, Muvaffak is driving. He is stuck in traffic right now, so I guess he's just doing a different type of driving right now. But the goal of this pull request is as follows.
A
The current Crossplane repo is built on the Kubebuilder v1 and controller-runtime v1 patterns, and so we want to update and get modernized, because v1 is pretty much deprecated. We want to be on the latest controller-runtime and, you know, pick up all the new functionality and new patterns, so we are consistent with other controller-runtime based projects in the Kubernetes ecosystem, and also take the benefit of bug fixes and new features, etc.
A
It's funny to say that we need to modernize the repo when it's not even a year old, but the Kubernetes ecosystem moves quickly, so we don't want to get left behind. I met with Muvaffak last night to sync up on this, and I think I'm planning on syncing up with him again sometime after this meeting, but basically he's got written out here kind of an overview of what some of the major changes are. There are a couple of different patterns in Kubebuilder v2, but one of the ones...
A
...that might make sense for us, to simplify some things, is to have our CRDs still be registered at runtime in a nested hierarchy, where they have groups that identify their functional areas, like database.crossplane.io or cache.crossplane.io. But in terms of the types being defined, like in a types.go file, that'll be more of a flat list, by version. And it makes even more sense given what will be the next step of this migration.
A
We plan to move each cloud provider's specific types and controllers into their own specific repos, where you'll have your types, etc., defined in kind of a flat list, but you'll already be in a cloud-provider-specific repository, so that makes plenty of sense. And by doing this, we can remove our need to have a fork of the controller-runtime, or sorry, the controller-tools project, where we've been maintaining a fork that allowed a nested hierarchy for our API files.
A
And so, if we remove that necessity and move to a more flat list, we'll be able to not have the maintenance burden of that fork, and more easily keep up to date with upstream controller-tools. A couple of other things are mentioned in Muvaffak's write-up, and I think Dan already provided some feedback as well. So I think this is something to keep an eye on, and we want to move this quickly, because this is a prerequisite before being able to move the cloud-provider-specific functionality out into their own repos. So this is a bit of a bottleneck and a bit of a blocker. Muvaffak is working on this as his primary focus, and I am helping him with this as well, but we should look for any other opportunity for parallelization, or, you know, making this a priority to review and converge on.
C
Absolutely. So I've mentioned this before, and it was brought up, I think even last community meeting; we talked about it briefly: the movement to strongly-typed resource classes. We have already implemented the features needed to be able to basically implement strongly-typed resource classes and default to them for every managed kind. So this PR is basically implementing that for just a single managed kind; GCP Cloud Memorystore was chosen, and this is kind of an interesting-looking pull request here.
C
So if it's a one, that means that this is just something that had to happen for us to be able to do strongly-typed resource classes at all, anywhere, and if this PR were merged, then it would never have to be done again, and we'd be good on that. Two means that this is something that has to happen every time a new abstract kind with a strongly-typed resource class is added. So here we're doing GCP Cloud Memorystore, so that would be the abstract kind that would consume it.
C
That would be a claim kind. So, every time a new abstract kind gets added; so, for instance, if we were implementing strongly-typed resource classes for replication groups, or the Azure Redis managed kind, we wouldn't have to do any of the things marked with a two here. But if we were to implement strongly-typed resource classes for, you know, GCP CloudSQL or whatever, then we would need to do the twos for the MySQL, for instance, abstract kind. And then three is something that has to happen...
C
...every time a managed kind at all is updated. So if we are updating AWS replication groups or Azure Redis, we would have to do all of those steps for each of those. So it's kind of meant to lay out a path for implementing this across the board, and it's generally a very similar pattern. The only thing that I would point out is that, right now, we have some helper functions, because with the generic resource class we do not have strongly typed fields, obviously; it's just a map of parameters that has to be parsed.
C
So right now the claim reconciler usually calls some sort of, you know, "get spec from resource class" function, or something of that nature, for each managed kind, and it just parses the map of parameters. So those things can be eliminated now, and we can just set the spec directly, because it's one-to-one between the strongly typed resource class and the managed resource kind.
C
So that's where a little more discretion comes in, and a little more actual functional code change happens. You'll see in this one, I think there was one function that did that kind of thing, and it's just been removed; in the claim reconciler, the configurator just sets the spec of the managed resource kind from the strongly typed resource class. So yeah, this would be something that we'd want to converge on before implementation across the board.
A
Cool, that sounds good. Daniel, thank you for describing that for us. I like the pattern that we've had recently for some of the more complicated pull requests, where in the description of the pull request we've been adding a summarization of the functionality included, particularly around some of the more difficult parts to understand, or some of the parts that we want more verification or feedback on. So I like that pattern here, where, you know, in English prose, you kind of describe...
A
...what the pull request is, so that the reviewer of the pull request doesn't have to grok all of that from just reading the code. It's really useful to have some supplemental information around that. Obviously, for big features and stuff, we'd want that in a design doc ahead of time, but this is supplemental to that, to kind of help you understand and parse the pull requests as a reviewer.
A
Yeah, that's another thing too, a good point that we need to take special note of: as the Kubebuilder v2 migration goes, we'll need to be cognizant of other changes in flight, or impacts to other changes, and be able to handle merge conflicts and make sure that everything ends up converging smoothly.