From YouTube: 2020-01-07 Crossplane Community Meeting
B
Absolutely. We had a community member realize that they were unable to connect to a GKE cluster using our new v1beta1 API and the connection secret. The issue there is that when you provision a GKE cluster, the service account role that you have basically doesn't have admin rights in the cluster. So you have to turn on basic authentication to get admin rights, which we want to be able to do as we provision Kubernetes applications into that cluster, using it as a target.
B
So basically what we need to do is enable basic auth, so that the username and password are in the kubeconfig that's generated. The issue with that is we don't have a story right now for creating a cluster with existing credentials; there's actually an issue open for this, I think. We're considering how we want to provide secret information to a configuration.
B
So what we're doing right now is allowing the specification of a username, then relying on GCP to generate the password and propagate it back in the kubeconfig. And then you can obviously just use that to connect to the cluster, so it is somewhat declarative, as are the credentials we're using. Long story short: if you specify basic auth, you can automatically connect to the cluster.
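As a rough sketch of what that looks like in practice, a GKECluster spec enabling basic auth might be shaped like this (the field names and group/version here are assumptions based on the discussion, not verified against the exact stack-gcp schema of that release):

```yaml
# Hypothetical sketch of a v1beta1 GKECluster with basic auth enabled;
# field names are illustrative and may differ from the real schema.
apiVersion: container.gcp.crossplane.io/v1beta1
kind: GKECluster
metadata:
  name: example-cluster
spec:
  forProvider:
    location: us-central1
    masterAuth:
      username: admin   # GCP generates the password and it is
                        # propagated back into the kubeconfig secret
  writeConnectionSecretToRef:
    name: example-cluster-conn
    namespace: crossplane-system
```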
B
Otherwise you would need to go through provisioning a GKE cluster, then actually connecting to it through GKE and setting up a service account for Crossplane to use for scheduling. But it was nice that we got this issue opened on the GCP repo and had the fix merged in 14 hours. It was cool to be able to get that fixed quickly, and I'm excited to have that in the patch release coming out today.
A
All right, sweet. So we will get that out, I think, probably today, since everything is merged into the release branch. It'll be a pretty simple process of tagging and publishing, so it shouldn't be too hard to get out. The next release, the monthly release, is 0.7. We had 0.6 right before the holidays in December, so we continue to get them out.
A
The next middle-of-the-month timeframe for a release would be the end of next week, Friday the 17th, and we can consider a feature freeze a couple of days before that. That seems well aligned with some of the features and bug fixes we're planning in the next two weeks. So that's at least a proposed date right now for our 0.7 release. The roadmap needs to be updated so that it includes the 0.7 features as well; we'll follow up on that.
A
Let me know if anybody has any major concerns about the proposed date of end of next week. There could be discussion about what goes into it, but the general idea shouldn't be too wild and crazy, since this will be the fourth monthly release in a row that we've done around the middle of the month. So this cadence is pretty well established.
A
We can worry about that later. Okay, so that's it for the upcoming releases and milestones. As far as I know, we do not have any known patches or patch releases for main Crossplane or for any of the other stacks like AWS or Azure. So these are the only releases over the next few weeks that I am aware of, at least.
C
Yeah, I'm not sure if that's the right headline to put it under, but I actually wanted to ask whether we want to include these easy stacks in our documentation, like the quick start guides and such. They make it easier to just get up and running with Crossplane, and we will move the repositories to the crossplaneio GitHub organization. So yeah.
A
And I saw that you should have the rights now, I believe, on the source repositories to move those into the Crossplane org, so thank you for doing that. And I definitely think it is a good idea to have user-facing documentation, like our user guides or quick starts, take advantage of these easy, minimal, streamlined experiences from the stacks you've created. I'm starting to think about it.
A
Sort of like how Kelsey Hightower does Kubernetes the Hard Way, the repository with walkthroughs and guides. I think it might be interesting now to think about a similar approach here for Crossplane, where we boil the experience down, at least a very streamlined experience, to just installing a couple of stacks and then creating a CRD that represents your config, and then everything else starts being taken care of for you pretty automatically, with fewer manual steps.
A
That would be a nice quick start, in the true definition of the term: a way to get started quickly. And then there could be something like a "Crossplane the Hard Way," a more in-depth guide that walks you through what's happening under the covers. What are the resources? What are the concepts? All that sort of stuff, a little bit more detailed, a deeper walkthrough.
D
Totally, and maybe even adding an alternate quick start, or incorporating it somehow into the existing quick start so it's just a little bit faster. Both of those would be good ways to tackle that. We also have some work coming in to get that stuff ported over to template stacks, so documenting it from the end-user perspective would be totally cool, and then for the forking and customizing experience, maybe adding that in once we get to the template stack stuff.
A
That's a good point, Phil. Cool, but yeah, overall it would be awesome to get guides or some walkthrough READMEs about how to use the easier, minimal stacks there. I think that would be awesome, and then for CI/CD pipelines, I can help you out with that as well.
A
Marcus, I think what you're getting at is the naming in the repo itself. I was actually there and opened up the dialog to start moving them to the Crossplane organization yesterday, and I was wondering how this fits into our naming scheme. From a repository sorting and searching perspective, it might be nice to adopt the same naming convention that we already have, like stack-gcp.
A
So it could be maybe stack-gcp-minimal or something like that, so that they could be sorted right next to each other, as opposed to far apart from each other in the default list when you go to GitHub. So that's something I was thinking about too.
A
That's a manual prerequisite now, since we don't have dependency resolution, but with dependency resolution it would bring in the upstream stack-gcp automatically. Yeah, it itself is a stack that gets installed and has a controller, a reconciler that takes a CRD from a user and outputs artifacts from it. So it is a stack in its own right, at least in that sense. Awesome.
B
These are all in service of our new strategy for scheduling Kubernetes applications. Essentially, what we want to enable is that for any Kubernetes cluster we can get access to through a kubeconfig, we can schedule Kubernetes applications to it. That enables things like scheduling to on-prem clusters, or scheduling within the same cluster that is the Crossplane control cluster, and that sort of thing. To do that, we need to make some modifications to how the Kubernetes application scheduler works, and they're very light modifications, but we want a consistent scheduling process.
B
We also need to slightly modify how the Kubernetes clusters that can currently be scheduled to, that is, Crossplane-provisioned Kubernetes clusters, whether GKE, EKS, AKS, etc., expose the ability to be scheduled to. The main way this happens, the first one we saw there, was the secret reconciler. So we now have the ability to schedule from multiple namespaces to one Kubernetes cluster.
B
Our secret reconciler basically just says there are two secrets that have both consented to propagation: one of them is the "from" secret and one of them is a "to" secret. The from secret may be in a namespace like crossplane-system, so something that's created by a managed resource, and the to secret may be in an application-team-specific namespace. Essentially, what this PR allows us to do is have multiple secrets propagated from the same from secret, and that just involved modifying the annotations.
B
We put annotations on secrets that allow the from secret to have multiple to secrets, which include their UID, name, and namespace, and that sort of thing. So now we can have that multiple propagation with this secret reconciler.
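The annotation scheme described here might look roughly like the following (the annotation keys and truncated UIDs are invented for illustration; the real keys in the Crossplane codebase may differ):

```yaml
# Hypothetical illustration of the propagation annotations: a single
# "from" secret fanning out to multiple "to" secrets in other namespaces.
apiVersion: v1
kind: Secret
metadata:
  name: example-cluster-conn
  namespace: crossplane-system
  annotations:
    # one entry per consenting "to" secret, keyed by its UID,
    # recording the destination namespace/name
    crossplane.io/propagate-to-2f9e...: app-team-1/cluster-conn
    crossplane.io/propagate-to-8a41...: app-team-2/cluster-conn
```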
The next piece of that is actually creating the KubernetesTarget resource. What this is, it's almost similar to how a provider works, in that it is just a resource that points to a secret that has connection information for a Kubernetes cluster.
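A KubernetesTarget along the lines described might look like this (the API group/version and field names are assumptions based on the discussion, not verified against the release):

```yaml
# Illustrative sketch of a KubernetesTarget: a pointer to a secret
# holding a cluster's connection information (kubeconfig).
apiVersion: workload.crossplane.io/v1alpha1
kind: KubernetesTarget
metadata:
  name: app-team-target
  namespace: app-team          # the namespace that wants to schedule
spec:
  connectionSecretRef:
    name: example-cluster-conn  # secret holding the kubeconfig
```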
B
If you are dynamically provisioning Kubernetes clusters in the traditional way on Crossplane, which is already enabled, we don't want you to have to go through the extra step of creating a KubernetesTarget just because you want to be able to schedule to it. So part of this involves creating an automatic target creation controller, which basically says: if you claim a Kubernetes managed resource, so once again frequently GKE, EKS, or AKS, we're going to automatically create a KubernetesTarget in the namespace of that claim.
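For context, the kind of claim that would trigger that automatic creation might be shaped like this (the claim kind, group/version, and fields here are assumptions for illustration):

```yaml
# Hypothetical claim; when it binds (statically or dynamically),
# the controller described above would create a KubernetesTarget
# in this same namespace automatically.
apiVersion: compute.crossplane.io/v1alpha1
kind: KubernetesCluster
metadata:
  name: app-cluster
  namespace: app-team
spec:
  classSelector:
    matchLabels:
      example: "true"
  writeConnectionSecretToRef:
    name: app-cluster-conn
```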
A
Is that both when a claim results in a dynamically provisioned Kubernetes cluster and when a claim results in getting matched to an existing Kubernetes cluster? So it's both static and dynamic provisioning that'll end up with the KubernetesTarget within the namespace of the claim?
B
Yes, that's absolutely correct. The reason we're doing that is that you can only claim a resource once, right? Once it's bound to a claim, you can't do that again, and right now that's the method for propagating a secret to a namespace. The KubernetesTarget allows multiple namespaces to utilize the same cluster. So if you have something like a sandbox cluster, it may be useful to have developers in multiple namespaces using it. If you claim it from a namespace, the KubernetesTarget would be automatically created there.
B
If you wanted to consume that same cluster in other namespaces, then you just create a KubernetesTarget that references that cluster. That's another key distinction. And then you can bring any cluster that you have a kubeconfig for and connect to it. If that was an on-prem cluster, something that's not currently reflected as a managed resource in Crossplane, then you could just create a secret directly with that kubeconfig, then create a KubernetesTarget that references it, and you'd be good to go.
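A minimal sketch of that bring-your-own-cluster flow might look like this (resource names, the secret key, and the target's group/version are assumptions for illustration):

```yaml
# Hypothetical example: wrap an existing kubeconfig in a secret,
# then point a KubernetesTarget at it to make the cluster schedulable.
apiVersion: v1
kind: Secret
metadata:
  name: onprem-cluster-conn
  namespace: app-team
type: Opaque
stringData:
  kubeconfig: |
    # contents of the on-prem cluster's kubeconfig go here
---
apiVersion: workload.crossplane.io/v1alpha1
kind: KubernetesTarget
metadata:
  name: onprem-target
  namespace: app-team
spec:
  connectionSecretRef:
    name: onprem-cluster-conn
```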
B
The other scenario is that you have a Kubernetes cluster that's already been claimed, and you want to consume it from other namespaces. We actually have a clusterRef field, which is a Kubernetes object reference on the KubernetesTarget, that allows you to just point at a cluster. It will propagate that secret for you, set the connection secret reference, and then you can begin scheduling there.
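Sketched out, that clusterRef variant might look roughly like this (again, the group/versions and field names are assumptions based on the discussion):

```yaml
# Illustrative only: reference an already-claimed cluster directly
# and let the target reconciler propagate the connection secret
# into this namespace.
apiVersion: workload.crossplane.io/v1alpha1
kind: KubernetesTarget
metadata:
  name: shared-sandbox-target
  namespace: other-team
spec:
  clusterRef:
    apiVersion: container.gcp.crossplane.io/v1beta1
    kind: GKECluster
    name: sandbox-cluster
```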
B
So, in addition to this automatic creation controller, we also have to implement a target reconciler for each stack that provides a managed Kubernetes offering, for when you reference a cluster directly. Basically, we need a controller in each of the stacks that says: okay, I've been referenced by a KubernetesTarget; I need to find the connection information for that cluster and get it propagated to the namespace of the KubernetesTarget. That is what this PR is doing here.
B
You'll see the addition of a target reconciler here that can be implemented across the stacks with very little configuration. These all kind of work in tandem, so they'll be landing at the same time. This will be enabled in 0.7, and if you're using the existing manner of consuming clusters, there should be very little churn from your perspective, but if you want to do these advanced things, then you'll now be able to.
B
Yeah, exactly. Like I said, if you have the connection information to talk to a cluster, then you can now schedule there, because the Kubernetes API is consistent whether you're using a managed service or running it on bare metal. If you're running it on Raspberry Pis in your apartment, you could still provision things from Crossplane to that. So we'll definitely have some cool demos coming up showing how you can use on-prem or self-hosted clusters, among others. Awesome.
B
Absolutely. I think there are actually issues open for this on the CLI, where we want to be able to take even just a local kubeconfig and inject it into a Crossplane cluster with a KubernetesTarget, so basically take this other cluster I can connect to and make it schedulable in Crossplane. I think that would be really valuable. Another thing that has been requested by the community is doing the reverse.
B
That is: okay, I provisioned this Kubernetes cluster with Crossplane, so I have the secret information; let me pull that down to my local machine and connect to it directly. It's almost like how, in the past, people would want to connect to a VM that was provisioned for them using SSH keys; a similar scenario to that, for when someone wants more insight into a cluster they provisioned using Crossplane.
A
Yeah, and I could see it in the other direction too: maybe you have your ~/.aws/config file locally, and you say, hey, just grab the information from that and create a secret for me. More automation, fewer manual steps, right? So those are the topics that we had on the agenda here. Did anybody have further agenda items or topics they wanted to add before we adjourn for the week?
A
Okay, sweet. All right, then we can go ahead and wrap up this community meeting here, and we will continue our normal frequency of every two weeks now that the holidays are over. The next one would be the 21st, I think, two weeks from now. So we will go ahead and proceed with getting the GCP stack patch out and keep on working towards 0.7, which will be out before the next community meeting. And then we will see everyone here online in two weeks.