From YouTube: 2020-07-06 Crossplane Community Meeting
A: Okay, the recording is started, and this is the July 6th, 2020 Crossplane community meeting. Everyone, feel free to add your name as an attendee to the document here — the agenda doc. I'll drop a link in the chat directly to the document, and feel free to add your name; if not, I will add names later on, so I can take care of that. Let's get started on 0.13, which is the current milestone we are working towards, having finished the last milestone in 0.12. So here is the 0.13 board. We don't need to walk through every ticket here, but we can talk about some of the more important ones that folks are working on now. I had called out a couple of feature areas for design — high-level feature areas for discussion here — on code generation and provider acceleration, the Crossplane agent, and there's a lot of work on the package manager as well. So let's start with the agent specifically; I'd love to get an update to share with the community here.
B: Yeah, sure. So I have started the design and I've got great feedback from Luke and Nic, but it is not a complete draft yet. The general idea is that you will install the Crossplane agent in your cluster and you will annotate your namespace with the target namespace that you would like to sync to — at a high level. Maybe I should explain what the Crossplane agent actually does. Basically, you put it into any Kubernetes cluster, and then you connect it to a Crossplane.
B: That Crossplane runs, you know, anywhere, and after this connection is made, all the requirements get synced into this central cluster, and you get back connection-details secrets that you can use in your applications — things like mounting them into the pod and so on. This is actually replacing the KubernetesApplication stuff that we had before. Before, in the central cluster, you had the KubernetesApplication that pushed the secret along with, you know, your pods, your deployments.
B: You're pulling the secrets — you're requesting the infrastructure and you pull the secrets. The design has some details of how that's implemented, and one of the important areas is how you would specify the service account credentials to connect to the central cluster. Right now, its status is that you either supply one service account as default — like a secret that contains the credentials — during the installation, or you can specify the secret in the namespace annotations.
B: That's for the service account credentials. For the namespace pairings, there's only one way: you have to annotate your namespace with the name of the target namespace. So we always have a one-to-one namespace pairing between your remote cluster and the central cluster.
B: Let's say, you know, you've got three clusters with three namespaces in each — you have to have nine namespaces in the central cluster so that you can have one-to-one pairing. So yeah, the work is ongoing, but I think this week we will be able to wrap it up and start implementation.
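At a high level, the pairing described above could look something like the following. This is a sketch only — the design doc was still in progress at the time, so the annotation keys shown here are hypothetical placeholders, not the final API:

```yaml
# Hypothetical sketch of the namespace pairing described above.
# The annotation keys are illustrative, not taken from the actual design.
apiVersion: v1
kind: Namespace
metadata:
  name: team-a
  annotations:
    # Target namespace on the central Crossplane cluster (one-to-one pairing).
    agent.crossplane.io/target-namespace: cluster-1-team-a
    # Optional: secret holding service-account credentials for the central
    # cluster; if omitted, the default secret supplied at install time is used.
    agent.crossplane.io/credentials-secret: central-creds
```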
A: Cool, thanks for the update on that. The other major feature I had identified at the top of the document was the package manager refactor, so let's do that one next — it's in this column — and if there's anything that people want to call out or report on, feel free to at the end of the section. Do you want to give us a quick update on the package manager refactor?
C: For sure. So there is a design doc open for that. It's still marked as work in progress, because there's a pretty good amount of discussion happening on it — I think there's a hundred-something comments at this point — but I've gotten great review from Muvaffak and Nic and Jared, and I think we're moving in a good direction.
C: So definitely feel free to take a look at that design doc. Most of the concepts are pretty current, but the actual execution of what happens could be changing a little bit, so just keep that in mind if you're giving a review there. But yeah, I think we're headed in a good direction, and like Muvaffak said, I'm hoping to settle on a pretty strong design so that we can move towards implementation by the end of this week. But we'll definitely keep that issue — or that PR — updated.
E: It's still in R&D; there are no artifacts from it yet. I'm syncing with Katie later today and we're having a meeting to basically decide whether we do a go/no-go on this. It's not really going to be a design doc; it's more going to be a list of known risks and things that we're not going to be able to have when we go to Terraform.
C: One thing to mention: I know there's been a couple of folks talking about the S3 buckets, so I just want to encourage anyone who is working on any of those open issues to feel free to reach out to us on Slack if you want more one-on-one guidance, or even a group get-together to plan how we want to knock some of those things out. So yeah, if anyone here is a newer contributor, please feel free to reach out on those.
C: For sure. So I don't think Christian is on this call, but Christian at Red Hat has basically been putting together some compositions for use with the Quay operator. It uses one or multiple S3 buckets, a Postgres database, and a Redis cluster to back the service. Right now, the operator will either run those in-cluster for you, or you can specify connection details — so obviously that's a pretty good use case for Crossplane there. There were some changes that had to happen upstream in the Quay operator.
C: Specifically, it didn't allow providing a secret for the Redis connection — it was effectively saying "I'm always going to run this in-cluster," while you could have an external Postgres database. So there needed to be some slight adjustments there, but yeah, I think we're looking pretty good. On the S3 side, since the Bucket isn't on the v1beta1 API yet, it's missing some of the ACL customization for the buckets. That's one of the reasons why there's a push to get Bucket to v1beta1, and Christian is definitely interested in working on that, I believe.
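For context, a composition along those lines might be sketched roughly as below. This is purely illustrative — the composition schema was still evolving at the time, and the API groups, versions, and names here are assumptions for the example, not taken from Christian's actual work:

```yaml
# Illustrative sketch only: a composition bundling the three backing
# services the Quay operator needs. All API groups/versions and kinds
# shown are assumptions, not a verified schema.
apiVersion: apiextensions.crossplane.io/v1alpha1
kind: Composition
metadata:
  name: quay-backing-services
spec:
  resources:
    - base:
        apiVersion: storage.aws.crossplane.io/v1alpha3
        kind: S3Bucket          # object storage for image-layer blobs
    - base:
        apiVersion: database.aws.crossplane.io/v1beta1
        kind: RDSInstance       # Postgres database
    - base:
        apiVersion: cache.aws.crossplane.io/v1beta1
        kind: ReplicationGroup  # Redis cluster
```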
A: Yeah, that's a good point. I'd really like it when we have instances of use cases informing some of the functionality and features that go into what Crossplane supports — I think that's awesome. Cool, okay, so I think that was everything we had down here for updates on 0.13. I anticipate that it would be a typical mid-month release; I don't necessarily have an exact date that I've heard proposed, or that I proposed myself.
E: I think so — let me just double-check the participants. Yeah, so I believe that most of the work to actually move things into the OAM Kubernetes runtime repo is done. We do need to figure out the build system for that. I don't know, Jared, if that's something you have some context on from this week, on Jenkins-ifying that thing — I'm actually not sure how to handle that one.
G: Curious — this is Ryan. So how difficult is it to create a build for the OAM Kubernetes runtime alone? Because currently we've kind of built a Helm chart for it — a standalone Helm chart — so people can actually just install it, but it's not under the Crossplane org. I don't have the Crossplane Docker Hub credentials or anything, so I just used our own credentials. So there's a Helm chart there, but then I looked at other repos.
A: Ryan, that's a pretty typical pattern we have in place with the build submodule and the Jenkins files, etc. That pattern is used pretty commonly, and the logic that's in the build submodule will do the whole Helm chart build for you, along with, you know, Docker images, docs, and all sorts of other stuff, and then use credentials to publish to the artifact locations that Crossplane officially uses, etc.
A: So, because a lot of that logic is codified and reusable, it should be fairly straightforward for someone who's done it before and has access to some of that stuff. I think that's where we'd run into similar issues — you know, it's not the best developer experience, and there are some access issues. We're happy to help out and do the things needed — not to foreclose the conversation about whether that's the right way to go — but we're happy to do the support work to make sure that works.
A: We should try to get you maintainer access — as a maintainer, you should be able to have access to those artifact locations; that sounds totally reasonable. For Jenkins — I guess we'll get into it in a bit too — but for Jenkins, I was working with Hongchao and I was having the same thought: I don't understand why Jenkins isn't doing what I asked it to. So there are some things to work through, perhaps.
H: Yeah, this is Hongchao — something I also want to mention in the meeting. So currently, the CI on the OAM runtime repo is duplicated: we have both the GitHub Actions and also the Upbound Jenkins running the tests and making sure the code qualifies for a merge, but most of the testing has been duplicated across both CIs, and we don't have access to the Jenkins one. So I'm wondering now —
A: Yeah, I mean, it's relevant right now, so let's just go ahead and have this conversation now — I think that's perfectly fine. I'll put that on the agenda; let's be on this topic now. If anybody has any disputes with that, then speak now, but we can just have this conversation now, I think. So yeah — the Jenkins files, you know, the kind that tell Jenkins what build commands to run and where artifacts should go, etc. —
A: Those are per-repo, so the normal pattern is pretty much that you copy one from another repo, put it into your repo, and make the modifications you need. So for the Jenkinsfiles themselves, I don't think there are any access issues with those at all.
A: The issue that I do see so far is allowing, you know, write access or administrative access on Jenkins to particular contributors to the projects, because it does all of its authentication through the GitHub auth plugin. So when you log in to Jenkins, it knows who you are from your GitHub profile, and the configuration that we have set up is, for each repo, to use the permissions that the GitHub repo defines.
A: So if you're an admin on a GitHub repository, you should have admin access on Jenkins to the build for that same repository. But as we were kind of working on that, trying to figure it out, it doesn't seem like it's providing that access, which I don't quite understand, so I want to troubleshoot that. But essentially, I was expecting that the access you have to the repo on GitHub should be the same access that you have in Jenkins as well. Yep.
H: Cool, yeah. Another issue is that currently we also have another contributor to the repos — you can see he's really active, contributing code and commenting on issues — and we are wondering how we can add him as, you know, a maintainer, so that he can review and merge PRs, or at least approve PRs and things like that.
A: So, as per the most recent Crossplane organization governance, adding maintainers to a repository is the decision of the existing maintainers on the repository. So if the majority of the repository maintainers are in favor of adding a new maintainer to it, then that's okay. If you don't have the ability to do that in GitHub for some reason, then you can just let me know and I'll look into it. But, you know, with GitHub repo access —
A: You should be able to add that person as someone with maintainer access, just like you have maintainer access already. And hopefully then, if the GitHub auth plugin for Jenkins worked the way I expected it to, they would automatically — because they'd then have maintainer access on the repository in GitHub — have maintainer access to the Jenkins build as well. That was the goal, but, you know, we're still trying to figure out why it doesn't work.
A: You should be able to self-service that, because in the governance, the maintainers of a repository are allowed to add new maintainers themselves. But if for some reason you can't do that, then let's talk about it and I'll try to help out. The governance says you should be allowed to do that — as the maintainers of that repository, you can do that with autonomy.
E: I don't think that the actual security configuration of these repos is in line with the governance — I think that's maybe the problem. I think the governance says that the maintainers should be able to do this; I'm not sure if maintainer access is enough to grant other people maintainer access, or whether you need owner.
A: Yeah, sorry if I didn't understand the question. So it's basically two parts: update the OWNERS file, which is just informative — it's just to let people know publicly who the maintainers are — and then access to the repositories through the standard GitHub Settings / Manage access page, which is, you know, not automated; a human has to do that. If I don't have access to that, then we need to figure that out.
G: I have another question regarding the Jenkins tests. As far as I know, the reason we added the GitHub CI is that we want to run e2e tests — by e2e I mean code talking to a real Kubernetes API server. Most of the unit tests are mocking the API server, and for much of the logic that pretty much defeats the purpose, because if you mock it away there's not much logic left in some big chunks of the code.
G: That's why I added the GitHub CI, and I picked up a few bugs through that which you cannot really catch when everything is mocked. And I remember at that time — at least as I understood it, Jenkins was kind of a black box — I heard it's not easy to do the e2e tests there. I don't know if that's true, but because of the automation, we have two systems now, right?
G: So if Jenkins can do the e2e, then we can get rid of the duplicate and have one CI, I think. And another thing is, I want that part counted in the code coverage, because currently I think the code coverage only counts the unit tests run through the Jenkins system, but at least some of the code is actually covered better through the e2e tests than the unit CI.
C: On the first point — running the end-to-end tests on Jenkins — that's definitely possible. You basically just have to have the Jenkins script set up the kind cluster for you, as you're doing currently with the GitHub Actions. Right now the framework does support having, basically, a kubeconfig that you can use, so you just need to make that accessible.
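The setup being described — spin up a throwaway kind cluster in CI and point the e2e tests at its kubeconfig — could be sketched as a GitHub Actions job roughly like this. A sketch under assumptions: the `e2e` make target and action versions here are illustrative, not the repo's actual configuration:

```yaml
# Illustrative CI sketch: create a kind cluster, then run e2e tests
# against its real API server via the exported kubeconfig.
name: e2e
on: [pull_request]
jobs:
  e2e:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Create kind cluster
        run: |
          kind create cluster --name ci
          kind get kubeconfig --name ci > /tmp/kubeconfig
      - name: Run e2e tests against the real API server
        run: KUBECONFIG=/tmp/kubeconfig make e2e  # hypothetical target
```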
C: We can talk more out-of-band if you want to get that set up on Jenkins. I think — I mean, I don't speak for everyone, but — if you wanted to keep using GitHub Actions, and it's working well for those integration tests right now, that seems like a reasonable solution to me. But I also understand that having a single CI tool is also useful.
G: Actually, I don't mind having two systems, because they serve two different purposes, but the more important part is how to count the e2e tests into the code coverage. Again, many times I think the e2e tests cover the code better than the unit tests, when most of the code is basically "apply an object, talk to a Kubernetes API server" — most of it works like that, yeah.
H: I think there's something we actually tried before — previously we also ran massive e2e testing and had the coverage show up, whereas currently only the unit tests do. Basically, how the code coverage works is that when you run Go tests, it outputs a coverage file which generates the coverage rate, and the coverage reader consumes that file. In order to have the coverage show up, you need to generate this file — even though most of the environment is created as a test environment.
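What's being described is Go's coverage-profile mechanism: `go test` writes a profile file, and only test runs that emit such a profile contribute to the reported rate — which is why e2e runs don't count unless they produce one too. A minimal sketch, as CI workflow steps (the step names and upload tooling are assumptions):

```yaml
# Sketch: emitting a Go cover profile so coverage tooling can pick it up.
- name: Unit tests with coverage
  run: go test ./... -coverprofile=coverage.txt -covermode=atomic
- name: Summarize coverage locally
  run: go tool cover -func=coverage.txt
```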
E: I mean, I think both GitHub and Jenkins are just runners of scripts, right? So when it comes to invoking and running tests, both systems are capable of doing it. I'm sure that we can figure out getting Jenkins to do it, or that we can figure out getting GitHub to do it. I think there's a couple of things at play here. One has nothing to do with whether it's Jenkins or GitHub.
E: It's just that we've automated all of this with a large submodule of Makefiles that came out of Rook, and that, you know, is not well documented and has barriers to entry to contribute. One of them is just that the learning curve is very big to understand how it works, and the other is that it lives under the Upbound org on GitHub and is used by several internal Upbound projects, so you can't really just change it for Crossplane.
E: Folks who don't work for Upbound have to account for changes that may impact things they can't see. So I think that we need to fix that. So there are some things to be done with just the build submodule, and then there's also a question of what we use to invoke the build submodule, and whether that is Jenkins or something else, like GitHub.
E: My personal suspicion and gut feeling is that GitHub will be it — it's more self-documenting, it's easier for community members to discover, and it integrates better with GitHub, obviously. So I suspect that would probably be the lowest maintenance cost going forward. But there is a pretty big migration cost for us to go from Jenkins, which we've built all our tooling around — we've got a lot of Jenkins files and automations, and our entire release process, for everything the Upbound team releases, assumes using Jenkins for doing things like tagging and pushing and whatnot.
A: Yeah, either way, in the meantime, hopefully we can get you the right access to Jenkins so that you can rerun your builds, kick off builds, or change the config and stuff like that. That would, in the short term, alleviate some pain and unblock day-to-day stuff, Jared.
E: With that access — you know, talking about what we give people — would the folks who are working day-to-day on the runtime be able to actually go and add that project to Jenkins, get it all set up, and do releases and all that kind of thing? Or is this just access to basically rebuild and run jobs? Yeah.
A: I think it's rebuild-and-run-jobs for sure, but then it's also the Jenkins configuration for the pipeline for that repo. So I don't think you would be able to go globally and add new pipelines and new repos and stuff like that, but for that specific repo and the pipeline for that repo, you can change configuration and tweak stuff there, yeah.
A: Yeah, totally, Nic. And I think, yeah — it does actually make GitHub fairly appealing if it's inside the repo itself, you know. On the whole reuse thing: a lot of logic was built around release processes and handling complications around publishing release artifacts, building, versioning, and all that jazz — that's figured out, and I'd love to be able to reuse that stuff. But GitHub Actions being self-service, integrated directly into the repo, is pretty appealing from that perspective, yeah.
E: I'm not sure how GitHub Actions would work with — presumably, if we want people to be able to upload Helm charts to, like, the crossplane-charts bucket, and push things to Docker Hub and whatnot — even with that, there's still going to be some question of, when folks like the OAM maintainers are facing this, how do they get onboarded, or how do they get those credentials? That sort of thing, yeah.
A: This was the Community Day recordings — I mentioned this last meeting, so it's not new news, but I just wanted to keep this link here for folks that wanted to see the playlist with all the recordings of all the demos and talks and panels and everything from the Community Day last month. They are right there in that YouTube playlist. Do you want to give us an update on the latest TBS episodes?
C: For sure. So last week we had on Pop from Sysdig and Falco, and it was a really awesome show that I enjoyed quite a bit. It showed some of the different things that you can get from running Falco or Sysdig in-cluster with Crossplane, or in remote clusters that Crossplane is managing. We also showed off some pretty cool composition stuff — we'll probably have a show pretty soon dedicated exclusively to that, exploring all the different functionality there, because I think that could be pretty helpful to end users.
C: Some of the things you can get from running Falco or Sysdig in your cluster are things like monitoring outgoing connections. So one of the things we looked at was: since each provider runs as its own pod, you're able to see the actual outgoing connections from each of those. We could see them making requests to the AWS and GCP APIs when they're reconciling resources, that sort of thing. But the primary thing is the security angle around this.
C: So you can imagine a future where maybe we'd like to make it a little bit easier, when you provision a new cluster using Crossplane, to install some of these agents over there — thinking about what that might look like will definitely be interesting. Folks also have a lot of other projects that they like to run across all clusters. So yeah, it's definitely a cool episode that I'd encourage you to check out, and Rico has the manifests from it.
A: We wrote a blog post about this on the blog at crossplane.io, and we are on the sandbox page as an entry there and everything — so it is as official as official gets. Thanks, everybody, for the work on that, and for the help from the CNCF SIG App Delivery as well. I see Harry has joined the call too, so thanks for the help from the CNCF side there.
A: Harry — so, super excited about that. There is a set of onboarding tasks that we will do to complete the transfer of the IP and, you know, get all of the service desk and marketing and everything set up. Oh, sweet — DevStats is up now? Perfect, that's great. So there's a whole bunch of services that the CNCF offers to get onboarded with, and we will follow up on making sure that all of these are completed, so that Crossplane is reaping the benefits of being an official CNCF project.
C: Sorry, I forgot to put my username on that. I just wanted to give an update for anyone who is using these links: on Friday, the links to specific resources in the repo were updated. So now, if you go to a repo, you follow the URL by the GVK — or actually the GKV, I guess — and that's where the resource is going to be, and you can still append the version tag for that as well.
C: So yeah, you can see up there that it's followed by the group, then the kind, and then the version. Before, it was using the path in the repo to identify what resource it was. The issue with that is, for Crossplane, we have multiple charts in the same crossplane repo that have those CRDs under them, so we were getting duplicates. Thanks, Phil, for opening the issue related to that on the repo — we were able to address it.
C: One thing to note here is that what it's actually doing now is going through and parsing all the CRDs, and if it finds one after it's already parsed the same GVK, it just overwrites it. So if you had multiple different representations — which we don't have in Crossplane, but if you did — you could end up with different versions, or not the version you want; so for Crossplane it works out.
E: Just a heads-up — I'll be out of cell coverage from the 13th to the 24th, so I'll definitely be offline, in case anyone thinks they might need me.
A: July 13th — that's, like, next Monday, right?
G: So I have a question about the agent thing. I haven't fully read the PR — I just heard the explanation earlier — and I just wonder how it works. My understanding is it's going to replace the KubernetesApplication, and I'm just trying to understand, at a high level, how it works. So you have Crossplane running on one Kubernetes cluster as a control plane, and you have your users' workloads running somewhere else, and previously my understanding is the Crossplane —
B: So the way you would package your application would be, like, a Helm chart with your pods, deployments, and all that stuff, and also your Crossplane requirements, like a MySQL instance and others. You just deploy that, and the agent will take care of bringing the secret into your cluster and, you know, doing all of that.
B: So, for the actual deployment of Kubernetes-native resources: you have the application Kubernetes cluster with applications — applications meaning, you know, pods, deployments, StatefulSets, and others — alongside the Crossplane types, for example a MySQL instance requirement and others. However, the difference is that they're not reconciled in this application cluster: the agent copies — replicates — them into the standalone control plane, and then brings back the status into your application cluster, and also the secrets that are needed. Then, in this remote cluster, you're, you know, using those secrets and the other stuff. So you don't have to use KubernetesApplication anymore; you just directly interact with Crossplane types as if you were in the same control plane with Crossplane — but you're actually not.
B: That results in the dedicated control plane. So, in the dedicated control plane, you've got the requirement, the composite, and also the managed resources. For PostgreSQL, let's say, you've got the PostgreSQL requirement, then the PostgreSQL composite, and, like, three other resources — network and other stuff — all in the dedicated control plane. In the application cluster, you've got only the requirement: you create the requirement and push it to the dedicated cluster for all the other stuff to be created and reconciled, and you only pull the result, which is, you know, the secrets and also the status that you would like to use.
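As a sketch of that flow: the only object the application cluster holds is the requirement itself, something like the following. Names and fields are illustrative — they follow that era's "requirement" terminology and a hypothetical composition-defined API group, not a verified final schema:

```yaml
# Illustrative requirement: this is all the application cluster sees.
# The agent pushes it to the dedicated control plane, where the composite
# and managed resources (database, network, etc.) are reconciled, and
# syncs back only the status and the connection secret named below.
apiVersion: database.example.org/v1alpha1  # hypothetical composition-defined group
kind: PostgreSQLInstance
metadata:
  name: app-db
  namespace: team-a
spec:
  parameters:
    storageGB: 20
  writeConnectionSecretToRef:
    name: app-db-conn  # secret the workload mounts or references
```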
G: So it sounds like, if I understand correctly, the application cluster is actually pretty thin — it doesn't run a lot of stuff — and most of the logic is actually in the dedicated control plane, the standalone Crossplane. Oh, I thought it was the reverse — so it's the other way around. Well then, what's the rationale? Normally in a distributed world you want things distributed, but this sounds centralized. What's the rationale for this — what does it achieve?
B: So, basically, you can still have each cluster run its own Crossplane — for example, you can actually have your applications run in the Crossplane standalone cluster. And, let's say you've got ten clusters with ten different Crossplane deployments that are all independent. But if you would like to centralize the infrastructure for various reasons — like, you might want to have one single view of all the infrastructure that the organization uses, or you might want to —
F: We're thinking of probably calling this the Crossplane operator — something that can include the Crossplane agent in a nice tiny bundle that you could install, like a kind of meta-bundle, an all-in-one kind of thing. But then you could still have the option of installing the individual Helm charts. So we're still working through that, but I just have the whole thing here, yeah.
F: And that has the benefit that it can then operate locally, and then we don't need, like, a remote connection, you know, yeah.
B: If you'd like to stick to, like, one common cluster for everything — yes, you have to install all the providers that you'd like into that one single cluster. But if you don't want to, you can still have your application cluster run a Crossplane of its own, and, you know, its own providers as well. I think that, from the OAM perspective, what will change will be in the application cluster:
B
We
only
expose
the
requirements,
so
there
won't
be
posted
like
it
won't
really
be
possible
to
create,
like
directly
managed
resource
like
PostgreSQL
instance
of
like
crossover
of
Asia
or
like
GK
past.
Exactly
only
the
requirements
are
exposed,
so
the
oil
I
am
Who.
I
am
components
in
the
application.
Clusters
can
only
include
requirement
types.
D: I also have another question. Are you planning to host the Crossplane standalone control plane somewhere on a public cloud, and who will be responsible for maintaining that standalone component? Because, from that point of view, everybody — I mean the users — are actually working on their local cluster, where, I would say, the application lives. So who will be the one who maintains that standalone host?
F: You know, [the platform team defines] the providers and resources that they want to make available — the intentions — and then the app teams can basically have the Crossplane agent installed there to pull down those types so that they're available. And, like Muvaffak was saying, we're still going to support the embedded mode, which doesn't require an agent. So you can still do that if you want, but this is to enable consumption more easily, not just natively from within Kubernetes, but also from elsewhere, yeah.
D: I'm asking this because we have recently been talking with the hybrid cloud team — Jana Babakov — and they are very interested in where the Crossplane architecture is going. So it's really good that we can actually digest the design in this new proposal; the hybrid cloud team may not know about it yet. I think it's a really interesting proposal, so I will keep you guys informed if anything happens on our local stack. That's great, Harry.