From YouTube: CNCF SIG App Delivery 2020-06-03
B: Hey Amy, how many SIG, TOC, etc. meetings are you on per day?
C: We're back, hello? Okay, that's great, okay! So let's start with this. This meeting will be recorded and uploaded to YouTube, so please keep yourself looking good. Okay, and first of all, we do have a new project presenting at this meeting today, and that project is Crossplane, so I will hand the screen share over to Jared from the Crossplane community. Okay, ready?
B: Do you see the agenda document right now? [Yes.] All right, cool. Then I'll go ahead and switch over to the slides and we can get started. Do you still see the slides now that I've put them on full screen? [Looks good.] All right, thank you, Amy, for the confirmation. All right, cool. So last week we submitted a proposal to donate Crossplane as a CNCF sandbox project. My name is Jared; I am one of the founders of the project and a maintainer on it as well.
B: So I'm just going to walk through some details of the proposal, talk about Crossplane news, etc. This is in the context of SIG App Delivery, so I know you all have a lot of experience in this space, and a lot of these concepts won't necessarily be new to you, but feel free to jump in at any time if you have questions about anything, and we can make this as interactive as you want. There will be time at the end for questions as well.
B: Okay, so let's talk about what Crossplane is. We did our first 0.1 release back in December of 2018, so it's been around for about a year and a half now. The same folks, myself and Bassam included, are the creators of the Rook project, which is another CNCF project that's hopefully going to be graduating soon and focuses on storage orchestration.
B: We are the same folks behind the Rook project and behind this Crossplane project. It's open source with open governance; we just updated our governance to be a little more inclusive and allow a little more diversity of maintainers and participants in the project. In terms of the architecture, it's based on the Kubernetes control plane. At its core it's really a set of controllers and CRDs. It's not an entirely new architected solution; it runs in Kubernetes and just needs the API server, controller manager, etcd, that sort of stuff.
B: So you can look at it as basically a Kubernetes add-on. There are three main feature areas here that we're going to dive into in more detail, but really the core of the project is about provisioning infrastructure for your applications that are running in Kubernetes, using the Kubernetes API or tools like kubectl.
B: You can go ahead and use Crossplane to create instances of infrastructure and cloud provider services, like an Amazon database, a security group, or a VPC. All that sort of cloud infrastructure you can create with Crossplane. You can also create, define, and publish your own infrastructure APIs.
B: We're going to get into more detail on this because I think it's really important, but basically you can define what infrastructure means in your environment, how you want to expose that infrastructure to applications to consume, and your own API and abstractions around that. The third feature area is all about running and deploying applications that use this infrastructure we're provisioning with Crossplane. A little bit more history on the project here.
B: We were working almost exclusively on Rook, and one of the things we really noticed in Rook is that the volume abstraction is a really powerful concept, because it basically allows your applications to define some infrastructure that they need and start consuming it on demand. So the whole pattern around persistent volume claims, storage classes, and persistent volumes: we thought that could have a lot of applicability and usefulness in other scenarios, for other types of infrastructure.
B: Databases, caches, clusters themselves, buckets, all sorts of stuff like that. That's really how the Crossplane project started, and then most recently, through a great collaboration with Alibaba and Microsoft on the Open Application Model, Crossplane is now the Kubernetes implementation of the OAM spec. So you can use everything defined in the OAM spec to define your applications, the portability around them, etc.
B: You can do that in Kubernetes environments now, with Crossplane being an implementation of the OAM spec. All right, so let's dive into some details of those three feature areas I was talking about. The first one is provisioning infrastructure with the Kubernetes API. The whole basic concept behind this is that infrastructure and services in cloud providers are represented as CRDs in Kubernetes with Crossplane, so you can use whatever Kubernetes tool, or the Kubernetes API directly, to declaratively configure this infrastructure.
B: You instantiate a CRD, fill out its spec fields, all that sort of stuff, and you end up with infrastructure provisioned for you inside the cloud providers. We started off with support for GCP, Azure, and AWS; recently the Alibaba folks have added an Alibaba Cloud provider, and there's support for Rook for in-cluster stuff.
B
The
packet
is
well
bare
metal
packet
stuff
as
well,
but
let's
look
at
the
diagram
at
the
bottom
here
and
that's
kind
of
puts
it
all
together
here,
where
a
user
you
know
infrastructure,
owner
or
application
developer,
they
can
create
an
instance
of
one
of
these
CR
DS
that
represent
infrastructure.
They
can
use
cube
control.
You
know
any
tool
that
speaks
Committee's
API,
so
you
create
an
instance
of
that
C
or
D.
B: Crossplane has a whole bunch of controllers that are watching for events on those CRDs, and they'll go ahead and reconcile the desired state. For an Amazon RDS CRD, for example, they take what the spec says and make calls to Amazon's cloud through the Amazon API to turn that desired state of the RDS database into reality, the actual state. So it's a Kubernetes controller watching CRDs and calling cloud provider APIs to make that infrastructure happen.
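As a rough, hedged illustration (not shown in the talk itself), such a managed-resource instance for an RDS database looks something like the sketch below; the API group and field names follow the Crossplane AWS provider of roughly that era and may differ between versions:

```yaml
# Illustrative sketch only: an RDS managed resource as Crossplane's
# AWS provider modeled it around 2020 (field names may vary by version).
apiVersion: database.aws.crossplane.io/v1beta1
kind: RDSInstance
metadata:
  name: example-db
spec:
  forProvider:
    region: us-east-1
    dbInstanceClass: db.t2.small
    engine: postgres
    allocatedStorage: 20
    masterUsername: masteruser
  # Where Crossplane writes the generated connection details.
  writeConnectionSecretToRef:
    name: example-db-conn
    namespace: crossplane-system
```

A controller watching RDSInstance objects keeps reconciling this spec against the AWS API until the actual database matches it.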
B: Now, the second feature area is kind of a new one for us, and I think it's really where some of the power of the Crossplane project starts to come out: publishing your own infrastructure APIs. As an infrastructure operator or infrastructure owner, you need to be a little bit opinionated about what infrastructure means in your environment, so Crossplane allows you to define that.
B: For example, what does Postgres mean in your environment, and what is it composed of? Because in reality you don't just need a Postgres database: you need the networking to connect to it, you need the resource group for it to be part of. It's not just a single entity. The general need for infrastructure by an application is actually composed of a whole number of things underneath.
B: So infrastructure owners can define their own abstractions for their opinionated versions of infrastructure and publish them so applications can consume them. In that example we talked about, with a MySQL database on Azure, it needs not just the MySQL database but also the resource group, and it gets a firewall rule; you can define all of that and publish it for applications to consume. We're going to go into a demo on that too, so we'll see exactly what it means.
B: One of the benefits here is that it allows you to hide the infrastructure complexity and also put some safeguards in place, with policy and specific configuration that you don't want to expose to the application owner. So you get control over this platform you're developing, or this API you're publishing, through which your applications get the infrastructure they need, but in a safe way that you're okay with as an infrastructure owner. The next thing about this is that it's all declarative.
B: There's no code behind this at all; no code is needed for the infrastructure operator to be able to do this. Here's a picture to put that together for you, with the verbs we were just talking about going from left to right: define, compose, publish, and consume. On the left, as an infrastructure operator, I'm going to define what MySQL means in my opinionated environment.
B: I could have multiple definitions for it. So I'll say MySQL could mean an Azure MySQL with a resource group and a firewall rule, or it could mean an Amazon RDS instance with a VPC, a subnet, a security group, etc. These are multiple options for what resources compose a MySQL in my environment. Then I'll publish those, so that applications running in namespaces can say: hey, my app has a requirement for MySQL, please give me a MySQL, on demand and self-service.
B: They ask for MySQL, and they can influence which one they want. In this example we're saying that app A wants the Azure MySQL and all the components it's composed of, and app B wants AWS. But the options could be other things, like fast and slow, gold and silver, cheap and expensive, secure and development, whatever it may be. Applications can ask for infrastructure on demand through this API.
B
This
platform
that
cross
button
allows
you
to
define,
will
fulfill
that,
for
you,
okay
and
the
last
feature
area
here
is
all
about
the
application
layer.
So,
as
we
talked
about,
we
cross
play
now
supports
the
open
application
model
which
I'm
sure
this
SIG
is
intimately
familiar
with,
and
so
that
what's
cool
about
that
is
that
you
know
with
both
infrastructure
and
applications.
You
can
kind
of
standardize
now
on
a
single
workflow,
a
single
you
know
single
API,
for
that,
where
you
can
define
your
structure
in
your
applications
and
how
they
will
be
used,
etc.
B
All
in
a
single
workflow,
a
key
part
of
the
crossplane
project
for
the
very
beginning,
was
this
idea
of
a
separation
of
concerns
and
that's
also
a
key
component
of
the
ohm
spec
as
well.
So
that's
why
there's
a
very
good
alignment
between
crossplane
and
ohm,
where
there's
a
couple
different
personas
involved,
you
know
as
an
infrastructure
operator,
you
wanted
to
find
the
infrastructure
the
policy
around
it.
What's:
okay,
what's
not?
Okay,
what
you
want
to
expose
to
your
applications
and
then
application
developers.
They
don't
have
to
worry
about
those
details
right.
B: They want to be able to say: okay, I'm focusing on my business logic, I'm focusing on my app, and I have a general need for a database and a cache. That's all they need to worry about. Then the third persona, the application operator, can glue those together: take the application components and fulfill their requirements with infrastructure.
B: Okay. So those are the three feature areas, and I think it's time for a demo to start putting this together. Let me switch the screen share; do you see a prompt? All right. So I'm just on my laptop here, a simple scenario: I've got a kind cluster up and running, and it has nothing on it right now, it's very basic.
B: No pods running, not much going on. But on this blank, vanilla kind cluster I'm going to install Crossplane, just through a Helm chart, to get the Crossplane CRDs and pods up and running so it can start managing and provisioning infrastructure for us. Let's wait for those to come up. Okay, they're up already, nice and quick. So we have Crossplane up and running, and then I also want to install some support for Azure as well.
B: I can log into the Azure portal and, with a secret key and all that sort of stuff, I'm going to put my credentials into a secret right now, so that Crossplane will be able to do Azure operations on my behalf. Let's check and see if that's running. Yeah, okay, so that's up and running, and then we'll go ahead and create an Azure provider that will use my credentials and be ready to do stuff. Then there's one more step before we get into the meat of it.
B
This
is
all
the
quick
startup
stuff,
so
I'm
gonna
go
and
create
a
cluster
roll
that
will
let
crossplane
use
these
obstruction.
Cr
DS
that
I'm
going
to
be
defining
on
the
fly
as
we're
going
here
all
right.
So
let's
do
something
useful
now.
Finally,
so
the
first
thing
I'm
going
to
do
is
I'm
going
to
create
a
minute
defining
some
infrastructure.
So
let's
look
at
it
here.
I
made
the
font
bigger
here.
B: Hopefully that should be easy to read, but basically I'm defining my own custom infrastructure API right now. I'm saying: here's an infrastructure definition, and this is what Postgres is going to mean in my environment. As the infrastructure operator I'm defining what Postgres is, and I'm basically giving a CRD template, saying: hey, Crossplane, create a new CRD called PostgreSQLInstance, and here is the OpenAPI v3 schema for it.
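As a hedged sketch of that idea: the alpha InfrastructureDefinition type used in this 2020 demo later evolved into the CompositeResourceDefinition shown below; the group, kind names, and the storageGB parameter are illustrative assumptions, not the exact manifest from the demo.

```yaml
# Illustrative sketch: defining what "Postgres" means as a custom API
# (modern apiextensions.crossplane.io/v1; the demo used an earlier alpha type).
apiVersion: apiextensions.crossplane.io/v1
kind: CompositeResourceDefinition
metadata:
  name: xpostgresqlinstances.example.org
spec:
  group: example.org
  names:
    kind: XPostgreSQLInstance
    plural: xpostgresqlinstances
  claimNames:                      # makes a namespaced claim kind available
    kind: PostgreSQLInstance
    plural: postgresqlinstances
  versions:
  - name: v1alpha1
    served: true
    referenceable: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            properties:
              parameters:
                type: object
                properties:
                  storageGB:       # the "knob" exposed to app teams
                    type: integer
                required: [storageGB]
            required: [parameters]
```

In today's API the claimNames section also covers what the demo did as a separate "publish" step: it exposes a namespaced PostgreSQLInstance kind that applications can request.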
B: So now we want to publish that. We've defined the infrastructure and we want to publish it, and that one's really simple: it just says, take the PostgreSQLInstance that I defined for my example org and publish it, so applications can start to consume it. Now we're going to get into something a little bit more interesting, because we haven't really said what Postgres means yet. We've said: I want to have a Postgres in my environment. But I haven't really said what that means.
B: Let's do two different Postgres compositions, a bronze one and a platinum one. We can confirm that, in terms of compositions of infrastructure in our environment, we've now got bronze and we've got platinum. So we've got two of those ready to go. I'm going to jump ahead real quick to kick things off, because I'm going to do real things in Azure that take a couple of minutes.
B
So
let
me
just
run
a
command
real,
quick
and
then
we'll
talk
through
everything
and
make
sure
we
understand
what's
going
on
it's
that
should
kick
everything
off
now.
Let's
talk
about
what
we've
actually
done
here,
so
we've
defined
Postgres
and
now
what
does
Postgres
mean
in
this
environment?
So
we've
got
bronze
and
you've
got
platinum,
but
what
that
means
is
that
when
somebody
says
that
they
want
a
Postgres
instance,
so
an
application
says
I
want
Postgres.
B: It actually means: okay, we're going to give them a resource group in Azure, we're going to give them an Azure Postgres database, and we're going to give them a firewall rule to allow some ingress and connections to it. This is the bronze one; it's a little cheap and slow, it's only got two cores, it's general-purpose, nothing too fancy. But then we've also defined
B: What does a platinum Postgres look like? Here we're defining platinum: it's composed of the same things, a resource group, a Postgres database, and a firewall rule, but it's going to be a memory-optimized Postgres instance with more memory per core, and it's got 32 cores instead of just the small two-core bronze instance I defined. So we've basically given applications the ability to do on-demand, self-service creation of their own infrastructure, while giving them some options.
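A hedged sketch of what a bronze composition like this can look like, using today's Composition schema; the Azure kinds, API versions, and the patch path are illustrative assumptions, and the 2020 demo used an earlier alpha schema:

```yaml
# Illustrative sketch: the "bronze" flavor of Postgres in this environment.
apiVersion: apiextensions.crossplane.io/v1
kind: Composition
metadata:
  name: bronze.postgresqlinstances.example.org
  labels:
    tier: bronze                   # lets claims select bronze vs. platinum
spec:
  compositeTypeRef:
    apiVersion: example.org/v1alpha1
    kind: XPostgreSQLInstance      # hypothetical composite kind
  resources:
  - name: resourcegroup
    base:
      apiVersion: azure.crossplane.io/v1alpha3
      kind: ResourceGroup
  - name: postgresserver
    base:
      apiVersion: database.azure.crossplane.io/v1beta1
      kind: PostgreSQLServer
      spec:
        forProvider:
          sku:
            tier: GeneralPurpose   # platinum would be MemoryOptimized
            capacity: 2            # platinum would use 32 cores
    patches:
    # Wire the app-facing storageGB knob into the provider field (path illustrative).
    - fromFieldPath: spec.parameters.storageGB
      toFieldPath: spec.forProvider.storageProfile.storageMB
  - name: firewallrule
    base:
      apiVersion: database.azure.crossplane.io/v1alpha3
      kind: PostgreSQLServerFirewallRule
```

A second Composition with label tier: platinum would differ only in the resource bases, which is exactly the opinionation the talk describes.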
B: As an infrastructure operator, what am I okay with? I'm okay with this bronze one, I'm okay with this platinum one, and I'm very opinionated about what those mean. The applications don't have to care or know about the details of the infrastructure they're requesting; they just know they need Postgres, and they're going to get what the infrastructure owner is okay with. And they don't have too much control over it, which is good, because you don't want to give application owners and developers every knob and direct access to the cloud provider.
B
You
guys
to
really
nearly
do
what
they
want.
So
we've
created
the
our
own
infrastructure,
API
I
published
it.
It
made
it
ready
for
applications
to
consume,
and
now
we
kicked
that
off,
so
it
should
be
doing
real
stuff.
Now
so
I'm
gonna
take
a
look
at
you
know
underneath
what's
happening
so
my
application,
sorry
I,
didn't
show
something
real
quick.
We
also
did
this
step
here
where
right
now,
this
is
it
from
the
persona
of
the
application
developed
operator
and
they've
said:
hey.
B: My application has a requirement for Postgres, so give me Postgres. Within the API that I defined as an infrastructure operator for this new infrastructure, I gave you some knobs to turn, and one of those is how big you want your database. So here, as an application developer, I'm saying: okay, I want 20 gigs for my database; it's this little
B: test one. And as for which one I want: I want bronze. So as an application I have chosen that I need Postgres and that I want the bronze one, and I don't know anything else about what that means underneath. We did that, and it kicks off this whole sequence of machinery that says: okay, the application requires Postgres, it wants the bronze one. What does that mean?
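The namespaced request can be sketched roughly like this, using today's claim API and assumed names (the demo-era kind was called a "requirement"); the group, secret name, and label are hypothetical:

```yaml
# Illustrative sketch: the claim an app team files in its own namespace.
apiVersion: example.org/v1alpha1
kind: PostgreSQLInstance
metadata:
  name: my-db
  namespace: default
spec:
  parameters:
    storageGB: 20              # the knob the operator exposed
  compositionSelector:
    matchLabels:
      tier: bronze             # pick the bronze flavor
  writeConnectionSecretToRef:
    name: db-conn              # where connection details will land
```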
B: Let's look up the composition. Okay, here's what bronze means: I'm going to instantiate resources for each one of those. This is all automation; none of this is a human doing things. The machinery in Crossplane is instantiating the resources for the Azure Postgres, for a resource group, for a firewall rule, all that stuff. So we're checking in on it here with kubectl get managed, give me all my managed resources, and that's showing us the resource group, the Azure Postgres server, the firewall rule, etc.
B: We also created a firewall rule for the database to allow the connection in, so it looks like everything came up and is working, which is great. And here's another key aspect of this: the application requested their own infrastructure, but how do they access it? How do they connect to it? So the Crossplane machinery went ahead and published a secret as well, which contains all the information the application needs to connect, like what the endpoint is.
B: This is all base64-encoded in a secret, like Kubernetes normally does; I've decoded it here for display. What's the password, what's the username, everything the application needs to go ahead and connect to that Postgres infrastructure and all the other supporting infrastructure it needs. So let's just look at a simple pod real quick. Let me start this pod up here, just to connect to the database.
B: Here it's a very simple pod that says: just run psql and do a SELECT query on the current database, to connect and print something out. Not very fancy, but it's going to take all that connection information from the secret that Crossplane created and published for the application. It gets the password, the username, the host, all that stuff from the secret in its environment, and then securely connects.
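A hedged sketch of such a pod, wiring a published connection secret into psql via environment variables; the secret name (db-conn) and its key names are assumptions, not the demo's actual manifest:

```yaml
# Illustrative sketch: a pod that reads Crossplane's connection secret.
apiVersion: v1
kind: Pod
metadata:
  name: see-db
  namespace: default
spec:
  restartPolicy: Never
  containers:
  - name: psql
    image: postgres:12
    # Connect and print the current database, as in the demo.
    command: ["psql", "-c", "SELECT current_database();"]
    env:
    - name: PGHOST
      valueFrom: {secretKeyRef: {name: db-conn, key: endpoint}}
    - name: PGUSER
      valueFrom: {secretKeyRef: {name: db-conn, key: username}}
    - name: PGPASSWORD
      valueFrom: {secretKeyRef: {name: db-conn, key: password}}
```

psql picks up PGHOST, PGUSER, and PGPASSWORD from its environment, so no connection details are hard-coded in the pod spec.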
B: If we look at the output of that pod, it should just show the SELECT query output. Yep, that's it: it just said, okay, what's my current database, the Postgres instance I connected to. All good. So let's put it all back together with a real quick summary: as an infrastructure operator, I defined my own API, my own abstractions, for what infrastructure means in my environment. I said:
B: This is what Postgres means; there are a couple of flavors of it, the bronze one and the platinum one, and I'm publishing them and making them available for applications to self-service the infrastructure they need, when they need it, without having to worry about the details. Strong separation of concerns: they don't get to configure things I'm not okay with as an infrastructure operator, but they can do it on their own. They don't have to file a ticket; they can just get the infrastructure they need when they need it. That's the application side.
B: Adopters is a section of the proposal that we focused on. Let's be clear here that these are mostly evaluation-phase adopters: folks are evaluating Crossplane and starting to use it in their environments, but not strictly taking a hard production dependency on it yet. All the details are in the proposal, but we have some great partnerships going and some good adoption of the Crossplane platform: with Microsoft and Alibaba on the Open Application Model, and GitLab uses Crossplane as a GitLab-managed app for the Auto DevOps functionality it exposes.
B: With Red Hat, Crossplane is available as an operator on OperatorHub, and that's promoting the usage of Crossplane in hybrid scenarios, like for OpenShift: you have OpenShift running on-premises and you want your applications on OpenShift to be able to use cloud infrastructure. How do you do that?
B: If you want to get involved, the website, GitHub, Slack, and all that sort of stuff are in the deck here, and you can come be part of the community. And that is the end of it. I appreciate everybody listening. I might have gone a little fast, so you didn't get a great chance to ask questions, but now I'll shut up, and if there are any questions from the group I'm more than happy to address them. I think there are some other folks from Crossplane on the call here too.
C: Yeah, I do have a question that comes to mind. Let's say I am DigitalOcean, right? I want to add my support to Crossplane as another cloud provider. What do I need to do? It seems like I need to do two things: first, I need to write a provider with controllers, and then I need to do a composition to publish my infrastructure, correct? And I think the question is also: can I actually use my existing operator for DigitalOcean and then use composition in Crossplane?
B: That's actually a good point, Harry. Yeah, so there are a couple of different levels of collaboration or integration that you can use. What we've done so far is write full providers, like for AWS and GCP and Azure, where we've got controllers and CRDs defined for all of those, so they're very native to Crossplane. Then, as an infrastructure operator, when you want to define and expose those infrastructure options and services to your applications,
B: you're basically just defining which CRDs make up, say, Postgres. So that's a good point, Harry: you don't necessarily have to be fully integrated with Crossplane and write a Crossplane provider with the full implementation that we've done for GCP. The idea is that this is all normalized on the Kubernetes API, right? Everything is a Kubernetes resource.
B: Everything is a Kubernetes API object, so you could define an infrastructure composition or abstraction to say: actually, what Postgres really means in my environment is this other set of CRDs. If something is exposed as a Kubernetes API object, you can compose, define, and publish infrastructure abstractions and APIs out of it, whatever it is: DigitalOcean CRDs, or on-premises things like a MySQL operator or a Redis operator. You can define an infrastructure API that is composed of those primitives as well.
E: The only thing we ran into is that when we created VMs, we created like hundreds of them, so we had to understand the API a little bit better; we just created a control for how many VMs we created at one time. So I guess my point is: the API is a very important aspect of developing the provider, and so is the ability to do lifecycle management within that environment. Just keep in mind that the API is critical to making this model work as well.
G: Hi, I have two questions. The first one is around the model. From what I understand, in OpenShift there are AWS operators that enable you to do similar things, right: create AWS instances, AWS resources expressed as custom resources in the cluster. But they're namespace-scoped, so different teams can basically use different versions of the operator in different namespaces, with different IAM accounts. So how do you guys handle different people having access to create different cloud resources?
B: Sure, yeah, thanks, Daniel, I appreciate that. That's one of the nice things about standardizing on the Kubernetes API: all of these infrastructure primitives, and the infrastructure compositions, the API that you want to define that's composed of those primitives, are all in the end Kubernetes API objects as well. So RBAC is fully flexible to allow particular access to this API with this verb, but not that API and not that verb.
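For instance, a minimal RBAC sketch that lets one team work only with a published Postgres claim type; the group (example.org) and resource name are hypothetical stand-ins for whatever the infrastructure operator defined:

```yaml
# Illustrative sketch: team-a may manage only the published claim kind.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: postgres-claimants
  namespace: team-a
rules:
- apiGroups: ["example.org"]          # the operator-defined API group
  resources: ["postgresqlinstances"]  # the published, namespaced claim kind
  verbs: ["get", "list", "watch", "create", "delete"]
```

Binding this Role to team-a's users gives them self-service Postgres in their namespace while leaving the raw cloud-provider resources locked down.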
B: That standardized means of exposing all this infrastructure, and locking it down or allowing access to it through RBAC, is the common, standard, supported way to do that. You made a good point, Daniel, about multiple versions of the operators and namespace-scoped versus cluster-scoped stuff in general.
B
You
know
infrastructure
primitives
that,
like
do
one-to-one
map
to
cloud
provider
resources
in
general,
those
are
almost
all
I.
Think
cluster
scoped
resources
so
like
an
Amazon,
RDS
database
or
a
Google
cloud
sequel
database.
Those
are
going
to
be
cluster
scoped
resources
and
then,
when,
as
an
infrastructure
operator,
you
choose
to
publish
this
infrastructure
either
the
raw
primitives
or
the
compositions
of
them
with
your
own
API.
When
you
publish
those
you're
making
those
available,
you
know
in
a
new
type
like
a
postcard
requirement.
B: And we saw that as a namespaced object. So there's that kind of separation of concerns, that difference in scope of access as well: cluster-scoped resources for the raw cloud provider infrastructure primitives, and namespace scope for the infrastructure API that you yourself defined and want to publish to your applications. One part where I think there's a little bit more work to do, Daniel, is around handling multiple versions of these. Right now in Crossplane, if you install, let's say, the AWS provider,
B: you can't install two versions of it side by side at the same time; it's kind of one per cluster. I think there are a number of challenges associated with that, with cluster-scoped resources and CRDs, multiple versions of the CRDs, etc. But mostly, right now with Crossplane you'll get one version of each provider within the cluster and the control plane. Does anybody else from the Crossplane team on the call want to add anything to that?
H: Just a quick point also: if you wanted to support, say, multiple AWS credentials that have different privileges and are used for different things, you mentioned that other projects use namespaces to solve that. We support that at the cluster scope as well, and with a combination of RBAC and publishing infrastructure definitions into namespaces you can arrive at the same thing.
B: Let me make sure I understand that. I think one way to look at this is: the infrastructure that you're instantiating, like the RDS database, is instantiated into AWS, and then where is it going to get accessed from? Maybe that's from an EKS cluster running in Amazon; maybe that's from a GCP or GKE cluster running in Google, etc. You can create all of the networking and security primitives to make that happen and allow connection from anywhere.
B
But
you
were
you
asking
about
something
else
about
exposing
that
object,
like
you
know,
publishing
this
infrastructure
type
to
an
entirely
different
cluster
to
be
able
to
consume
it
on
demand.
You
know
in
steam,
like
an
application,
minding
some
routes
to
instantiate
it
themselves
outside
of
the
control
plane,
or
is
that
what
you're
asking
yeah.
D: So the pattern is where I have a master cluster that manages the infrastructure, let's say something like Kafka or even Dynamo, and then you have a multitude of clusters that are the consumers, meaning where the apps come up. The pattern is: you create a namespace and then you consume it there. They could be cluster-wide resources, I guess, but you want to be able to push them, using your clustering, to those clusters.
B: Let's call that the control plane, running with all the Crossplane machinery, and then you can have a number of different remote clusters, or worker clusters, or whatever you want to call them, that are actually running workloads. So you can bring up your workload, your application, in another cluster and have it consume the RDS instance or the Cloud SQL instance or whatever from over there. That's been kind of a common pattern.
C: Yeah, yes, the standard process is that the SIG will take over the project from here after this presentation, and there will be a review based on the presentation and also maybe based on some interviews with the maintainers and the community. I will stay back from the interview process, because Alibaba is involved in the Crossplane project, but I believe Alexis or Brent will be the contact person for the review process. And after the review process,
C: The SIG will add a recommendation to the proposal to reflect the facts about the project, with the recommendation from the SIG being, for example, yes, or no, or something that leads to further discussion, and then pass the proposal to the TOC. The TOC will be responsible for making the final decision and the call for sponsors. This is the standard process right now.
A: Vote for the one that you like, and I will close this on June 16th at 12 p.m. Pacific. So at our next meeting we might actually be able to say: this is the logo that we think we have chosen, what do we think about it, and then get some closure on being able to say that SIG App Delivery has a logo. So get in there; it's issue number 20, I believe.
C: That's really awesome, okay. And yeah, I think we only need to count the votes before or after the meeting, okay. So, also on behalf of the SIG, I have an update on the Cloud Native Buildpacks project review, because I think you guys might have noticed that this project is kind of caught in the clearance process: we don't have a clear definition of "end users" in the CNCF charter, but one is required for the incubation-level due diligence review.
C: So we actually had a couple of rounds of discussion with the TOC about how we can continue the process, and we also brought this discussion up at the meeting yesterday, and I think I got a lot of very useful feedback. The general idea is that the issue is that, for Cloud Native Buildpacks, the adopters are mostly cloud providers or vendors extending it, rather than end users, which may not match the charter definition for this project.
C: So the conclusion is that SIG App Delivery will actually pass the due diligence document to the TOC without a yes-or-no recommendation. But we will add a recommendation based on facts, reflecting what kind of adoption this project currently has and which of the criteria the project has already met, and then we will pass this information to the TOC.
C: The TOC will make the final decision, maybe treating Cloud Native Buildpacks as a special case. The idea is that the TOC will review the proposal and make the final decision case by case, because we cannot change the criteria right now; that would require a long process, and it would mean this project would be on hold for a long time. We don't want that to happen, so we will pass this case to the TOC for a round of decision.
C: Sure. So it seems that all the items have been completed in today's SIG meeting, and I'm happy to see Crossplane and Cloud Native Buildpacks can go to the next stage. I'm also again calling for votes for the logos for SIG App Delivery. Well, thank you for attending today's meeting. Thank you very much.