From YouTube: TAG General Meeting - 2023-02-15
Description
Including presentation on KubePlus by Devdatta Kulkarni of CloudArk.
A: I'm glad you were able to make it today. Two weeks back, in the previous meeting, when I said I'd present — what happened afterwards was that there was an icing event in Austin which knocked out power for most of the city. This is just so unheard of, but yeah, we were out of power, and...

A: Yeah, exactly. So I'm glad that instead of last week we are doing it this...

B: ...week. Just out of curiosity, are you applying for sandbox or incubation, or is this...?
A: So, I do have plans to apply for sandbox — not immediately, maybe sometime in the next few months. Right now I want to get it in front of the community, to get more feedback and more participation. Based on that, we will see where we can take it, because usually for sandbox there are certain criteria they look at, right?

A: One of the criteria, generally, is how good a fit it is for the community, and whether there are community members who are using it. From that angle, I thought we are at a good place to share it with everyone and see if there is interest.

A: So yeah, based on this presentation, and then engagement with others, we will definitely be considering that option in the next few months.
A: I don't — I also don't remember whether I did it. Maybe I did it here in CNCF.

A: Exactly, and at that time — this was a year and a half back — we were still in the early stages of really identifying the problem and building out the solution, so what I presented was very rudimentary at that point. But now we have something which is much more built out, and so I thought it would be another good time to come back and present. But yeah, good that you reminded me of that, because I was also thinking...
B: George, how are you — is it...? [inaudible] It was funny because then, in a previous TAG meeting, they told us about your project, and it looked really promising. So...
A: ...how does this go? Yeah, absolutely. At that point we were more focused on, you know, getting multiple operators and creating what we were calling platform stacks, declaratively — so "platform as code" is what we were building our ideas out on. That made some progress, but then we realized that just building a very broad "use any operator in your cluster to build out your declarative application stacks" may not have that wide an applicability. And also, in our engagements with our early customers, we saw that that was not the issue people were struggling with; it was more around multi-tenancy and how to, you know, do things. So let me just pause there, because I think it's 10 o'clock, but we can continue the discussion afterwards.
A: Yeah — the operator white paper that came out at that point: apart from very early participation from myself and our team, I was not able to participate a whole lot, just because of the other things that were going on. But I was reading up on that white paper, and it has come out really well. I don't know if there is another version, three or something, being planned, but in that we can definitely participate.
A: Awesome, yeah, absolutely — I would love that. In fact, I haven't really looked recently into Operator Lifecycle Manager or the Helm operator, but what you will realize when we talk through the presentation is that KubePlus does something similar to what the Helm operator used to do.

A: Recently I haven't checked, but there's a little twist: we don't need to create a separate operator for each application. As the presentation starts, I will go into all those details, but yeah, I would love to chat about OLM and all the things related to operators.

A: Yeah, cool. So whenever you want to officially kick it off, I am fine.
A: So, while we wait, I just wanted to ask: I saw there is the Platform Working Group — the Platform Working Group is part of the CNCF App Delivery TAG, is it? Okay, and so that has a separate set of meetings than this? Yes, okay.

A: So are the people that come there and here the same? The reason I'm asking is: do you think, if I want to reach out and get more feedback from folks, it would be advisable to also come into that forum and do a presentation, or is doing it here enough?

C: ...working group — so you're asking the right person. Let's talk about it. Looking it over — I mean, you're going to present, but it kind of looks like KubePlus creates platform capabilities. I have a couple of questions about that, but yeah, I'm not sure, you know. Right now we're not really having presentations about...
A: No, I'm just wondering, since you are in the working group, whether there are presentations happening there. Of course, I do understand that the agenda for that working group is very specific, and probably KubePlus is not it — because I read through the recent meeting notes there, and I felt that maybe we are not directly doing what is being discussed in the Platform Working Group. So that's fine, but...

A: Absolutely — no, no. But a lot of good activity is happening overall in this space, so I'm excited to, you know, be listening in and reading up about the things that you all are doing.
A: I mean, that's the sort of Holy Grail. I saw the recent couple of presentations in this group, as well as in the Platform Working Group, and for both those projects, the problem has been there all along, since we have so many vendors, right: trying to build a uniform layer to hide the details of vendor APIs and provide one way to rule them all.

A: And so it's interesting how that theme keeps coming up. Yeah, it's difficult to really see...
A: Like — yeah, well, that too. And, I mean, just the differentiation: vendors who are not open source design their APIs to have, you know, some of their own key advantages or key strong points, and so there will always be a difference. There was someone asking in the chat — I forgot the name — whether there would always be a common minimal denominator of the API capabilities that such systems would have to implement, because you cannot really...

A: ...you know, support every small little unique feature that each vendor has. Yeah, and I remember back — I just thought of the OpenStack world. I used to work at Rackspace before CloudArk, and there was the project we were working on, building out the platform-as-a-service for OpenStack — Solum, the project — and in that context — this was around the 2014–2015 time frame — this question came up quite often: are we going to do the common minimum denominator of all the others?

A: So it was very — reading that again after seven, eight years, I was like, okay, yeah, so that problem is still there. It has not gone away. It's...
C: Yeah, the comments, yeah. All right, well, let's — I love it — let's go ahead and get it going. Absolutely, yeah. We'll publish this afterwards; we'll put a pointer to here's where Devdatta started. Okay.

A: Absolutely, that's great. So, do I have sharing capabilities? I think I do — awesome. So let me share my screen. And what I'm going to do is...
A: Let me make it full screen, so all of us can see it. All right. So, hey all, I'm Dev, founder of CloudArk, and today what I'm going to talk about is KubePlus, which is our open-source Kubernetes operator for multi-instance multi-tenancy. The picture here on the left is a quick giveaway of what we mean by multi-instance multi-tenancy: on a Kubernetes cluster, there are separate instances of the same application — you basically deploy multiple separate instances for separate tenants.

A: So let's look at this multi-instance multi-tenancy pattern in a bit more detail — and who better to ask these days than ChatGPT. What one of our team members did was ask ChatGPT what multi-instance multi-tenancy is, and after some iterative question-and-answer sessions, this is what we got, which is actually quite accurate. The multi-instance multi-tenancy pattern refers to a software architecture where each tenant is assigned a separate instance of the application.
A: So, the same application, but you just spin up multiple instances of it, and each tenant has its own separate data, configuration, and customizations. In the picture you see on the left side, you have separate WordPress instances for three different teams, for example — or chat bots, or database instances, and so on. As for the benefits, all of us are probably aware of these.

A: Having a separate application instance provides a good amount of security isolation. It's very easy to do customization and then control each instance as per that particular tenant's needs. And it's also faster to go to market, because you don't have to worry about re-architecting the application code to support multiple tenants. This is especially seen in the use cases and domains where this pattern is quite prevalent — B2B application software vendors.
A: What we have seen is that if they have to host a SaaS solution, then the best and easiest way for them, on public cloud for example, is to end up creating a separate instance of their application for each tenant. This could be a WordPress service provider hosting separate WordPress instances for their end customers. Similarly, internal platform teams — and this is probably the point which can resonate with this group here — platform teams, as all of us have seen, are tasked with creating and maintaining applications for internal product teams.

A: So, for example, in our experience, platform teams have been asked to spin up Jenkins instances, or even monitoring instances, and so on. Platform teams invariably have to spin up and maintain separate application instances per product team, so there, also, this pattern comes into play. And then the final place where we have seen this happen is where B2B vendors can be asked by their customers to deliver the application on the customer's infrastructure.
A: So, rather than hosting the solution on the B2B software vendor's own infrastructure, the customer will ask them to remotely manage and perform day-two operations on the customer's infrastructure. In all these cases, the multi-instance multi-tenancy pattern exists. We have seen it in the real world: when we work with customers, they are using this today.

A: So how does this — yeah, please go ahead.

A: Yeah — the app's code, of course, we are not going to change; that would be multi-tenancy within the application architecture. The app doesn't change at all, but for every separate customer you just spin up a new instance of the app — deploy it, create a separate instance. That is what "multi-instance" stands for: you have multiple instances, each serving a separate team, or tenant, or customer.
A: So how does this — by the way, about this picture here: if you look at the left-hand side, for all the applications shown there, there is no mention of Kubernetes in any of this, because, at least on this particular slide, we don't want to mention it. This pattern is not specific to Kubernetes; it has been around since the old days, before Kubernetes.

A: Vendors have been doing this; it's just that they were creating these separate instances on VMs. Okay, now in the Kubernetes world — this cloud-native world — this pattern has also been recognized, and last year we worked with others in our community on the multi-tenancy documentation, which is available in the kubernetes.io docs, and within that context this pattern has been recognized as well. I mean, it was not just us who mentioned that this is a reality; there were others participating who saw that, in the real world...
A: ...this pattern exists in the Kubernetes world as well. Just one clarification that I'd like to make: in the kubernetes.io doc on multi-tenancy, this pattern is called multi-customer tenancy. Multi-instance multi-tenancy is what we've been calling it, and we thought that's much better —

A: — it doesn't refer to any customer or anything, and so we like to call it multi-instance multi-tenancy, but in the documentation it is multi-customer. All right. So how do you go about creating such a pattern, or achieving that end goal, on Kubernetes today? All of us have been using Kubernetes, and the natural question is: well, if I want to do this, how can I do it today?
A: It's very straightforward to begin with. We can start imagining that, let's say, if I have a WordPress Helm chart, then all I need to do is create one instance of the application per namespace from that Helm chart. So just do helm install and specify different namespaces, and it should be done, right? That's where we can start thinking about this problem, or the solution — from that step.
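The naive per-tenant approach described here can be sketched with plain Helm commands (release and namespace names are illustrative, using the public Bitnami repository mentioned later in the talk):

```shell
# One Helm release per tenant, one namespace per release -- the starting
# point described above, before isolation, permissions, and monitoring
# concerns come into play.
helm repo add bitnami https://charts.bitnami.com/bitnami
helm install wp-team-a bitnami/wordpress --namespace team-a --create-namespace
helm install wp-team-b bitnami/wordpress --namespace team-b --create-namespace
```

Everything that follows in the talk is about what these commands do not give you: network isolation between instances, least-privilege installs, per-instance monitoring, and coordinated upgrades.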
A: Well, it's true, but then there are a lot of additional things that you will need to do — custom automation. For example, how do you really isolate these beyond namespaces?

A: If you want to ensure that there is no cross-traffic between the pods of the two instances, how do you ensure that? That's the isolation that you need to ensure. Or how about ensuring that the person who is doing your helm install has only a minimal set of permissions — whatever is needed for that application and nothing more? That's the security point.
A: The third aspect is: how do you go about monitoring all these different instances? And then, if you want to do day-two operations, how do you go about seamlessly upgrading all these application instances? So that's where KubePlus comes into play. KubePlus is a turnkey solution to create the multi-instance multi-tenancy pattern on Kubernetes; it takes any Helm chart and delivers it as a service, ensuring that the application instances that are created are properly isolated. KubePlus itself runs on the cluster, but it doesn't need any cluster-admin-level permissions. There are simple plugins that we provide for monitoring and upgrades, and we'll see how we can do customization of every instance as well. Okay, so that's the value of KubePlus. So how do we do that? As I go through the demo and then the follow-on slides...
A: ...this will become clear. But to begin with, the key idea here is that KubePlus has one CRD that we define, called ResourceComposition, and as part of its inputs this CRD takes a link to a Helm chart. It could be a publicly available Helm chart or it could be a local Helm chart. That goes into the input for the CRD, and what KubePlus generates is actually a new CRD to represent that Helm chart.
A: So if a WordPress Helm chart is provided as input to a ResourceComposition, then KubePlus will install a new CRD — a WordpressService CRD, for example — in that cluster. The advantage of doing this is that this CRD then forms the control point for KubePlus to ensure all these properties. For example, for isolation, every application instance is deployed in a separate namespace: when you create an instance of the WordpressService CRD, KubePlus will deploy it in a new namespace. And KubePlus itself, for security — the permissions that KubePlus runs with in the cluster are not cluster-admin; in fact, they are something like 40 to 48 percent less than the actual cluster-admin permissions. For monitoring, we collect network, storage, memory, and CPU data from the kubelet, persistent volume claims, and cAdvisor. And upgrades are essentially a matter of changing the ResourceComposition to point to the new version of the Helm chart, which will lead to the creation of new application instances with that new version. The main customization that you are able to do is through the underlying chart's values.yaml.
A: The values are reified and become the properties of the new CRD that is created by KubePlus, and we'll see an example of this. So that's a high-level introduction to KubePlus. Now let's look at the demo. For the demo I'm going to use this Odoo application from Bitnami — on Bitnami's repository there are some 96-odd Helm charts, so we just take one, and we'll go through some of these steps here. Okay, all right.
A: So here are the steps — I've documented the steps that I want to go through. The first thing I want to show — by the way, I have done some of these steps ahead of time, just to make sure that our demo goes smoothly. So let me start here, yeah.

A: This is the ResourceComposition CRD that KubePlus provides. The inputs that we take are the new resource that we want to register in the cluster — the kind name here, I'm just giving it OdooService — and that's my chart, which is locally available: I have downloaded it, and I'm going to refer to that local chart. So this is the ResourceComposition with which we will start. And then let me just point you to —
A: This is the step here: kubectl create with that ResourceComposition. And there are the permissions that KubePlus runs with — we provide utilities to generate scoped kubeconfig files. In fact, that's the first step: you would essentially install KubePlus with a scoped kubeconfig file. So the helm install of KubePlus will take that scoped kubeconfig file, and all the operations we do are done with that scoped kubeconfig file — we don't need cluster-admin permissions.
A: So the first thing you do is create an instance of ResourceComposition, and so let me show you — that's where I'm going to start this demo: I have created an instance of ResourceComposition here. By the way, I'm running these against a GKE cluster. And what I've done is — let me first show you all the CRDs that are there. Some of these CRDs are coming from GKE proper; then there are these: ResourceComposition, and the events, monitors, and policies CRDs — these are the KubePlus CRDs. The ResourceComposition and all these CRDs are installed when the KubePlus operator is deployed for the first time. Once the operator starts running, the next thing you do is create an instance of ResourceComposition, which will generate this new CRD. Okay. Now, what does this CRD really look like? So let's take a look at that, and for that I'm just going to use the explain command. Okay.
A: So this is our standard kubectl explain. I'm using OdooService as the name of the kind, because that's the name of the kind I specified in the ResourceComposition, and I'm asking for .spec — so let's look at that. Essentially, what this should show is all the values.yaml entries that exist as part of the underlying Helm chart. And, by the way, I have a video of this as well — sometimes the network can be tricky.
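The demo steps so far, written out as kubectl commands (resource and file names are illustrative, reconstructed from the spoken demo rather than copied from it):

```shell
# Register the chart as a new API type, then inspect what got generated.
kubectl create -f odoo-resource-composition.yaml  # instance of ResourceComposition
kubectl get crds                                  # the new OdooService CRD now appears
kubectl explain odooservices.spec                 # spec fields mirror the chart's values.yaml
```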
A: So let me bring up a video side by side as well. Let me just scroll it to the point where — yeah, I think here we can look at the video of the output.

A: So let me pause here. What this is basically showing, while in the background — yeah, let me pause here. This is the kubectl explain, which is essentially showing us all the attributes that are part of the values.yaml: they get reified and reflected as fields of this new API that was created. Okay, so let me just Ctrl-C out of this — sometimes my network can be problematic. Okay, so I can just go through these steps here. The next step that I'm going to do here is — yeah.
A: So here you see all these fields now. Then there is this kubectl man command — it's a plugin that we provide to enable someone to get a copy of a sample resource. Okay, so if I do this, it should show us — yeah. So now what you see here is the resource: the man plugin gave us that actual sample resource. I'm going to change the load balancer type to NodePort. By the way, in this entire spec I didn't modify anything.

A: It was there in the values.yaml; it just got reflected as the spec of this object. And then what I do is create an instance of that. Okay, so — yeah, I can create an instance, and while the instance is being created, let me talk through how the instance creation happens behind the scenes. What KubePlus is doing is — let me go backward — so instance creation takes about 20-odd seconds.
A: The reason is that the architecture of KubePlus is like this, okay: you have the controller, then we have a mutating webhook, and then we have a container which is called the Helmer container — and all these three together make up KubePlus. The controller is managing the CRUD on the KubePlus CRD — the ResourceComposition is the KubePlus CRD that is being managed by the controller — and for all the new CRDs that KubePlus registers, the CRUD on those is handled by the mutating webhook.

A: So we are making use of the mutating webhook for two purposes. One is in its standard fashion, to enforce policies, very similar to something like Kyverno or OPA. But beyond that, we also use the mutating webhook to actually create and manage these application instances. Okay, so this — yeah, actually, this has gone through. Let me just pause.
A: Yeah, so, the webhook — you see the arrow between the mutating webhook and Helmer. The mutating webhook is only watching — it acts like a watch while the resource is being created. When I say, let's say, kubectl create of the sample OdooService, that will be intercepted by the mutating webhook — and I'll show you the actual resources that it intercepts.
A: It's very confined to resources of type platformapi.kubeplus. So what the mutating webhook does is keep watching for any incoming requests to the API server for these new CRD types which have been registered, and once it sees one, it works with the Helmer component to actually do the helm install. As part of doing the helm installation, any modifications that you have made are available in the spec properties — the new object that is being deployed is available, as its spec properties, to the mutating webhook.
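For readers unfamiliar with admission webhooks, the scoping described here — intercepting only the dynamically registered API group — corresponds to a webhook rule like the following. This is an illustrative manifest assuming a hypothetical service name and path, not KubePlus's actual configuration:

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: kubeplus-webhook
webhooks:
- name: mutate.platformapi.kubeplus
  admissionReviewVersions: ["v1"]
  sideEffects: NoneOnDryRun
  clientConfig:
    service:
      name: kubeplus-webhook   # hypothetical service name
      namespace: kubeplus
      path: /mutate
  rules:
  # Confined to the generated API group, so other webhooks in the cluster
  # (Kyverno, OPA, etc.) operating on everything else are untouched.
  - apiGroups: ["platformapi.kubeplus"]
    apiVersions: ["*"]
    operations: ["CREATE", "UPDATE", "DELETE"]
    resources: ["*"]
```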
A: So the mutating webhook plays a very critical role for us. By the way, it also does not get in the way of any other mutating webhook that might be running. We have tested this with, let's say, Kyverno in place: if you are using Kyverno to apply governance policies at the cluster scope, this will not get in the way of that; it will still work without any problems. And so what happens behind the scenes is — let's say — yeah.

A: This is a good point to just pause and take a look. The command — kubectl create of the sample Odoo service — this is our standard kubectl command, and the sample Odoo service was created. Let me go back, and maybe here I can show you — yeah, now...
A: I can just do a quick showcase here. This sample Odoo service spec was retrieved from the kubectl man plugin that we provide: what it does is look at the Helm chart's values.yaml and create a spec out of that. I also want to show you a very useful thing — let's say, I have it here.

A: Let me just type it. Let's see — if I do kubectl describe — I want to show you that the new CRD that got created has the OpenAPI spec written properly. Okay, so let me just do describe — so this all generated the new CRD, and what I want to show you is — okay, so...
A: Oh wait, I have to describe the CRD, not an instance — I have to do it on this. Yeah, so this is the new CRD that was registered. And if you really wonder what the challenge is in creating these CRDs on the fly from an underlying Helm chart, which we don't know anything about: the main challenge is, when you are registering a CRD — let's say this is the OdooService CRD that we are registering — how do you adhere to the requirement that the CRD has the OpenAPI v3 schema properly specified? Because that is a requirement since, I think, Kubernetes 1.22 onwards: you cannot have CRDs without an OpenAPI schema for the underlying spec properties. We are able to do that because, when the CRD is being registered, we look through the chart's values.yaml and, based on the values.yaml, we prepare this OpenAPI schema.
A: The OpenAPI v3 schema — okay, so all the properties that are defined in the chart's values.yaml are reflected here in the CRD. And because the CRD is now well-formed, with the full OpenAPI schema and so on, we are able to issue statements like this, where we can create an instance of the new kind without knowing anything about the spec ahead of time. And once you do this — maybe this is the version that is running — I'll just now focus on this.
A: So this is how the deployment of a new instance happens. This was the instance that I deployed this morning, five hours ago, so I'll just demo here with what I have. Beyond that, from a multi-tenancy point of view, let's see if we are able to see all the different resources that were created.

A: So we have a kubectl plugin, appresources, which gives a bird's-eye view of all the things that KubePlus creates when it creates an instance of this new kind. OdooService was the kind, and an instance of it named sample-odoo-service was created in the default namespace, but the rest of the things were created in this namespace: we just have the convention that we create a namespace with the same name as the instance you are creating.
A: So that's why we have the name of the instance, sample-odoo-service, also as the namespace — we just create that as the name. If you look up to here, all of these are resources which are directly defined in that Helm chart: there is a PostgreSQL, and then the actual created service accounts, PVCs, Deployment, StatefulSet, and so on. But then towards the end, these two here are the network policies that KubePlus has generated — you can define network policies and resource quotas...

A: ...when you create the instance of ResourceComposition. This is just showing that by default there are "allow external traffic" and "restrict cross-namespace traffic" policies; these are the two that we define by default. The last line here, the resource quota, is actually not generated by KubePlus, because in this particular ResourceComposition I didn't specify any resource quotas. I think this is coming by default from GKE.
A: If you create a new namespace, you get this sort of resource quota created. So we have this plugin. The other plugin that exists, which can be very handy — of course, you can look at the app URL, so maybe we can look at that, but because this application is not up yet, I think that will not show anything interesting. But let me show you the metrics plugin. This metrics plugin is quite handy for getting the underlying resource consumption.

A: So you have CPU, storage, network, memory, ingress, and egress. This is the pretty-printed output, but we have Prometheus output as well, and this is just showing how many pods there are as part of this deployment, the number of containers, and so on — quite handy. And just to quickly mention our product offering, the commercial offering: what we do is...
A: We have a centralized control plane using which you can manage SaaS delivery across multiple clusters — you can have multiple clusters for different customers and manage them there — and there is built-in Prometheus integration, so in a single pane of glass you can see — for example, I have it running in my Vagrant environment, so I can quickly show you: this is showing, on my Minikube, my Odoo service.

A: I can look at historical tracking of CPU and memory, so you get some built-in Prometheus integration there.
A: Yeah, it runs the metrics command, and you just get a single place to manage all the clusters and all the different services you could have. The notion of a "service" is the ResourceComposition that I talked about — you could create multiple instances of that, referring to different versions, let's say Odoo release one, Odoo release two, and so on — and then the application instances.

A: So this is the place where a B2B vendor can manage all their running application instances. There is built-in Prometheus, and for troubleshooting we also provide a very quick way — let me see if I can show you — there is a place to run kubectl commands as well. There is a kubectl command-line shell, so I can run these commands from my control center, and we will see the pods running. And I think in this Minikube environment I had done some testing with Kyverno...
A: ...so we have that here. So this is the place where one can come and manage all of this. The KubePlus SaaS Manager itself doesn't run on Kubernetes; it's containerized, so it can be run on any VM or on a cloud — it's a standard web app.

A: Yeah, so the SaaS Manager itself is not open source. The container is available free for download, but the support around it — that's where our main product offering is.
A: Yeah, you can run it on your own. But apart from the container being freely available — I mean, just docker run is never sufficient; there are additional steps you need to do around it, and those parts are not freely available. Okay, so, as a summary of all of this, the main appeal to the community is:
A: We are looking for folks to try their own Helm charts, mainly to see if there are any gaps we have missed, or things we have not really accounted for. From our end, what we have done is this: Bitnami's repository has some 96 charts, and we have gone through all of those, and it did uncover some gaps, which we are actively working on.

A: Currently, whatever gaps we found, we have created issues for on the KubePlus repository. But to give you some high-level statistics: out of those 96 charts, 68 just worked, without any issues. There were a few others which ran into some permission-related issues, which we are now working through, and so we have a roadmap there. So take a look at the roadmap.
A: Let me maybe put this in front — the steps that we went through. The ones that are highlighted are the kubectl plugins that we provide: man, appresources, applogs, appurl, metrics, and connections.
C: ...platforms, and one of the things we've been thinking about is how you provision new capabilities — like databases, like, you know, a WordPress service — into your cluster. But we're not really thinking about deploying a complete application; we're more thinking about "oh, we need a database", "oh, I need a Kafka broker". So that's what I'm wondering: if every tenant might need their own database or their own Kafka broker, would this be an appropriate thing to use, or...
A
Yes — let me maybe go back to this particular picture, right. It could be a full-fledged application like WordPress, but it could also be just a database instance. In fact, let me bring that up, because what you ask is a great question. If you look at the Bitnami charts, this actually answers the question quite well.
A
If you look at all the charts that exist in there, they refer to full-fledged applications, but they also refer to infrastructure services like a database, or even, you know, cert-manager, Consul, Concourse. Some of these are full-fledged applications.
A
Others you might just install as part of a broader application stack, but you still have to deploy them, right? And if you have a requirement where every end user is going to need their own Cassandra instance, then rather than directly doing helm install and struggling with all the other automation around isolation — with our solution, you don't really need to modify the Helm chart at all.
A
Oh, that's a good question. The permissions that we grant are tied to our kubeconfig files. If you were to use those kubeconfig files and try to deploy some other application into that namespace, most likely that will not work, because those permissions are scoped to only the things that are needed in that Helm chart.
A
Having said that, if you are a cluster admin and have complete permissions, then yeah, you might end up deploying other things in that namespace. But that's not how you would generally do it, right, because it's again going to impact how the metrics are collected — if there are non-application pods in that namespace, they will skew the metrics.
C
Yeah, you're right on — that's exactly it, because a lot of folks think about the namespace as a unit of tenancy, so they want their database and their API server all in the same one.
A
The application, yeah — in that situation, if you look at this particular output here, the application has the database, and if there were an API gateway, it would have been in there too. It depends on how the application is getting packaged. Sometimes you may split out different components — you may have your database as a separate chart which you are deploying separately, in which case, when using KubePlus, you would end up creating separate CRDs.
A
You would create a separate CRD for the database and then a separate CRD to represent your application chart, and that would be okay. Or you could create one chart in which your database chart is a subchart, and then that single application chart, with everything bundled together, can be deployed. So it depends — every team, every situation might warrant something different.
C
A
Correct, correct — thank you. So yeah, just referring back to our conversation, Josh, or what I remember from before this presentation.
C
Yeah, let's pick it up now, because we'll have the recording if somebody wants to watch. So what I asked you before — and I do want to ask it again — is: how do you compare and contrast this with Operator Lifecycle Manager, and especially the one that wraps a Helm chart?
A
Yeah, so let me try. I'm not caught up with the current capabilities of OLM, but I do remember that when we were looking at existing options, the Helm operator did surface in our investigation. What we realized at that point was that, the way the Helm operator was designed and the way the examples were documented, if I wanted to start with a Helm chart and create an operator — so a Kubernetes-native API —
A
— for that Helm chart, what used to happen, if I recall correctly, was that you used the Operator SDK and provided the Helm chart as input to it, and it would actually generate all the Go files — the types.go and everything — which was now specific to that particular application Helm chart.
A
So if it was Apache, then you would get a new CRD, of course, but all of that would still be generated at the code level. Then there were steps to package it up as a pod and deploy that pod into the cluster, which is now your Apache operator, able to handle CRUD on Apache resources.
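The flow described here corresponds roughly to the Operator SDK's Helm plugin. A sketch of the commands involved — flag names and the chart repo are from memory, so check them against the current Operator SDK documentation:

```shell
# Scaffold a Helm-based operator for one specific chart (one such
# operator -- and one operator pod -- per chart).
operator-sdk init --plugins=helm --domain example.com
operator-sdk create api --group demo --version v1alpha1 --kind Apache \
    --helm-chart apache --helm-chart-repo https://charts.bitnami.com/bitnami

# Build the operator image and deploy it into the cluster as its own pod.
make docker-build docker-push IMG=example.com/apache-operator:v0.0.1
make deploy IMG=example.com/apache-operator:v0.0.1
```

Each chart wrapped this way yields its own operator image and its own long-running pod, which is the scaling concern raised next.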
A
So I cannot afford to have three different pods which are just operator pods running in my cluster — it just doesn't scale if you have many different kinds of applications that you want to create a Kubernetes API for. This is where, technically, the difference that KubePlus brings in is that you don't really need any other pod to run as an operator.
A
If you look at the get pods listing that we are seeing in front of us, KubePlus is the only pod that is running, from the KubePlus point of view. All the other things are actually kube-system, so we can ignore those, and that's my application pod. So with this single KubePlus operator pod —
A
— we are able to create and manage new CRDs for any Helm chart out there. That's where, in my view, this is simpler to work with: if I am the provider and cluster admin, then I'm not looking at many different operator pods, one each per Helm chart.
A
So that's the Helm-operator-specific part. As far as OLM is concerned — broadly, OLM does the lifecycle management of the operators. There are releases and channels and really evolved capabilities for managing the operator itself. With KubePlus, the only operator that we are managing is the KubePlus operator, because from the KubePlus point of view the others are not operators at all.
A
The CRDs do get created, but all of them are still getting managed through KubePlus. So, comparing to OLM: if I want to release a new version of a service, what I will end up doing is create a new instance of ResourceComposition, which will have a different kind name — let's say a "release 2" of that service — and that new kind will get registered in the cluster. And so the lifecycle —
A
— management of those kinds is completely done by KubePlus, and the lifecycle management of KubePlus itself is a simple Helm command. In order to install the KubePlus operator, all we depend on is Helm — there is a Helm chart for that. So we are not as evolved as OLM as far as releases and channels and those kinds of capabilities. Does that make sense?
C
A
I mean, that probably is needed — I haven't looked at all the contexts in which that complexity comes up. Probably it is essential complexity: for whatever use case is being targeted, you need those capabilities. But in KubePlus we don't have those kinds of release channels and those things.
A
Exactly — in OLM's case, those are probably more involved processes, which are simplified with KubePlus, because our focus is very specific. We are not really thinking about those other problems, because —
A
— we are focused only on solving that one specific problem of how to seamlessly enable multi-instance multi-tenancy, where certain things — network policies, resource-quota isolation — are completely taken care of, and a platform engineer doesn't need to really worry about that automation. It will all be given to you out of the box.
A
We started with Helm charts as the only format, and currently that's still the only format we support, because it's widely used and we thought that would be the right thing to do, rather than inventing a new format or doing something very custom. That's the constraint we have put on ourselves: if it works through Helm, we want to be able to be there. KubePlus needs to support whatever Helm is able to do or represent.
B
A
That's a great question, Thomas. In fact, in our roadmap, that is one of the items that has come up. So right now the answer is: it's work in progress. We don't have complete testing done yet. At a high level I don't see any problem — it should work in a natural way — but beyond that I won't be able to comment much, because we are still figuring out the GitOps support, and that's a roadmap item.
B
A
Yeah, you nailed it, George — that's the work that we have to do. It's not there yet, where a ResourceComposition gets updated and all the running Helm releases that were built from that ResourceComposition will need to be updated. That's the high-level idea.
B
A
GitOps is definitely something — yeah, somebody else had asked us about it. It's on the roadmap, one of the features that we'll be looking at next: how to enable that.
A
Let me stop sharing so we can see face to face, everyone. I think it's just three of us remaining. So I think this was great — thank you so much. And if you circulate it in other channels, or want me to come and present this again in the platform working group, just let me know.
C
B
A
Yeah — Josh, you were not there when Thomas and I were chatting about this — one of our goals is to see if we get enough people interested and then potentially approach Sandbox. It's all open. I know the standard evaluation criteria for Sandbox are quite high; there are many criteria, so it depends on how much interest there is in the community.
A
Based on that, we are definitely open to pursuing that target as well.
C
Yeah, I think it's: come and spread the good word and get people trying it. I'm definitely seeing some relationship here to what a couple of the others, like Crossplane and Kratix, are trying to do — a little bit alike underneath, you know.
A
Yeah — in fact, the resource composition idea that we landed on was inspired by a similar idea that I think Crossplane had. This was a year and a half ago, when we realized that's how we want to do it. In fact, that was the time when we were looking really closely at the Helm operator and how it does things, and we realized maybe that's not how we want to go — but then, what's the alternative? Because we still want to be able to do all of this. So Crossplane was something that we looked at, and the resource composition notion actually is inspired by them.
C
A
Yeah, definitely. If there is any session happening where the Crossplane folks are going to come, definitely ping me, and maybe we can have something joint with Crossplane later on.
B
Just another thing — to be honest, I've known the project now for, I think, one and a half years, and I took a look at the stats of the project. I think you have around 400-500 stars at the moment, and if you want to go for Sandbox, I would go for it.
A
But yeah, to be very honest, around that time — I think exactly a year and a half ago — we did apply for Sandbox, and at that time we got rejected. There was not any recommendation of, you know, go and get more contributors or anything; it was just a rejection as the decision. And I understand these decisions have a lot of —
A
— broader aspects that they look at. So, given that experience we have gone through, when I do want to approach it, I want to make sure that there is enough community. Otherwise, I don't want to go through that same decision again.
A
B
But if you plan to go for Sandbox, just reach out to us.
A
I know — in fact, I've seen that the TOC will refer projects to the CNCF TAGs, and so I thought, okay, let me just first come to the TAG and present it. That way you are all aware, in case in the future we do decide to go to Sandbox — then you all know what we are doing.
C
A
Yeah, definitely — we would love to become a Sandbox project. It's just that, after having gone through that initial decision a year and a half back, I'm wary, and want to approach it with everything really well in place.
A
Yeah, I think we are done. Thank you so much, Josh and Thomas, and let's be in touch. All right, thank you. Bye.