From YouTube: Community Meeting August 10, 2021
A: Welcome, everyone, to the kcp community meeting, August 10th, 2021. We have some topics carried over from last week. If you think of something we had on the agenda last week and didn't talk about, feel free to bring it up. But I also wanted to give some updates about how we're thinking controllers can work in a multi-cluster (or, you know, kcp logical-cluster, multi-cluster) scenario, and Clayton added a rundown of the policy investigation. Which one? Where should we start?

A: I think one thing I wanted to talk about last week, that I don't think we went into enough, was whether we should reuse node selectors versus location selectors. I don't know that it's settled at all. Clayton, you seem to prefer reusing node selectors and sort of assuming that physical clusters have the same labels as all the nodes within those clusters. I think that's a fine assumption.

A: We can definitely punt on it and keep going; we don't need to solve that to be able to make progress and have better and better demos, but...
B: I'd probably say that when we were talking about principles, we didn't actually... I don't know if we called the principle of least surprise out explicitly in the overall principles in the goals doc, but I think it probably belongs there. So if you had to invent a new system for doing new things that was completely different than Kube... I mean, a cluster is just a set of nodes, and then maybe the question would be: within a cluster.

B: The kinds of flexibility: Kube kind of has, like, a zero-to-ten cardinality for chunks of nodes. Most people have two or three sets of nodes; a few people just have one set, maybe single-node clusters, or a few of the cloud infrastructure providers that run the control plane elsewhere. Most on-premise deployments have at least two or three categories of nodes, and then a certain set of, you know...

B: I've certainly seen this: you might have four or five different policy domains within a cluster for different types of nodes, nodes by infrastructure type, like whether they have GPUs; DMZ versus not DMZ is common. So there's kind of a low cardinality of chunks of nodes within a cluster.

B: It's certainly reasonable that placement onto physical clusters is also going to have low cardinality, but not, you know, singleton cardinality. There may be lots of really good use cases to say "I want to expose two different chunks of capacity from a physical cluster to place onto, which have different policy constraints." So, yeah, you could probably make up some things like: either all of those nodes have a consistent label, in which case placement is kind of similar, or similar tolerations, or similar taints and similar labels.

B: Would a reasonable user expect those? If you wrote your own system, you'd effectively be duplicating those. And so I think, all other things being equal, we want a reason to do something different, and the reasons would have to be justified in terms of: we're surprising a user by inventing something new, and that has a cost, therefore there has to be a justification for it.
A: You briefly cut out; now you're off the AirPods, they're acting up again, we'll see how long these last. I wasn't... I definitely don't want to invent new concepts, because there are enough of those already, and n plus one concepts is just going to be more. But I was thinking of basically copying the existing concepts of node selectors, affinity, you know, constraints and taints and things, and making them the same concepts at the location layer instead of the node layer.
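To make that idea concrete, here is a rough sketch of what "the same concepts at the location layer" could look like in Go. This is purely hypothetical, not an existing kcp API; it just leans on the existing corev1 taint and toleration types rather than inventing new vocabulary:

```go
package placement

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// Location is a hypothetical analogue of a Node one level up: a schedulable
// chunk of capacity backed by (part of) a physical cluster.
type Location struct {
	metav1.ObjectMeta `json:"metadata,omitempty"`

	// Taints reuse the corev1 type so the existing effects
	// (NoSchedule, NoExecute) keep their meaning.
	Taints []corev1.Taint `json:"taints,omitempty"`
}

// LocationPlacement mirrors the pod-level scheduling fields, but the
// selector and tolerations are evaluated against Locations, not Nodes.
type LocationPlacement struct {
	// LocationSelector plays the role of spec.nodeSelector.
	LocationSelector map[string]string `json:"locationSelector,omitempty"`

	// Tolerations play the role of spec.tolerations against Location taints.
	Tolerations []corev1.Toleration `json:"tolerations,omitempty"`

	// RequiredAffinity reuses the NodeSelector term structure
	// (matchExpressions over labels) instead of a new expression language.
	RequiredAffinity *corev1.NodeSelector `json:"requiredAffinity,omitempty"`
}
```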
A: And I see what you're saying. So if I'm saying "send me to a cluster in, you know, AWS", or "send me to a cluster in"... or, a better example than that, "this deployment should be spread across at least two availability zones"... that is a common...

A: ...you know, we think a common use case for this. First of all, it's not completely transparent, because you have to have the concept of locations in the first place. But then, to be able to schedule those, we want to say it's like you have a single global cluster where some nodes are labeled east and some are labeled west, or some are labeled Amazon and some are labeled... well then, every...
B: Every node in every Kube cluster implicitly gets a zone and a region label. Those are part of the Kubernetes API contract on all cloud providers; unless someone takes an action, those are there implicitly. So there is an argument that it's already an official part of the Kube API, and even on premise you're expected to use the region and zone labels, because we actually have default spreading on region and zone labels. So region and zone are actually two really good examples of...

B: If you want to do zone spreading, the expectation is you would use the zone label, and if you expect behavior within a cluster to behave a certain way, you would expect one thing and then you would probably expect that to be consistent. Some of the other labels might be, like, grouping policy; that's not there today, you would have to put those labels on nodes to get those rules enforced, and you would have to change your workload to target them.
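For reference, this is roughly what "use the zone label for spreading" looks like with the standard well-known labels; a small sketch using the corev1 Go types, where the app label value is just a placeholder:

```go
package spreading

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// zoneSpread builds the pod-level constraint that asks the scheduler to keep
// replicas balanced across the well-known zone label, which cloud providers
// populate on their Nodes automatically.
func zoneSpread(appLabel string) corev1.TopologySpreadConstraint {
	return corev1.TopologySpreadConstraint{
		MaxSkew:           1,
		TopologyKey:       "topology.kubernetes.io/zone",
		WhenUnsatisfiable: corev1.DoNotSchedule,
		LabelSelector: &metav1.LabelSelector{
			MatchLabels: map[string]string{"app": appLabel},
		},
	}
}
```

The argument above is that a location-level equivalent could reuse exactly these keys rather than introducing a parallel vocabulary.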
B: So there's already kind of that implicit, built-in assumption that to target you have to use labels on nodes, and then it really just comes down to... There's certainly no rule that a location has to be a homogeneous pool of nodes. And so then it'd be like: what if we require something new that is guaranteed orthogonal, where you cannot mix node and location labels? Then you'd have to answer both of those: how they fit, and how you mix them.

B: Whereas if you just have the one, you have to figure out how they're consistent, and then you have to obey that consistency, but you don't have to add to it. So, another example. You said something that was really important, I think, and this kind of ties back into it. Jessica and I were having a conversation earlier today; she was kind of helping me go through the policy and the self-service stuff, and we were talking about, you know... a logical cluster is a tool.

B: There might be different implementations of logical clusters. It's very reasonable to say, oh, I could create a CRD in one logical cluster that results in other logical clusters, so there's a special logical cluster that's like the default or an admin one that's always there, and then I can make up... I have an implementation that exposes logical clusters based on the presence of CRs. And then we were saying: but the concept of a logical cluster... Kube clients can only talk to logical clusters.

B: So you've effectively created a new system that nobody knows how to work with. And you can do that; we do that all the time, every programmer out there creates tens of APIs before breakfast. But the power of the concept would be if you create those policy APIs and the clients work with it, and you can come up with a way that that's not surprising to a user. You have avoided the act of creating a hierarchy above clusters, which means all of your existing concepts...

B: If somebody wanted to change that logical cluster CRD, or they wanted to add new policy objects, reasonable people could disagree, but no one changing those concepts had to invent a higher-level system, like a higher-level concept scoped for the API resources outside of the cluster, that other people have to work with. So it's kind of like: there's a cognitive complexity and an implementation complexity.

B: Don't invent two APIs where one will do; don't invent one API where zero will do. And it kind of plays to the principle of least surprise as well. So we were seeing something similar to what you're describing here in the trade-offs on the policy side as well, with logical clusters.
A
But
I
think
it
could,
it
could
be
useful.
Is
there
a
good
weather,
probably
not
official,
support
or
or
official
way
to
do
this,
but
a
good
way
that
people
have
seen
for
describing
a
singleton
object
in
a
cluster
like
this?
Is
the
only
one
that
exists
in
the
cluster,
the
only
one
that
can
exist
in
this
cluster
of
this
type?
B: From a design perspective, we repeatedly said: don't use cluster-scoped objects, or don't use singletons, when it would be better to make composables, and there's a few cases where that doesn't hold. I can think of four or five examples where, for all practical purposes, you only look at a predictable name. And yeah, you could have a webhook admission... I mean, honestly, we could go...

B: ...add that for CRDs if we really wanted to. It's just not a common enough use case for singletons. But I do think it would come down to: most of those are containers looking downwards.

B: It's very useful sometimes to be able to create one that nothing is going to look at, that just does validation; that was before dry run, so that argument faded a little bit, but there were certainly use cases like that, a bunch of them. And then there have been places where people started with singletons and added more, since you always have to provide a name, no matter what, to a Kube API.

B: The protection against other names is really more of a validation rule, so you're widening validation if you decide in the future that you do want multiples. And so it's kind of like: we couldn't come up with any use cases for core Kube, but definitely people in the ecosystem have used the singleton pattern with webhooks.
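The convention being described is usually enforced with a tiny admission check: the type is cluster-scoped and the only accepted object name is a well-known one, such as "cluster". A minimal sketch of that check, where the resource and name are purely illustrative rather than an API anyone ships:

```go
package admission

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// singletonName is the conventional, predictable name controllers look up.
const singletonName = "cluster"

// validateSingleton is the kind of check a validating webhook would run on
// CREATE: reject any instance whose name is not the single well-known one,
// which is what makes the type behave as a singleton.
func validateSingleton(obj metav1.Object) error {
	if obj.GetName() != singletonName {
		return fmt.Errorf("only one instance is allowed and it must be named %q, got %q",
			singletonName, obj.GetName())
	}
	return nil
}
```

Widening validation later (allowing more names) is backward compatible, which is the point made above about starting with a singleton and adding more.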
B: It's never been an issue where someone's like, "man, I really want to block reading more than one of these." There's some confusion, and certainly convention was important, so things like: pick a consistent name for related objects that are all singletons, so if you have one global config object... Now, the biggest challenge I actually think with those singletons is in the multi-cluster context. There are actually legitimate use cases where you want to tie those in, to make those multi-cluster, which would tend to make them namespace-...

B: ...scoped. Kube definitely did better to err on the side of namespace-scoped over cluster-scoped, but then that gets into, you know, permissions and all of that. So I think most people, most of the time, want namespace-scoped resources; most people, most of the time, don't want singletons. Enough people wanted singletons that there's kind of a pattern people could follow, but not enough people wanted singletons that there was a reason to go and prove the use case more.
B: The beauty of the controller pattern is that it's assumed the controller is the source of truth about which one is authoritative, not the API. The confusion, and again another aspect of it, is that your controller could very easily fill out status that says "I'm ignoring this", or your controller could just not fill out the status. It usually came up with objects that didn't have status.
B: It was a little bit weirder. So someone, for instance, proposed singletons for cluster name, and I kind of don't like that, because it assumes that a cluster only has one name; in some multi-cluster use cases it does, and in others it doesn't. So your resource type itself is already defining a set of implicit rules about how that type is used, because the controller is the one who implements it. So it didn't feel like it was that big of a burden to just pick a convention.
A: Many things, potentially many things, use this pattern, but not enough to have a single, fully blessed way to do it; everybody kind of does it their own way. That's fine, that's more or less what I expected.
B
Singletons
actually
start
making
more
sense
if
you
have
a
layer
above
the
cluster,
because
a
lot
of
the
use
cases
where
singletons
fall
down
are,
is
this
the
only
way
that
you're
ever
going
to
use
a
resource
with
this
name
in
this
schema?
And
the
answer
has
historically
been
people
find
like
even
quota
right?
B
B
Controllers
secrets
are
actually
potentially
that
people
have
invented
cluster
secrets
or
reference
secrets,
in
name
spaces
from
cluster
scoped
resources,
to
allow
things
like
a
secret
that
all
knows
mount
through
a
csi
driver
right,
because
almost
every
organization,
I'm
aware
of
kind
of
has
global
policy
organizational
policy
and
in
local
policy
team
or
individual
and
so
cube
does
okay
global
policy
in
some
resources,
unlike
our
back,
but
other
things
like
secrets,
you
know
I
want
every
workload
to
have
access
to
this
proxy
server
is
a
you
know,
is
a
is
a
construct
that
we
never
modeled
in
cube,
and
so
you
know
certainly
folks.
A: Great, thank you, that is helpful. I wish there was a better way to model this, but the one that we have seems good enough, if we decide to...
B: Within the bounds of transparent multi-cluster, I think. That's the caveat, that's your get-out-of-jail-free card, which is: we did it for the transparency, and you can lean on that and say maybe it doesn't scale to complexity, which I think is a definite possibility just based on some of the discussions you have.

B: The Kube scheduling module is not trivial at this point, mostly because of backwards compatibility, so you've got at least four concepts that have to kind of work together. But, practically speaking, most people just use the simplest version of those, and if you can articulate it as "we can ignore everything that we don't recognize", that actually probably ends up being a net win for you. Maybe there are no tolerations or taints for locations.

B: Do locations need taints or tolerations? They might, if we implement the equivalent of nodes going NotReady for locations going unready.
A: I would want taints for the same reason I would want node taints: to be able to drain them, or whatever.
B: ...your own location. If we can reuse the scheduler, there's a good chance you can reuse enough of the predicates. I will say there have been a lot of bugs in that area, where we made a few mistakes here and there and had to work around them.
C: Maybe just about location selectors and node selectors. When I mentioned the selectors last week, I think I was mainly thinking about what Michael said, you know, about ACM/OCM; mainly he's using, you know, labels, a correspondence of labels. And I was thinking of something like, of course, initially at the node level, a bit like Jason said, doing at the location level something that is similar to what you have inside...

C: ...you know, the affinity, which is mainly in terms of node selectors, that you can, you know, group with and stuff like that. So I was just thinking: is it something like that that we would want to have, to, you know, define some sort of rule that drives the correspondence between a location and the corresponding physical clusters?
B: Maybe, and it's certainly a good argument. You know, a few use cases that we've come up with are: you have to bind...

B: You have to have a concept representing a location that has an independent lifecycle from the thing that's providing it, because a cluster can come and go and the location could be preserved. If we're behaving like nodes, a cluster should be able to be forcibly decommissioned.

B: You know, force-delete all the instances, make sure they're not coming back, create a new one and assign that location, and the workload should come back; the syncer should reconcile. So that would be "hey, all of these are missing", and it's no different from existing behavior. So I do think that some set of concepts, a different set of concepts, below the location level...
B: Now, you know, an interesting thing, and this kind of just came up: at some point, a location in this context isn't terribly dissimilar from a node, and certainly Virtual Kubelet does this, like Fargate and all these others with the EKS integration, and Microsoft did something like this for Azure and their virtual nodes. I could maybe see the argument that we might be over-complicating it, and a location can just be represented as a physical node object within that logical cluster, and everything just gets scheduled on one, or...

B: I could see places where that might actually be more confusing and wouldn't map to other concepts. So, for instance, if you wanted to bind... as I think about this, you could argue that location might be generic enough that you could use it for other constructs, and a Node is just a duck type of a location.

B: So then some of the actual labels and status and capacity might actually be shared. So then you could say, oh well, you know, we might have different types of locations, like Knative did, and then the scheduler would work on any of them. That's also an option, and you could actually make it work on Node as well. I do feel it'd be confusing, because...
B: I do not think that abstraction is actually in place for the transparent multi-cluster use case. Now, it might be inappropriate to abstract a location, or a part of a physical cluster, as a single node. I was always a little skeptical of that, because you're in a very specific object whose meaning you're not changing. Yeah, we want to sort through what that means too. So...

B: Yeah, well, we might anyway. We could have a node location, we could have a node pool, or a node location pool, or... there might be other things that conceptually feel more...

B: They are an abstraction. Ideally, we'd want to be able to think about how other types of placement problems could be solved. I list a couple of them out in another doc, and I was going to bring it up here under policy, but teams placing chunks of, like, a unit of something on a bunch of different clusters is a scheduling problem.

B: It would be nice if it's more general, in a Kube-like fashion, so that controllers could easily get binding and placement on abstract capacity for free; that's kind of what Kube promised. And so maybe there's an example there where we'd want to find the commonalities between "I want to place this database instance, at a very coarse, generic level, on a set of locations" and transparent multi-cluster, because that's scaling to more locations: transparent multi-cluster is more like zero to five locations, whereas this might be a hundred or a thousand.

B: The scheduler would certainly have no problems with that. Would we want to use the same abstraction or not? I don't know.
A: It's an interesting area of sort of hacky prototyping, of whether you get more comfortable with scheduling in general as a general concept: when trying to schedule some work to some locations, create some ephemeral node resources, try to schedule these resources to those nodes, and then take that determination and feed that signal into the location scheduler. And so we get node scheduling for free, but we don't have to call them nodes, and...
B: So today in Kube there are a few places that do this, which take an existing Kube object. They have an aggregated API server virtual resource that transforms one object into another, and for read-only objects that works really well. You can also do that inside an informer. So you talked about controllers and all that, like it's trivial... well, okay, it's not trivial, but it is very easy today to use an informer of a concrete resource type and translate that into a cache of things that are not that type. So, say, the scheduler...
B: ...does this a little bit. OpenShift has a few controllers that do it: you look at an Ingress object, or a Pod, and instead of caching the Pod or the Ingress, you cache a generic representation of it. That's very simple; it's actually pretty easy to do with the controller infrastructure. Not a lot of people do it, but it's not a hard problem.

B: Then you can actually build caches that span multiple resource types more effectively. So it's mostly useful when you want to look at, like, all workload controllers and then turn them into some detail about their pod template. So, like, hey, I can watch Deployments, DaemonSets, Pods, ReplicaSets, blah blah blah, StatefulSets, anything third-party, Jobs, indexed Jobs, CronJobs, and for each of those...
B
I
can
look
at
this
very
specific
thing
in
the
pod
template
and
I
need
something
that
maps
me
from
the
generic
object
to
the
pod
template
and
then
I
can
extract
something
from
the
pod
template
and
only
cache
that
so
I
can
react
when
somebody
somewhere
creates
a
thing.
That's
specific
to
the
pod
template
both
of
those
concepts
kind
of
work
together.
If
we
made
it
much
easier
to
do
virtual
resources
that
subset
an
object,
you
could
absolutely
potentially
jason
imagine
taking
a
physical
cluster
and
exposing
a
virtual
resource.
B
B
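A small sketch of the "map the generic object to its pod template" step, for a few of the built-in workload types; third-party types would need their own mapping, which is the harder part:

```go
package workloads

import (
	appsv1 "k8s.io/api/apps/v1"
	batchv1 "k8s.io/api/batch/v1"
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/runtime"
)

// podTemplateOf maps a workload object to its pod template, so a cache or
// controller can extract just the fields it cares about (node selectors,
// tolerations, and so on) and ignore the rest of the type.
func podTemplateOf(obj runtime.Object) (*corev1.PodTemplateSpec, bool) {
	switch o := obj.(type) {
	case *appsv1.Deployment:
		return &o.Spec.Template, true
	case *appsv1.StatefulSet:
		return &o.Spec.Template, true
	case *appsv1.DaemonSet:
		return &o.Spec.Template, true
	case *appsv1.ReplicaSet:
		return &o.Spec.Template, true
	case *batchv1.Job:
		return &o.Spec.Template, true
	default:
		// Unknown or third-party workload kinds need their own mapping.
		return nil, false
	}
}
```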
A: Everything is terrible. Mainly I'm trying to avoid writing a scheduler from scratch, right? I want to reuse, and steal as much as I can of, already-written code, whether that's forking the pod/node scheduler, or fitting our use case into it, or hiding it behind a facade that makes those decisions and then gives us location decisions.

A: I don't know exactly which one of those is going to be the most productive, but I definitely, absolutely don't want to start with nothing and start writing a scheduler, because I'm not going to do it very well.
B: I think the Kube scheduler is reusable enough that, if we can't make it work, that's a gap in Kube. It's not the primary goal of SIG Scheduling to make it fully reusable, yeah, but if, for instance, there was a large set of ecosystem projects that wanted to do Kube-like scheduling the Kube-like way, that is absolutely within scope, at least for SIG Architecture to say, hey...

B: ...we have a pretty powerful and relatively general Kube scheduler; are there parts of it that could be stubbed out for generic scheduling-type problems on Kube, and would SIG Scheduling support it? Success might be people doing a better job of supporting it, or it could just be a fork that inherits a lot of common libraries. I think that'd be well within reason, for sure.
B: A node selector, a taint, a toleration, an affinity rule and an anti-affinity rule, and then a set of things to place it on, in a spec/status system, with a set of finite resource capacity pools... yeah, there are definitely special cases there, but it's hard for me to argue that they're not just generic concepts too. It would be usable across a set of placement decisions in Kube-land.

B: In the core, you know... I mean, certainly Kube's scheduler was one that we talked about moving out of Kube into its own repo; again, a lot of those are kind of slowly moving, because it needs bodies on the ground. I do think there are people who would be very happy to help split the scheduler out and make it reusable, and it mostly comes down to: do we have a use case?
A: And the goal itself wouldn't just be to extract the code, it would be to make it duck-typeable, generalizable, so that nodes fit the duck type, locations fit the duck type, and if anyone else wants some other thing to fit the duck type and make scheduling decisions... yeah, I think that's something we should look into and try.
B: The nice thing about the Kube scheduler is that, from a performance and throughput standpoint for placement decisions, all of the orders of magnitude on all the dimensions are much bigger than what we're talking about for some of these problems; even people doing placement across lots of physical clusters are probably in the low tens of thousands to hundreds of thousands. And sharding should be, again, part of the goal; we have the same storage goals, so we're getting around to one of those other topics.

B: Coming up with a way to let people who have million-scale problems effectively use a generic control plane like the API server is well within bounds. Yeah, figuring out ways to shard on the problems is such a no-brainer, because it's a problem that everybody has.

B: It's still hard for people, but everybody building anything has to come up with that shard dimension. If we can formalize the pattern... we want kcp to kind of evolve to be the Rails of distributed systems, right? That's somewhere in the blurbs. And so if you can take Kube and then push it in those directions, sharding of a million scheduling topics also seems like something well within the bounds of what we would do.
A: Yeah, okay.

A: Okay, great. I want to move on from the scheduling topic because there's a lot, but I think this was very useful, and...
A: Yeah, yeah, good call. I will do that. I wanted to briefly, hopefully briefly, talk about controllers in multi-tenant, or sorry, multi-cluster scenarios. This was something we've been... we've been mainly focusing on the app use cases: here's a Deployment, here's a Service, here's even a StatefulSet, things like that, where the things that get sent down to the clusters don't have RBAC back to the API server, and don't define CRDs, and don't define cluster roles.

A: These are things that would sort of bust outside of the syncer bubble.

A: If we sync a CRD down to a physical cluster, it might stomp on CRDs that are already defined in that cluster in a way that we don't want. And that's tough for controllers, because if you're going to tell a logical cluster "run this controller for me", usually that includes some CRD to watch, usually that includes cluster roles, and usually it includes watching all namespaces, which is something controllers shouldn't do from the physical cluster standpoint.
A
If
they
specify
a
service
account
name,
then
we
will
inject
a
cube
config
that
talks
back
up
to
the
kcp,
where
that
crd
type
is
defined,
where
those
cluster
roles
are
defined.
Where
that
service
account
lives
and
that
controller
won't
talk
to
its
local
physical
cluster
api
server,
it
will
talk
back
up
to
kcp
to
watch
for
all
objects.
I
should
be
controlling
or
do
what
create
resources
back
up
there,
the
so
that
solves
the
sort
of
tendency
problem
at
the
cost
of
chattiness
and
latency
from
physical
clusters
up
to
kcp
all
the
time.
A: I think this unblocks us from being able to experiment more with how much that matters for real-life controller use cases. The idea is that an off-the-shelf operator or controller, something like Tekton, something like Knative, something like Argo CD, shouldn't need to be modified to work in a multi-tenant... sorry, yeah, I keep saying multi-tenant... shouldn't need to have any modifications made to it to support multi-cluster. Transparent multi-cluster is the goal.

B: You could call this transparent Kube integration, or, like, if we figure out a way to take transparent multi-cluster and tackle Kube integration... transparent multi-cluster Kube integration, or something, yeah.
C: Excuse me, just to understand correctly: you are speaking about controllers that would be living, I mean, you know, in physical clusters, and that would contact kcp to possibly watch several logical clusters, right?
A: To talk back to, right. So not to talk across multiple logical clusters, but to talk only to the logical cluster in which that controller was installed. So installing a controller is just: define some CRDs, create a Deployment that, when it starts, watches for types of that CRD. A bit like what...
C: ...has already been done, you know, when we used the deployment splitter and the syncer in push mode... I mean in pull mode, sorry.

C: The thing is that, for now, we were generating the kubeconfig with, you know, mainly kube-admin rights or something like that, and, in fact, you would just set things up to provide precisely the sufficient, minimal rights to the controller running on your physical cluster to access the kcp, so it only has access to the right resources, right? Is that what you are...
B: Of 100% of use cases for Kube today, more than five percent of them, or around five percent of them, are: I want to run a controller that exposes a CRD, that does something in the cluster context that I am in. And so specifically that use case, which is not a multi-logical-cluster one, and it is not a multi-physical-cluster one. It is: I want to create a Deployment and get a controller running against CRDs, and I expect to be able to use service account injection to...

B: Then you have isolation, and this type of controller needs a cluster; it doesn't need the cluster, right? Anybody else coming into that logical cluster would be able to say, oh, I want to create one of those instances of the CRD, great, the controller's running in that logical cluster, great, I don't care. So that's, like, local...

B: That's testing controllers, that's iterative development, that's a team building their own controllers and running them for themselves, where they just want the controller to be running and they expect it to work as well as Kube. So, principle of least surprise for a Kube user who's building an integration. Okay.
C
So
well,
since
I
didn't
have
the
whole,
you
know
discussion
before,
so
I'm
not
sure
I'm
trying
to
grasp
the
the
id.
But
if
I
take
an
example,
I
would
take
the
example
of,
of
course,
workspaces.
C
The
new
approach
of
it
is
mainly
based
on
a
dev
workspace
custom
resource
that
then,
at
some
point
through
a
controller,
is
translated
into
a
deployment
and
what
I
discussed
with
some
of
the
people
of
the
crw
team
sometimes
ago
is
that
if
we
wanted
to
apply
that
which
I
would
like,
when
the
rebase
is
finished,
to
try
that
running
on
kcp,
mainly,
I
assumed
that
the
deployment
that
is
generated
from
the
devworkspace
customer
resource
would
not
really
have
access
to
the
custom
resource
itself,
because
you
know
the
customers
was
mainly
if
I
understand
correctly
would
live
on
at
the
kcp.
C: Exactly, so my question was... so obviously I understood correctly what you were meaning. So what you're saying is that the deployment will be running physically on the physical cluster, but the context, I mean the Kube context, let's say, in which the deployment, or the service account anyway, in which the deployment is running and the underlying pod is running...

C: Finally, you know, inside the pod, if you try to look for a custom resource of the type DevWorkspace, you would transparently see the custom resource, but in fact the custom resource would be living on the kcp side, and in the same logical cluster that initially gave life to the deployment, I mean the pod.
A: This was a bit of a source of anxiety for me as well, because if we support the app case but don't support controllers, don't have good support for them, then we're not going to get very far. But I think this elegant, relatively small change, a relatively small injection of information, lets us support this for multi-cluster, and...
B: ...a mechanism where someone takes a syncer and forks it, in a non-transparent multi-cluster use case. And that was the discussion with Michael last time, which was: that is the opposite of the transparent multi-cluster use case. That is someone materializing objects onto a Kubernetes cluster with the express intent of changing the behavior of that Kubernetes cluster.

B: When you deploy something, it doesn't change the underlying cluster; yeah, with the other one you do, you're explicitly trying to change the underlying cluster in a non-transparent way, which means you could have side effects. So transparent multi-cluster is supposed to be side-effect free; giving a controller access to the physical cluster is absolutely not side-effect free.
C
Yeah,
so
that
that's
great,
because
that
that
seemed
to
a
very
well,
why
speaking
with
them
with
the
other
architects
of
the
crw
team,
that
seemed
really
a
very
hard
requirement
that
you
have
to.
You
know,
have
strong
borders
between
layers
between
what
is
at
the
up
level
inside
kcp
or
the
customer
source
level.
And
then
what
is
on
the
physical
cluster
level,
which
is
mainly
you
know,
generated
workloads
and
that
you
would
only
be
able
to.
C
You
know,
go
top
to
down,
but
not
the
way
around,
and
you
would
not
be
able
to
discuss
from
a
pod
to
the
customer
source
that
that
was
originated.
So.
A: I can do it. I think I want to write a doc specifically about controllers and how they work. Like: here is the explicit way to do it, which is to modify the controller, or to have another sidecar controller that knows how to schedule things to multiple clusters; and the implicit way, which is to give it to kcp, and kcp transparently distributes that work, but to be able to do that, you have to talk to kcp. So that's sort of the summary.

A: I will write more words around that to make it more real, but yeah, I think it was a relief to me to realize there was a way around this, because I've been thinking about it in the context of Tekton, but CodeReady Workspaces would absolutely have the same problem; any controller would, exactly that, exactly.
B
And
that
opens
the
door
then
for
the
next
discussion,
which
we're
not
quite
at
yet,
which
is-
and
I
would
probably
call
this
like
it's
the
evolution
of
logical
cluster,
which
is,
if
you
have
all
these
logical
clusters,
you
want
a
way
to
deal
with
them
in
bulk,
because
that
helps
you
build
more
effective
control
integrations
because
one
logical
cluster
is
even
smaller
than
a
real
physical
cluster.
One
of
the
benefits
of
a
physical
cluster.
Is
you
install
one
api
and
one
controller,
and
it
can
work
for
lots
of
tenants.
B
We
need
the
equivalent
tenant
idea
for
extensions
added
to
a
logic.
Multi-Cluster,
there's
a
side.
I've
got
a
side
doc.
I've
been
working
on
with
jessica,
which
explores
some
ideas,
like
the
flexibility
that
logical
clusters
give
us
would
allow
us
to
have
different
api
instances
in
each
that's
the
apis
and
the
behavior.
B
But
the
next
step
is
that
you'll
be
able
to
tie
different
implementations
to
them,
which
would
allow
you,
as
an
organization
to
roll
out
the
implementation
of
a
new
controller
in
a
controlled
fashion
across
different
chunks
of
thing,
and
so
a
team
might
be
able
to
opt
in
to
upgrading
between
two
incompatible
versions
of
an
api
or
swapping
open
source
implementation
of
a
controller
to
a
vendor
implementation,
or
vice
versa.
B
I'll
have
a
separate
doc
for
that
I
wasn't
actually
able
to
get
the
policy
doc
push
looks
like
github's
back
now,
so
we
can
stop
saying
that
claiming
we're
unproductive.
I
did
want
a
few
minutes
talking
about
that.
Yeah
yeah,
absolutely.
C
It's
just
a
question
because
that's
very
interesting
also
I
mean
probably
workspace-
is
why?
Because
I
that's
where
I
come
from,
but
but
yes,
it
has
always
been
very
painful
to
you
know,
because
you
have
only
one
cluster-wide
customer
source.
I
mean
you.
C
With
singletons,
exactly
for
everyone
and
and
precisely
in
such
a
case
where
you
know
you
have
mainly
one
deployment
coming
from
one
network
space
customer
source,
then
you
can
have
all
the
pods
from
those
deployments
running
on
the
same
physical
cluster.
But
they
would
each
be
tied
to
a
distinct
customer
version
of
the
customer
source
living
in
a
distinct,
logical
cluster.
And
it
seems
to
me
that
this
we
would
nearly
be-
I
mean,
of
course,
probably
not
completely
transparently.
C
But
at
least
we
would
be
able
to
to
test
that
if
we
have
the
even
a
manual
minimal
implementation
of
this
secret
injection,
two
parts
that
we
just
spoke
of,
then
this
would
probably
open
the
door
to
testing
this.
Even
if
it's
a
bit
manual.
But
at
least
you
know,
having
a
proof
of
concept
of
of
something
like
audrey
workspaces,
working
in
several
logical
clusters,
with
different,
distinct
apis
and
having
all
the
workloads
being
pushed
to
a
distinct
physical
cluster.
B
Okay,
anything
jason
you
want
to
cover
before
you
switch
over
to
policy.
B
Hit
most
of
the
things,
okay,
so
policy,
I
did
push
a
brief
investigation
doc,
so
this
one's
a
little
bit
shorter
than
the
other
ones.
I
tried
to
list
all
the
different
areas,
so
this
is
142.,
so
I
just
got
that
open
because
github
came
back.
B
Let
me
link
that
in
the
github
issue,
for
the
agenda
and
just
so
people
can
find
it.
B
So
this
is
just
highlighting
the
idea
so
like
we
had
like
we've,
had
four
invest
three
investigation
topics.
So
far
we
had
minimal
api
server,
which
we've
all
spent
a
little
bit
of
time,
but
we're
mostly
spending
on
getting
used
cases
from
breath
and
prototyping.
That
would
come
back.
We
had
logical
clusters
and
transparent
multi-cluster
and
those
kind
of
sit
on
top
of
minimal
api
server,
self-service
policy
kind
of
sits
on
top
of
logical
clusters.
B
So
this
is
the
concept
that,
if
you
want
to
break
a
big
mega
cube,
so
today,
people
have
big
mega
cube
clusters
because
it's
super
efficient
to
just
add
one
more
team.
The
downside
of
it
is
there's
no
hard
policy
boundaries
between
game
spaces,
so
everybody
in
the
ecosystem
has
gone
through
like
50
different
iterations
and
all
of
them
have
trade-offs.
B
Logical
clusters
were
exploring
a
different
that
tried
to
combine
like
three
problems
at
once:
apis
and
having
different
sets
of
apis
available
breaking
the
the
monolith
of
a
cube
api
server
into
a
set
of
chunks
that
are
discrete
and
enabling
reuse
and
plug
points
that
would
allow
you
to
virtualize
the
implementation
of
things.
Like
rbac
or
quota
so
that
you
could
actually
have
higher
organizational
scale,
so
the
the
observation
is
anyone
getting
really
deep
into
cube,
has
lots
of
it.
B
The
lots
of
it
requires
you
today
to
build
a
layer
on
top,
which
is
totally
reasonable,
but
everybody's
building
many
of
the
same
layers
in
different
ways.
So
there's
no
alignment,
so
the
self-service
policy
is.
Is
there
a
pattern
or
set
of
patterns
that
are
reasonably
broad
enough
in
the
cube
ecosystem
that
we
could
holistically
address,
building
on
logical
clusters
and
the
capabilities
we've
been
talking
about
for
minimal
api
server?
B
That
gives
an
organization
kind
of
a
80
batteries
included
approach
for
building
self-service
directly
into
a
control
plane
that
sits
in
front
of
their
application
infrastructure,
so
team
user
int
comes
in
based
on
who
that
user
is
they're
able
to
ask
for
a
set
of
capacity
which
has
a
set
of
controls,
that
set
of
capacity
might
be
a
logical
cluster
might
be
multiple
logical
clusters
of
art
could
be
a
set
of
apis.
So
what
apis?
Does
this
type
of
user
have
access
to?
Can
they
create
clusters?
Could
they
create
infrastructure?
B
Can
they
only
deploy
pods?
Can
they
only
use
higher
level
constructs
like
tecton
and
k-native,
but
they
only
use
higher
level
crds
like
12
factor
apps,
that
look
like
a
heroku
resource,
but
they
only
use
function
like
very,
very
generic
functions
with
extremely
specific
rules.
That's
different
sets
of
apis,
but
the
policy
would
be.
How
could
you
get
that
self-service
and
what
are
the
controls
and
policy
constraints
so
laying
out
the
docs
pretty
short,
but
it
just
lays
out
a
bunch
of
investigation
areas.
The
intent
would
be
over
the
next
couple
weeks.
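To make the "ask for a set of capacity which has a set of controls" idea concrete, here is a purely hypothetical sketch of what a self-service workspace request with allowed APIs and quota-style controls could look like as a Go type. Nothing like this existed in kcp at the time; every field name is made up for illustration:

```go
package selfservice

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// Workspace is a hypothetical self-service unit: a user asks for one and,
// if policy allows, gets back a logical cluster (or several) wired to a
// specific set of APIs and capacity controls.
type Workspace struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	Spec   WorkspaceSpec   `json:"spec"`
	Status WorkspaceStatus `json:"status,omitempty"`
}

type WorkspaceSpec struct {
	// AllowedAPIs is the set of API groups the requesting team may use,
	// e.g. only high-level CRDs (tekton.dev, serving.knative.dev) rather
	// than raw pods.
	AllowedAPIs []string `json:"allowedAPIs,omitempty"`

	// Capacity is the quota granted to the workspace, reusing the
	// ResourceList shape that ResourceQuota uses today.
	Capacity corev1.ResourceList `json:"capacity,omitempty"`

	// Approval could hook into an external system of record
	// (ticketing, ServiceNow, ...) before the workspace becomes usable.
	Approval string `json:"approval,omitempty"`
}

type WorkspaceStatus struct {
	// Phase reports whether the workspace is pending approval, ready, etc.
	Phase string `json:"phase,omitempty"`
	// URL is where clients point their kubeconfig once the logical
	// cluster behind this workspace exists.
	URL string `json:"url,omitempty"`
}
```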
B: If you go to a kcp instance and you can create a CRD that says "logical cluster", I'm not going to get any more detailed than that, and then that results in you being able to hit a logical cluster, get access, get a set of APIs for it, and then that would play into the other two demos for transparent multi-cluster. But it also kind of demonstrates logical cluster as an implementation detail where the API is pluggable. So there'll be a simple example, and at the same time we're going through and trying to write up a doc of a very complex, full...

B: ...what an enterprise, service provider, or very large company would use, where they have tens of thousands of application teams. Could you come up with a policy system architecture that hits the 80% case, so most people, most of the time, could go to something that feels a lot like Kube? They get an open source project where they can say: I'm this person; that person gets mapped into some organizational structure with a set of allowed capacity.

B: They can self-service and get a logical cluster, and we might come up with a different name for it. "Workspace" was kind of the name we were playing around with, and that's been something we've been using in internal discussions, and then in that workspace would be a set of APIs and a set of rules.

B: ...cases could do the same thing for policy. So this is an exploration; that second doc will be a much more advanced example, but it would be a modular system where, just like we talked about with kcp, we split out these different sub-elements. As we go from prototype to maybe project, some of it will go into Kube, some of it will be separate.

B: I have my own CRDs, controllers, extensions, and I want that to be deeply integrated with my application infrastructure, so that I can say things like, you know, oh, backstage.io could just transparently plug into this and they wouldn't have to worry about their service. That was just an example that we came up with; I actually forgot to put that in the examples, but it's like working through that story of: could you do the same thing for organizational self-service?
A: Yeah, nice, I like it. I obviously haven't read any of this, but your description was very good. I think flexibility is going to be key, both for this and for transparent multi-cluster. I think we've talked about transparent as if everyone wants transparent, and I think there are some people that absolutely want fine-grained control over everything. I think it's the same on self-service.

A: Definitely some people want self-service: a team should be able to show up and get cluster capacity without having to fill out, you know, forms in triplicate or anything. But some organizations would absolutely not want self-service to be so easy that teams can just show up without talking to anyone, yeah, and...
B: I think there's a nuance here that I glossed over, but it's very good you caught that, Jason. So, every large enterprise organization has a system for controlling access to resources, yeah.

B: The kind of evolution of the problem now is: you still have all the requirements, right? You have to physically divide up limited capacity among a set of users, but now you have to do it over even more places than you did before, because now you have, like, hundreds of cloud accounts where you have no idea where the spend is. I don't think we're trying to tackle all of the parts of the problem.

B: We still want to give to Caesar what is Caesar's; in an enterprise environment you're going to have a system of record. What we want to offer is a better place to plug in, so you get a bunch of stuff for free, and most people's knobs exist on that system. So, in your case, the one you're bringing up, Jason, would be: you start off with zero quota, someone creates the workspace, it can go into a ticketing flow, and the ticketing integration, between the act of creating it and ServiceNow, is super trivial.

B: That's the kind of escape hatch for what you're describing. Not everyone will pick that, but most people have this problem, and they all have different ways of solving it, which means everyone who wants to improve this has to invent all the parts of the solution themselves. Trying to find the Venn-diagram overlap of the things that everybody needs, that can enforce some standard rules, that's kind of the hope.
A: Yeah, awesome, thank you for that. I will read this immediately after this. I'll post the recording soon. Have a good week, everyone.