From YouTube: kcp design discussion - quota & garbage collection
A: Okay, so we're going to be talking about how we implemented per-workspace quota and how that might be applicable to garbage collection as well. I'm going to start with the code in kcp for quota, and I'm going to skip over the admission part, since garbage collection doesn't have an admission component.
A: The approach that we took with quota was to do one controller per workspace, and that was largely because I thought the changes would be more invasive and harder to rebase if we made the quota controller cluster-aware. We could certainly revisit that decision, and we could decide to go either way with the garbage collector.
A: But let me show you what I did for quota. In kcp there's a kubequota package, and we have a controller in here. The idea for this controller is that we want to know about workspaces: we want to start an upstream quota controller when a workspace shows up, and we want to stop it when a workspace gets deleted. So we have an informer on ClusterWorkspaces.
A: That's the most important thing here. Whenever we get basically any event, we go ahead and enqueue it, adding it to our queue. Let me skip over all the boilerplate and get down to the process function.
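For reference, this is the standard client-go enqueue pattern being described: a minimal sketch, assuming a plain rate-limited workqueue. kcp's real key function also encodes the logical cluster.

```go
package quota

import (
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/util/workqueue"
)

// addEventHandlers wires every informer event (add, update, delete)
// to the same enqueue path, as described above.
func addEventHandlers(informer cache.SharedIndexInformer, queue workqueue.RateLimitingInterface) {
	informer.AddEventHandler(cache.ResourceEventHandlerFuncs{
		AddFunc:    func(obj interface{}) { enqueue(obj, queue) },
		UpdateFunc: func(_, obj interface{}) { enqueue(obj, queue) },
		DeleteFunc: func(obj interface{}) { enqueue(obj, queue) },
	})
}

func enqueue(obj interface{}, queue workqueue.RateLimitingInterface) {
	// DeletionHandlingMetaNamespaceKeyFunc also handles tombstones for deletes.
	key, err := cache.DeletionHandlingMetaNamespaceKeyFunc(obj)
	if err != nil {
		return
	}
	queue.Add(key)
}
```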
A: For this one, we get a key that is for a workspace, and the workspace has a logical cluster component and a name. If we want to know the actual combination for where that logical cluster lives, we have to combine them together. So in this case, imagine that the parent is root:org and the cluster workspace's name is ws.
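A minimal sketch of the key handling being described; the "parent|name" encoding here is an illustrative assumption, not kcp's exact key format.

```go
package quota

import (
	"fmt"
	"strings"
)

// logicalClusterFor derives the logical cluster a ClusterWorkspace
// represents from its queue key: the parent logical cluster joined
// with the workspace name.
func logicalClusterFor(key string) (string, error) {
	parent, name, ok := strings.Cut(key, "|")
	if !ok {
		return "", fmt.Errorf("invalid key %q", key)
	}
	return parent + ":" + name, nil // e.g. "root:org" + "ws" -> "root:org:ws"
}
```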
A: We join them all together, and now we have a logical cluster in which we need to be doing work. If the workspace goes away, we're going to go ahead and shut things down. I can talk through the specifics of how we're tracking all this, but we'll get there in a bit. Basically, all you need to know about this part:
A: If the workspace is no longer found, shut everything down. So then I have this cancelFuncs struct, which is part of what I was just showing. It's basically a map from logical cluster to some sort of function, and the presence of a logical cluster in this map means we've already created an upstream quota controller for that logical cluster, so we don't create another one in the event that a cluster workspace gets a whole bunch of informer events: adds, updates, whatever.
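A sketch of that start/stop bookkeeping, assuming a map from logical cluster to context.CancelFunc; the names are illustrative, not kcp's actual types.

```go
package quota

import (
	"context"
	"sync"
)

// quotaStarter tracks one running upstream quota controller per
// logical cluster. Presence in the map means "already started", so
// repeated informer events for the same workspace are no-ops.
type quotaStarter struct {
	mu      sync.Mutex
	cancels map[string]context.CancelFunc
}

func newQuotaStarter() *quotaStarter {
	return &quotaStarter{cancels: map[string]context.CancelFunc{}}
}

func (s *quotaStarter) ensureStarted(ctx context.Context, cluster string, run func(context.Context)) {
	s.mu.Lock()
	defer s.mu.Unlock()
	if _, started := s.cancels[cluster]; started {
		return
	}
	ctx, cancel := context.WithCancel(ctx)
	s.cancels[cluster] = cancel
	go run(ctx)
}

// stop is the "workspace no longer found, shut everything down" path.
func (s *quotaStarter) stop(cluster string) {
	s.mu.Lock()
	defer s.mu.Unlock()
	if cancel, ok := s.cancels[cluster]; ok {
		cancel()
		delete(s.cancels, cluster)
	}
}
```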
A: So in this particular case, the resource quota controller has an options struct to initialize it, and that's what you pass in to create it. The important things to point out are that there is a client that is scoped to a particular logical cluster; when you look at the controller options struct, you'll see it takes in this client.
A: It has an informer, it has a discovery function, so we've got to make all of these things scoped to the workspace that it is tracking quota for. You'll also see (I'm in a Kubernetes source code file now) that we have made a patch in here to store the cluster name on this, because it makes life easier, and this is the sort of patch that, from a rebase perspective, is easy to carry. All right, so we set up the client that, again, is scoped to the logical cluster.
A: The nice thing for quota is that the set of resources, the set of API types, that we potentially need to quota is the distinct set of built-in APIs (so, like, ConfigMaps) along with all of the CRDs across all logical clusters. Not every CRD is available to every logical cluster, but we don't have to run discovery, because we know what APIs are being served. So we have this shared informer factory that's custom to kcp.
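A sketch of the point about not needing discovery: the quotable set is just the built-in APIs plus the deduplicated CRDs across all logical clusters. The inputs here are hypothetical; kcp derives them from what the server is already serving.

```go
package quota

import "k8s.io/apimachinery/pkg/runtime/schema"

// servedResources unions the built-in APIs with every CRD-backed GVR
// across all logical clusters, deduplicating as it goes: the same CRD
// in many workspaces contributes one entry.
func servedResources(builtin []schema.GroupVersionResource, crdsByCluster map[string][]schema.GroupVersionResource) map[schema.GroupVersionResource]struct{} {
	out := map[schema.GroupVersionResource]struct{}{}
	for _, gvr := range builtin {
		out[gvr] = struct{}{}
	}
	for _, gvrs := range crdsByCluster {
		for _, gvr := range gvrs {
			out[gvr] = struct{}{}
		}
	}
	return out
}
```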
A: And so then we start the controller. The factory that I was talking about knows about the full set of API types, and as part of the quota work, I added the ability to subscribe for notifications whenever a CRD is created or removed. Whenever that happens, we basically tell the quota controller:
A
You
need
to
go
redo
Discovery,
so
to
speak,
like
go,
go,
ask
the
factory
for
the
updated
list
of
API
types
and
then
it
can
go
and
set
up
evaluators
appropriately
to
do
object,
counts
and
whatnot,
and
so
here
I
have
a
comment.
That's
like
we
diverge
from
Upstream.
Upstream
has
a
go
routine
that
actually
every
30
seconds
runs
Discovery,
as
I
said
before.
We
don't
need
to
do
that.
So
this
is
a
Divergence
and
then
we
actually
start
the
quota
and
and
that's
pretty
much
the
end
of
it
so
inside
the
kubernetes
code.
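A sketch of that divergence, assuming the subscription surfaces as a channel: the controller re-reads the API list only when notified, instead of polling every 30 seconds.

```go
package quota

import "context"

// runResyncLoop replaces upstream's 30-second discovery poll: resync
// (re-list API types, update quota evaluators) runs only when the
// factory signals that the set of served APIs changed.
func runResyncLoop(ctx context.Context, apisChanged <-chan struct{}, resync func()) {
	for {
		select {
		case <-ctx.Done():
			return
		case <-apisChanged:
			resync()
		}
	}
}
```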
A: So, inside the Kubernetes code, these are basically the changes that we made. I mentioned that we have this cluster name in here, and there's a patch file with this updateMonitors function. Going back to the part where we diverge: you'll see that any time this factory says the APIs changed, we go ahead and call updateMonitors. updateMonitors is basically the same code that exists in the upstream files, just with the 30-second polling removed. So it's a copy and paste, but fortunately it's auto-generated.
A: We have a tool that we checked into our fork of Kubernetes: when we do a rebase, we run the tool and it auto-generates this function based on where it's coming from (I forget if it's in this file or that one). So there are techniques you may need for garbage collection to work around some limitations, based on expectations that upstream has that we don't necessarily need to adhere to. Okay, the last thing I want to talk about is these scoping informers.
A: In the case where we're trying to multiplex and create one quota controller per workspace, there's still just one resource quota informer (or any informer, for that matter), because it's shared. What I did was aimed at avoiding a situation inside of the quota code... let me show you where this is used, for example; it'll be clearer.
A: Let's take ConfigMaps, for example. There's one ConfigMap informer, and it spans all of kcp: all workspaces, all namespaces. If we have five different workspaces, we're going to have five copies of this resource quota monitor running, one for each workspace, and if each one adds an event handler to the ConfigMap informer, it's basically going to add it five times, once per workspace. Rather than doing that, and given that it's actually impossible to unregister an event handler from a shared informer, and that workspaces will go away when they get deleted...
A
We
need
to
unregister
the
event
handler
so
that
we
don't
try
and
dispatch
to
to
these
monitors
that
are
also
going
to
get
deleted
when
the
workspace
is
due.
So
we
have
this
scoping
Factory
that
basically
it
wraps
a
real
Informer
Factory.
So
this
would
be
the
the
real
kubernetes
one
and
it
has
a
delegating
event
handler.
A
So,
basically,
whenever
you
ask
for
a
scoped
Factory
for
a
logical
cluster
and
you
get
all
the
way
down
to
you
ask
for
an
Informer
and
then
when
you
get
back
the
Informer
eventually,
you
try
to
add
an
event
handler
where's
my
code
here,
I'm
not
in
there.
A: Handler... yeah. So when it gets around to saying "I would like to add an event handler", there's logic in here (I'm not going to go into the details right now) where it basically registers one handler per type. So if you want to get informed on ConfigMaps, it'll register for ConfigMaps, and then it just multiplexes and dispatches out to all of the individual instances that are started down at the bottom, down in here. All right, let me stop there.
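A sketch of the delegating event handler idea, with illustrative names rather than kcp's actual code: one handler sits on the shared informer per type and fans events out to per-cluster handlers, which can be removed again when a workspace goes away.

```go
package scoping

import "sync"

// delegatingHandler is registered once per resource type on the real
// shared informer and dispatches each event to the handler for the
// object's logical cluster.
type delegatingHandler struct {
	mu        sync.RWMutex
	clusterOf func(obj interface{}) string // e.g. read the cluster from annotations
	handlers  map[string]func(obj interface{})
}

func (d *delegatingHandler) register(cluster string, h func(obj interface{})) {
	d.mu.Lock()
	defer d.mu.Unlock()
	d.handlers[cluster] = h
}

// unregister is the operation a shared informer doesn't offer directly;
// it runs when a workspace is deleted so we stop dispatching to its monitor.
func (d *delegatingHandler) unregister(cluster string) {
	d.mu.Lock()
	defer d.mu.Unlock()
	delete(d.handlers, cluster)
}

// onEvent is what the shared informer actually calls.
func (d *delegatingHandler) onEvent(obj interface{}) {
	d.mu.RLock()
	h, ok := d.handlers[d.clusterOf(obj)]
	d.mu.RUnlock()
	if ok {
		h(obj)
	}
}
```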
B: Thank you, that's very helpful as it relates to the GC work. So the first question that I would have: you made the decision to have one quota controller per workspace, and you've mentioned the reason, the upstream changes and stuff like that. Would that apply for the GC as well? I guess that's...
A: ...where I would start, yeah. So if you go look at the upstream garbage collector, it has a REST mapper. That will obviously need to be cluster-scoped... well, it doesn't have to be; you could try without it, using the same discovery that we did for quota, and that may work. Alternatively, you could try to get it scoped per logical cluster, but you may not need to. The metadata client will need to be scoped. There's a queue, there's the graph builder and the reference caches. The graph builder has a REST mapper, it's got monitors, it's got all of this stuff, and these are going to be per logical cluster. So if you decided that you wanted to make the garbage collector cluster-aware, then essentially anything in here that's singular now has to become a map from logical cluster to instance, and you've got to lock that and maintain all of those.
A
And
then
you
know
all
of
these
cues
whenever
you
DQ,
whenever
you
in
queue
and
DQ,
it's
going
to
have
a
logical
cluster
in
it,
you're
going
to
have
to
deal
with
the
key
you're
going
to
have
like
getting
it
to
be
scoped
to
the
right,
logical
cluster,
so
I
would
probably
start
trying
to
model
it
after
quota,
because
I
think
it'll
be
less
work
but
I
don't
know.
That's
just
my
take
yeah.
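As a sketch of what "the key has a logical cluster in it" means for those queues (illustrative, not the upstream GC's actual key type):

```go
package gc

import "fmt"

// key is what a cluster-aware work item would need to carry: without
// the Cluster field, two objects with the same namespace/name in
// different workspaces would collide.
type key struct {
	Cluster   string // logical cluster, e.g. "root:org:ws"
	Namespace string
	Name      string
}

func (k key) String() string {
	return fmt.Sprintf("%s|%s/%s", k.Cluster, k.Namespace, k.Name)
}
```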
B: Yeah, I agree. I also looked at the upstream graph builder stuff. It may be possible, but it's pretty complex, and as you mentioned, deep down they have a struct that holds the fields that identify a resource; we would have to add the logical cluster there and then bubble everything up through the whole graph. Plus we would have to fork it, obviously, and there would be no chance that it's accepted upstream.
A: Maybe at some point, if Kubernetes ever latches on to the concept of logical clusters. But basically, if you find that you're having to make so many changes that any relatively small patch that goes in upstream would make rebasing crazy hard, then I would say: let's see if you can do it like quota. So if you look at the REST mapper, you pretty much need to have one of these per logical cluster, or you just say:
A: No, we're going to be scoping the clients and we're going to be scoping the informers, so if the REST mapper has more data than is in a single logical cluster, maybe that's not the end of the world. On the other hand, we have to think through whether there would be a situation where the garbage collector (you'd have to see what it uses in the REST mapper) said: give me the resource for a given GVR...
A
Give
me
the
kind
for
a
given
gvr,
given
that
two
workspaces
can
have
the
same
gvr,
but
one
can
be
a
crd
that
looks
like
this
and
another
one
can
be
from
an
API
resource
schema.
We
may
need
to
and
do
it
per
logical
cluster
I,
just
I
haven't
thought
through
that
to
see
exactly
what
the
implications
would
or
wouldn't
be,
but.
A: We call Get, and we call RESTMapping; those are the two functions that are used here. RESTMapping gives you a preferred REST mapping, given a group kind and maybe a version, so it's just going to give you a GVR and a GVK. I think the only issue you might face there is if the preferred version was different between two different workspaces.
A
The
other
thing
I
saw
was
that
the
graph
Builder
had
a
rest
mapper.
It
calls
kind
for
so.
You've.
Gotta
also
account
for
that
as
well,
and
that's
just
going
to
give
you
a
GVK
from
a
gvr
which
shouldn't
be
a
problem,
so
yeah
I
mean
I
would
just
I.
Would
you
could
try
doing
one
and
then,
if
you,
if
we
run
into
problems,
we
can
make
it
cluster
scoped,
yeah
and
then
anything
that
takes
in
the
informers.
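These are the two lookups being discussed, shown against the real k8s.io/apimachinery RESTMapper interface; the GVR and kind values are placeholders, and how the mapper gets scoped per logical cluster is the open question.

```go
package gc

import (
	"k8s.io/apimachinery/pkg/api/meta"
	"k8s.io/apimachinery/pkg/runtime/schema"
)

func lookups(mapper meta.RESTMapper) error {
	// RESTMapping: preferred mapping for a group/kind, optionally
	// constrained by version. The risk noted above: two workspaces could
	// prefer different versions for the same kind.
	mapping, err := mapper.RESTMapping(schema.GroupKind{Group: "widgets.example.com", Kind: "Widget"})
	if err != nil {
		return err
	}
	_ = mapping.Resource // the GVR

	// KindFor: GVK for a GVR, which is what the graph builder asks for.
	gvk, err := mapper.KindFor(schema.GroupVersionResource{
		Group: "widgets.example.com", Version: "v1", Resource: "widgets",
	})
	if err != nil {
		return err
	}
	_ = gvk
	return nil
}
```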
A
So
let
me
yeah,
so
this
is
going
to
do
a
four
resource,
bang
and
add
event
handler
so
you'll.
Just
you'll
want
to
do
the
same
thing
that
we
did
for
quota
and
fortunately
you
don't
have
to
copy
any
or
you
know
if
they'll
write
anything
new,
we'll
just
yeah
we'll
expose
this
export
it.
So
you
can
we'll,
you
know,
move
it
to
a
common
package
for
scoping,
make
it
so
that
you
can
instantiate
it
from
other
packages
and
you'll
have
like
a
cube
quota.
A
Reconciler
like
we
have
now
that
uses
it.
You
can
have
a
cube,
GC
package
that
uses
it
as
well
and
then
you'll
just
pass
that
all
the
way
down
into
The
Constructor
here
and
then
it'll
be
good
to
go.
A: In quota, we didn't enable anything beyond counting yet. Kubernetes has quota on CPU in pods, memory, node port services, things that use spec data; that's not enabled right now, and we're going to have to figure out how to deal with that eventually. But I think for garbage collection, given that it's only looking at owner references, you'll be able to use that factory, and the REST mapping information won't matter, because the way that special factory works is it does a partial-metadata cross-cluster list and watch.
A
So
if
10
workspaces,
each
Define
their
own
variant
of
the
same
crd
and
the
spec
fields
are
different,
the
metadata
is
the
same
for
all
of
them
and
you
can
just
use
that
Informer
which-
and
so,
if
you
basically
say
like
give
me
all
the
widgets
you're,
going
to
get
the
metadata
for
all
the
widgets
and
you're
not
going
to
get
any
of
the
the
spec
information.
But
that's
perfect
because
that's
exactly
what
you
need
foreign.
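A sketch of a partial-metadata list with the real k8s.io/client-go metadata client (the GVR is a placeholder, and kcp's cross-cluster wiring is elided): whatever each workspace's spec looks like, you get back only metadata, which includes the owner references GC cares about.

```go
package gc

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/metadata"
	"k8s.io/client-go/rest"
)

func listWidgetMetadata(ctx context.Context, cfg *rest.Config) error {
	client, err := metadata.NewForConfig(cfg)
	if err != nil {
		return err
	}
	gvr := schema.GroupVersionResource{Group: "widgets.example.com", Version: "v1", Resource: "widgets"}
	// Returns PartialObjectMetadata items: identity plus ownerReferences,
	// no spec, regardless of how each workspace defines the CRD.
	list, err := client.Resource(gvr).List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	for i := range list.Items {
		_ = list.Items[i].GetOwnerReferences()
	}
	return nil
}
```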
B: That kind of relates to another question that I had, yeah. It was related to the identity, you know, to be able to differentiate multiple flavors of the same API. So, based on what you've just said, it doesn't really matter...
A: What logical cluster is it in, what namespace is it in, what is its GVR, and what are its owner references? Yeah, and that's common, regardless of how the schema is defined.
B: Yeah, yeah. Well, you did most of the work.
B: With the quota work done, it should be a lot easier than without it. Okay, well, that was the main question and the main criteria. I guess maybe, since we have a few more minutes: are there any performance concerns with going the way of running multiple controller instances compared to a single one?
A: I would say, as a kcp installation scales, you have more users and more workspaces. At any given point in time, there's going to be some sort of high-water mark in terms of what's active on average, and it's presumably not going to be the entirety of the key space. So hopefully we can amortize the goroutine and compute costs across a bunch of relatively small and not-so-active workspaces, but we'll see what the activity patterns look like.
B: Yeah, yeah. From what you've just shown, I feel like I should be able to do it, but we'll see. I'll work on it for the next couple of days and see how it goes.
A: I mean, feel free to reach out and say, "hey, does this look okay?", or ask clarifying questions. We're here to help.
B: That's great. I, how could I say, took the liberty of reaching out because, you know, existing controllers really rely on this a lot. In our case, pretty much all our resources depend on each other using owner references.
A: No... yeah. Daniel, do you have any questions or thoughts?
C: That was super helpful, just to be able to be a fly on the wall here. I've been combing through some of the issue backlog and picking up some stuff to get my feet wet a little bit, but yeah, super helpful. I appreciate you all letting me join in.
A: Awesome, thanks. Cool, all right. Well, Antonin, if you don't have any other questions, I think Steve and I will probably just stay on here and chat about whatever it is he wants to talk about.