From YouTube: Community Meeting, August 16, 2022
A
Hello, everybody, welcome to the community meeting for kcp, August 16th.
B
Hey, yeah, so I was on this community call a few weeks ago discussing this issue of the namespace-scoped finalizers as a feature request. The main problem this would be solving is that it's possible to think of a workload in a namespace being synced to a sync target, and wanting to keep that up for a length of time until the finalizers are removed from, for example, the deployment, which isn't really guaranteed to keep that hold up.
B
If other CRs in that sync target get cleared out too early, and there isn't any particular... With the current advanced scheduling system, there isn't really a feasible way to make sure that the syncer's finalizers are properly added to every possible kind of resource that might exist. Whereas if the finalizers were set at the namespace and then propagated down to every resource in that namespace that gets synced, then that sort of solves the problem all at once, which seems like a good way to go.
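Roughly, the idea being described: a syncer finalizer set on the namespace would be copied onto every resource synced from that namespace, so nothing can be removed from the losing sync target until that finalizer is released. A minimal Go sketch of that propagation, assuming a hypothetical finalizer name and helper; this is not the actual kcp syncer code:

```go
package main

import (
	"fmt"
	"strings"

	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
)

// finalizerPrefix is a hypothetical prefix for syncer finalizers; the name
// actually used by kcp may differ.
const finalizerPrefix = "workload.kcp.dev/syncer-"

// propagateFinalizers copies any syncer finalizers found on the namespace onto
// a resource in that namespace, so the resource cannot be deleted from the
// losing sync target until the namespace-level finalizer is released.
func propagateFinalizers(ns, obj *unstructured.Unstructured) {
	existing := map[string]bool{}
	for _, f := range obj.GetFinalizers() {
		existing[f] = true
	}
	finalizers := obj.GetFinalizers()
	for _, f := range ns.GetFinalizers() {
		if strings.HasPrefix(f, finalizerPrefix) && !existing[f] {
			finalizers = append(finalizers, f)
		}
	}
	obj.SetFinalizers(finalizers)
}

func main() {
	ns := &unstructured.Unstructured{Object: map[string]interface{}{}}
	ns.SetAPIVersion("v1")
	ns.SetKind("Namespace")
	ns.SetName("team-a")
	ns.SetFinalizers([]string{finalizerPrefix + "us-east1"})

	deployment := &unstructured.Unstructured{Object: map[string]interface{}{}}
	deployment.SetAPIVersion("apps/v1")
	deployment.SetKind("Deployment")
	deployment.SetNamespace("team-a")
	deployment.SetName("my-app")

	propagateFinalizers(ns, deployment)
	fmt.Println(deployment.GetFinalizers()) // [workload.kcp.dev/syncer-us-east1]
}
```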
B
Seeing as everything in the namespace is always synced to the same sync target anyway, you would think they should all be cleaned up together; it's hard to imagine a scenario where you would want the deployment to stay for longer than the service or whatever that's using it. You know, they're often kind of related to each other. So I did talk about this with you before, and there was some good discussion about it.
B
Then I was asked to provide a more concrete example, and I think the idea was to use that concrete example to maybe take it into a workshop or something where this could be discussed. So I just wanted to bring it back up and say that I've added the example now; I'd be interested in knowing what next steps we could take regarding this.
B
We do... we have talked about some sort of health check to test whether a workload is up. That's certainly connected to this, in terms of: we don't want to allow a workload to be deleted from a sync target until we know it's up on the new one, and so I would think the health check is around...
B
The point of the health check is determining the event that causes us to remove the finalizers on the losing sync target. Yes, and so what this ticket in particular is talking about is how we can ensure that nothing is deleted from the losing sync target until we are ready for that to happen, and for that to cover resources that we don't even know will be synced yet.
B
Yeah, I'm happy to set up a call or something where we can talk through potential solutions and how to implement this. I have discussed it briefly with Craig and we have some thoughts on this already, but I can certainly set up a call, if that's a good next step.
C
If you wouldn't mind just clicking on the link, I think... So I started a document to add on to the quota support that we currently have, which is just for namespace-scoped resources, and this is a proposal for a short-term solution to extend that quota to cover cluster-scoped resources as well.
C
This functionality is not available in upstream Kubernetes because, if you're just working with a single cluster, you presumably have to have admin-level permissions to create things that are cluster-scoped, and so there just isn't support for putting quota on those things. But in a multi-tenant setup like kcp, where you own your workspace and you have admin there, and you might be sharing it with other people, it does make sense to be able to put quota on things like namespaces, other child workspaces, and so on.
C
So this is a proposal for how to do that. It is to be considered a short-term implementation, because it's a bit hacky, reusing the ResourceQuota type to make it support cluster scoping. The proposal here is that we designate a single namespace (in the proposal I call it admin), and in that namespace, if you create a resource quota... and if you could scroll down a little bit to the example.
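The example from the document isn't reproduced in the recording, but the shape being proposed is roughly an ordinary ResourceQuota placed in the designated namespace and marked as applying to cluster-scoped objects. A minimal sketch, assuming a hypothetical admin namespace, annotation key, and resource names; the actual proposal may use different ones:

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// A hypothetical cluster-scoped quota: an ordinary ResourceQuota, but placed
	// in the designated "admin" namespace and annotated so that kcp would treat
	// its limits as applying to cluster-scoped objects in the whole workspace.
	// The annotation key and the resource names below are illustrative guesses,
	// not the names from the actual proposal.
	quota := corev1.ResourceQuota{
		TypeMeta: metav1.TypeMeta{APIVersion: "v1", Kind: "ResourceQuota"},
		ObjectMeta: metav1.ObjectMeta{
			Name:      "cluster-scoped",
			Namespace: "admin",
			Annotations: map[string]string{
				"experimental.quota.kcp.dev/cluster-scoped": "true",
			},
		},
		Spec: corev1.ResourceQuotaSpec{
			Hard: corev1.ResourceList{
				// Limit cluster-scoped objects such as namespaces and child workspaces.
				"count/namespaces":                        resource.MustParse("10"),
				"count/clusterworkspaces.tenancy.kcp.dev": resource.MustParse("5"),
			},
		},
	}

	out, _ := json.MarshalIndent(quota, "", "  ")
	fmt.Println(string(out))
}
```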
C
You
additionally,
can
continue
to
do
per
namespace
quota,
so
you
could
create
a
resource
quota
in
some
other
namespace
and
say
this
namespace
can
only
have
seven
config
maps,
but
this
does
allow
to
to
do
quota
of
clusterscope
things,
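The per-namespace case mentioned here is just stock Kubernetes object-count quota; for example (the namespace name is made up):

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Ordinary namespace-scoped object-count quota, exactly as in upstream
	// Kubernetes: at most seven ConfigMaps in the (made-up) "team-a" namespace.
	quota := corev1.ResourceQuota{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "ResourceQuota"},
		ObjectMeta: metav1.ObjectMeta{Name: "configmap-count", Namespace: "team-a"},
		Spec: corev1.ResourceQuotaSpec{
			Hard: corev1.ResourceList{"count/configmaps": resource.MustParse("7")},
		},
	}

	out, _ := json.MarshalIndent(quota, "", "  ")
	fmt.Println(string(out))
}
```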
And the reason that I say this is probably a short-term implementation is because it is a bit hacky: you have to know that you need to put it in the right namespace.
C
You have to know that you need to put the annotation on there, and it's probably better, long term, to have a separate type, kind of something like ClusterResourceQuota, that would do this. And we also know that in the long term we want to have aggregated quota that rolls up, so you could say, at the top level of some hierarchy, I only want to have five total workspaces, and whether those workspaces are all at the same level or nested as children and grandchildren, the total would be five. We don't have support for that yet, so this is an intermediate solution. If you're interested, please take a look; I'm hoping to begin doing this soon-ish.
C
And Frederick has a question about ClusterResourceQuota. So, it's kind of subtle: the OpenShift ClusterResourceQuota lets you quota namespace-scoped things in aggregate at the cluster level. So you could say, I only want 30 config maps across all namespaces in my cluster, but it doesn't let you quota cluster-scoped things like namespaces and workspaces.
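For contrast, a rough sketch of what the OpenShift ClusterResourceQuota does, written from memory of the quota.openshift.io/v1 API, so the field names should be double-checked: it caps namespace-scoped objects aggregated across selected namespaces, but has no notion of quota for cluster-scoped kinds.

```go
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// Rough shape of an OpenShift ClusterResourceQuota (quota.openshift.io/v1),
	// written as a plain map since the exact Go types are not vendored here:
	// it caps namespace-scoped objects (here 30 ConfigMaps) aggregated across
	// all namespaces matching the selector, but it has no notion of quota for
	// cluster-scoped kinds such as Namespaces or kcp workspaces.
	crq := map[string]interface{}{
		"apiVersion": "quota.openshift.io/v1",
		"kind":       "ClusterResourceQuota",
		"metadata":   map[string]interface{}{"name": "configmaps-overall"},
		"spec": map[string]interface{}{
			"selector": map[string]interface{}{
				"labels": map[string]interface{}{
					"matchLabels": map[string]interface{}{"team": "a"},
				},
			},
			"quota": map[string]interface{}{
				"hard": map[string]interface{}{"count/configmaps": "30"},
			},
		},
	}

	out, _ := json.MarshalIndent(crq, "", "  ")
	fmt.Println(string(out))
}
```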
C
And yes, Stefan, to your point, I will probably have a follow-up proposal on kcp-owned resources for things like resource quota, where the platform owner, whoever is running and managing kcp, can specify an annotation that indicates this resource is owned by the platform and users can't change it. That would allow us, with a cluster workspace type initializer, for example, to inject a resource quota instance into every newly created workspace.
D
Sure, we hit this earlier today; somebody's got... There are a lot of docs on how to do bring-your-own compute, and some of them tell you which version of the syncer to use. It would be nice if, when you ran it, a syncer image were defaulted for you, one that just made sense, so that people didn't have to worry about version skew.
A
Upstream tags, basically, yeah. So I would put it on TBD; I think we have a couple of syncer work and skew topics, right. You have another one, Steve, that you brought up: you want kubectl in general talking to kcp? I don't know if...
D
Yes, so basically, what I've described there: when we delete a namespace in kcp, the downstream namespace remains; it is not deleted.
D
Well, that PR fixes other stuff, like... you can see the referenced issue in the PR. But of course, while fixing that, I discovered that we are not deleting those namespaces, and it will require some, I would say, different changes, as we are not monitoring the upstream namespaces or reacting to upstream namespace changes. So, you know, I wanted to get one fix in first and then jump into the other side. So that's good. So...
C
I did a PR that's in the queue that will reduce our resource usage, so maybe this will fix it. I think we've only seen it once, and it literally took 28 seconds to list CRDs, and then the test timed out.
C
Wait, this is... these are our system-owned bindings? Like, I mean, I guess you could delete them.
D
Just as a small hint: many of those cert skew issues might be because of a VM that has been suspended; for many of those, maybe just retrying.
C
You can promote it, because in this case the delete would have a finalizer; the finalizer clears, and kubectl will return back to the user. And you can see they did a get, there was no workspace, and then they did a delete, and it said okay. Okay, so yeah, I would call this a bug, probably in the virtual workspace, but...
A
Get workspaces... oh, I might know: we still have the informer, which is filtered by the auth cache in the virtual workspace. So basically we have a little non-conformant behavior for listing.
D
So I created a deployment: I converted the StatefulSet to a Deployment, so the serviceName was there by accident, which is not in the schema, and then we saw this "should not happen" here in the log, and nothing else.
D
I could create the deployment just fine in kcp, and David helped me debug, and we found the exact point. But then we found out that this field was out of schema, and when I tried the same thing on an API server in kind, I just got the error from the server, of course, but nothing. Nothing.
D
Yes, exactly. It seems to me that, in the way we use the CRDs, or maybe in the way the CRD handler has been tweaked for kcp, some bug may have been there for quite some time, so that it doesn't reject it; it supports additional fields, because you have such an option when you, you know, create the schema validators and all this stuff. And it might be that we have had this error for quite some time, because, you know, it's very...
D
You know, it's a very specific case. The case is where you, you know, create an object and add a field which is not supported.
D
On the client side, then this would also have failed in the first case, yeah. The thing is that normally...
D
Well, I don't know, but what I mean is, my feeling is that there is a bug in the way we support, in the kcp fork of kube, OpenAPI schemas for the standard resources, you know, for the native resources like deployments, because there is obviously a specific case in the kcp kube which included the support of OpenAPI...
A
Steve, it was for what you talked about, right? Yeah, okay.