From YouTube: Community Meeting, May 31, 2022
A: Hey everybody, today is May 31st. This is the kcp community meeting. We have some things on the agenda for today; if you would like to add something, please feel free. We will start by skipping over the first topic, incoming issues and milestone topics, which we save for the end. So the first one here is logo voting. Stefan, do you want to chat about this?
C: Hey, yeah. Mostly what I did is just put up an issue for folks to do emoji reactions to, and I counted those up when the deadline closed. If you scroll up slightly, you can see those results. Options two and three were the ones that had equal votes, and they're pretty similar, so I think we'll pursue that realm of graphic design and go from there. Did you want to talk about these ones specifically, Stefan?
C: Yes. Any opinions on color schemes and that kind of stuff? I think we can do some more exploration as well, and I know there are some Red Hat designers that get involved in community projects like this, just to make sure they have nice identities. So we can chat with those folks as well.
A: Yeah, I like the green and blue too. Maybe we should do a vote on general color ideas.
C: Sure, I mean, yeah. We can keep this issue open for folks to comment on things if you'd like, and either revisit next week or in some other period of time.
A: Why don't we plan on that? Rob, if you can do something similar to what you did up at the top, with instructions around colors and voting, we can circle back in a week or two.
A: So we haven't quite closed out 0.5. Today is the last day in May, so we may have a little bit of another day or two working on that. Before we get into 0.6, let me real quick just pull up the milestone.
A: These tend to be mostly the epics, several of them. The one at the bottom on code-generated scoping wrappers — those go together. For "deploy an app, SA kubeconfigs point to kcp", I think we just have that one PR from Joakim that we need to close out; that's still a work in progress. And then the advanced scheduling stuff.
A: So out of all of these things, what strictly has to get done for 0.5 to close out?
H: I talked to David today about maybe combining some of the work around this with the other virtual workspace authorization, so there's a framework together for all virtual workspaces.
H: Okay. If the goal is just to get it up, then maybe there's something tactical we could do, and then come back to this.
I: Sorry, just by the way: there is the corresponding one in the syncer virtual workspace.
I: Exactly. I mean, the SAR we have to emit is not the same, but the mechanics would be mainly the same. This is also a stretch goal of 0.5 — not made a blocker, but a stretch goal. So if we also move on the APIExport one on the stretch-goal side, both virtual workspaces would be at exactly the same status.
A: Okay, that sounds good to me. There is some work left in our fork of controller-runtime to get the indexing working correctly, and then I have something I started last week to make it so that you can easily spin up a main.go that, using a helper, can look up the APIExport, get the virtual workspace URL for it, and then write a controller that works appropriately.
A: That is not anything that needs to block 0.5, though; it can come in when it's ready. So, back to what's missing: I don't know that anything is really missing in the kcp server layer, other than fixing things. This would be nice to have.
A: Okay. So the schedule that we have right now is fairly tight: it would be next week for design and scoping, and then not even three weeks for getting to code complete. As before, we do want to try to be very careful about what we put in the milestone, instead of just continuing to kick the can down the road and pushing things from one milestone to the next. We want to only put things in the milestone that we realistically think we can and should be working on.
A: So the list that we have in here is a multi-release thing on sharding. Stefan, did you want to talk about what we have in here?
D: Yeah, I put the elephant there. That's a big thing; it's super important to get this in and have something, because it changes the future of our APIs, especially how users build controllers. So there's a doc, and comments are welcome. Something like a replication claim is one idea; other ideas are welcome, so please read through it.
A: Okay, so that's the starting topic, and then the next one is quota. I took a look last week at whether or not we could take the upstream Kubernetes admission controller for quota and, as a first step, just enable it per workspace.
A: So there'd be no roll-up or aggregation — no specifying quota at the top and having descendant workspaces participate in that. It would just be within a workspace: you specify quota and it works just like quota in a single cluster would. Beyond that, we would want to look into something that's more fully featured around roll-up, but probably not for 0.6.
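For reference, the per-workspace starting point described here would behave like a standard Kubernetes ResourceQuota applied inside a single workspace. A minimal sketch — the names and limits below are illustrative, not from the meeting:

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: workspace-quota        # illustrative name
spec:
  hard:
    count/configmaps: "20"     # standard object-count quota syntax
    count/secrets: "20"
```

With the upstream admission controller enabled per workspace, exceeding these counts inside that one workspace would be rejected exactly as it would be in a single cluster, with no aggregation across parent or child workspaces.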
A: Moving on, we want to do user home workspaces. Right now we have a virtual workspace server that handles personal workspaces and also will de-duplicate names if two people in the same org try to create a workspace with the same name. We would like to try to move instead to having a concept of a home directory, or a home workspace hierarchy.
A: That's separate from organization workspaces, so that it truly is tied to a user. The user would own what essentially is their own home workspace, or home org workspace, and in there they can create as many workspaces as whatever limits we allow. We feel like that's pretty important for running this at scale in a multi-org environment as well.
A: So we'd do a hierarchy in the key space that is something like root, users, some sort of bucketing mechanism, and then a username or user ID. That then essentially is like an org workspace, and you would be able to create individual sub-workspaces in there that represent apps or whatever you want. Because you would be admin in your top-level user workspace, you can create all the RBAC resources that you need to grant groups and users permission to your sub-workspaces.
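The bucketing mechanism in that hierarchy was left open in the meeting. One possible sketch, assuming — purely for illustration — a hash-derived two-character bucket, so that no single parent workspace has to hold every home workspace:

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// homePath sketches the root:users:<bucket>:<user> hierarchy discussed
// above. The bucket here is the first byte of a SHA-256 hash of the user
// name rendered as hex, spreading users across at most 256 buckets. This
// exact scheme is an assumption, not the actual kcp design.
func homePath(user string) string {
	sum := sha256.Sum256([]byte(user))
	bucket := fmt.Sprintf("%02x", sum[0])
	return fmt.Sprintf("root:users:%s:%s", bucket, user)
}

func main() {
	fmt.Println(homePath("alice"))
	fmt.Println(homePath("bob"))
}
```

Hashing keeps the bucket deterministic per user, so the same user always lands in the same home path without any coordination.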
I: So, I mean, as soon as we only share something under the home — the user home workspace — there is no specific design needed, only possibly some cosmetics with the kcp command line, to ease the sync and create the correct role bindings, but that's it. Yeah, that's what you mean? Okay.
A: Yeah. And then outside of the user home workspace hierarchy, in the normal organization hierarchy, we have the same UX that we have today, where you have to have appropriate permissions in the parent workspace to be able to create role bindings to grant access to children. Sure.
A: Alrighty, so moving on. Around continuing multi-workspace controller development, we have a sketch on permission claims. This is making it so that an APIExport author can define what APIs, and possibly what resources — by name or something — they need in their APIExport, so that they can access what they need to perform their business logic.
A: For that one, we may start with just doing GVRs — saying, within an APIExport, "I need secrets and config maps and namespaces", whatever — and then a controller going through the virtual workspace for that export would have access to those.
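In Go terms, the GVR-only starting point might look something like the sketch below. The type and field names are assumptions for illustration only; they are not the actual kcp API:

```go
package main

import "fmt"

// GroupVersionResource identifies an API resource, mirroring the usual
// Kubernetes GVR triple.
type GroupVersionResource struct {
	Group, Version, Resource string
}

// PermissionClaim is a sketch of the idea discussed above: an APIExport
// author lists the GVRs their controller needs. Field shapes here are
// hypothetical.
type PermissionClaim struct {
	GroupVersionResource
	// ResourceNames optionally narrows the claim to specific objects.
	ResourceNames []string
}

// claimed reports whether a request for gvr is covered by the claims, i.e.
// whether the virtual workspace should grant the controller access.
func claimed(claims []PermissionClaim, gvr GroupVersionResource) bool {
	for _, c := range claims {
		if c.GroupVersionResource == gvr {
			return true
		}
	}
	return false
}

func main() {
	claims := []PermissionClaim{
		{GroupVersionResource: GroupVersionResource{Group: "", Version: "v1", Resource: "secrets"}},
		{GroupVersionResource: GroupVersionResource{Group: "", Version: "v1", Resource: "configmaps"}},
	}
	fmt.Println(claimed(claims, GroupVersionResource{Group: "", Version: "v1", Resource: "secrets"})) // true
}
```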
A: But if you're interested in that, check out this doc. We also have work on cluster workspace types.
A: That's everything that's in here around the initializing-workspaces endpoint that merged, I think yesterday, plus fixing some security holes in cluster workspace types and authorization and whatnot. If you're interested in that, take a look at this issue. And then the last thing in here is around authorization for the APIExport virtual workspace, which Sean was talking about a few minutes ago.
D: Yeah, it's a big bucket of things. The whole diagnostics topic becomes more important now that we have the location API: people can use compute clusters — clusters they don't own — so they cannot really debug, and if something breaks they are lost. So we need something. If somebody is interested in that, events are an obvious step, maybe conditions on certain objects.
D: Any work in this direction is helpful. There are some design topics: location workspaces, which we talked about — making compute services more composable, so that you can add your own workload clusters to an existing compute service, using the same APIs but having your own clusters. How to implement that is a design topic, but we have to think about it.
D: The third one here is an item similar to the second, but to provide additional APIs. For example, if you have an OpenShift cluster and you export the Kubernetes APIs, you might have a second set of GVRs for OpenShift types, like DeploymentConfigs; similar things for other platforms, whatever is there in addition to the kube types. But not every cluster provides that connected resource set. It's probably the same as the second item.
D: Here we don't want the syncer to carry a list of resources; we want to have that somewhere in the API, so we have to come up with something. Either it's a different CRD, or it's inside of the workload cluster object — you can decide what to do, but we have to solve that.
I: So you mean that the syncer should not have the list of resources it should sync as an input, right? Yeah, yeah.
I: It would get it from the API in some way. But is it even necessary, knowing that the syncer virtual workspace already exposes only the APIs that are useful for the syncer? So in the future, the syncer would mainly just sync everything it sees.
D: The next one is: we have the location APIs, but we have no placement yet. When we have placement, users will want to know which locations exist, so "kubectl get locations" is the obvious CLI use case here, and we have to build a projected API. We have something similar with workspaces at the moment — workspaces are a projection of cluster workspaces — so we need a similar thing here for locations, and we can probably reuse lots of that code. That must be done. What else... yeah, the ugly downstream namespace names. I'm not sure anybody is happy with them.
D: They're like 63 characters long, I think. Anything shorter which is still safe enough — safe enough against naming conflicts — would be nice.
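One way shorter-but-collision-safe downstream names could work is hashing the identifying pair and keeping a fixed-length prefix. A sketch; the "kcp-" prefix, the 12-hex-character length, and the exact hash inputs are assumptions for illustration, not what kcp does today:

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// downstreamNamespace derives a short, deterministic downstream namespace
// name from the (workspace, upstream namespace) pair. Truncating a SHA-256
// hash keeps the name well under the 63-character Kubernetes limit while
// making accidental collisions between distinct pairs astronomically
// unlikely.
func downstreamNamespace(workspace, namespace string) string {
	sum := sha256.Sum256([]byte(workspace + "/" + namespace))
	return fmt.Sprintf("kcp-%x", sum[:6]) // 4 + 12 = 16 characters total
}

func main() {
	fmt.Println(downstreamNamespace("root:org:team", "default"))
}
```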
Placement in general: for the placement object we have sketches, and we know more or less what we want. Somebody has to come up with an initial small placement API — like a CRD which lives in a user workspace and can specify constraints, at least label constraints and label selectors.
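A minimal sketch of such a placement CRD's Go types, under the assumption that the first version only carries equality-based label constraints — all field names here are illustrative, not a settled API:

```go
package main

import "fmt"

// Placement sketches the "initial small placement API" mentioned above: an
// object living in a user workspace whose spec carries label constraints
// for selecting locations.
type Placement struct {
	Name string
	// LocationSelector holds required label key/value pairs — a very small
	// subset of a full Kubernetes label selector.
	LocationSelector map[string]string
}

// matches reports whether a location's labels satisfy the placement's
// selector: every required key/value pair must be present.
func (p Placement) matches(locationLabels map[string]string) bool {
	for k, v := range p.LocationSelector {
		if locationLabels[k] != v {
			return false
		}
	}
	return true
}

func main() {
	p := Placement{Name: "prod", LocationSelector: map[string]string{"region": "eu"}}
	fmt.Println(p.matches(map[string]string{"region": "eu", "tier": "gold"})) // true
	fmt.Println(p.matches(map[string]string{"region": "us"}))                 // false
}
```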
D: And we have placement authorization — not everybody should be able to deploy to prod clusters. If you have a prod location, you have to protect it somehow, so some kind of authorization. And then the ongoing topic I want to put here: pod logs, pod exec, port-forward — all those things that Antonio is working on, also on TMC.
A: Thanks, Stefan. So we've got some storage tasks potentially in scope, maybe. I know we had some folks with storage expertise who have been thinking and talking about this, so hopefully we'll get something there. I would think that, if possible, just playing around with a persistent volume claim and seeing what happens would be cool, and then maybe, for starters, having something that can influence the scheduler.
A: Next up, we've got networking, although it looks like we may need to reach out to some networking folks to see if they're looking into it. And then some security things here at the end. So this is a lot of work — I don't see any way that we're going to do all of this in a month or less. So I think we might want to...
A: Yeah, I've actually started work on that, but I paused. I did just enough to sketch out a foundation, and then paused when I realized I was going to need to go a few layers deeper. But I'd be happy to work on that, or pass it off to somebody else.
E: Yeah, exactly — that's what I was about to mention. Maybe it's a good point to start with names and say: the expectation here is that if your name is beside it, you'll lead some targeted design discussions, as well as creating the initial task breakdown, like we've done in GitHub in the past, that just kind of lists out here's what we think we can accomplish.
I: Sorry, I just had a question about the TMC compute work. Since it's a multi-release epic, do we envision also working on the stuff that is related to splitting — I mean, placing on several workload clusters at the same time? That obviously raises the topic of having a view per location — a view of the object per location — and also the transformations that are still waiting to be integrated. Or is it something that we keep for later on purpose?
D: Also, yeah — getting some parts in is also great.
I: Yeah, yeah. Obviously it could be split, or start with a very simple part that just maintains the view per location but does nothing with strategies and complex transformations, and then keep the rest for the future.
I: But I mean, I don't want an answer right now; I'm just raising this because, you know, when something has been prepared long ago and then the code changes, there is a requirement to rebase, and more and more the code base diverges from what was initially prepared. So I'm okay to drop that too, but I just want you to know.
D: No, no, don't drop it. Rebase and split it up, and then we see what we get in. It doesn't have to be a blocker — not everything must be a blocker. So, sure.
A: Right. So if everybody's okay with it, the next topic is Stefan's rendering package APIs. Yeah.
A: Yeah. What I want to do with this example here is have a fairly minimal main.go where you're able to set up a controller-runtime manager that is multi-workspace and multi-cluster aware, and then have a standard config for a reconciler; then you implement your reconciler, it's multi-cluster aware, and it just works. The first thing that has to happen is it has to look up the APIExport, go find the virtual workspace URL for it, and then go spin up all this stuff.
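The "look up the APIExport, find the virtual workspace URL" step might look roughly like this. The structs below only imitate the shape of the real kcp types so the sketch is self-contained; the names, fields, and example URL are assumptions:

```go
package main

import (
	"errors"
	"fmt"
)

// Reduced stand-ins for an APIExport whose status carries virtual
// workspace URLs. The real kcp types differ; this only sketches the
// lookup step described above.
type VirtualWorkspace struct {
	URL string
}

type APIExportStatus struct {
	VirtualWorkspaces []VirtualWorkspace
}

type APIExport struct {
	Name   string
	Status APIExportStatus
}

// virtualWorkspaceURL returns the URL a controller would point its client
// at, erroring if the export has not published one yet.
func virtualWorkspaceURL(export APIExport) (string, error) {
	if len(export.Status.VirtualWorkspaces) == 0 {
		return "", errors.New("APIExport has no virtual workspace URLs yet")
	}
	return export.Status.VirtualWorkspaces[0].URL, nil
}

func main() {
	export := APIExport{
		Name: "widgets.example.dev", // hypothetical export name
		Status: APIExportStatus{VirtualWorkspaces: []VirtualWorkspace{
			{URL: "https://kcp.example.com/services/apiexport/root:org/widgets.example.dev"},
		}},
	}
	url, err := virtualWorkspaceURL(export)
	fmt.Println(url, err)
}
```

The helper mentioned in the meeting would do this lookup against the live APIExport object and then hand the resulting URL to the controller-runtime manager's client config.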
A: So in that case, it does need to pull in the APIExport type, or work with it unstructured — either way.
A: It has to, until — so, we have some PRs that we haven't opened upstream yet that relate to the shared informer config and shared informer factory configs that we need.
A: We need them merged into upstream Kubernetes before folks can use an unmodified go.mod. I talked with Jordan, and he didn't think they would be controversial; we just need to open them. I was prioritizing working with folks to get the code working, so that we could enable anybody who wants to write a multi-cluster-aware controller to do so with the go.mod replace directives in, and then the next step is opening the upstream PRs.
A: So if you're pulling in, basically —
A: As you start to pull in more and more: our go.mod has v0.0.0 dependencies on all of these things. So if you pull in a dependency on kcp and it's got the zeros, then you have to do all of this. So if we have the zeros, right...
A: Okay. And Sean, you had a question in chat: should we do the kubebuilder-based calls or just controller-runtime? What did you mean by that?
H: Kubebuilder calls like they use from the book — the cron job reconciler or whatever example. That would be just as good, because it would just be like: here's what you change, and now it just works, based on the example that everybody uses from the kubebuilder book.
A: Okay, any other topics before we do issue triage?
A: All right, if y'all think of anything, feel free to speak up. I guess we'll start at the top with the new ones. Just to level set here: we're not trying to go deep into the details of things, we're just trying to set a milestone. So, this SAR for the service account sync verb.
D: Yeah, this is something I have to talk to Steve about. We don't want somebody in an org workspace to be able to hijack workspaces via initializers that are not in their workspace.
A: Creating workspace types you can't reference — I think this is definitely TBD. We need to fix it, but we have a workaround.
A: Enable the cache mutation detector — it's already caught one thing when I was running it manually, and I know Stefan found another one. So I'm going to be putting TBD on most of these things unless you all speak up. This one, I know Steve filed and we talked about it: we're setting the storage version hash and including the logical cluster name as part of it, and Steve was questioning whether that made sense.
A: Expose workload cluster capabilities via the workload API — Stefan, you looked into this a little bit, and you cross-referenced 1084.
A: Okay — oh, I want to put a milestone on this.
A: Assuming we do the user home workspaces, yeah, I think this just goes away, because I assume we'll get rid of the virtual workspace for personal workspaces, and all of the magic around pretty names and the role bindings that look for the right subject.
I: I still have the question of how we make it so that that workspace is known to be created by a given user. For now we use cluster role bindings for pretty names, right? But before using them for pretty names, we also used them to indicate that a workspace had been created by a given user, you know.
I: Any workspace that would be created above the user workspace level would be created by some admin and would not have any owner per se — just users and admins that have permissions to it, but no real owner. Yeah.
A: Okay, all right. I kind of just want to close this one, because we're not going to come back and fix it — the fix is to switch to home workspaces.
A: Okay — deploy kcp with crdb as the backing store: TBD, or do we want to try it? I think any planning beyond 0.6 we'll do when we start planning 0.7, right?
A: Stefan, what do you want to do with this one?
I: Yeah, this one can mainly be closed. I mean, it was mainly raised just to be sure we would clarify it.
A: "Design better controller structural patterns" — I still would like to do that. Labels, documentation.
A: Do we want to keep this or close this, given that the shard proxy was more of a proof of concept?
A: 447 was the one I was looking for. Yeah.
A: Okay, we are out of time. I think we did a great job getting through most of these — we only have nine left, because I'm excluding the logo, which I'm just going to get out of our list while we're here. So thanks, everybody, and we will see you next time.