From YouTube: Community Meeting, May 24, 2022
A: All right, let's kick it off! Welcome to the kcp community meeting, May 24th. We have topics on the agenda and there's still space, so if you want to add something, just go ahead. A couple of things first which are pretty short. We have the usual issues topic here, which we'll put at the end. The next one is just a reminder: this is the issue that Rob opened two weeks ago or so. You can go there and vote for logos; it's open until tomorrow, and then we have a choice.
A: It is a selection of candidates. I think we decided to vote again on two or something, but go there and cast your vote. All right. There was KubeCon last week; kcp was a topic and people talked about it. I presented something at the contributor summit: I gave a talk in front of, I don't know, 80 or 100 people.
A: People were super curious, and for many it was the first time they really saw what we built. What was visible months ago is certainly not what we have built, and it's pretty great what we have now. People like it, and people who like v-cluster and other solutions are super curious. So that was a good experience.
A: There's also a public lightning talk I gave at Rejekts, in the slot for lightning talks. Take a look at that; the slogan is "one million clusters".
A: People just returned from KubeCon, so that's helpful. What else do we have? Just a pointer to 1084 for everybody interested in scheduling and locations.
A: I want to remind ourselves where we are, and I think it's a pretty good place, a good position in the project now. We are pretty near to closing the gaps which we knew were still there, and we are not too far from building a complete bridge over the problem domain we defined months ago. So this picture, I think, is pretty pessimistic.
A: Maybe that's the one from version 0.4; it's not 0.5. When 0.5 is done, I think we are much, much further. Just to repeat what we had planned: we have multi-tenancy, and everybody knows that workspaces are a thing. I think they work well. We have a CLI tool, everybody does things in workspaces, and I think this is pretty cool and works well.
A: The next thing we have been working on for quite some time is providing APIs to workspaces, to user workspaces, centered around a persona which Kube doesn't have: an API provider, a service provider which exports an API that users can consume. APIExport and APIBinding we have had since 0.4, I think, and we are very near to having secure controllers, which means the virtual workspace work that Andy started on top of David's work. I hope we get it done this week or early next week or so.
A: The next item, which is also 0.5, is compute as a utility. This is all the work around the syncer, scheduling, the syncer virtual workspace, the state machine that Joakim was doing; lots of people were involved. When the location PR merges, that's the last bit: in this multi-release, multi-prototype epic we then have the green check mark as well. So by the time we merge those two things we are pretty near. There's just one big thing we have to prove, and this is scale-out. So by 0...
A: The MVP is basically the point in time when all the things which we need to bring value have been done, like when we have crossed the water; there's a bay here, the Golden Gate in this case. We need all of those pieces: if one is missing, kcp likely has not enough value. So we have to prove scale-out, and we're just starting to optimize our APIs even more for the use case that you have sharding, so that you don't have everything locally on one shard.
A: Yes, go ahead.

B: So you'd say that things like garbage collection and quota are not about proving things out, but about adding the missing bits?
A: They are doable; we know it's not a new problem, right? We know there's distributed quota and GC, and we know pretty well that it's just another controller we have to split apart and make workspace-aware. So there is work, no question, but the proof that kcp can stand on its own feet and brings value, that's those items here, I think.
B: Stefan got one of my PRs merged. We had an issue where, if you were trying to do a cross-cluster wildcard list or watch, and it was a partial metadata request (give me all the widgets, and I don't really care about the details, I just want to see metadata: labels, annotations, name, cluster name, that sort of thing), we previously were not including any custom resources that came via API bindings.
B: But if you were writing tests, for example, that had something you wanted to get scheduled, or that you were manually scheduling by setting the label and expecting it to just stay scheduled: that is now seen by the resource controller, and it will generally get assigned to whatever the namespace it's in is assigned to, so you may see some unexpected behavior in your tests. The reason this is really important is that the namespace scheduler needs to be able to see a full view of partial metadata for everything, regardless of whether it came from an API binding or not, so that it can do scheduling. Next up will be getting the APIExport virtual workspace in and then switching it.
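To make the shape of that request concrete, here is a minimal sketch, not from the meeting itself, of a cross-workspace wildcard list that asks only for partial metadata. It assumes kcp's convention of serving logical clusters under /clusters/name, with "*" as the wildcard, and uses a made-up "widgets" resource:

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/metadata"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "admin.kubeconfig")
	if err != nil {
		panic(err)
	}
	// Aim the client at the wildcard logical-cluster path so the list
	// spans all workspaces (assumes kcp's /clusters/* convention).
	cfg.Host += "/clusters/*"

	// The metadata client only ever retrieves PartialObjectMetadata:
	// labels, annotations, name, namespace and so on, never spec/status.
	client, err := metadata.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// "widgets" stands in for any resource, including ones that reach a
	// workspace via an APIBinding; those are what the merged PR now
	// includes in such lists.
	gvr := schema.GroupVersionResource{Group: "example.dev", Version: "v1", Resource: "widgets"}
	widgets, err := client.Resource(gvr).List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, w := range widgets.Items {
		fmt.Println(w.GetNamespace(), w.GetName(), w.GetLabels())
	}
}
```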
B: If you're trying to do a cross-cluster or cross-workspace list and watch, you'll have to do it via APIExports and APIBindings. So you won't be able to just create a CRD in a workspace, add the same CRD to lots of other workspaces, and do a cross-cluster list and watch on that; that won't work in the future. Soon.
B: That makes sense, right? But we need to add some functionality to the APIExport virtual workspace, so that if you know you're providing a set of APIs and exporting them, and you also need configmaps, secrets, that sort of thing, you'll go through the APIExport URL. Initially, I think, we're probably just going to auto-bind and make available all of core v1.
B: But eventually, what Stefan is showing is that you'll be able to indicate what additional resources you need through your APIExport virtual workspace, so that it's not just all of core v1 or whatnot.
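Purely as a hypothetical sketch of that direction: an APIExport might one day declare the extra resources it needs next to its schemas. The apiVersion, kind, and latestResourceSchemas below reflect kcp's apis.kcp.dev group of this era, while the extraResources field is invented to illustrate the idea, nothing more:

```go
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
)

func main() {
	export := &unstructured.Unstructured{Object: map[string]interface{}{
		"apiVersion": "apis.kcp.dev/v1alpha1",
		"kind":       "APIExport",
		"metadata":   map[string]interface{}{"name": "widgets"},
		"spec": map[string]interface{}{
			"latestResourceSchemas": []interface{}{"v1.widgets.example.dev"},
			// Hypothetical field: ask to also see these core resources
			// through the APIExport virtual workspace, instead of
			// auto-binding all of core v1.
			"extraResources": []interface{}{
				map[string]interface{}{"group": "", "resource": "configmaps"},
				map[string]interface{}{"group": "", "resource": "secrets"},
			},
		},
	}}
	fmt.Println(export.GetName())
}
```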
A: The same thing should happen here at some point: if you subscribe, if you buy into a service, it tells you "if you bind, you give this service access to configmaps", and you can acknowledge or you can stop the binding. This is part of what I have shown as secure controllers: controllers cannot do anything beyond what is specified there.
B: The CA: for kube, we fill in the token for the API server, which is pointing back to kcp, and we also fill in the kcp URL for the kubernetes service. And so when you have kcp deployed with a route in front of it, or a proxy in front of it, that has a certificate that you have configured manually or some other way...
B: ...it's picking up this CA that we created, which is not valid for the public-facing certificate for kcp. So Joakim had thought through some options, which he pasted in Slack, which I lost... there we go.
B: We could put a flag on kcp so that it doesn't generate the root CA for kcp in a config map; then the syncer could see if it exists or not, and if it doesn't exist, not do anything. We could add a field on something, he said, maybe the workspace or the workload cluster, to be able to enable or disable this behavior. Or we could do some magic that detects that the certificates are different and then does the right thing.
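A rough sketch of how the first option could look from the syncer's side; the flag behavior, the ConfigMap name, and the flow here are assumptions for illustration, not the current code:

```go
package syncer

import (
	"context"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// syncRootCA illustrates option one above: if kcp was started with the
// (hypothetical) flag that suppresses the root CA ConfigMap, the Get
// returns NotFound and the syncer leaves downstream trust alone.
func syncRootCA(ctx context.Context, kcp, downstream kubernetes.Interface, ns string) error {
	cm, err := kcp.CoreV1().ConfigMaps(ns).Get(ctx, "kcp-root-ca.crt", metav1.GetOptions{})
	if apierrors.IsNotFound(err) {
		return nil // no kcp-generated CA published: nothing to send down
	}
	if err != nil {
		return err
	}
	out := cm.DeepCopy()
	out.ResourceVersion = "" // new object downstream
	_, err = downstream.CoreV1().ConfigMaps(ns).Create(ctx, out, metav1.CreateOptions{})
	if apierrors.IsAlreadyExists(err) {
		return nil // an update/patch path is omitted for brevity
	}
	return err
}
```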
B: So I mean, this is really topology dependent. If you have a pod, or you know, a deployment that's connecting back to kcp and it's on the right network, and it gets the IP or URL correct, then it can connect; but maybe it should be using an internal cert. Or the topology is like this one, where you've got a front proxy with a real cert that is trusted by the root bundle.
B: Well, but even... I have to look at the code, I haven't looked at it, but even with the manifests that we have in the repo: if you deploy using those manifests and you go with, say, an ACME Let's Encrypt cert for the front proxy, the syncer is still going to send down the wrong CA right now.
B: Okay, I'll sync up with him offline. Okay.
B: I know that's probably a thornier and bigger chunk of work, but if you're looking for something that's going to be very important, that would be cool to get some help on.
A: So any work is welcome in this area, both conceptually, like the capacity topic Dominic just mentioned, and around the scheduler itself. It's a namespace controller, which was partly replaced by the placement controller, and there's a scheduling reconciler inside; that's basically our scheduler. Anybody who knows scheduling, like building complex computational logic for scheduling, and the virtual workspaces which will be involved: it will probably look different than a normal controller, because it's much more about computing that output.
F: Yes, hi. I just need a little bit of direction, maybe: I'm looking for how to sync cluster-scoped resources.
B: Yeah, I know we talked about this in some of the storage meetings. We probably just need to think through what...
B: What does it mean to define a storage class in the control plane, and which clusters does it get synced to? Is it all of them? Do we need some better, placement- and location-related way to get a storage class to the right place? But I do think this is one specific type where we want to treat it as a thing that the control plane does support, and make sure it gets scheduled to the right sets of clusters.
A: Yeah, that's just a thought; I'm not sure. I mean, publishing means push, right, or at least the syncer will pull it from somewhere, something like that. But the alternative view is, of course, that the workload cluster admin will just provide everything needed on those clusters and then add a mapping in kcp to make use of those objects. That would move the whole problem outside of the syncer domain.
F: Yes, I guess we are just experimenting, right? It's not that we know this is the best way. We looked for a way to pre-populate the control plane with the menu of classes.
F: If you look at a single cluster, storage classes are, like, you know, a menu of what you can pick from, and we wanted to start from the same position for kcp. Not that there aren't more abstractions needed; we know we want some constructions above it, like things which aggregate storage classes into one, like a virtual store, etc. But we just looked for a mechanism where we can...
F: ...where we can make those storage classes, first of all, get synced, or appear in the control plane. And then I guess the scope that we looked for was something like, you know...
F: I want to have a workspace, at some level, right, a workspace that gets populated with all the storage classes of the clusters that sync to it. But I don't know if the controller even has a concept of something, you know, cluster-scoped which would resolve to a workspace scope, right, I guess.
A: You can build a controller which picks objects from a workspace and puts them somewhere, like on workload clusters, but there's no concept of a resource which is really workspace-scoped; it doesn't really make sense, I think. So my gut feeling is that a mechanism like that is interesting, but it's probably a different one than syncing.
A: Replication, like taking objects from user workspaces and putting them into namespaces on a physical cluster, that's what we call syncing. What you are describing is also about copying stuff, but for different purposes: there's no transformation, and technically it really doesn't go through the virtual workspace. So it seems like it's a different...
F: ...problem? I'm not sure, I'm not sure I see the big difference, but maybe I'm missing something. I mean, is it just a completely different thing?
B: So I know that somebody did an exploration into what it would look like to have sort of a kcp-level storage class that then gets mapped down to different storage classes in physical clusters. I'm just checking to see if we can open that doc up to the community; that could provide some more context as well.
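For a sense of what such a mapping could amount to in its simplest form, here is an invented lookup-table sketch; the names are made up and it is not taken from that doc:

```go
package storage

// storageClassMap sketches a hypothetical kcp-level storage class that
// resolves to different physical classes per workload cluster.
var storageClassMap = map[string]map[string]string{
	// kcp-level class -> workload cluster -> physical storage class
	"standard": {
		"gke-us-east": "standard-rwo",
		"on-prem":     "ceph-rbd",
	},
}

// resolve returns the physical class a claim should use on a given cluster.
func resolve(kcpClass, cluster string) (string, bool) {
	phys, ok := storageClassMap[kcpClass][cluster]
	return phys, ok
}
```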
A: I want to clarify what I just said: I'm not against the mechanism, I'm just saying the syncer main control loop, as we have it today, is not made for this purpose. But there can be a second one, like a cluster-scoped manifest-deployment control loop.
B: What I'm trying to get at is: things that are cluster-scoped are not namespaced, by definition. When kcp syncs from various workspaces to physical clusters, we map from the pairing of what is essentially a workspace-unique name and its namespace into a single namespace in the physical cluster.
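A toy version of that flattening, just to pin the idea down (the hashing details kcp actually uses differ):

```go
package syncer

import (
	"crypto/sha256"
	"encoding/hex"
)

// downstreamNamespace shows why cluster-scoped objects don't fit the
// current model: the syncer flattens (logical cluster, namespace) pairs
// into one deterministic downstream namespace, so everything it places
// is namespaced by construction.
func downstreamNamespace(logicalCluster, namespace string) string {
	sum := sha256.Sum224([]byte(logicalCluster + "/" + namespace))
	return "kcp-" + hex.EncodeToString(sum[:])[:12]
}
```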
F: No... so there might be some translation, but I mean, the point is that we need information about the storage types in the cluster, right? That's what we're looking for in the control plane, and then in the control plane to be able to construct...
G: Yeah, sorry, by the way, HCG kind of has... we were just talking about that today. We had a kind of similar desire where, for example, the status of a Gateway API Gateway resource, which is a shared resource across all the HTTPRoutes that can use it, gets updated with some status that we would like to reflect up into the kcp cluster, so that our control plane can get information out of it.
A: For small pieces of information we have the status of the WorkloadCluster, which will be written by the syncer. So if the space there is enough, small stuff, basically metadata, we can put it there. It should be pretty much generic, so it shouldn't be hard-coded for storage; otherwise some gateway folks will come, as you say, and want something else in this status, and it's not generic. So it must be generic.
F: Say, where would that be? You know, is it in the syncer, is it in a reconciler, where is that?
A: No, it wouldn't be in these; you'd need a second controller in the syncer, and there's nothing like that at the moment. At the moment we have basically, I think, three controllers: we have the heartbeat, we have the spec syncer, which syncs from kcp down to the workload cluster, and we have the status syncer, which syncs status upwards.
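Structurally the syncer then looks roughly like this sketch, with invented names; the fourth loop is the one being proposed here, not something that exists:

```go
package syncer

import "context"

// starter is the common shape of the syncer's control loops in this sketch.
type starter interface{ Start(ctx context.Context) }

// run shows the three existing loops just described, plus where a new
// workload-cluster status controller could sit alongside them.
func run(ctx context.Context, heartbeat, specSyncer, statusSyncer, clusterStatus starter) {
	go heartbeat.Start(ctx)    // tell kcp this workload cluster is alive
	go specSyncer.Start(ctx)   // kcp -> workload cluster: desired objects
	go statusSyncer.Start(ctx) // workload cluster -> kcp: object status
	// Proposed: report cluster-level facts (storage classes, gateway
	// status, ...) into the WorkloadCluster status or similar.
	go clusterStatus.Start(ctx)
	<-ctx.Done()
}
```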
A: No, sorry, that's status about objects from the user; it's not status of the cluster. I don't think we have anything for this purpose, so you would have to build another controller.
F: Yeah, and so if it does, do you think it should be added to the heartbeat one, or, as you said, a new one anyway?
A: No, sorry: independent, completely independent. Maybe we need a status controller, like a workload cluster status controller in the syncer, which sits next to the heartbeat. Maybe the heartbeat is part of it, I don't know, but the heartbeat one is super simple, a ten-liner infinite loop. It's not what you want.
F: I'll tell you what I mean. The concept we were trying to follow, and I don't know if that's correct or not, says that 95% of the resources should look like Kubernetes. So the idea was to sync back storage classes and nothing, you know, different. But yeah.
F: But that kind of becomes, like, the non-core-Kubernetes part... yeah.
A: You know, we should start this discussion in an issue, or just a Google Doc: try to write down your ideas, what you would like to have, and invite people to give opinions and direction.
A: Yeah, Andy shared the email address, so we have the Google group, and that's a good place to invite people; everybody who's interested in kcp will see it when you notify them that there's something new. We have plenty of those documents already. We usually call them "exploration something", so if you open an "exploration: storage classes" doc and start there, that's a good place.
A: We have the usual routine of looking through new issues. Last week there was no meeting, right, so it's everything which is basically two weeks old.
B: And the workaround was just to create a different role binding... can you comment?
B: Yeah, this one's hard. I know what this is: this is just having a dry-run, output-YAML option for the kubectl kcp workload sync command, because right now it connects directly to kcp and creates some stuff, and then it spits out YAML that you have to apply to the physical cluster.
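A small sketch of what that option might look like in the plugin; the flag name and the render/apply split are assumptions:

```go
package plugin

import (
	"fmt"

	"github.com/spf13/cobra"
)

// addDryRun sketches the requested option: with --dry-run (hypothetical
// flag), "kubectl kcp workload sync" would only print the manifests it
// was going to create instead of writing anything to kcp first.
func addDryRun(cmd *cobra.Command, render func() (string, error), apply func() error) {
	dryRun := cmd.Flags().Bool("dry-run", false, "print manifests without creating anything in kcp")
	cmd.RunE = func(cmd *cobra.Command, args []string) error {
		out, err := render()
		if err != nil {
			return err
		}
		if *dryRun {
			fmt.Fprint(cmd.OutOrStdout(), out) // show, don't apply
			return nil
		}
		if err := apply(); err != nil { // create objects in kcp
			return err
		}
		fmt.Fprint(cmd.OutOrStdout(), out) // YAML for the physical cluster
		return nil
	}
}
```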
A: Let me see if I can... yeah, I have a gut feeling. We will have the topic of defining what a compute service is, and this is very similar to the resource set we talked about earlier, so we might get a different way of specifying resources, which would probably mean they are just GVRs.
A: Because you don't want to specify that in all syncer manifests, right? The syncer should know what to sync, and if all workload clusters should get a new resource to sync, then that should be set at a central place, somehow via the API. So the flag is actually not really the user interface we want; we want something like an API-based one. Well...
B: I think there's two aspects to it. One is that we want to move the resources flag from the syncer into the workload cluster, and two, which I think could apply in both places if we wanted it to, is to allow the use of a REST mapper to resolve things. Now, you know, if you're saying that you want it to always be fully qualified...
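The REST-mapper half of that could look roughly like the following, a generic client-go pattern rather than the syncer's actual code: resolving a short name like "ingresses" to a fully qualified group/version/resource via discovery:

```go
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/discovery"
	"k8s.io/client-go/discovery/cached/memory"
	"k8s.io/client-go/restmapper"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "cluster.kubeconfig")
	if err != nil {
		panic(err)
	}
	disc, err := discovery.NewDiscoveryClientForConfig(cfg)
	if err != nil {
		panic(err)
	}
	mapper := restmapper.NewDeferredDiscoveryRESTMapper(memory.NewMemCacheClient(disc))

	// Resolve the partial input "ingresses" so users wouldn't have to
	// spell out the fully qualified group/version/resource themselves.
	gvr, err := mapper.ResourceFor(schema.GroupVersionResource{Resource: "ingresses"})
	if err != nil {
		panic(err)
	}
	fmt.Println(gvr) // typically networking.k8s.io/v1, Resource=ingresses
}
```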
B: Yeah, I mean, basically this came about because Kasturi was trying to configure the syncer to sync Ingress, and because of... hold on, what's the issue she was running into?
B: Right, so basically, when you give the syncer a resource that it can't resolve, it just tries forever and never reports any status that you can kubectl get; you have to go look in the syncer logs to figure out what's going on, and it's not really clear why it's failing to retrieve GVRs in this case. And this is another case where...
A: Anybody want to work on those things? Yes? You want that? Okay. Anyway, this can be fixed, it's easy, and it's something to learn the syncer and this area of code. That's cool, so just go ahead. The bigger solution, via an API, might be weeks out; I don't know when it could happen.
A: Those URLs must always, like forever, stay constant; otherwise we break users. There might be exceptions, like when you move a workspace. We talked about that, moving one in a hierarchy: for example, you might have redirections for an amount of time, like for a week or so, or months. But in general, changing the external URL is super breaking for users, so it feels intentional that we don't reconcile.
B: Andy, yeah: for the namespaces that the syncer manages, like all of the kcp-hash namespaces... and is it just on the downstream side?
B: No, this is literally: I create a workspace, I create a deployment, it gets synced to a kcp-hash namespace, and that downstream namespace has the label from step four on it. And if somebody modifies that label, it never gets corrected.
B: Yeah, and so I put down some possible ways to do this, including number three at the bottom, like we talked about: having some other controller, that is not the syncer, managing those namespaces.
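The reassert-labels variant could be as small as this sketch, whichever controller ends up owning it (names illustrative):

```go
package syncer

import corev1 "k8s.io/api/core/v1"

// ensureLabels puts the required syncer-owned labels back on a downstream
// namespace when they have drifted; the caller issues an Update only when
// something changed. A sketch of the fix discussed, not kcp code.
func ensureLabels(ns *corev1.Namespace, required map[string]string) bool {
	changed := false
	if ns.Labels == nil {
		ns.Labels = map[string]string{}
	}
	for k, v := range required {
		if ns.Labels[k] != v {
			ns.Labels[k] = v
			changed = true
		}
	}
	return changed
}
```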
B: Yeah, I mean, a lot of the out-of-band things are nice to fix, but yeah.
B: Okay, this is for secrets, though, not service accounts.
H
This
will
be
fixed
when
the
last
year
I'm
working
on,
gets
merged,
so
just
documentation.
B: Or, again, assign it to yourself and let's put it in. I assume this can go in for 0.5, but you can put TBD. I won't... yeah, I won't enable that; that's fine.
A: Time to plug Discussions in GitHub: if we want to ask questions and get to a design resolution, we could use Discussions instead of issues.
B: It is a target to which we want to sync content, and there are other systems (Cluster API, OCM, ACM, etc.) that have some sort of cluster concept as a CRD. So we were thinking about renaming WorkloadCluster to something like SyncTarget. This would be largely just a bunch of find-and-replace, but it will be disruptive to any open pull requests that are doing lots of work with workload clusters, so we would just want to organize the timing. And I apologize.