From YouTube: Community Meeting, May 10, 2022
A: If you'd like to say something, please use the raise hand feature in Google Meet. We've already got several things in here from Stefan, but again, please feel free to add anything. I'm going to go through the first one; we always skip over the incoming issues and whatnot and do those at the end. So I'm going to start with the 0.5 epics, and I've got a list over here. So, Stefan, did you want to walk through this, or do you want me to? I think Paul wants to.
B: Yeah, I can say a few words on it if you want to just keep sharing. We've agreed in the past to modify how we're doing our milestone planning, as well as how we're breaking down tasks. So I was hoping for this meeting that we could walk through what the milestone blockers are and ensure, one, that we're comfortable with them fitting in the milestone, and two, that we've got the right task breakdown and linked items on them, like we've discussed, just to make sure that we can see progress as they go.
B: I think, Andy, the epic that's number two in that list is probably a good example, if we want to start by looking at that and then just go over the other ones. Not all of these are epics, so they probably don't all want a full breakdown, but we should just agree on whether they do or don't.
A: Okay, I'm going to start at the bottom, with the oldest issues. The first one, about designing and implementing cross-cluster listers and informers, is under active development. It's part of this epic on multi-workspace controller development, and I firmly plan to do everything I can to help get this one in.
A: So, is that enough to say on that one, or were you looking for anything else?
A: So I created this. I haven't put demo bits and steps in yet, but for stories we have adding a virtual workspace for API exports. This is so that controllers can... so, the API workflow for an API provider that we want folks to work with is: I create an APIResourceSchema. It's like a CRD, and it represents some API that I want folks to be using.
A
I
create
an
api
export
for
my
schema
and
you
can
export
multiple
schemas,
so
I
create
an
export
and
then
users
can
go
and
bind
to
my
export,
which
basically
means
they
have
access
to
those
apis
that
I've
listed
as
api
resource
schemas,
and
so
as
the
person
or
team
writing
the
controller
for
those
resources.
A
We're
going
to
have
a
virtual
workspace
that
the
controller
connects
to
that
shows
it
only
instances
of
those
apis
that
we
have
exported.
So
if
you
have
two
different
teams
that
have
two
different
workspaces
and
they're,
both
exporting
a
widget
api
type,
then
team,
a
when
they
say
show
me
all
the
widgets
across
all
the
workspaces,
we'll
see
team,
a's
representation
and
instances
for
team
a's,
widgets
and
then
the
same
thing
for
team
b.
So
that's
what
this
virtual
workspace
is
about.
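To make that provider/consumer flow concrete, here is a minimal sketch of the three objects involved. The group, names, and workspace path are made up for illustration, and the field layouts follow the apis.kcp.dev/v1alpha1 shapes from around this era; check the repo for the authoritative versions.

```yaml
# Provider workspace: publish a schema, then export it.
apiVersion: apis.kcp.dev/v1alpha1
kind: APIResourceSchema
metadata:
  name: v1.widgets.example.dev        # illustrative
spec:
  group: example.dev
  names:
    kind: Widget
    plural: widgets
  scope: Namespaced
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
---
apiVersion: apis.kcp.dev/v1alpha1
kind: APIExport
metadata:
  name: widgets
spec:
  latestResourceSchemas:
    - v1.widgets.example.dev
---
# Consumer workspace: bind to the export to get access to the APIs.
apiVersion: apis.kcp.dev/v1alpha1
kind: APIBinding
metadata:
  name: widgets
spec:
  reference:
    workspace:                        # reference shape is illustrative
      path: root:org:provider
      exportName: widgets
```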
A: The second one is making sure that we have RBAC in place for people using the virtual workspace. The next one is around authorization for API bindings themselves.
A: Then we have the client libraries: cluster-aware client sets for kcp and Kubernetes, as well as a code generator, and then the same thing for shared informers and listers.
A: We want to restrict wildcard listing and watching. This is "show me all widgets across all workspaces," or all logical clusters; we call that a wildcard list/watch, and we want to make sure that the only components that are allowed to do that, when they're not going through this virtual workspace, are internal kcp controllers. And then, finally, in here is being able to delete instances of APIs that come through an APIBinding when the APIBinding goes away.
A
So
that
that's
what
we
have
in
here
for
stories,
the
idea
is
to
have
enough
information
about
a
high
level,
but
if
we
need
to
go
into
more
detail,
we
can
create
individual
issues
for
them.
Some
of
these
issues,
like
the
code
gen
ones,
predated
the
creation
of
this
epic
and
that's
fine.
It's
also
fine
to
go
in
the
other
order.
A: Okay, so next one up is an oldie but goodie, 280. I do think we need to know what's left to do here; I think it was end-to-end tests and making sure that things work in every use case. So this is around transparent multi-cluster, and I want to take a regular deployment, for Tekton or whatever, deploy it through kcp, and make sure that it can be configured to talk back to kcp when it's scheduled to a workload cluster, instead of talking to the workload cluster's API server.
A: So this is... there's a lot of old stuff in here. There's a research item that is still open; there's making sure role bindings are not copied down, which is really done already.
A: So, if you are running a pod that uses leader election, using client-go's built-in leader election or controller-runtime's implementation of it, it doesn't work out of the box. This has to do with this as well, so I might actually just add this as a checkbox, unless anybody objects.
A: I think we can probably close this out, and writing the end-to-end test may uncover some bugs.
F: Yeah, no, that should be good. I mean, it's mostly writing the end-to-end test and figuring out, well, what are the issues with the leader election? So I guess that's fine. I'm working right now on the first issue that you see, the "add support for syncer virtual workspace" one, but I guess that shouldn't take me too much time, because I have it working; I just need to clean up things and push it. So, do we have anybody who could write...
D: Yeah. I'm rather new to this entire topic, but I would try to pick this up.

A: That's good, thanks. Yeah, and we're available to help. We have a fairly robust end-to-end framework that handles creating workspaces and getting you clients and things like that. But if you're new to it, it probably has a learning curve, so feel free to look at existing end-to-end tests, and feel free to reach out on Slack; we will happily help.

D: Okay, great, thanks.
A: So, are we going to... are you on the call right now? All right, can you just comment on this one, please, just so we know? 758.
A: Okay, so this one was about prototyping exec, attach, logs, and port-forward, which I know we have a prototype for, but do we want to turn this into an actual implementation?
E: Yeah, I would leave it as it is; it's a prototype. Antonio is working on a prototype to integrate it into the kcp API server, and then you will see how it will look.
A: Okay, so moving on. All right, this next one, factor out multi-cluster concepts, is another one related to the...
A: The Cockroach investigation: is this one where we can finish the investigation this milestone? Or, Steve, what are you thinking about this?
G: Yeah, I think we've investigated everything we need to investigate. There's some long tail of upstream work, but I think that's probably best broken out into individual things. I'll do this today.
G: I will... I think I'll close it out today, and then I guess there's like one follow-up, which is some of the upstream work. One follow-up would be getting this into our kcp fork, so I guess I would close this and then open up the sort of concrete things. I'm not sure about the sequencing of the rebase, though; we've had some conversations about that informally. I wonder if we should sit down at some point and do something more formal. Yeah, just...
A: We talked about 758 already; we talked about codegen. Follow-up on APIBinding: so, right now, when you create an APIBinding, if you're the first binding to reference a particular schema, the system will create a CustomResourceDefinition in a special logical cluster called system bound CRDs, and the CRD name is just a UID.
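A rough sketch of what that looks like; the binding is illustrative, and the exact logical-cluster identifier is spelled out from the discussion rather than copied from the code.

```yaml
# Consumer workspace: the user creates the binding.
apiVersion: apis.kcp.dev/v1alpha1
kind: APIBinding
metadata:
  name: widgets
spec:
  reference:
    workspace:
      path: root:org:provider     # illustrative
      exportName: widgets
---
# Created by the system, not the user: the backing CRD lives in a
# special logical cluster ("system bound CRDs" per the discussion),
# and its name is just a UID rather than the usual plural.group form.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: 0e53f91a-0000-0000-0000-000000000000   # a UID, not widgets.example.dev
```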
A: Okay, I'm glad this one's coming up: advanced scheduling. This is listed as an epic and a milestone blocker, and I know that some of the work is done, so what's left? It's basically part of the... ClusterName is deprecated, so we need to use annotations or labels or something that we have available to us that's not going to go away. I think this is a time bomb; it definitely needs to be done before the 1.24 rebase happens. And I see Sean has volunteered for this one, so thank you, Sean.
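The concern, roughly: kcp stores an object's logical cluster in metadata.clusterName, a field upstream is removing, so the marker has to move somewhere durable. A hedged sketch of the annotation-based alternative; the key below is a hypothetical example, not a settled name.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example
  # Before: logical cluster carried in the deprecated field:
  #   clusterName: root:org:team-a
  # After: carried in metadata that upstream will not remove, for
  # example an annotation (key is hypothetical):
  annotations:
    kcp.dev/cluster: root:org:team-a
```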
A
Any
objections
stefan
or
paul,
or
anybody
from
including
this
in
the
milestone.
No,
it's
fine,
okay,
and
we
talked
about
this
one.
It's
a
subtask
and
then
there's
this
one,
which
I
think
probably
just
needs
to
go
into
the
multi
workspace,
epic,
great
stuff
on.
A: All right, anything else, Paul, that you wanted to go over before we move on?
B: Okay, we don't have to figure it out on this call; I've got to drop anyway. But if anyone else wants something to work on in 0.5 and is willing to put your name on it and commit to it, please let Andy or Stefan or myself know, or just ping in the public channel, and we'll make sure we line something up.
G: Yeah, I've been trying to get up to speed on all the reviews and figure out the latest lay of the land.
E: There are some things we should maybe discuss first. I compared some of the logos with the landscape from CNCF to see which are similar, and there are many, so you need some time to go over them. Some color combinations appear like 30 times or so; a cube, of course, the cube as a concept, and nested cubes are also there many, many times. So I picked some which looked the most similar, and the question is: what is okay and what is not okay in similarity? We should decide that; it might disqualify some.
E: Everybody might get a voice, so everybody can say the logo is good, so thumbs up, or "I don't like it at all, I would be embarrassed to wear a shirt with it or something," so that's a thumbs down, and then we just count; the difference decides it. Take the two most voted-for ones, then I'll do a second round and just then choose one, something like that.
A: Yeah, I've had experience with opening up issues and getting folks to vote on things in GitHub; that can devolve, as you might imagine. I think trying to be as inclusive as possible, across time zones and folks that may not have access to various parts of the internet, would be ideal. So I don't know what the broadest reach is there, whether it's GitHub issues or Slack; I'm not really sure.

A: And a timeline for when we'll stop collecting additional possible logos... or, I don't know, just open the issue and put the comments in like you said, Rob, and then have a deadline for when we want to have the voting closed.
E: Should we decide about reservations around similarity? Is this something we should decide beforehand, or do we do it when we have all the logos, next week or something?
A: I think the general guidance that we probably should follow is to try to avoid similarities where folks could be confused, because we don't really want to have other folks coming after us saying trademark violation or whatever.
E: There have been discussions. We noticed there is a need to rethink ClusterWorkspaceType, now that we know more about what the requirements are, how services might use it, and how many we want. So the guidance, or the idea (it came out of discussions with Staten, for example), is: a type is not something where you have 60 of them or something like that.
E: It's more like a handful, or two handfuls. Types will be created by kcp as a platform, but there might also be organizations who want types, or maybe some opinionated service might want a type.
E: In every workspace you should basically get the same result, the same functional workspace with APIs, and in this PR there are some ideas for how to do that. We can actually enforce it technically, in a virtual workspace which checks that everything the controller does against the workspace where the initializer is processed by the controller can also be done by that user. So we can do a SubjectAccessReview or something like that and verify it, and that way we guarantee that it's enforced.
E: Another thing we came up with: at the moment, in the main branch, you need a type; the type object must exist in the workspace where you want to create a workspace of that type. Which means we have code in place which replicates things like the universal and organization workspace types, so you have to know where you have to create those objects for them to become effective.
E: There's the idea to move that up, or to replace that mechanism, by walking up the parent relation. So you go to the current workspace, and if the type exists, you're fine; if it doesn't exist, you go one up, to the parent, and then eventually to the root, and at the root you can also define types. So that's the first bullet point here.
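A sketch of what that resolution would mean in practice; the names are illustrative, and the spec.type shape follows the string form of this era.

```yaml
# Defined once, in the root workspace.
apiVersion: tenancy.kcp.dev/v1alpha1
kind: ClusterWorkspaceType
metadata:
  name: team                 # illustrative
spec: {}
---
# Created in root:org:dept. "Team" is not defined there, so resolution
# walks up: root:org:dept -> root:org -> root, and finds it at root.
apiVersion: tenancy.kcp.dev/v1alpha1
kind: ClusterWorkspace
metadata:
  name: team-a
spec:
  type: Team
```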
E: What we added... we noticed that initializers are cool, obviously, because you can do everything with them, but in many, many cases it's just about creating manifests, and so we might want to add something like default manifests to the type.
E: For example, if there's a team type, maybe every team needs a secret or a special API or anything like that, so maybe an organization wants to add that. So we need a mechanism for that, and in the proposal here you would be able to create a ClusterWorkspaceType of the same name in your organization workspace, and there would be some merging happening.
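A hedged sketch of what such a type could look like; the defaultManifests field is hypothetical, taken from the discussion rather than from any merged API.

```yaml
apiVersion: tenancy.kcp.dev/v1alpha1
kind: ClusterWorkspaceType
metadata:
  name: team            # org-level type, merged with a platform-level
                        # type of the same name per the proposal
spec:
  defaultManifests:     # hypothetical field from the discussion:
    - apiVersion: v1    # objects stamped into every new team workspace
      kind: Secret
      metadata:
        name: team-bootstrap
```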
E: What else... default workspace type. People playing with what we have noticed that there's a distinction between organization and universal; if you're at the root, you must pass --type organization to the kubectl command, and that's of course not obvious. So one idea is to have a default workspace type: every time somebody creates a workspace inside of a certain type, like an organization, it will be of type X, something like that, and this is a default. And the inverse is to restrict which types are allowed.
E
I
think
it
has
a
way
to
allow
what
we
want.
What
is
sensible
like
adding
more
types
which
are
possible
inside
another
type,
but
at
the
same
time
this
allows
that
somebody
allows
a
placement,
a
nesting
which
the
cluster
works.
First
type
owner
does
not
really
expect.
This
is
example
of
an
organization
in
universal.
We
don't
want
that,
so
that's
in
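A sketch of the two knobs being discussed, as hypothetical fields on the type; neither field name comes from a merged API.

```yaml
apiVersion: tenancy.kcp.dev/v1alpha1
kind: ClusterWorkspaceType
metadata:
  name: organization
spec:
  # Hypothetical: workspaces created inside an organization get this
  # type when the user passes no --type flag.
  defaultChildWorkspaceType: team
  # Hypothetical: restrict nesting, so that e.g. an organization
  # cannot be created inside a universal workspace.
  allowedChildWorkspaceTypes:
    - team
    - universal
```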
E: And last but not least, we are adding virtual workspace URLs everywhere at the moment; that's the last item here. So there's also a status being added, with virtual workspace URLs, and the idea is that the controller which processes the initializer, that is, does the initialization, will watch this list in the type it owns. For every shard you get a URL, and you watch ClusterWorkspaces in there. It's a virtual workspace, so it will just show workspaces of your type, and only those which are initializing. So you cannot access those when they are ready; when they are ready, they are owned by the user, not by the controller anymore.
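Roughly what that status could look like; this is a sketch following the pattern kcp uses elsewhere for virtual workspace URLs, with the exact shape still up to the open PR.

```yaml
apiVersion: tenancy.kcp.dev/v1alpha1
kind: ClusterWorkspaceType
metadata:
  name: team
status:
  # One URL per shard; the initializer controller watches each one and
  # sees only workspaces of this type that are still initializing.
  virtualWorkspaces:
    - url: https://shard-1.example.dev/services/initializingworkspaces/team
    - url: https://shard-2.example.dev/services/initializingworkspaces/team
```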
A: Thanks, Stefan. I think it would be helpful if we went through and, maybe as comments on the PR, added some sample YAML: here's what you would put in the root, here's what you could put in an org, and so on. Just have some samples in there; a lot easier than coding stuff up, for starters, right?
A
All
right,
so
we
got
about
20
minutes
left.
That
was
the
last
item
on
the
agenda
before
I
move
to
looking
at
open
issues
that
haven't
been
triaged,
yet
anybody
have
any
topics,
questions
anything
you
want
to
chat
about.
A: All right, well, if you think of anything, feel free to reach out and speak up. So I'm going to go top to bottom here, just to go through the newest stuff.
A: So, thank you, Rob, for creating that; I'm going to skip over it for right now. We do have a new issue about the container image failing to start, and there is a PR for this as well. It looks like we're trying to create the kcp data directory inside /, and that's not allowed. So if you're running this via Docker or Podman or something, then you either need to, I guess, bind mount in a directory or work around it in some way.
A: Nope? Okay. And if you're interested in helping out here, please feel free to take a look at the PR and add some comments.
A
Okay,
this
one
I
added
so
we've
had
a
couple
times
where
just
accidentally,
we've
merged
in
poll
requests
that
have
our
gomod
pointing
to
a
fork.
That's
not
the
kcp
dev
fork
of
kubernetes,
so
this
is
something
that
would
be
useful
to
catch
that
and
not
let
it
happen.
So
I
have
this
under
good
for
good
first
issue
and
help
wanted
if
anybody
is
looking
for
something
that
would
be
really
really
nice
to
get
in,
but
it's
definitely
not
a
milestone.
Blocker,
so
tbd
is
probably
where
I'm
gonna
throw
it.
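A minimal sketch of how such a check could look as a GitHub Actions workflow. Everything here is illustrative, not an agreed design; the grep pattern just says that any replace of a k8s.io module in go.mod must target the github.com/kcp-dev fork.

```yaml
name: verify-go-mod-fork
on: [pull_request]
jobs:
  check:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Fail if go.mod points at a non kcp-dev Kubernetes fork
        run: |
          # Illustrative: flag replace lines for k8s.io modules whose
          # target is not the kcp-dev fork.
          if grep -E 'k8s\.io/.*=>' go.mod | grep -v 'github.com/kcp-dev/'; then
            echo "go.mod replaces a k8s.io module with a non kcp-dev fork" >&2
            exit 1
          fi
```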
A
As
always,
please
speak
up
if
you
disagree
at
times.
This
feels
a
lot
like
monologuing.
So
next
up
is
around
a
confusing
error
message
when
trying
to
list
workspaces
in
an
org
workspace
that
does
not
exist.
So
the
example
here
was
you
just
start
up
kcp
you
try
and
ask
for
workspaces
in
some
random,
logical
cluster
and
you
get
the
workspace.
A
Some
org
is
forbidden
because
root
workspace
access
is
not
permitted,
and
this
error
message
is
not
clear
in
any
way
and
staphon
has
a
comment
about
what
we
can't
do.
So
I
don't
know
that
this
is
necessarily
a
good
first
issue,
but
it
is
a
experience.
Blocker.
A
Oh
yeah,
it's
another
one
of
mine,
adding
tests
that
cube
types
added
as
crds
show
the
expected
columns.
So
we
have
some
custom
wiring
in
place
so
that,
if
you
pull
in
a
crd
for
something
like
deployments
that
it
shows
just
like
a
normal
deployment
as
opposed
to
what
a
crd
looks
like
by
default.
So
this
is
just
to
make
sure
we
don't
regress
as
we're
rewiring
and
rebasing
things.
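This is the kind of output such a test would pin down. The snippet shows illustrative additionalPrinterColumns that make a Deployment brought in as a CRD print like the native kubectl table; the columns and JSONPaths here are assumptions, not the actual wiring.

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: deployments.apps
spec:
  group: apps
  names:
    kind: Deployment
    plural: deployments
  scope: Namespaced
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          x-kubernetes-preserve-unknown-fields: true
      additionalPrinterColumns:
        - name: Ready
          type: string
          jsonPath: .status.readyReplicas
        - name: Up-to-date
          type: integer
          jsonPath: .status.updatedReplicas
        - name: Available
          type: integer
          jsonPath: .status.availableReplicas
        - name: Age
          type: date
          jsonPath: .metadata.creationTimestamp
```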
A: And I know that we do have some folks who are working on that for some internal testing, but I think this should be in the repo too.
A
So
this
is
nice
to
have
but
tbd
as
well,
and
just
to
reiterate
everything.
E: Okay, so upstream has tests which check etcd, with an etcd client, so that things in etcd really look like we expect. OpenShift has the same, and I think we're at a point now, as a project, where we should have that as well. It's kind of a safety net for regressions, because if the path of something changes, this basically means data loss for the user.
A
Okay,
I'm
gonna
also
put
that
in
tv.
Do
you
think
this
is
a
good
first
issue
or
is
a
little
bit
more
complex.
A: So, basically, this happens because, if you're running kcp in-process and the tests are running in parallel in the same process, you can end up with multiple kcp servers being started, and the startup code and logic is really meant to be single-process only. So in this example, there's a log format registry that you should only call freeze on once, and if you're running multiple kcps in-process, it'll get called multiple times.
A
I
think
I
like,
when
I'm
testing
locally,
I
spin
up
a
manual
server
outside
of
the
test
process,
either
in
my
terminal
or
in
my
ide
for
debugging,
and
then
I
start
up
the
testing
and
tell
it
to
use
the
external
server.
So
I
don't
run
into
this,
but
it
is
there.
A
All
right,
I
can
leave
it
open.
It's
in
tbd.
A
This
was
one
I
added
that
I
still
think
we
need
to
do,
which
is
just
document
what
it
takes
to
add.
An
api
group,
an
api
type
validation
controllers.
A: I don't know if we could get this done in a GitHub Action, with the compute they have there, or if this needs to be something easier. But, Steve, you're not actively working on this, I assume, you and your puppy?
G
I'm
not,
I
think,
if
so
compiling
at
a
minimum,
I
think
could
probably
be
done
in
action.
I
think
it
depends
on
the
velocity
you
want
in
there.
I
think
imagine
it'll
be
pretty
slow
to
run
80
compilation.
I
can't
imagine,
will
be
too
much
worse
than
what
we
have
in
kcp
today
since
we're
transitively
compiling
it
all
anyways.
A: They were written before we had the syncer and the namespace scheduler working the way that they do now, so I think the deployment splitter is maybe broken today; we don't have any tests around it. And I know that the work that David and Joachim and others were doing with the syncer virtual workspace also handles deployment splitting, so I filed this to make sure we figure out what to do with these long-term.
H: Yeah, and the last feasibility checks, or validation, that I did before starting to get the PRs in good shape also included the ingress controller, in fact. So it mainly supported the whole scenario, with deployment splitting and ingress splitting that follows the deployments and services.
E: Let me comment on the deployment splitter and ingress controller. They are basically demonstrations of a concept, and they will eventually go away. It's cool that they work with transformations; medium-term, there will be something like a strategy for placement which supports multi-cluster, and at that point we will have something we call today a coordination controller, which does this kind of splitting. There will be a controller which supports multi-cluster setups and can deploy workloads on both sides, for example. That's the splitting part.
H: Sorry, which means that, in other words, it would not be a dedicated command anymore, but just one among many options for sending workloads to workload clusters.
H: Because, yeah, I had already, in the third PR, the one that should follow the basic virtual workspace one, the one with the transformations and strategies, also changed, I mean patched, the code of the deployment splitter and ingress controller to not create leaves anymore and to just, you know, use the...
A: We've decided there is a path forward, and this issue was about figuring out what that path is. So if you can just add a subtask, add an issue, whatever. Thank you.
A: Okay, we have five minutes left; I think we should just keep going. So, protect GBR creation, automatic consumption by controllers: this goes away when... so this is basically a subtask of the multi-workspace one.
A: Cross-namespace service DNS resolution in physical clusters: if you've got a pod, and you're running code in the pod, and it references a service in another namespace using in-cluster DNS, that works with normal clusters. It doesn't work in kcp-synced clusters, because we've changed the name of the namespace that the other service is in, and the namespaces could also end up on different clusters.
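A concrete illustration of the failure mode, with made-up names: a pod wired to another namespace's service by its usual in-cluster DNS name.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web
  namespace: frontend
spec:
  containers:
    - name: app
      image: example/app            # illustrative
      env:
        # Works on a plain cluster. On a kcp workload cluster, the
        # "backend" namespace is renamed by the syncer (and may even
        # land on a different physical cluster), so this name no
        # longer resolves.
        - name: DB_URL
          value: db.backend.svc.cluster.local:5432
```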
A: Conversations to be had there; it definitely needs to be dealt with, but later.
A: Ability to remove a workload from a physical cluster. The example: I've got a global load balancer that currently is configured to point at things running on a workload (physical) cluster, and I want to take that cluster down for maintenance, or delete it. So I want to drain my connections to the pods, wait for that drain to finish, and then I can remove my workload from the physical cluster. So this is around finding some way for kcp to know when the workload is able to be removed.
A: So this one I'm going to put in 0.5, and I'm going to assign David, Stefan, and Joakim. The expected outcome of this is providing information about how to do this, but not actually having anything around draining and workload cluster removal, or workload removal, in-tree or in-repo. Okay, we are at the hour, so thank you, everybody; hope to see you next time. As always, please feel free to ask questions and add to the agenda, and I hope you have a good Tuesday and a good rest of your week.
A: All right, see y'all next time. Bye, guys; later.