From YouTube: Community Meeting, March 22, 2022
A
Hello, and welcome to the kcp community meeting, March 22nd, 2022. We have a pretty packed agenda, so we'll get started. As always, we'll use the GitHub issues since last week as our space filler for the end of the time. Stefan, do you want to go first with the workspace UX proposal?
B
All right, you should see a terminal. Yeah, okay. So I have this branch; it's live, so it's mostly implemented. Basically, this current thing is gone.
B
What is added is that you can walk around those hierarchical workspaces. So this is what we had before, nothing else, but what you can do now is the following: you can create a workspace called kcp, and this is a team workspace, and we can enter that. So, just the team... it's not ready. Okay, so these are the demo gods; probably this doesn't work.
B
It's initializing. I don't know why; maybe something broke. But what you are seeing here is that this is now the universal workspace, and inside of that we can create even more workspaces. So technically, for kube, this is just a string; our kube fork doesn't know about the semantics of that. The kcp binary, like the kcp plugin for kubectl, knows about it, and some controllers know as well. I will show in a second how this works, but you can basically have arbitrary hierarchy levels here.
B
In the proposal it's also proposed to make the CLI a bit simpler; it's just cosmetics, obviously. So you can... why can't you... workspaces? I think... I hope... there we go. So you can just call that directly, and you can also say "ws", which is even shorter. Like that, you can pass absolute paths here, so you can say it like that, which you can't right now, probably because it's not ready... now it is ready. Okay. So now we are in kcp:team, and I can finish what I just started.
B
So
I
can
have
my
app
here
now.
You
can't
create,
oh
because
this
is
universal,
no
something
something
is
broken,
so
I
cannot
finish
that
anyway.
So
the
the
cli
is
more
like
a
shell
now,
so
you
can
go
back
and
forth
in
the
hierarchy.
It's
a
tree.
So
it's
like
a
file
system.
Some
people
call
it
it,
but
it's
of
course
limited,
but
we
can.
We
can
do
things
which
you
can
have
in
file
systems
as
well
like
links,
for
example,
could
implement.
B
We can have virtual workspace objects in here. So if you say "get workspaces"... I got lost, I think. One second, I have to go back. I hope... there we go. So we can have other kinds of objects here. Everything which looks like a workspace, meaning it has a type and it has a URL, can become a workspace. It doesn't mean it has to be a ClusterWorkspace, so it doesn't have to look like that.
B
It could also be something like services workspaces, personal workspaces, something like that, which we have already. So this is the CLI side. I said we can have arbitrary levels. What is behind this, and this was actually the start of this work: people get our org names, or cluster names actually, wrong pretty often. So when you write code, sometimes it's wrong, because the semantics are complicated.
B
What this PR changes: we go away from something which is just a root, an org, or a workspace, to something which is just a hierarchy of workspaces. So it's colon-separated as before, but it can have any number of levels, and the API you see here is basically what you have in the filepath package of Go.
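A rough sketch of what filepath-style helpers over colon-separated workspace paths could look like. This is illustrative only: the `Path` type and the `Join`, `Parent`, and `Base` names mirror Go's `path/filepath` API, as mentioned, but are assumptions rather than the actual kcp API.

```go
package main

import (
	"fmt"
	"strings"
)

// Path is a colon-separated workspace path such as "root:myorg:team".
// These helpers mirror Go's path/filepath API (Join, Dir, Base) but
// use ':' as the separator. Illustrative sketch, not the kcp API.
type Path string

// Join appends path segments, colon-separated.
func Join(p Path, segments ...string) Path {
	parts := append([]string{string(p)}, segments...)
	return Path(strings.Join(parts, ":"))
}

// Parent drops the last segment, like filepath.Dir.
func (p Path) Parent() Path {
	i := strings.LastIndex(string(p), ":")
	if i < 0 {
		return ""
	}
	return p[:i]
}

// Base returns the last segment, like filepath.Base.
func (p Path) Base() string {
	i := strings.LastIndex(string(p), ":")
	return string(p[i+1:])
}

func main() {
	p := Join("root", "myorg", "team")
	fmt.Println(p)          // root:myorg:team
	fmt.Println(p.Parent()) // root:myorg
	fmt.Println(p.Base())   // team
}
```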
B
It's not like before, where orgs were root colon something, and normal workspaces were something colon workspace-name. That was non-uniform, and for that reason complicated. Now it's always the same: it's root colon, then a number of segments which are organizations or teams, and at the end there's a ClusterWorkspace name. So this is uniform, and I plumbed it through everywhere in my branches.
B
Basically, in kube and in kcp there's a new type called LogicalCluster. I went away from "cluster name", which is confusing because we have other kinds of clusters, hence all the cluster names. So now it's LogicalCluster everywhere, and it's its own type, so the compiler complains when you pass it as a string, and it complains when you pass a string as a LogicalCluster. So you must be explicit.
B
So when you go from a LogicalCluster to a string, you call String(); the other way around, you have to typecast. This is a little tedious, but it protects you from making mistakes, and we can trace very easily, especially in kube, where we use, for example, a LogicalCluster as a string, or where we depend on the LogicalCluster being mapped to a path in the URL, the /clusters/ string.
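The defined-type idea being described can be sketched in a few lines. `LogicalCluster` and its methods below are illustrative, not the real kcp definitions:

```go
package main

import "fmt"

// LogicalCluster is a defined string type, so plain string variables
// and LogicalCluster values cannot be mixed without an explicit
// conversion. Illustrative sketch, not the actual kcp type.
type LogicalCluster string

// String is the explicit way back to a plain string.
func (c LogicalCluster) String() string { return string(c) }

// Path maps a logical cluster onto a /clusters/<name> URL segment.
func (c LogicalCluster) Path() string { return "/clusters/" + c.String() }

func handle(c LogicalCluster) string { return c.Path() }

func main() {
	name := "root:myorg:team"
	c := LogicalCluster(name) // explicit conversion from a string variable
	fmt.Println(handle(c))    // /clusters/root:myorg:team
	// handle(name) would not compile: a string variable needs an
	// explicit LogicalCluster(...) conversion, which is the point.
}
```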
B
At
the
moment
I
moved
it
into
a
machinery,
so
github
com,
kcp
dev
api
machinery,
for
the
reason
that
it's
used
inside
of
cube
and
of
kcp,
I
didn't
dare
to
play
with
cyclic
dependencies.
I'm
not
sure
it
even
works
in
go
mod.
Maybe
it
does
for
the
moment.
It's
just
super
simple
package,
just
one
dodger
cluster
here,
the
fire
which
you
just
have
seen
yeah.
Basically,
that's
it
the
conclusion
from
this.
B
It
was
an
experiment
in
the
beginning,
just
to
see
how
big
the
ripples
are
and
whether
it
makes
code
easier.
I
think
it
does
it
or
it
achieves
what
I
hoped
that
code
is
type
safe,
much
more
type
safe
than
before,
and
you
find
places
where
things
are
wrong
or
certain
dependencies
assumptions
are
taken,
and
I
think
it's
despite
it's,
adding
something
like
a
higher
key,
but
because
it's
so
uniform
code
gets
simpler,
and
this
is
a
striking
argument.
C
One is: if all these paths start with root, why bother writing the root? The other is: can VS Code search for these typecasts? I thought not, and maybe you want to actually have a function instead that gets invoked.
B
Yeah, yeah. So for the last question: yes, I was thinking the same. For the typecasts, I don't think you can search for them, even in GoLand, so we could have a function; that is also possible. And yes, the root at the moment is always at the beginning, for a simple reason: the path in the URL is /clusters/something, and then, where is the root cluster? So we could do something like /clusters/root as a special case, and /clusters/<first org level>:something for everything else.
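The function-instead-of-typecast suggestion might look like this. Unlike a bare conversion, every call site of a named constructor can be found with plain text search or an IDE's find-usages; `New` is a hypothetical name, not an actual kcp function:

```go
package main

import "fmt"

// LogicalCluster as a defined string type, as discussed above.
type LogicalCluster string

// New wraps the string-to-LogicalCluster conversion in a named
// function so call sites are searchable, which a bare typecast
// like LogicalCluster(s) is not. Illustrative sketch only.
func New(s string) LogicalCluster { return LogicalCluster(s) }

func main() {
	c := New("root:myorg")
	fmt.Println(c) // root:myorg
}
```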
A
This looks so great; this looks like such an improvement. I definitely love it when things that mean something are a type, and not just a string that happens to be called "logical cluster name" or something. That's definitely an improvement.
B
A comment, and this goes back to my discussion with Maru yesterday: I think this enables arbitrary hierarchies technically in kcp core. It does not mean that a service building on kcp offers arbitrary hierarchies; a service operator can define them. Everything is done via ClusterWorkspaceTypes and authorization, so there can be opinionated hierarchies per service, and also for stock kcp. So we might have some setup like org workspaces, maybe teams; that's what I have in my branch.
B
You
cannot
go
further,
like
that's
just
one
team
level
at
the
moment.
Technically,
you
could
even
less
than
recursively.
If
you
want.
B
Currently, the ClusterWorkspaceTypes are authorized: you have to have the "use" permission on them, the verb "use", which means the type must exist in a workspace. So when an organization is created, there's a controller adding "universal" and "team" as ClusterWorkspaceType objects. If they are not there, you cannot create sub-workspaces. So the controller controls what you can create, plus authorization limits it to certain roles, something like that.
A
Cool. As far as the point about having, instead of just a typecast, a method that takes a string and creates the thing: if you go that route, that seems totally fine. But if you go...
A
Yeah, this seems like a really nice logical improvement for, you know, this kind of code that tends to get kind of mind-bendy pretty fast. So, nice work.
A
Was there anything else you wanted to present about the workspace UX proposal?
B
Also the CLI. Let me maybe open the doc.
B
Love it; basically everything here. And the thing is, if you look at the code, it's getting easier that way. So that's the impressive thing. I mean, the dashes we had before... I think even David did that. Oh, nice! Okay. So if you can go down, let's just see if there's anything further. First, where the examples are. There are some questions about, for example, pretty names.
B
Yeah, I didn't highlight that while showing, but you saw me saying "get workspaces". Before, you had this "kcp workspaces list" command, which is gone. Now, if you're in an org workspace, like you say dot-dot and you get into default, for example, you just say "get workspaces" inside of that. So there's nothing like this duality anymore, where you go to some service URL, something which is hidden by the plugin. Everything is native now, so it's really /clusters/root:default, or whatever, where you get the workspaces. So this is nice.
B
We
still
have
a
virtual
workspace
behind,
like
there's
a
redirect
to
that,
but
it's
embedded
into
the
normal
url
yeah.
That's
important!
Go
further!
Config
names!
That's
not
so
important!
Put
your
names,
as
I
said,
we
might
get
rid
of
them
for
simplicity,
symbolic
links
are
super
trivial
to
implement.
We
could
do
them
in
cli
if
we
want
to
what's
the.
D
Our prototypes are supposed to be useful chunks of work that we want to show to somebody, so it makes sense to make sure we finish them out, and if that means we take this week to do it, I'd like to propose that we go ahead and do that. So my first suggestion here is that we agree on what "closed" means for a prototype, and I'll just throw out there that, for me, it means, one, we've merged all the PRs that are tagged with that milestone.
D
We've got our script in working order, and people have run through their feature and recorded an individual demo for that feature. I'm not asking for the big, full script being recorded by one person, just a "what you've owned is demoed" type of thing. So I guess I'll start there, to see if there's any feedback on either that method of collaborating more at the end of closing, or on what closing actually means.
A
I've got to say, I like that. I like the idea that the demo recording is not one mega monster demo recording of everything, because then it's hard for whoever manages that to pull it together. Individual demo recordings instead seem like a real improvement for everyone's life. I don't know if other people agree or disagree; it makes for a less splashy result, since there's not one mega demo recording, but at least it's not...
D
And I think, moreover, we enable other folks that may want to demo these things to create some big, flashy demo if they'd like. Well, if there are no objections to that sort of strategy, I think there's a couple of things we can focus on. First priority would be making sure that we're supporting the folks that are still owning P3 items.
D
If all of those are covered, then we can go through the open PRs, and if there's stuff that's nearly there, we can provide reviews on anything we may want to merge in the meantime. And last, if none of those exist, we can work on any of the repo-health items that we know may be coming up and are achievable in this remaining time frame.
D
But,
but
really,
I
think
most
important
to
me
is
that
that
allows
us
to
go
into
p4
as
a
team
and
when
we
do
those
directed
design
discussions,
nobody
needs
to
feel
like
they're
left
out,
because
they're
still
doing
cleanup
on
some
p3
thing.
D
So,
in
that
case,
if
some,
if
we
want
to,
I
suppose
the
next
thing
we
would
do
is
go
over
the
items
in
the
milestone
and
I'd
ask
for
folks
that
own
items
in
the
milestone
still
to
help
us
understand
where
we
stand
with
them,
where
we
could
use
help
and
see
if
folks
are
available,
do
we
want
to
go
through
those?
D
How about you walk us through this one: where do we stand, and where can people help you?
E
So the functionality, I mean at least in the PR related to this issue, the functionality works. The PR is not merged yet; I'm addressing the comments, and I'm trying to find out why some tests broke now. But it works, at least. So if I can finish all the required changes today, it should be good.
B
What was missing, from what I saw: I think we need permissions, like a role, to create the WorkloadCluster object and to update it. That's missing, I think, in the manifest I started.
B
Okay, and somebody has to write it. I mean, there are two pieces: creating a WorkloadCluster from the syncer, plus having a controller doing whatever your PR expects, right, touching something in this object.
G
And I mean, it's not like it's super complicated, but Andy had moved it to P4.
B
That was, I think, not intentional. So this controller is basically the health-check touch controller, or is it something else?
B
The other PR I posted is ready; it's green. There have been reviews of Andy's PR before, by me and by Marvel. I think that was everything, or I moved everything into the follow-ups at the bottom of the screen. There's a small list of things; we don't need all of them now for the demo, but we shouldn't forget about them either.
So, this is the list. For everybody who wants to get into this area, picking up one of those four open items there makes sense. But, as I said, it's not needed now.
A
Yeah, so I have a work-in-progress PR to do the cluster heartbeat controller side. The flip side of the syncer updating the heartbeat is that the cluster controller will mark the cluster as unready if it doesn't have a recent heartbeat. So this is very related to the syncer-setting-the-heartbeat issue we talked about before. There was some issue with creating the WorkloadCluster object from the syncer that sounded auth-related, and that was why I asked my question about that before.
A
Those are both under the other issue we talked about before. Mario is going to pick up the lease controller to do that, and the cluster heartbeat controller is under review and, I think, getting close. I think there's a somewhat overarching question of whether it's worth merging that without the other ones in there, because it doesn't really do anything until something is setting the heartbeats for it to pay attention to. But we could talk about that offline. The rest are stretch-goal stuff I think we won't get to in this milestone.
G
I think it's fine to merge it as is, just because it's not going to be enabled. Some of the longer-term questions can probably just be deferred, but just to call out a couple of them:
G
There
was
the
question
of
how
exactly
the
sinker
and
the
controller
gonna
cooperate
in
terms
of
heartbeat
interval
and
like
the
sinker
needs
to
heartbeat
at
a
certain
interval,
and
then
the
controller
which
is
is
global
to
kcp,
has
to
determine
where
the
interval
is
has
passed
and
without
coordination
like
like
the
controller
in
kcp,
can
be
configured
to
set
an
interval,
but
the
problem
is
like:
how
does
the
sinker
know
that
so
the
sinker
basically
has
to
be
able
to?
There
has
to
be
some
coordination
there
that
we're
not
doing
yet.
A
I think we could say that, though there is a flag on the cluster controller where you can change the interval, we're going to assume it's a minute, or whatever assumed value, and though you can change it in the syncer, we will assume it's some fraction of that. And I think we don't normally expect operators, or syncer installers, to change those by default.
A
Like
you,
you
could,
if
you
want
to
heartbeat
more
often
or
less
often,
for
some
reason
do
people
normally
like
do
people
normally
change
those
configurations
in
their
kubernetes
clusters
with
nodes
today,.
A
I think we also just won't know that until we've operated this system for a little while, so some value now is better than the perfect value, for both sides. And then, as we see this operating in real life, we can say "oh, a minute is way too short", or "a minute is way too long", or "heartbeating every 10 seconds is way too often", or something.
G
Right, yeah. And the other issue that this work raised for me was the idea that the syncer is responsible for creating the WorkloadCluster. I'm not entirely sure that makes sense. Maybe it does; I just don't see any rationale for why that's a good idea, because of the way kube works: if you can create one object, you can create many, and if you can create it, you can probably do other things to it. So it's like there's no...
G
...name in the whole world. So I guess my point is: if we're giving the syncer access to create that resource, it has to be a fixed name, so that we can lock down the auth. Why aren't we just creating the workload object for it in the first place? If we're doing the auth, we already know what the resource is.
B
Well, it doesn't help, because you have to update the object anyway, so you have to restrict the resource name in the role anyway.
D
Sure, okay. I think we've got ownership for everything. It sounds like we've got PRs that need some review help, and then, as we mentioned before, if you're not helping with one of those, please run through and record a demo for your feature if it's ready. Otherwise, we've got this list of open PRs; we've got only 16 of them.
D
All right, I think that's all I had. We can talk about P4 topics later if we want to, but maybe we should move on to some of the other topics first.
A
You see that... have you seen that? Okay. Stefan, you're next, with API exports, bindings, and resource schemas.
B
Yeah, I don't want to go deep into that; we can do that next week, maybe. This will merge soonish, I hope, if somebody approves it; maybe Marvel takes a final look. Just as a warning: please play with it, but there are still some constraints, and the last one is pretty crucial: wildcard informers.
B
That's
our
way
to
go
across
workspace
for
controllers
at
the
moment,
and
they
assume
that
the
schema
is
the
same
in
all
workspaces,
which
means,
if
you
have
two
workspaces
and
use
api
bindings
with
the
same
gbrs
but
different
schemas.
Everything
will
break
down.
So
it's
a
constraint.
Obviously
it's
good
enough
for
the
demo,
but
obviously
it's
a
time
bomb
issue.
We
have
to
fix
soonish
cr
deletion
is
not
there,
so
the
cids
I
mean
everything
is
based
on
crds.
B
They
are
in
some
background
system,
bound
crds
workspace,
the
cr
deletion
controller
doesn't
work,
it
doesn't
run
on
workspaces.
So
at
the
moment
nothing
is
deleted,
just
be
prepared,
and
the
last
thing
I
have
to
read:
oh
yeah,
absolute
relative
workspace
reference.
That's
the
only
thing
we
have
at
the
moment,
so
you
give
reference
to
another
workspace
which
has
a
api
export.
B
That's
fine
for
for
for
development,
it's
fine
for
demo,
of
course.
Eventually
we
want
something
like
a
catalog
feature,
or
at
least
some
primitive
in
the
system
which
allows
an
implementation
of
cadence
on
top.
This
is
not
part
of
p3.
Obviously
I
had
some
sketches
in
my
extended
api
apr,
which
is
still
open.
B
We
should
talk
about
that
eventually,
maybe
it's
also
something
different
team,
maybe
all
ms
interested
in
that
you
could
talk
to
them.
Maybe
there
are
people
who
want
to
join
to
do
those
things.
That's
always
limited
development.
Rotates
workplaces
really
use
cases
really
the
one
which
is
it's
not
the
final
thing
all
right,
but
other
than
that
play
with
it
when
it's
merged.
A
Cool. The next items I have down here we've already talked about, as far as the P3 status and topics go, and even these things are out of date already; life moves fast. David, with a late-breaking comment: do you want to talk about syncer virtual workspaces?
H
If we have some minutes more, I can showcase where I am in the exploration of the syncer virtual workspace. The main idea is that it exposes one sub-server, a virtual API server on a sub-path, per workload cluster. So each syncer, in fact, would have its own URL where it would find precisely only the APIs and the resources it is interested in, and the APIs that are exposed on this sub-path are mainly all the APIs that are found in the NegotiatedAPIResources.
H
That
are
the
results
you
know
of
the
negotiation
when
you
add
a
new
cluster,
a
new
workload
cluster.
So
in
the
future,
of
course,
it
would
be
based
on
api
exports,
but
for
now
it
just
takes
the
the
shimmers
that
are
in
the
negotiated
api
resource
and
based
on
this,
it
exposes
a
number
of
of
apis
on
each
endpoint
associated
to
each
thinker.
H
And then all the requests that come into these REST endpoints are forwarded to the right kcp server, but you can add transformations to the requests on the fly. Typically, for each syncer, each workload cluster, you would only get the objects that have the cluster label set to this workload cluster. And you would also typically want to be able to change, in a number of cases, some fields of the spec, or of the status on the way back, and we have this.
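The per-syncer filtering transformation being described could be sketched roughly as follows. The `Object` type and the `cluster` label key are stand-ins for illustration, not the kcp implementation:

```go
package main

import "fmt"

// Object is a minimal stand-in for a Kubernetes object: just a name
// and the labels we filter on. Purely illustrative of the idea of a
// per-syncer view.
type Object struct {
	Name   string
	Labels map[string]string
}

// FilterForCluster keeps only the objects labeled for the given
// workload cluster, the way the virtual workspace scopes each
// syncer's view. The "cluster" label key is an assumption.
func FilterForCluster(objs []Object, cluster string) []Object {
	var out []Object
	for _, o := range objs {
		if o.Labels["cluster"] == cluster {
			out = append(out, o)
		}
	}
	return out
}

func main() {
	objs := []Object{
		{Name: "a", Labels: map[string]string{"cluster": "us-east1"}},
		{Name: "b", Labels: map[string]string{"cluster": "us-west1"}},
	}
	// Only the us-east1 object remains in that syncer's view.
	fmt.Println(FilterForCluster(objs, "us-east1"))
}
```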
H
And so here, I'm mainly just in the default demo workspace.
H
...workload cluster, sorry. And then, if I try to access the syncer virtual workspace, which is here, as you can see, we point to /services/syncer/, then the name of the workload cluster, plus its logical cluster as well, and then I would try to get deployments there.
H
So it has automatically been made available on this syncer virtual workspace, because the Deployments API has been negotiated and published in the corresponding workspace. And now, if I apply a deployment, so here, directly into kcp, into the demo workspace... let me show you the deployment I'm applying.
H
That's this one. And notably, I added, as you can see, the cluster label with the right value, and also an annotation that showcases this transformation: just a diff between what the external view of the deployment is and what should be seen by this syncer. Typically, in the case where you have split deployments, one syncer would see one number of replicas and another syncer would see another number of replicas. So it's just an example.
H
Of course, it's a bit dummy here, but I'm changing the replicas to 10. As you can see, the real replica number here in the spec is three. But now, if I do "get deployments" here, I can see that the replicas here in the spec are 10; it's just replaced on the fly by this transformation that is applied.
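One way the on-the-fly replicas override could work is a per-location diff stored in an annotation, as demonstrated. The annotation key and JSON shape below are made up for illustration; this is a sketch of the idea, not kcp's transformation code:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Deployment is a tiny stand-in: the public replica count plus
// annotations that may carry per-location overrides. The annotation
// key format is an assumption for illustration.
type Deployment struct {
	Annotations map[string]string
	Replicas    int
}

// ViewFor returns the deployment as the given location's syncer
// would see it, applying the stored diff if one exists. An absent
// annotation means an empty diff: the syncer sees the public view.
func ViewFor(d Deployment, location string) (Deployment, error) {
	raw, ok := d.Annotations["experimental.kcp.dev/spec-diff-"+location]
	if !ok {
		return d, nil
	}
	var diff struct {
		Replicas int `json:"replicas"`
	}
	if err := json.Unmarshal([]byte(raw), &diff); err != nil {
		return d, err
	}
	d.Replicas = diff.Replicas
	return d, nil
}

func main() {
	d := Deployment{
		Annotations: map[string]string{
			"experimental.kcp.dev/spec-diff-us-east1": `{"replicas":10}`,
		},
		Replicas: 3, // the public ("external view") replica count
	}
	v, _ := ViewFor(d, "us-east1")
	fmt.Println(v.Replicas) // 10
}
```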
H
Then I would not get the deployment in the list. And the watch is implemented too; I mean, I tested it, and normally the watch should work as well. So I only get the events in a watch that match the various label selectors that you want to add in the transformation, and in the watch events, the object of each event would also have been transformed by the same transformations.
H
So yes, it seems to me... I mean, it has to be reviewed, and I still have to create a pull request, but it seems to me, at least, that it gives us quite the right tools to be able to do what we want.
A
Cool, this is awesome. This won't require changes on the syncer side, except to point to this virtual workspace instead of the actual workspace, right?
H
Yeah, in fact, not really, because for now, if I just add the label annotation and the cluster label correctly, the syncer sees what you see, because you still filter on this label on the client side.
H
But it seems to me that it gives more tools to go one step further and start managing the placement annotation, which is envisioned to also enable splitting workloads across clusters. So, yeah.
A
Yeah, that's awesome. Are syncers able to update? One thing we had talked about was, if one physical cluster is auto-scaling this deployment, it could update its replicas and say "oh, I decided to add an 11th replica down here" and update that upstream. That is effectively status, but it's in the spec. Is that something that's supported by this, or is that going to be more difficult? I mean, that's accounting for a somewhat weird API quirk, that this is status living in the spec.
H
Yeah,
in
fact,
today
I've
been
trying
to
explore
what
type
of
transformations
we
can
do
and
how
to
we
had
discussed
the
the
idea
that
we
would
store.
You
know,
location,
specific
changes
to
the
status
in
some
annotation,
possibly
so
I
started
trying.
You
know
that
precisely
what
I
showed,
but
then,
of
course
I
mean,
I
think
that
we
have
to
to
really
think
through
the
flow
of
you
know.
The
changes
changes
between
the
the
the
sinker
view,
the
location,
specific
view
and
the
public
view
of
the
object.
A
Yeah, I guess I hadn't even thought about aggregating the status of those physical clusters, like when one syncer says "I'm ready" but the other one is not ready: how do we aggregate that up to the main thing? But this is so amazing. I love this. This is great.
H
Yeah,
for
now
the
transformations
are,
quite
you
know
completely
generic.
In
fact,
just
you
know
you
change
the
options
of
the
request
or
the
input
object
and
also
update
the
output
object,
but
it's
you
know
you
can
just
implement
and
change
the
transformation
as
you
as
you
want.
So
I
assume
that
we
would
be
able
to
to
do
quite
a
number
of
things
with
that.
H
There is some reminiscence, some concepts coming back from the two-step syncing that we discussed some months ago. But the difference here is that if you don't have any difference between the external view of an object, the kcp view, and the location-specific view, then you just have an empty diff, which is the big difference from the previous approach.
A
That would be like: the deployment-splitter controller would be responsible for saying "though the original object said replicas 10, what I want you to do is change that to replicas 3 in us-east-1 and replicas 7 in us-west-1". Some controller is responsible for applying those diffs... or, sorry, not applying them: specifying those diffs.
H
Yeah, yeah. I mean, these are all the things we have to discuss and keep brainstorming about, based on this transformation approach.
A
Cool. Any more questions about the virtual workspace? David, is that something you are targeting for the prototype 4 time frame? Yes? Okay! Yes, I think so.
H
Great. And by the way, all the API-related stuff, you know, the idea that, based on some schema that you find in a NegotiatedAPIResource or in an APIExport, you then expose all the APIs at a given sub-path:
H
This
is
something
that
would
probably
be
shared
or
useful
for
the
api
exports
virtual
workspace
as
well,
because
the
the
the
underlying
mechanics
is
is
quite
the
same
sort
of
crd
based,
dynamic,
rest
storage.
A
Cool. We have about 10 minutes left, and unless we have any other items... I don't remember if there were other things we put off to later, but we can go through the issues with no milestone since the last meeting. "Support scheduling workloads to multiple clusters": this is...
A
Definitely
something
we
want.
I
think
probably
this
is
a
a
question.
I
just
need
to
go
back
and
answer,
but
that's
that's.
Definitely
on
our
radar,
mainly
just
a
question.
You
are
here
in
the
march
22
meeting,
get
rid
of
x,
kubernetes,
cluster
and
use
clusters.
L
cluster
is
that,
like
a
code
hygiene
nice
to
have.
B
Hey
I
have
to
talk
to
steve.
I
guess
steve
has
some
opinions.
If
we
can,
we
should
only
have
one
way
to
access
a
cluster
logically
cluster.
I
think
we
have
two
at
the
moment.
A
Yeah, I'm curious about his reason for wanting it, but in general, anything that simplifies the server handling code is probably good. And I heard from David that it's pre-dating Steve, even. So yeah, it may...
A
It may even pre-date David and me. We will talk about it. Oh yeah, and he has a heart on that issue, so he may actually be pro it, as opposed to anti.
A
This was the time bomb you talked about earlier, right?
B
It's another one, a different one, basically. So GVRs... I mean, APIBindings are implemented through CRDs in the background, but you can of course create CRDs in the same workspace, and there's no admission or anything like that to stop you from doing that. The APIBinding has priority, so that's what will be served by the server, basically, but the CRD is still there, and you don't get any feedback as a user. So this is something for somebody who wants to dive into admission, and also into APIBindings and this stuff.
A
Is this the fact that I can create a CRD called "apibindings" with the right GVR, that...
B
No, I mean, the way we do that with CRDs is basically by the name: the name of the CRD is resource name, dot, group name. So it's just a name; you make sure that you have no overlaps in CRDs. But now we have two types, APIBindings and CRDs, so this name conflict doesn't show up, and we need something else. We need admission.
A
Right: when you create a CRD, make sure there isn't an APIBinding by the same name. And when you...
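The admission check being discussed might look roughly like this: a CRD is named `<resource>.<group>`, and creation is rejected when an APIBinding in the workspace already provides that resource and group. The types and function below are illustrative, not kcp's actual admission plugin:

```go
package main

import "fmt"

// APIBinding is a stand-in for the real type: just the set of
// resources it provides, keyed as "resource.group" strings, which is
// also the naming scheme CRDs use for their metadata.name.
type APIBinding struct {
	Provides map[string]bool
}

// AdmitCRD rejects a CRD whose name collides with a resource already
// provided by an APIBinding in the same workspace. Illustrative
// sketch of the admission idea only.
func AdmitCRD(crdName string, bindings []APIBinding) error {
	for _, b := range bindings {
		if b.Provides[crdName] {
			return fmt.Errorf("CRD %q conflicts with an existing APIBinding", crdName)
		}
	}
	return nil
}

func main() {
	bindings := []APIBinding{
		{Provides: map[string]bool{"widgets.example.dev": true}},
	}
	fmt.Println(AdmitCRD("widgets.example.dev", bindings)) // conflict error
	fmt.Println(AdmitCRD("gadgets.example.dev", bindings)) // <nil>
}
```

The symmetric check, rejecting an APIBinding that collides with an existing CRD, would follow the same shape.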
A
Has anybody taken any kind of deeper look into this? Mario, and Lisa has also seen it?
G
It's definitely something I've seen repeatedly over the last week. I mean, we've had something about the ingress; there's some flakiness in there. I don't know what triggers it; sometimes it manifests, sometimes it doesn't. But we definitely have something to dig into. It'd be really nice to be able to replicate it locally, so I can actually see what's going on, but thus far I've been unable to do so.
A
Okay. If anybody is interested in chasing that down, I will send you one high-five over the internet.
A
Maybe two, if you do it fairly well. Next: "API import does not work when adding the same cluster into multiple workspaces."
F
When you create a workspace and you create a cluster in that workspace, the API import works correctly. But when you repeat that process in any new workspace, it breaks; the API import does not work. There is a place in the code where the API importer is cached by cluster name, so that explains it.
F
We faced that issue as we created some end-to-end tests for our project. We had a pool of clusters, and we run each end-to-end test in a separate workspace, so we hit this as we used that pool of clusters across the test workspaces that we have. So yeah, I don't know; it seems like a legit use case.
F
Yeah, I'm not sure that it is. It seems like there is a place where the importer gets cached by cluster name, and somehow, by doing that, you end up in a situation where the logic thinks this is the same cluster. But it's not the same cluster resource; it's a new cluster in a different workspace. Yeah, and...
H
And I think it comes from the fact that, long ago, we coded for the limited case where one physical cluster is only registered in a single logical cluster, to make it simple. But obviously, precisely now that we do not create syncers from kcp but the other way around, it would really make sense to enable that, I think.
A
Okay. This also seems like a relatively small, relatively well-understood bug. If anybody is interested in picking that up and just, you know, basically changing the key to something more universally unique and then seeing what, if anything, breaks: if nothing breaks, then we fixed it. We did it, everyone! And then...
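The proposed fix, keying the importer cache by logical cluster and name rather than by name alone, could be sketched as follows. The types are illustrative, not the actual kcp code:

```go
package main

import "fmt"

// key identifies an importer by both the logical cluster and the
// cluster object's name, so the same cluster name registered in two
// workspaces gets two distinct cache entries.
type key struct {
	LogicalCluster string
	Name           string
}

// importer is a stand-in for the per-cluster API importer.
type importer struct{ logicalCluster string }

type importerCache map[key]*importer

// get returns the cached importer for (logicalCluster, name),
// creating one on a miss.
func (c importerCache) get(logicalCluster, name string) *importer {
	k := key{logicalCluster, name}
	if imp, ok := c[k]; ok {
		return imp
	}
	imp := &importer{logicalCluster: logicalCluster}
	c[k] = imp
	return imp
}

func main() {
	cache := importerCache{}
	a := cache.get("root:org:ws1", "my-cluster")
	b := cache.get("root:org:ws2", "my-cluster")
	fmt.Println(a == b) // false: same name, different workspaces
}
```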
F
Yeah, yeah, I'd be happy to take it, if you guys can give me some guidance, I mean, a guideline. I can try.
A
Sure, sure, sure. Let me officially christen you as the assignee of this, and yeah, feel free to reach out on Slack, or ping me or anyone, if you have any issues. That's great; thanks for finding that, and also thank you for being interested in fixing it.
A
And finally, one last test flake, with no real follow-up. Has anyone else seen this, or have any ideas where this might be coming from?
A
I'm going to... oh, I can't comment at this time. There was something...
A
...assign this to you. Who knows: it might be assigned, it might not be assigned; there's absolutely no way of knowing with that. I think we're out of time, but we had a lot to go through today, so I'm happy we made it. Nice work, team. We'll see you next week, and we'll see you on Slack until then. Bye, everyone. Thank you.