From YouTube: Community Meeting October 19, 2021
A: Hello and welcome to the kcp community meeting, October 19th, 2021. We have a pretty packed agenda because we took last week off for KubeCon, so I will start with Carol. Am I pronouncing that correctly, or even anywhere close? Yes? Thank you all.

B: Absolutely, yes. Hi everyone, my name is Carol. I'm a developer on the Microsoft Developer Division team, working on container tooling and Azure tooling. Some of you may, for example, use the Docker extension in VS Code — that's one thing that I'm working on. So here I'm just basically asking for advice.

B: The problem that we are facing is that we have a bunch of scenarios where people are developing microservice-style / cloud-native applications, and they want to test some portions of it or debug them locally, but some portions require dependencies that may run as containers, or as emulators, or even run in the cloud. Over the years we have developed various solutions, both in big Visual Studio as well as in Visual Studio Code, and they are kind of one-off and a little bit messy, frankly.

B: So the idea that I'm pursuing with a couple of my colleagues here at Microsoft is: well, maybe we have a way to describe those workloads in terms of APIs and workloads inside Kubernetes, and then, you know, some tools are going to be responsible for constructing those APIs or object —

B: — Kubernetes object hierarchies, and some other tools will then be monitoring the actual running workload, getting logs and attaching debuggers, and things like that. So, you know, it seemed like kcp would actually be a very good building block for building such experiences, but I'm not super proficient with all things Kubernetes, so I'm just asking how to go about that. That's all.

A: Yeah — can I ask some clarifying questions to make sure that I understand your use case a bit? You still want to run pods somewhere; you want the developer to express their application, and you want it to run pods somewhere. It might or might not be in a multi-cluster, hosted-prod scenario; it might even be on your local machine or on some shared machine somewhere, but running pods is still the goal.

C: That actually matches a use case that we discussed pretty early on: you could put a Compose CRD in there and have something on a local machine that translates that to Compose. I mean — not to jump over Jason — Jason, did you want to continue that thread, or do you have a second question?

C: …we wanted to be open to that, because I think we would hope that it is generally applicable to a large class of problems, not just Kube container-orchestration problems. But we're really in the prototype phase right now, so we're kind of looking for those use cases. So if you wanted to drive, or to ask, or to participate, I think that would be well within what we'd want to allow — or, you know, collaborate on.

A: Yeah, I would say, especially early on — I think we're doing better at it, but we can always do better — it's been hard to get the messaging right of what kcp is. kcp is both the minimal API server where everything is a CRD — a pluggable API control plane that looks very similar to Kubernetes —

A: — but it's also this engine for multi-cluster stuff, and so if you don't care about the multi-cluster stuff, it can be very muddy reading the docs, and even just getting involved in these conversations. It's hard for people — myself included, probably mainly myself — to talk about kcp only as the minimal API server and not as the core of this thing that you don't care about. But yeah, to Clayton's point a long time ago —

A: — we talked about various ways that kcp, or something like kcp, could help with development scenarios, and also you linked the embedded low-resource scenarios.

A: Some folks have reached out asking about how kcp can be used in, like, IoT scenarios and edge scenarios and things like that, and I think in their case it wouldn't be in order to run pods, or in order to run, like, Kubernetes. You certainly could run Kubernetes on those edge deployments, but if you just wanted to have a process running, and it stayed synced up to some — you know — kcp that aggregated state of all the things, and have some control mechanism for starting and stopping and monitoring these processes everywhere —

A: — I think kcp could be a good foundation for that central control node across all of those things. And the benefit is that the user actually gets a Kubernetes-style API with controllability, and the edge device doesn't have to run Kubernetes — doesn't have to, you know, schedule pods and have all of the heavyweight Kubernetes stuff to do that.

D: Yeah, at some point very long ago we had tried the quite mad idea of plugging a Virtual Kubelet into a kcp API server. Of course, since there were no, you know, nodes and node names and stuff like that, it didn't go through to the end, but it was quite interesting. And even without going that far, it could be quite easy to just, you know, have a controller running locally against your kcp, and when you detect a labeled deployment, you just create the right pod through Podman.

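(A rough sketch of the kind of local controller being described here — not project code. It assumes a kubeconfig pointing at a kcp logical cluster, and a made-up label, example.dev/run-locally, marking deployments to run through podman; a real controller would use informers rather than polling.)

```go
package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// kcp speaks the Kubernetes API, so a plain kubeconfig pointing at a
	// logical cluster is all client-go needs.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	for {
		// Look for deployments marked for local execution. (A real
		// controller would use an informer; polling keeps the sketch short.)
		deps, err := client.AppsV1().Deployments("default").List(context.TODO(),
			metav1.ListOptions{LabelSelector: "example.dev/run-locally=true"})
		if err != nil {
			fmt.Println("list failed:", err)
		} else {
			for _, d := range deps.Items {
				image := d.Spec.Template.Spec.Containers[0].Image
				name := "kcp-" + d.Name
				// podman errors if the container already exists; ignored for brevity.
				out, _ := exec.Command("podman", "run", "-d", "--name", name, image).CombinedOutput()
				fmt.Printf("podman run %s (%s): %s", name, image, out)
			}
		}
		time.Sleep(10 * time.Second)
	}
}
```
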
C: Is this always a local case, Carol? Like, do you think it's a single-machine or small individual-user case, or do you see other types of use cases where you might want to run something for someone?

C: You know, aspirationally, there's a use case, which is: could we make — well, that's awkward, sorry, I'm tilted — could we make, like, the idea of a generalized control plane work for most of dev iteration? And "dev" is always a loaded term, right?

C: You know, some people are happy with their command lines, their docker-composes, their kubectls, their Terraforms, their PaaS platforms. One of the thoughts would be that the transparent multi-cluster is, like, really pragmatic: it's going from Kube today to a use case above, and trying to catch those Kube developers. But there is that aspirational part: if we could succeed, if we could bring a bunch of APIs together — like, the tools are good enough that most people could use GitOps for any of their flows.

C: Tekton could work across any types of resources — like, it'd be awesome if Tekton had more things to drive. Argo could CD anything: you could CD the VMs, or the CloudFormation templates, or Terraforms, or docker-composes, and they all kind of work together. That's a very, you know, techno-libertarian or techno-freedom future, where it's like: oh, it's the control plane for everything. But that is a ways away.

C: Having examples of use cases that challenge that — like, we thought about local dev, and being a control plane for local stuff would help us understand what's different. So I think we'd be very open to any experimentation in this space, and in how we experiment I think we'd be as open as possible. Right now we're just kind of prototyping in the repos, but we're starting to get more serious about the prototyping.

A: Yeah, definitely. I think, concretely, if you are interested in prototyping with this and hit any problems, please, you know, reach out and we'll try to help, or move around it, or whatever. And especially if, in your case, the multi-cluster stuff that we are driving at is making your life harder for some reason — if we're binding these things too closely together, tying stuff up that you have to untie to do your work —

A: — let us know and we'll try to be more rigorous about, you know, separating these things apart. I don't know if that is the case yet, but without somebody keeping us honest and trying this, I think we might slip into that. So let us know if we're making it hard for you.

B: Thank you — and thank you for all the information and advice. One last quick question: what is the best way to collaborate? Is it, you know, GitHub Discussions? Is it something else? What do you prefer?

A: I can't speak for everyone, but I don't think we are a mature enough project to have channels that we prefer or don't prefer. If you reach out on the Slack — that, so far, I think, has been pretty responsive. I saw that you sent an email to kcp-users and I did not see it, so maybe that is the least trafficked one. But the Slack, and these meetings, and issues and discussions on GitHub.

C: And I'd probably say, ideally — like, kcp start, the binary, is intended to show off the prototype or the demo flow, and we're like: okay, we want to iterate on that demo flow and show a couple more ideas.

C: If you'd like to have your own demo flow and you need changes for it, I think it's really very simple: is this an easy change to make, or is it something different? And if it's something different enough, having a separate binary that shows off a different concept, with a different demo directory that shows off the idea, is totally reasonable. And also, if you want to fork the repo, or ask for us to stay stable on some aspects — like, we're trying not to put in too many changes that

C: don't just show ideas. And so we're playing pretty fast and loose right now, because the goal is to get to where we could do a second prototype demo that shows some of, like, the app movement, the syncer, the transparent multi-cluster, and also some of the organization and workspace stuff. And then, after that, I think we're probably going to go back and ask what some of the structural things are that we'd change to start really executing long-term. So.

D: Maybe I could add that there is already a way — contributed by someone, I don't remember the name — that allows you to create your own command line of kcp.

D: Just, you know, because the server part — the API server part — has been extracted, you can just create the server part, inherit the usual command-line arguments, and then especially add as many post-start hooks as you want, which is very useful, of course, to register your APIs and do whatever you want and start your controllers. So that's already available today.

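(A minimal sketch of the post-start-hook pattern being described, assuming kcp's extracted server exposes the standard k8s.io/apiserver GenericAPIServer — that, and the runControllers helper, are assumptions on my part, not kcp's documented embedding API.)

```go
package main

import (
	"fmt"

	genericapiserver "k8s.io/apiserver/pkg/server"
)

// addMyHooks wires custom startup logic into an already-constructed,
// Kubernetes-style API server: the hook fires once the server is up.
func addMyHooks(s *genericapiserver.GenericAPIServer) error {
	return s.AddPostStartHook("start-my-controllers", func(ctx genericapiserver.PostStartHookContext) error {
		// ctx.LoopbackClientConfig is a rest.Config pointing back at the
		// embedded server, e.g. to register CRDs before controllers start.
		fmt.Println("server is up: registering APIs and starting controllers")
		go runControllers(ctx) // hypothetical helper that runs the control loops
		return nil
	})
}

func runControllers(ctx genericapiserver.PostStartHookContext) {
	// ... build clients from ctx.LoopbackClientConfig and run controllers ...
	<-ctx.StopCh // closed on server shutdown
}
```
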
D: Yeah — so, as we had discussed two weeks ago after the demo we made with you — the one about workspaces over kcp with, you know, abstracted ingresses, the kcp ingress controller from Joachim — we had discussed a bit about this. I will open a quick presentation.

D: Yeah, so we had discussed a bit about what the ingress case showed — in fact, the fact that you have the object status. You know, when you sync an ingress to the physical cluster, typically it will set the hostname field of the status to the hostname that is related to the physical cluster. But what you want to see externally — from typical non-kcp clients of the kcp logical cluster — is the hostname that will in fact point to the proxy that will drive

D: your ingress to whatever physical cluster the workload has been assigned to. And we have discussed that, because then you have — finally, the syncer from the physical cluster will change the status of the object, and then you have the kcp ingress controller, which sets up the Envoy proxy and so on, that will have to change this status to put the right hostname — and we see that here.

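(A sketch of just that status rewrite — assumed function and parameter names, not the prototype's code: before the ingress status becomes visible at the kcp level, the hostname reported by the physical cluster is replaced with the hostname of the kcp-level ingress proxy.)

```go
package main

import networkingv1 "k8s.io/api/networking/v1"

// rewriteIngressStatus swaps the hostname the physical cluster's ingress
// controller reported for one that routes through the kcp-level proxy, so
// external clients of the logical cluster see a stable address.
func rewriteIngressStatus(ing *networkingv1.Ingress, proxyHost string) {
	for i := range ing.Status.LoadBalancer.Ingress {
		ing.Status.LoadBalancer.Ingress[i].Hostname = proxyHost
		ing.Status.LoadBalancer.Ingress[i].IP = "" // the physical address must not leak upstream
	}
}
```
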
D: But this had led us — or could have really led us, sorry — to problems as well, because external controllers, like the DevWorkspace controller, expect, you know, the landscape of the objects that are derived from the main custom resources to be in a specific state. And if you start adding new objects, or adding new stuff, or changing an object, then of course you just, you know, unexpectedly change the conditions that are expected by the behavior of the external controller.

D: So there is in fact a twofold problem: either an ownership problem, if you keep the same object for the status, for example; or a sort of, you know, visibility problem, where you add stuff or change stuff in the, you know, current context of an external controller that is targeting kcp without even knowing that it's not standard Kube. So, yeah.

C: So, David — one note here: did you already start a doc or something that works through both the use case and the implications? Because this is all stuff that we're going to basically go over and over and over with everybody, whoever hits this. So this is, like, a pretty fundamental design trade-off.

D: Yeah — well, I have five, I think five, slides, and a demo of something we've been — yeah, we've been thinking about that the week before with Joachim, and then implementing it last week. So of course it's just a prototype, but yeah, I have something to propose today, in fact, for this, and to submit to you, you know.

C: Yeah, because this is definitely one of those ones that we will keep coming back to, over and over. So it's a candidate for, like, a design doc — and, yeah, education: documentation plus design doc — because we're going to learn from it. This is one that really needs to be there. Probably, you know, we can start with a Google Doc shared with kcp-dev. Yeah — but sure.

D: I just wanted to, you know, present that quickly — I'll try to be quick — and then we can discuss it, and of course it could be translated into a design doc with all the amendments. That would be — this is three, so that —

D: Thanks. Yeah — what I want to show is mainly — really the main idea, and the way we found it could be implemented quite simply in the current state of kcp. So, of course, there is the case of the spread deployment, which involves even more, you know, changes, because you create two additional deployments with the — you know, the deployment splitter — and then, of course, fully transformed objects.

D: We had discussed creating Istio resources from a kcp ingress, to sync to a physical cluster that supports Istio, for example. And so, after thinking a while on all these use cases, it seems to me that, in fact, what we need is some sort of two levels of visibility — and also, finally, two-step syncing.

D: What this means is mainly that transformed or additional objects that we add — for example, for the spread deployments — should not be visible from kcp clients; the fact is that we should not change the context of the client. And the other thing is that original and transformed objects should be in isolated contexts, because in the most simple case — where, for example, you just want to update or tweak the status of an ingress —

D: — you don't necessarily want to change the name of the ingress, because there might be some assumptions in the underlying, you know, parts that will be created by the whole process that the ingress has this name. So the more we can keep the names and everything synced up, down to the syncing to the physical cluster, the better it is. And so the idea, in order to do this quite simply, was to realize that, in fact, we already have a way to isolate objects:

D: it's just logical clusters. I'm not speaking here of workspaces — which are mainly the, you know, identity of a logical cluster — but really of the underlying Kube feature. And so the idea was to in fact have two distinct logical clusters per workspace. Of course you don't have to create it, because a logical cluster is just an on-the-fly thing — it's finally just a place where you put the keys in etcd — and so to have, finally, what we have today: the main logical cluster for the workspace, plus a private one.

D: So first you have a transformer at the level that syncs from public objects — I mean, from objects in the public kcp logical cluster, the one that is, you know, used by external controllers — and it would sync the objects from these public logical clusters to the corresponding private logical cluster. And in doing this it can, you know, add whatever customization you want: change the object, create other objects as well, with owned-by relationships. And then the syncer — which is what we have today —

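(A rough sketch of that public-to-private transformer layer, under assumptions: each logical cluster is reachable through its own rest.Config, the transform signature is made up, and a real controller would use informers and handle updates rather than bare creates.)

```go
package main

import (
	"context"

	appsv1 "k8s.io/api/apps/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

// transformFunc can rewrite one public object into any number of private
// ones (e.g. a splitter emitting one deployment per physical cluster).
type transformFunc func(*appsv1.Deployment) []*appsv1.Deployment

// syncPublicToPrivate copies deployments from the public logical cluster to
// the private one, applying the transformation on the way down.
func syncPublicToPrivate(ctx context.Context, publicCfg, privateCfg *rest.Config, transform transformFunc) error {
	publicClient := kubernetes.NewForConfigOrDie(publicCfg)
	privateClient := kubernetes.NewForConfigOrDie(privateCfg)

	deps, err := publicClient.AppsV1().Deployments(metav1.NamespaceAll).List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	for i := range deps.Items {
		for _, out := range transform(&deps.Items[i]) {
			out.ResourceVersion = "" // create fresh copies in the private cluster
			out.UID = ""
			if _, err := privateClient.AppsV1().Deployments(out.Namespace).Create(ctx, out, metav1.CreateOptions{}); err != nil {
				return err // a real controller would handle AlreadyExists and updates
			}
		}
	}
	return nil
}
```
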
D: This allows the syncer to stay really systematic and simple — extra simple — because it would just have to look for the resources that are created in the private logical cluster and sync them, exactly as it does today, to the assigned physical cluster. And the cool thing here was that, in principle, if we put transformation apart, it's exactly the same mechanism. So all the challenges, you know, of syncing back the status, syncing the spec —

D: — what we do about, you know, read-only fields and stuff like that — all these challenges we would have to manage only once. And the benefit of this, it seems to me, is also that we can now have —

D: — we would now have a clear separation of concerns between: the scheduler, which would work precisely at the public kcp level, just by labeling or annotating objects; then the transformer, which does the, you know, move between public and private, and possibly transforms objects; and then the syncer, which just blindly syncs the object, lives in each physical cluster, and would, you know, be the same for everyone — which is also a sort of security ground for us,

D: because then we can be sure that the sync to the physical cluster is correctly done and not overridden by anyone. And then, now that we have a layer for transformation, we can quite easily implement some sort of extension mechanism for transformers, based on labels and on the GVR.

D: When you encounter an ingress, then you use a distinct transformer, for example — stuff like that — and in the future, of course, it could be driven by workspace policies, or organization policies, or something like that. And it seems to me also that it covers the various syncing use cases we've spoken of: the single fixed physical cluster; the namespace scheduler

D: that Jason has been working on; and also the spread deployments, or anything custom. Especially — and this is something I think we spoke about long ago — using kcp as a way to inject additional logic, for example security-wise, between the abstract object that is created at the public kcp layer and the object that will finally be synced, or not synced, to the physical cluster at the end.

D: So these are the benefits. In terms of implementation, what I will show right now, if time allows, is in the kubernetes feature branch — in fact, there was just a minimal implementation needed for this, to be minimally performant. For now we don't have CRD inheritance, because it's quite hard to implement in the current state of the CRD management, which is mainly upfront and, you know, controller-based. But for this case it's very —

D: — it's very easy, because you have exactly the same APIs between the private and the public logical clusters, so it was just some additional hacks on top of the existing hacks on the Kube feature branch. And then, finally, you can point to the admin or _admin_ logical clusters, and you get the same API resources and the same OpenAPI schema, and you can create objects in either one or the other. And so that allowed doing all the rest of the stuff on the kcp side: mainly, abstracting the syncer to allow switching the syncing logic — you know, upsert to downstream, update status to upstream, and delete from downstream — and, based on this,

D: typically the old syncer is just the same as the abstract syncer with identity syncing: you just copy without any transformation. And this identity syncing is also the default of, you know, a basic extension mechanism — which is just, you know, prototyping an idea where you can delegate to your custom logic per GVR — and it's driven by labels: if you just add the transformer label on an ingress, for example, or on a deployment, to use, you know, a transformer that would spread things across clusters,

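(A sketch of that label-driven dispatch — the label key and registry here are made up for illustration: every object goes through the identity transformer unless a label on it selects a custom one.)

```go
package main

import (
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
)

// Transformer can fan one object out into several transformed copies.
type Transformer func(*unstructured.Unstructured) []*unstructured.Unstructured

// identity is the default: copy the object through unchanged.
func identity(obj *unstructured.Unstructured) []*unstructured.Unstructured {
	return []*unstructured.Unstructured{obj.DeepCopy()}
}

// registry of named transformers, e.g. "spread" for the deployment splitter.
// The label key is an assumption, not the prototype's actual key.
var registry = map[string]Transformer{}

// pick returns the transformer an object's label asks for, or identity.
func pick(obj *unstructured.Unstructured) Transformer {
	if name, ok := obj.GetLabels()["experimental.kcp.dev/transformer"]; ok {
		if t, ok := registry[name]; ok {
			return t
		}
	}
	return identity
}
```
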
D: then it would use this transformer instead of the default one, which just copies all the other objects, like ConfigMaps or stuff like that. And, yeah, the idea is that this work would be ready for when the syncer supports many logical clusters. For now — because we are not currently clear on how we would manage APIs, and different APIs, you know, and the API schemas and stuff like that — we don't have that.

D: Each syncer runs against — you have one in each logical — in each physical cluster, sorry; you have one single one for each logical cluster. But of course the same approach could be used even if you would watch against all the logical clusters, based on labels or stuff like that. And, yeah — so, I mean, just tell me: do you see my presentation, or not?

A: I think I still followed fairly closely — I mean, I'm sure visuals would have helped me, but I think I still got the point. I like that. I like —

C: But it loses the property that the syncer is the one who's actually authorized to make those writes. And so a dumb syncer is not a policy-injection point: a smart syncer located on a cluster can impose rules, whereas you can't trust a smart syncer at the higher level to run, because ultimately policy has to be enforced somehow. And so you lose the property that transformation is under the control of the person receiving the workload.

D: Well — I mean, it mainly depends where you run the transformer, and that's non-opinionated.

D: In my example — I mean, for the sake of the demo and the initial use case — I would run the transformer against the logical cluster, and typically from a local machine; but we already do even that for the syncer, I mean, in the local use case. But then, finally, you would just have to decide, when we are in pull mode, whether the transformer — or some default transformers for certain GVRs — would be at the kcp layer, for optimization, and some others could be overridden as well on the physical cluster.

D: Exactly — so that's part of what is — sorry, let me come back here — yeah, that's part of what I said here: that it will be ready for when the syncer supports multiple logical clusters. For now that's not the case; for now the syncer, which runs against the physical cluster, mainly points to a given logical cluster — a single one — and so, of course, in the prototype I will present the demo of, it points to the private cluster.

D: But of course this type of duplication, long term, is not really nice in terms of scaling — that's clear. But as soon as the syncers are, you know, kcp-aware enough to be able to watch across logical clusters, then they can just get all the — you know, every object that is in what I initially called the private visibility mode, let's say it like that — which could be created with a label, let's say ready-for-sync, or something like that.

D: That means that this could also be driven by a kcp policy or logic: for all the objects that just don't need any transformation, the syncer — which, in the future, would look at all private logical clusters —

D: — oh, sorry, would watch all private logical clusters — would retrieve objects from both the public logical cluster and the private logical cluster of a given workspace, and then would simply get everything that is ready for sync; and in such a case it works exactly the same way as it does today. Compared with the prototype I showed, you would just, you know, avoid duplicating every object that doesn't need some sort of transformation in the middle.

D: So, I mean, I have reimplemented the deployment-splitter pattern with this, and it works exactly the same, in fact — apart from the fact that your deployment-east and your deployment-west, which were created initially, are created in a private logical cluster associated with the public one. So they are not seen if you, you know, do kubectl against kcp; but the syncer sees those two objects exactly as before, because they are labeled with the cluster name —

D: — you know, the physical cluster name — and it syncs the status. And then I have a transformer that does exactly what the previous deployment splitter does: that means that, when it updates the status upstream, it will just get the statuses of the two deployments that exist in the private zone, let's say, and update the real deployment, which is in the public zone, with the right status. And then it's exactly —

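(A sketch of that status-composition step, with assumed field choices rather than the prototype's code: the per-location deployments' statuses in the private logical cluster sum back into the one public deployment's status — the additivity that makes Deployment status "composable" in the sense discussed below.)

```go
package main

import appsv1 "k8s.io/api/apps/v1"

// composeStatus merges the statuses of the split, per-location deployments
// back into a single status for the public deployment. Replica counters add
// up naturally, which is what makes Deployment status composable.
func composeStatus(parts []appsv1.Deployment) appsv1.DeploymentStatus {
	var out appsv1.DeploymentStatus
	for _, d := range parts {
		out.Replicas += d.Status.Replicas
		out.ReadyReplicas += d.Status.ReadyReplicas
		out.UpdatedReplicas += d.Status.UpdatedReplicas
		out.AvailableReplicas += d.Status.AvailableReplicas
		out.UnavailableReplicas += d.Status.UnavailableReplicas
	}
	return out
}
```
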
D: I mean — I'm not saying it would be the design, you know, for the future, but for now at least, and as a proof of concept for an idea of isolation and visibility management, that's how it works: using logical clusters, and keeping exactly the same logic as before, but having all the objects, and all the modifications that we don't want to be visible, in the private one. So.

D: Yeah — well, that came from the case of the DevWorkspace controller, for example. Depending on how controllers are built, you might expect that with a given custom resource, for example, you would create a number of operands — of derived resources — and you would search, of course... if you find a resource that is not —

D: — I mean, a resource with a number of labels, for example, and annotations, that should not be created because the main security has changed — then we could assume that some controller would just remove those objects. I mean, that just changes the — you know, you just change the namespace, or the content, or the environment.

D: You know, transforming the status — typically for the case where an ingress is associated with a given cluster, a fixed cluster (typically the workspace use case, for example) — then you still have to change the status that is in the ingress. And it doesn't seem a good thing to me that the initial hostname value associated with the ingress would be the one assigned by the underlying physical cluster.

D: Yeah — so, of course, if we think only about the status, or the ingress use case, then you can restrict the, you know, privacy requirement to the status. But then what do we do for other use cases? We've been discussing spread deployments, and possibly —

C: I was kind of saying: if a deployment status is composable, then we wouldn't need another resource for it. If it's not composable — like the ingress case — we do need another resource, and then the real question is: does that resource have a schema that's identical to its source status, or not? And that's kind of what I'm getting at. So I guess what I would ask would be: can we specifically frame up and ask the question —

C: is Deployment composable or not? And if it's not, why not? And if it is, then is there a difference between a composable status and a non-composable status, Ingress being one example? And I guess I'm concerned, because we've increased the total number of concepts by having a second, private logical cluster. Is a lot of that throwaway, right? So I guess the question would be: when I delete a workspace, do I have to delete that private workspace at the same time?

D: Yes, I understand what you mean — but I'd say yes or no; I mean, it depends. If, for example, by default they are just not returned — everything that is private is just not returned in whatever read, right —

C: So that means — especially, that means we always load all of those objects and we filter half of them out, so we double the cost of every read; and, yeah, that has no nice properties. But then delete has to take that into account. This is something where: who needs access to this? Like, will admins need access to this? So now we have, like, a different type of workspace. So that's kind of what I'm asking, like — yeah.

C: Examples help us understand — it's totally worth working through; this is a good step. I really want to get to the doc form, where we can present the trade-offs and alternatives and go through it, because the trade-off here is that the schema for the status that's copied up has to be the same. If we can come up with a counterexample and say: no, we want the ingress status, as returned from the sync logic up to the cluster, to be different —

C: — that then says: oh well, then we want a different resource. If we want a different resource, that means we should think about how someone would use that resource. That gives us enough input to make some drafts. So this is great — I actually like pushing the boundary here, because it opens up, you know, questions that we need to get better at answering, like the composability question.

A: Yeah, I think I agree. I would love to have this result in some document that says: here's the problem, here's a possible way to solve it, trade-offs, ups and downs. An experiment only fails if you don't learn anything from it, so we should understand what we're going to get out of this. I wanted to — so, two things. Two things that I like: one is that the syncer's really dumb —

A: — I think Clayton had a point about that that I'll get back to — and the other is that the transformed state is visible outside of the physical cluster. So: you gave me a resource and I applied a transformation to it; that's available at the kcp layer, and not just stuffed in some internal state

A: of the syncer before it applies it to the physical cluster. I think that's a useful trait — I don't know if it's worth the trade-offs, but I think that's another thing on the pile of nice things I like about this design. It means that — it doesn't mean double writes — or, I mean, it can mean double writes; it doesn't mean that we have two versions of the same object in different —

A: You know, we split deployments out of it because that's how we decided to do it, and then we sync them down to the cluster, and the cluster just applies them. If kcp is the dumb layer and the syncer is the smart one — doing transforms and applies on the physical cluster — it's a lot harder for a user to see what was actually created back there. We're abstracting — to the earlier point, we are abstracting a lot, and that's great — but sometimes, when debugging these things, users will want to understand why we made that decision. Or maybe not end users, but admins, or somebody with, you know —

D: Yeah — that was also the idea of, you know, being able to see the — I mean, if we had time to do the demo... but maybe not; I think I would post it later on, after the community call. But yes, the idea is just to be able to see what has been created or spread or transformed — but in an opt-in way, not by default.

A: Yeah — Sean has a hand up.

F: Yeah, sorry — I work on the migration engineering team, and we ran into some similar problems here, where we were making transformations underneath the hood while you were moving between different types of clusters, and we ran into the same set of problems: those transformations were very hard to explain and hard to debug.

F: So I just want to, like, throw that out there and say that I think that makes sense. We saw some problems by doing it in one big go, but all the performance problems exist when you're trying to do that inside the controller, like I think Clayton's bringing up.

D: Yeah, yeah — to be fair, before switching back, or coming back to, you know, using just two logical clusters, public and private, I thought about, you know, maybe storing in some cases just the diff, or things like that, and using the object when there is no change to make. But, I mean, as a first starter, I thought that it could be interesting to have something, even if it's not optimized — because you copy everything — that shows the main idea.

C: I mean, the modeling here — so, like, the Klusterlet, the OCM ManifestWork: this is a different variation of ManifestWork, materialized through logical clusters. Federation v2 chose to duplicate types on the spec side. But honestly, there's an element here of: in the transparent constraint, we're trying to avoid changing how people interact as much as possible. So the normal constraint is that an object at the top level behaves the way you would expect, and the exception is the ingress exception — or not.

C: So in the Route example it's very clear that the default route, as reported up, would ideally be the one that it's aware of at the higher level on status — so, the route — and the others could potentially be represented, but they would have to be namespaced, and that's only possible because Route already has the concept of multiplicity built into the API.

C: If we do one thing: when status isn't composable, is it consistent or inconsistent with composable status? And is there a third category of status, which is almost composable, where the trade-off is that the extra complexity — or the extra cost — of a particular implementation pushes us to one side or the other? Basically: is all status composable or not composable, or are there things in the middle which are mostly composable, where —

D: I'm just a bit worried about the fact that we focus very much on the status, but we've not tackled the question of, you know, fields from the spec that are also changed — for example Routes, where I know that, in the current design of the DevWorkspace controller, for example, to guess — that's just a hack, obviously — but to guess the suffix of the current cluster,

D: we just create a dummy route and get the assigned hostname of the route, which is in the spec. And that's just an example among a number of others in native types where, when you create an object, it will also fill a number of fields in the spec by default, if they have no existing value. And obviously, at some point, we will have to answer this question more generally. But that seems to mean, for me, that it's not a status-only problem.

C: We cannot solve all problems caused by the existing abstraction. In the existing abstraction we're going after: is there enough value? Like — so, prove by example that there's enough value for 95%, and then identify the set of solutions for the remaining problems, including the answer of: just go design it a different way.

C: I think we're still too early to get to that, but I would agree that we're trying to get to that point by testing real, important, key objects, which is what we're doing. So maybe — like, we talked about Deployment and Ingress; we have Route; and then you have a couple of spec examples. I feel like where I think we're at is: I'm looking for the design document that says we have a status problem — here's a couple; here's the approach we're trying to solve it with.

C: So I think it's good to say spec as well, but let's start with status: composable versus non-composable — at least what we've talked about here, Deployment and Ingress, two examples. Are there other examples that are particularly representative? I mean, three is probably enough to at least get started, but we will have to, as we go through the core types, come in and say: this type falls into this solution.

C: If it's just one solution, that's trivial — but: here's how we would solve it in this one. And then we need to be able to add the counterexamples, because I think all of us can think of counterexamples. Ideally, there's a place that we can go to argue and add new counterexamples.

A: Yeah — there's definitely nothing better than having some problem and finding the solution in the alternatives-considered section of a design doc somebody wrote three years ago. It's even better when you wrote that design document.

A: Yeah, yeah — old me was smart; current me... yeah, old me was smart. Clayton, I wanted to ask, because I remembered the point that you brought up before about the syncer being dumb. You mentioned that the nice thing about a smart syncer is that it is able to use knowledge about the physical cluster it's sitting in, to know what kind of transformation to make. I was wondering if you have an example of that — sorry.

C: [Transformations would] be under the control — so, think about a cluster as a failure domain, but also as a security domain. The advantage of the smart syncer is that the smart syncer does not have to trust the control plane.

C: That is an extreme — like, so: Kube today trusts the control plane. An early design point for Kube was, like — oh, you know, the first version of security context and pod security policy: pod security was a flag on the kubelet that said, don't let it run any root workloads. That is probably still the most effective security mechanism to keep a node safe.

C: The problem is that today a kubelet has to be too general-purpose. I do not think that the use case for the default transparent multi-cluster is general-purpose, right? It is homogeneous workloads — or, sorry, it's homogeneous capacity, heterogeneous workloads — that are not tied to that physical location, and instead use the provided abstractions within that to achieve cluster independence.

C: That specifically means that it should not have root access on the node. That specifically means it may not use host volumes. That specifically means it may not do untrusted things. As for the enforcement and capability of that: if you allow the control plane to make those determinations, then you are effectively saying the control plane is root on these clusters. If you do not allow the control plane to do it — for instance, by the syncer having a boolean flag, or an alternative source of policy that comes from the cluster, or insert-mechanism-ABC: an orthogonal source of truth that imposes hard constraints — you don't have to worry about the trust boundaries of the control plane. That alone is valuable. It's not perfect — we know that there are some things that we would have to work around there — but it kind of gets to... that's one type of transformation.

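(A tiny sketch of what that cluster-side enforcement could look like — entirely hypothetical names, not a real kcp interface: a "smart" syncer applying a hard, locally-owned rule no matter what the control plane sends.)

```go
package main

import corev1 "k8s.io/api/core/v1"

// allowedOnThisCluster is a hard local constraint: the syncer refuses to
// apply pods that need root-level trust, regardless of what the control
// plane scheduled here.
func allowedOnThisCluster(spec *corev1.PodSpec) bool {
	for _, v := range spec.Volumes {
		if v.HostPath != nil {
			return false // never sync host volumes
		}
	}
	for _, c := range spec.Containers {
		sc := c.SecurityContext
		if sc != nil && sc.Privileged != nil && *sc.Privileged {
			return false // never sync privileged containers
		}
	}
	return true
}
```
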
C: A second type of transformation would be use-case-dependent, where there's an abstraction at the higher level for the workload that you actually want to virtualize. So, for instance, say someone asks, at the control-plane level, for a new type of CSI driver that does not actually get installed on the cluster, but for which there is an exact one-to-one mapping. You can do that at the control-plane level, but it's really hard to do securely and honestly if the implementation varies between clusters. Because you're trying to make a consistent workload footprint, you actually don't want —

C: — you don't necessarily know, at the control plane, the details of how that mapping is done, and you may not want to expose those to the control plane. Those are just two — those are the two that I have right now. And I think there's an equal argument, Jason, that there are places where the control-plane side is a better place to control parts of the syncing, and that would be when an end user brings something that they want to sync down that the underlying syncer is completely unaware of.

C: That's a policy object, maybe — that's, like, an influence on the syncing policy, or a transformation policy, in the workspace or in the organization — that, yeah, you know, perhaps brings new workload types. We haven't really talked about what that would look like, but that's a case of: yeah, the control plane's fine to trust, and you're already looking at that source for that sort of modification; the end user is in charge of that.

C: And in the long run, we want the APIs in use at the control-plane level to mutate, to best solve the problem that actual end users have — which is not pods, deployments, services, ingress; it is a set of problems of "I want...". So I think there's some level of: we should expect APIs to change over time.

A: Right — okay, great. Thank you, everyone — that's an hour. David, thank you for this wonderful exploration and discussion, and we will see you all again next week, or online, or on the Slack, or on the internet. See you, folks — bye.