From YouTube: Community Meeting December 7, 2021
A: Hello and welcome to the kcp community meeting, December 7th, 2021. I wanted to start off the meeting with something I feel like doing every once in a while: a check to make sure that the format of this meeting and the agenda works. I basically always forget to create the meeting issue until the morning of, or the day before, this meeting. I could probably automate it if we feel like it, but do people like this format?
A: Do people prefer a doc that, like, sort of is just a running doc that we share with the group, that we...
B: The video posted and stuff, yeah. Okay, Andy, what's up?
C: It works. I have more familiarity with a running doc, but either is fine with me.
D: I do think notes don't come out as cleanly as a running doc does, but I don't see any reason they couldn't. There is a slight benefit in people being in the doc and being able to add a couple of notes there; I'm not sure it leads to a better overall outcome. As we've gone through this, Jason, I've definitely been like, is this...
D: This is interesting. There are parts about it I like, like the comment section leading into a meeting, and it does make looking at some of the older meetings a little easier. But it also reduces the things someone has to join just to add a comment, which, with the Google doc, is actually that you have to be in kcp-dev, and we would always...
A: Just a little bit less. It does mean it kind of sucks if you want to add a note to something somebody else said: you can only append to the end of the list, you can't add a bullet point or whatever. Yeah. Okay, I'm not hearing any massive upswell of hatred for this plan. If that changes, or if we find something else that's better...
A: I also like not having to require joining an issue, or sorry, joining a mailing list, just to comment on a thing. Especially if we're inviting external folks, like we did last week: don't make them join a list just to say hi. But yeah, I don't think there's any perfect thing, and they all kind of suck in different ways.
A: But I wanted to check. I used to go back through the recording and try to take better notes; I'm terrible at contemporaneous note-taking, because I get sidetracked and lost in trains of thought. But I used to go back and add notes, and it was very labor-intensive and, I think, value-free for anyone else. So I can go back and do that if people thought it was useful, or anyone else is welcome to if they feel like it. But I think I will continue.
A: That, yeah. As always, let me know if you have ideas or think this could be improved; it's a work in progress. The other sort of housekeeping thing I wanted to mention at the top of this was that the CFP for the next KubeCon is coming up, I think next week, one week from now.
A: I think we're doing a lot of cool stuff that we could talk about, that we could brag about, or ask for, beg for, help on, if people are interested in doing that. I assume it's going to be hybrid like the last ones have been, so even though it's in the EU, I don't think you need to travel there to give a talk.
A: If anyone has any interest and is just looking for someone to collaborate with or get ideas from, I'm willing to do that. Does anybody have...
A: I think the sharded... well, I think the concept of logical clusters is interesting and novel. I think the sharded API server is interesting and novel. Some of the multi-cluster stuff we're thinking of doing is, I think, novel and interesting. I'll probably propose one; I'm not sure exactly which of those it will be, but it might...
D: It might be useful to do some lightning talks on areas of collaboration with other projects, like the duck typing and stuff from last time. That's a good one, something that gives a couple of projects an opportunity to collaborate and join forces. Those might be other fruitful areas. Yeah, yeah.
A: Good call, all right. Well, yeah, I guess let me know if you have an idea that you'd like to develop into a whole talk, or I could give it with you, or I could give it, I could help you with it, whatever. Other than that: Paul did the herculean effort of filing one million GitHub issues for all of the items in our prototype 2 milestone.
A: And Stefan helpfully linked to the issue board.
A: Okay, I might click that later and see. I might try to make it public later and see if that works, because otherwise, Kyle, I can add you to the org so you can see it at least.
C: There's a public toggle; it says everyone on the internet has read access and you choose who has write access. Let's see how it goes. Okay, try it now, Kyle.
A: Yep, it opened up for me. Nice, awesome, okay, great. So yeah, that's sort of where we will be.
A: Is everyone seeing it? Yeah? So yeah, this is all the stuff we're sort of doing and thinking about, with little faces on each of them. I don't know if anybody else has any updates on stuff they're doing that they want to share. I can go over the current latest progress on the namespace scheduler. Stefan and I talked last week about how to scope things down better. That was actually really useful, because I was about to go build a webhook, and nobody ever wants to build a webhook.
A: So instead of having that, we're going to have namespaces created and RBAC'd from outside the clusters, from outside the physical clusters, and then the syncer will have complete access, but only to all of the namespaces it should, and not be able to create anything else or do anything else. Yeah.
D: That's a great simplification, and ultimately that's another one of those things: the more I think about it, we didn't put enough time in up front saying this is a kube gap. In theory, access to a subset of things by prefix is O(1), or O(log n) at best, and so it matches; it has mechanical sympathy with all of the things we already do.
D: Just nobody had a use case for it before. Namespaces are effectively a prefix-scan problem, and this was an area that I just completely forgot to write up early on: thinking about the security system of subdividing a cluster, so that a cluster is actually subdivided. No one has really done it; it's a different type of subdivision than logical clusters. It's like: why is a chunk of resources not efficiently allocatable out of namespace scope? So, whether we solve it in kube or whether we solve it with a different mechanism...
D: It's a great way to punt the problem, and it's also another one of those areas where, if we go build it, there may actually be a lot of value to existing communities: the hierarchical namespaces work and sig-multi-tenancy in many places could have benefited from aspects of this. If we're in the guts of kube already, there are some places where this might actually be a really efficient thing to go do, and it might not be; it does require a bunch of touch points. We shouldn't spend all our innovation tokens in one place right now.
A: Yeah, to clarify and elaborate: the proposal, or the idea of the thing we would ideally want, is some way of expressing, I can create things in namespaces where the namespace has this prefix as a prefix, kcp-dash, and then anything in there I can do, I can create. I guess the permission would also be: I can create namespaces, but only ones created with that prefix.
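No such prefix rule exists in kube RBAC today; that is exactly the gap being discussed. Purely as an illustration of the check being asked for, with all names hypothetical, the rule would look something like this toy sketch:

```go
package main

import (
	"fmt"
	"strings"
)

// prefixRule is a hypothetical authorization rule: it grants a verb only
// inside namespaces that share a given prefix, e.g. "kcp-". Real kube RBAC
// has no such matcher; this is the "kube gap" described above.
type prefixRule struct {
	verb            string
	namespacePrefix string
}

// allows reports whether the rule permits the verb in the namespace.
func (r prefixRule) allows(verb, namespace string) bool {
	return verb == r.verb && strings.HasPrefix(namespace, r.namespacePrefix)
}

func main() {
	rule := prefixRule{verb: "create", namespacePrefix: "kcp-"}
	fmt.Println(rule.allows("create", "kcp-workspace-1")) // true
	fmt.Println(rule.allows("create", "default"))         // false
}
```

Because namespace names are ordered keys, a prefix match like this maps onto a prefix scan, which is the mechanical sympathy point made above.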
D: ...ourselves, and it works, but it only barely works. That's another great example, because there are a bunch of people that would probably jump for joy if they could figure out how to separate out labels safely. The nice thing about labels is that while you can't get access to view labels efficiently in kube, you could absolutely control who can set and view them, and we've already started doing that on nodes. We do that on namespaces; we do that with the stuff that people add for annotations.
D: But then you could say, well, if you can do that, then is there another subdivision? Like, at some point you run out of name length, so that's the part about hierarchy that I think we have to watch, but yeah. The K factor is how many k's you can stick in a namespace name.
A: Yeah, so that is the more medium- to long-term proposal we want to try to push upstream, or prototype in kcp and then push upstream. But in the meantime we can unblock ourselves just by doing this namespace administration outside of the cluster, and never have such a powerful permission exist on the physical cluster, which will be...
D: So that's something we should talk about with Josh and the open cluster management side, because that's a really concrete place where that controller, that integration, is a little bit of the backplane for workload; it's not a frontplane problem. It's not the end user's problem, it's the problem of the person who made that capacity available.
A: So, for example, is that going to enable us to say: kcp as a whole should never take up more than this chunk of resources inside my physical cluster, and no single namespace synced by kcp should take up more than this slice of that chunk? Those are both things we could express if resource quotas were prefix...
D: ...capable. Or even without prefixes; I mean, the hierarchical namespace controller effectively generates this stuff in, not the most inefficient way possible, but it is already doing something like this. It is materializing; most people materialize some form of resource quota, and ResourceQuota itself kind of sucks a little bit.
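A prefix-capable quota, as speculated here, would aggregate usage across every namespace sharing a prefix and enforce a single cap over the whole group. A toy sketch of that accounting (not a real kube API; all names invented):

```go
package main

import (
	"fmt"
	"strings"
)

// usage maps namespace name to CPU millicores consumed (toy model).
type usage map[string]int64

// prefixUsage sums consumption across all namespaces with the given
// prefix, e.g. everything kcp synced under "kcp-".
func prefixUsage(u usage, prefix string) int64 {
	var total int64
	for ns, cpu := range u {
		if strings.HasPrefix(ns, prefix) {
			total += cpu
		}
	}
	return total
}

// withinCap is the hypothetical admission check: "kcp as a whole should
// never take more than this chunk of the physical cluster."
func withinCap(u usage, prefix string, capMillicores int64) bool {
	return prefixUsage(u, prefix) <= capMillicores
}

func main() {
	u := usage{"kcp-a": 500, "kcp-b": 300, "default": 4000}
	fmt.Println(prefixUsage(u, "kcp-"))     // 800
	fmt.Println(withinCap(u, "kcp-", 1000)) // true
}
```

A per-namespace ResourceQuota bounds each slice individually; the prefix cap above is the extra layer that bounds the sum, which is what existing quota cannot express.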
D: But, you know, with these kinds of controls, you could imagine a system which says: I define this chunk of capacity I want to offer to the workload scheduler, and then, by doing that on that cluster, I have to make that prepped. I'm then going to have a set of policies which are my defensiveness against that, between others, since there might be competing workloads on there. So having a decent system for expressing that, which is like: well, I'm offering this much capacity.
D: I probably want some hard limits in there when I offer that. How would the failure modes translate back up? Well, they're going to impact workloads. That's something the syncer in theory is trying to tackle head-on, and be like: oh no, what does being out of resources mean? What does being blocked mean? Well, we're just going to generically solve that and do our best. What does our best look like? Well, recognizing certain failure conditions and surfacing messages effectively.
D: Suddenly that interplay starts working, and then again, because that interplay is going back up to the workload control side, the workload controllers can also do things like: oh, you know, I'm getting an unusual amount of errors here, for whatever reason. We talked about this in previous meetings; that's when you start thinking about rebalancing or summarizing or alerting, like, hey, somebody made a mistake. It surfaces really fast because there's a nice propagation to workload impact, which today, with everybody creating their own crap...
D: Like my GitOps thing over here, my tool over here, my controller over here: each of those fails in magical and unique and special-snowflake ways. The more workloads you can concentrate under that, you know, relatively agnostic layer, the less those controllers even necessarily have to worry about as many of those problems, because they're just depending on a separate subsystem being like: yeah, this cluster seems flaky all of a sudden.
B: Yeah, so the link I pasted there, the second-to-last comment, maybe we can go through that quickly. I just want to make people aware, because everything that Clayton just sketched might fit there; somebody could start modeling those things just to get a more concrete idea of which... the external...
B: So this just models this problem, and it leads to API exports at the end. So, basically, ExternalSync is what an administrator defines to connect certain resources to physical clusters for workspaces, which import the resulting API. So it has a list of resources, it has a list of locations, maybe referencing cluster objects or something like that, and there's a status for each of them: how far the syncing is, whether it's paused, failure conditions and how to present them. And we also had the idea of a template, like a typical OpenShift workload.
B: There is a negotiation happening, and the negotiation depends on the ExternalSync resource location. So there's one object for each, for the product of location and resource, and this is basically a placeholder for the discovery data which we get from the physical cluster for this resource. This ExternalResourceLocation is then merged, via some strategy like LCD or GCD, depending on what you set in the ExternalSync, into the negotiated schema inside of this object, which is in the middle; now it's ExternalSyncResource, or NegotiatedResource.
B: So this is a schema which is actually used by the workspaces which subscribe to this API, and subscribing happens in that this one is turned into an APIExport. It's the same idea we had with cert-manager as a controller which offers APIs: there's an APIExport with this schema, or the negotiated schema, and then workspaces can import them, and probably all workspaces will import certain types. These are those workspaces which want compute.
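As a rough model of the negotiation described above (names invented, not the actual kcp API): treat each location's discovered schema as a set of supported fields, and compute a lowest-common-denominator (LCD) schema as the intersection across locations:

```go
package main

import (
	"fmt"
	"sort"
)

// fieldSet is a toy stand-in for a resource schema discovered on one
// physical cluster: just the set of field paths it supports.
type fieldSet map[string]bool

// negotiateLCD returns the intersection of all location schemas, i.e. the
// lowest-common-denominator merge strategy mentioned above. A GCD-style
// strategy would instead take the union.
func negotiateLCD(locations ...fieldSet) fieldSet {
	out := fieldSet{}
	if len(locations) == 0 {
		return out
	}
	for field := range locations[0] {
		supportedEverywhere := true
		for _, loc := range locations[1:] {
			if !loc[field] {
				supportedEverywhere = false
				break
			}
		}
		if supportedEverywhere {
			out[field] = true
		}
	}
	return out
}

// sorted returns the field names in deterministic order for printing.
func sorted(fs fieldSet) []string {
	var names []string
	for f := range fs {
		names = append(names, f)
	}
	sort.Strings(names)
	return names
}

func main() {
	clusterA := fieldSet{"spec.replicas": true, "spec.template": true, "spec.paused": true}
	clusterB := fieldSet{"spec.replicas": true, "spec.template": true}
	// Only fields every location supports survive negotiation.
	fmt.Println(sorted(negotiateLCD(clusterA, clusterB))) // [spec.replicas spec.template]
}
```

The negotiated set is what would then be published to subscribing workspaces via an APIExport, so a workspace can only use fields every target location can actually serve.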
C: Andy, yeah, go ahead, thanks. Just a quick question: is this meant to interact with, or be used by, the cluster controller that today is doing the API syncing?
B: Yes, the syncer. So the basic idea behind that is, in the context of virtual workspaces, there will be a virtual workspace which is basically the data source for this syncer. The syncer will get a URL where it sees all the tenants of those API imports, basically all workspaces which want compute, and you will see all deployments, for example, in this virtual workspace.
E: It would expose all the APIs, but with the real, you know, all the discovery that corresponds to the negotiated APIs of all the objects that the syncer of this ExternalSync would want to sync. So typically, if the syncer would point to this URL, it would get exactly the discovery, the objects that it needs to watch. Am I right?
D: So that's good. Stefan, to go back a little bit: one thing, and we probably need to get terminology in place, but I think there are two types of strategy at play, and LCD and GCD is the one type, when we're talking about how you want to approach the type and its reconciliation. We may want to come up with a separate name for that, or name it in terms of the context in which it's being used.
D: We'll probably have hard-coded handling behavior in the syncer for specific types, right? Whether that's called a strategy, or the sync strategy, or transformation strategy, whatever that might actually be in the long run.
D: The syncer as a controller framework, unlike the kubelet, should not be this, like, totally closed system, because I think we can imagine many different types of syncers, and we can imagine many different types. So there is a design constraint on the syncer in the long run, I think, which would be the idea of transforming the type as it comes out, or one type splitting into multiple. We might want to spend more time on that as a construct within the syncer than other types of controllers would, right?
D: So this is a good terminology opportunity to split the concept of how the object is transformed, or how the object is unified; strategy is a good word for it. But when you talk about transformation, to give an example: I'd say a concrete example would be, you have an Ingress object in a logical cluster, and that Ingress object is transformed into a Gateway API object as part of a strategy for saying, hey, on this cluster...
D: That strategy effectively looks at the Ingress, maps it to a Gateway object, and creates the Gateway object. The status, the summarization of it back to the Ingress, might include... so there's a part of that strategy that says: I'm converting the Gateway API object status to nothing, or I transform it to an annotation in the metadata of the Ingress object that a higher-level controller looks at. So...
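A minimal Go sketch of that per-type strategy shape: a transform on the way down to the physical cluster, plus a status summarization on the way back up. This is hypothetical naming for illustration, not actual syncer code:

```go
package main

import "fmt"

// object is a toy stand-in for an unstructured kube object.
type object map[string]string

// strategy, in the sense discussed above: a per-type transform down to the
// physical cluster's representation, plus a summarization of status back up.
type strategy interface {
	TransformDown(upstream object) object
	SummarizeUp(downstream object) object
}

// ingressToGateway is a toy Ingress-to-Gateway mapping: the downward
// transform changes the kind; the upward direction folds the downstream
// status into an annotation on the upstream object, as described above.
type ingressToGateway struct{}

func (ingressToGateway) TransformDown(up object) object {
	down := object{"kind": "Gateway"}
	for k, v := range up {
		if k != "kind" {
			down[k] = v // copy everything else through unchanged
		}
	}
	return down
}

func (ingressToGateway) SummarizeUp(down object) object {
	// Summarize downstream status as metadata a higher-level controller reads.
	return object{"kind": "Ingress", "annotations/gateway-status": down["status"]}
}

func main() {
	var s strategy = ingressToGateway{}
	down := s.TransformDown(object{"kind": "Ingress", "host": "example.com"})
	fmt.Println(down["kind"], down["host"]) // Gateway example.com
	up := s.SummarizeUp(object{"kind": "Gateway", "status": "Ready"})
	fmt.Println(up["annotations/gateway-status"]) // Ready
}
```

Other strategies mentioned in this discussion (identity, filtering, drop) would be further implementations of the same two-method interface.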
D: RBAC might have, like, a partial identity strategy, or a filtering strategy, where some RBAC rules are dropped when copied down, or some role bindings... RoleBindings have a drop strategy. The name for what the strategy is: it might be code. I think in the very early days of the project, when we were kind of spitballing...
D: I was anticipating that. On the order of, I would probably expect us to have maybe 20 or 30 type-specific transformations, some of which might get very complex and involve other objects and might themselves be complex. And then there might be a large class of, like, identity-like, standard transformations: identity, CEL. You could imagine someone being able to define and say: oh, here's my new CRD, here's the strategy for it, here's a CEL transform, do these two things. And each of those strategies could either be contributed in code...
D: Eventually we might have APIs for them. In the short run it's all code, written in the syncer, hard-coded and deployed, but one of the advantages of code is someone could audit that, review it, and say: oh, I'm working in a high-trust environment and I'm actually going to make additional rules. I'm going to take the existing syncer and say, hey, you know that deployment strategy that's somewhat Deployment-aware? I'm going to do things like: oh, this has a pod template.
D: I will forcibly strip all of these fields out, and it is not a one-to-one transformation, and I can review that and put it into a high-stakes environment and say: I'm confident that the only objects that show up on the other side have these characteristics, or something. So there's a bunch of depth under it that is probably similar to ExternalSync, but it kind of starts getting into this...
D: ...like, the configuration of the syncer. I just wanted to note that strategy is a word we have used for that, but we haven't actually formally come up with a clarifying word for it, like transformation strategy or sync strategy or resource sync strategy, and then put a definition in place. It's probably time for us to start doing that, maybe in the transparent multi-cluster design, or in a separate new design which sits alongside transparent multi-cluster, and this is the intersection of those two.
E: Of course there were a number of things to, you know, clean up or rethink, but the main idea was to let the syncing be done in two steps, which were both, you know, controller-based. That means that, yeah...
D: I think we need to actually have a design that we're all going to review, because it's going to be fundamental to the project. And it might be that some of the trade-offs there, that the two-step one has, would not be acceptable because of amplification; or that those are just two parts that should be in the same controller; or that we actually want to conceptually think about it as two steps, but it might actually just be treated as one step from the perspective of code organization. All of those are valid points, sure, yeah.
E: My question was also: do we expect this to be, I mean, finally, easily pluggable from outside? Or do you have to inject code, like, you know, really into your kcp, as with admission controllers, always?
D: So, thinking about the spectrum of choices we have: the syncer, I think, if done right, could be a very important component, but we still need to keep prototyping down the path we're on to get closer to... you know, maybe it's not as important that people be able to fork it, or maybe it doesn't need that much flexibility. Our first use cases are going to be entirely hard-coded, so it's mostly a theoretical exercise to frame a little bit of what room we'll leave for ourselves in the API, or being able to say...
D: It should have good code organization based on the problem we have now, and then we will refactor the code organization, and then we will look at the path between... Unlike webhooks, which were one particular point on a design spectrum where we knew going in it would be the worst of all possible trade-offs, the best of the worst options, what we are saying, I think, with the syncer is that we're working from the do-it-all-in-code end.
D: It's a monolith. Then we will look at flexibility, code organization, and structure once we have enough examples, and then we will take use cases for things like: hey, I want to be able to add a new CRD to the high-level control plane that exists on one of these clusters, or doesn't exist on one of these clusters, but will transform into an object that neither the syncer nor kcp knew about ahead of time. And we will take design requirements for what that looks like once we understand what that problem is. As Jason...
A: ...is saying, yeah. I want to take a second to surface some questions in the comments. Paul asked: is versioning applicable on these strategies, based on what is consuming the transformed object? I think versioning is not something we've thought about so far, but we definitely need to have a versioning strategy involved somewhere, or else upgrading kcp and upgrading the syncer on the cluster is potentially going to have a different implementation of what the transformation logic is, either when it's hard-coded, by upgrading the syncer, or when it's pluggable, by changing the pluggable...
D: ...strategy. And the mental model, Paul, that's very useful is that clusters are going to have a small number of types. Physical clusters should have 20 CRDs, 20 custom types, and it's 50 of the same types over and over and over again. The design point we are optimizing for is a small number of APIs on physical clusters that are used generically by a broad ecosystem of novel and exciting APIs that might want to translate down to those boring old standard kube APIs, maybe with some standards.
D: So, like, we don't want to support a thousand different gateway and ingress APIs on physical clusters. We want people to end up with the same physical clusters in lots of places, because that benefits everybody, because the end goal of this is that workloads move across clusters, and every API that's different on a physical cluster is a failure, or a shard. That's going to happen; it will be important, but it won't be as critical as the other side of the equation there.
A: Right, we will schedule around compatible physical clusters with compatible APIs. We want that to be uniform so that we have as much flexibility as possible in the face of incompatible APIs.
F: Yeah, I can... So, do we look for implementations that are one-to-one, to Deployments, Pods, and whatever, or do we look into collections of resources as well? Because I do see this as an opportunity to have, for example, a Helm syncer, where the input would be a Helm package and the output would be, like, resources on kcp that get deployed.
D: It is not. I would probably say: think of the pipe between... The use case is transparent multi-cluster, and transparent means you're either at the control plane, translating big APIs into kube-like APIs, and then the syncer is the pipe, or, rarely, you have a complex type in the control plane that the syncer pipe does some expansion of. I would say that's rare, or more rare, but it could be that we come back to it. It's like, imagine a Helm object: I don't really think we want to have lots of different types of syncers.
D: I don't think that's the design principles and use-case principles we're organized around, because if the syncer is pretty predictable, then where most people actually do their work is that they take their control plane objects and they explode them. A Helm chart, as a real example, would be an explosion on the control plane side, because the point...
D: ...impossible. I would say it is a valid point on the design spectrum. The syncer is kind of towards the more physical, same-set-of-objects-all-the-time end. The other is more of a control plane API that's just, you know, any expansion; it's totally reasonable to do that at the control plane and to summarize it.
F: Okay, so if I understood correctly, then this would be like a replacement for federation, or some form of federation, with synchronizing resources across different physical clusters, right? So, basically, the end goal is that kcp always receives the resources that we are going to produce somehow, and kcp is going to distribute those across available physical clusters.
D: The syncer component is intended to transform mostly objects from similar forms to similar forms, for the express purpose of keeping the intention of an API the same when you're in multi-cluster contexts. So it's a little bit like Federation and Federation v2, but they strayed from some of these paths, and there are similarities to other things. It is a useful first-order approximation to say that; I think there's a ton of nuance that we don't know yet, where we may find... the example we've...
D: The example of the most dramatic transformation right now is, like, Ingress to Gateway API and back, or a high-level PVC to an underlying representation that actually doesn't look anything like a PVC does today. Those are transforming the high-level intent to achieve an outcome: if you're on a cluster, creating a PVC means something, and we're kind of trying to take the idealized version of what creating a PVC means and let it work in a multi-cluster context.
D: We think that if you do that, you're going to lose some of the other benefits, like scheduling and placement. So kcp as a problem is excluding a certain class of, you know, types of controllers that start at the control plane and act individually on physical clusters; but we don't exclude those. OCM and, like, a lot of the high-level control planes, like hub cluster patterns people have, you know, GitOps, a lot of those are high-level control planes refining onto controllers and clusters.
A: Yeah, I also want to make it clear that we're not just talking about the transformation one way, from the high-level thing to the low-level thing; it's also aggregating. You know, you gave me an Ingress and I splatted 30 objects out here to make it work. I also need to be able to un-splat those 30 objects back into an Ingress status, to sync it back up to kcp in a way that it understands. I think it's easy to imagine how splitting works in CEL.
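The un-splat direction, many downstream objects folded back into one upstream status, can be pictured with a trivial replica example (toy code with invented names, not the real syncer):

```go
package main

import "fmt"

// downstreamDeployment is a toy per-cluster shard of one logical Deployment.
type downstreamDeployment struct {
	cluster       string
	readyReplicas int
}

// aggregateStatus is the "un-splat": summing per-cluster status back into
// the single status the logical cluster's object reports.
func aggregateStatus(shards []downstreamDeployment) int {
	total := 0
	for _, d := range shards {
		total += d.readyReplicas
	}
	return total
}

func main() {
	shards := []downstreamDeployment{
		{cluster: "us-east", readyReplicas: 7},
		{cluster: "eu-west", readyReplicas: 8},
	}
	fmt.Println(aggregateStatus(shards)) // 15
}
```

Real status aggregation is messier than a sum (conditions, partial failures), which is exactly why the split and aggregate halves of a strategy have to be written and tested together.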
A: It's a bit harder to imagine how splitting and aggregating work in CEL. I mean, it's not impossible either, but you have to write two things now, and test them, and make sure that they work together, yeah. Stefan, I want to make sure... was there anything else you wanted to talk about on the external sync API doc?
A: Yeah, and this will be... So the namespace scheduler currently does discovery and watching of resources on a workspace, and it just says: give me all your types, and let me know when any of those things change. It should instead only... well, it will look at everything and label things. The syncer will only look for certain things and pull them; it will only see those things it should see.
A: Yeah, that makes sense. The namespace scheduler should only look for things that the syncer will care about, and shouldn't bother updating all kinds of other things. Steve had a question: don't you do discovery on logical clusters? Yes, we are doing discovery against the logical cluster; we should only look for resources of types we know the syncer will care about.
E: Yeah, yeah. And if I understand correctly, the virtual workspace in this regard, for syncing, is mainly about reconciliation of the models that are published by each logical cluster. Because when you want to watch consistently, to do a consistent list-watch across a number of logical clusters whose APIs are individually a bit different, then you have to use the published model. That is the reconciliation of the schemas in all these logical clusters, which is the goal of the APIExport that would be made available through a virtual workspace.
D: I do want to note, and maybe this is something I forgot to bring up when we were talking about it, that one of the things that a syncer, like a kubelet, is doing is holding a bunch of types in memory from the source model, the upstream, the logical cluster model of here's-what-should-be-here. And it is also doing the minimum it can to efficiently apply and keep those things in sync on the cluster, which probably means holding them in memory. Like, we're not really good...
D: There's a... we basically do a bunch of rough math. At least algorithmically, we know that that's probably manageable even today, because the things we watch and the things we create are roughly one-to-one on average, and it's a subset of the total control plane workload on that cluster, because we're not creating more things on a cluster than the control plane can already handle. So, you know, think about this as: you have a physical cluster, you divide it into four locations, and there are four different syncers.
D: Each of those four is probably one-fourth the total resource load, and we're not putting more things onto that physical cluster than the physical cluster would normally do; maybe, like, 50% more, because it's efficient, but not an order of magnitude more. So, in theory, one of the advantages of a syncer is that it is kind of modeling a type of controller which brings everything into memory, and the moment you have everything in memory to solve a problem...
D: ...you open up some doors. Oh, there are types of problems that are very efficient to go solve like this: I've got a bunch of policy objects and I've got a bunch of desired-state objects, and when it's all in memory, and at a scale of, you know, tens of thousands or hundreds of thousands of objects at most, which already kind of has to be reasonably efficient, you really open up some opportunities. There's a lot of transformations that you can do that become super cheap.
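One concrete flavor of that everything-in-memory advantage: with both policy objects and desired-state objects held locally, a cross-object check is a single in-memory pass rather than per-object API round-trips. A toy sketch, with hypothetical names:

```go
package main

import "fmt"

// policy is a toy per-namespace policy object held in the syncer's memory.
type policy struct{ maxReplicas int }

// desired is a toy desired-state object destined for the physical cluster.
type desired struct {
	namespace string
	replicas  int
}

// violations scans the full in-memory model in one pass, the kind of
// whole-view reasoning that becomes cheap once everything is local.
func violations(policies map[string]policy, objs []desired) []string {
	var out []string
	for _, o := range objs {
		if p, ok := policies[o.namespace]; ok && o.replicas > p.maxReplicas {
			out = append(out, o.namespace)
		}
	}
	return out
}

func main() {
	policies := map[string]policy{"kcp-a": {maxReplicas: 5}}
	objs := []desired{
		{namespace: "kcp-a", replicas: 9}, // exceeds its policy
		{namespace: "kcp-b", replicas: 3}, // no policy, passes
	}
	fmt.Println(violations(policies, objs)) // [kcp-a]
}
```

At the scale mentioned above, tens or hundreds of thousands of objects, a linear pass like this over in-memory maps is trivially fast, which is the point being made.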
D: We need to be open to exploring those as they come up. We're not doing it right now, but it's the stuff like: oh, you can actually reason about and do things with those objects, because you have a view of everything here that's going down to there. It's a little bit like the scheduler in kube. We were able to, you know... the scheduler in kube is a pretty rich modeling system of the system; you can make a lot of important things...
D: You can determine a lot of things that the scheduler is doing and visualize that to the people using it. That is very valuable, and we leverage only one percent of it, right? Like, you know, the scheduler's making decisions, but the whole thing about, like, the descheduler being like, oh, the decisions were good a week ago, they're not good now: the scheduler already has all that info in memory, we just don't leverage it efficiently.
D: In all cases, that kind of synergy is really important for the syncer, and probably for new types of controller patterns in the future, where you have a control plane and a shard and you're bringing all this into memory and making some efficient stuff. That pattern will be generally useful; there are a lot of places where you can be like...
D: Oh, this could be very, very useful for how someone understands what workloads are showing up on a cluster, from a security perspective or a relationship perspective. And we don't have to do anything about it right now, but it is useful to think about: we are opening the door in the ecosystem for things that benefit from that kind of bird's-eye view of, here's everything that's going on on this cluster. Oh, we can draw the relationships, report security, add transformations; even some of the strategy stuff we're talking about, doing OPA at that level.
D: You know, systems that call out to other systems to say: oh, I want to treat these as an atomic unit and then accept or reject them altogether. There might be security policies, scanning techniques, transformations across types. Like, hey, if you get this kind of docker.io/library/postgresql:latest, instead of just naively sticking that in, there may be more advanced policy things that you could do, which is like...
D: Oh, transform that to a URL that talks to a global distributed registry that has a local on-cluster cache, that offers, you know, 75 percent better latency for starts, and also takes security considerations out of the picture, because there'll be a separate chain for authorizing that. So some of those transformations will be very powerful. I think that's another thing that we don't have to solve right now, but we're setting up a goose that might have golden eggs.
F: And if I may ask: did you guys outline the capabilities that this system might have with disaster recovery of workloads, moving workloads between clusters and stuff like that? Yeah.
D: Okay, cool, yeah. This, Jason, I think, is the transparent multi-cluster design; we need to get to the next stage of it, where we get some of this stuff down. So, after we get through prototype 2: like, you know, there's one doc that's shared, but there should be a couple of other docs that sketch out some of these paths, and we should probably make sure we're divvying that up and getting it down.
A: Yeah, let me share that doc here, just to make sure we have it shared with the kcp list. But yeah, one of the first things we ever wanted, in fact the first-ever prototype, was giving kcp a deployment and having deployments scheduled on two clusters. I mean, the networking wasn't set up and everything...
A: But at least you could see: I asked for 15 replicas, and seven are here and nine are here, or whatever. So yeah, and that's a zero-downtime disaster recovery scenario, because there are replicas in both places. In the meantime, we're sort of working on generalizing how to do that with anything, and how to do it by just moving stuff, instead of having two copies: moving things when a cluster goes down. That's prototype 2, and then layering networking on top of it, and layering volumes underneath it, and all of that fun stuff.
A: Great. Yeah, any more topics on transformation and syncing, or any ideas about policy that have come out of this? I think it's all very exciting; we should give a KubeCon talk about this stuff.
A: That's what they call a callback. All right, unless there's anything else to discuss, I'm more than happy to give everybody 12 minutes back. All right, times 12 people, that's like 144 minutes.
A: That's right, that's right: shareholder value, everyone. All right, take care. Have a lovely day and week, see you around on the internet. Bye, everyone.