From YouTube: Community Meeting January 4, 2022
A
All right, hello everyone, welcome to the kcp community meeting, January 4th, 2022. We are officially living in the future. We have a somewhat packed agenda. The prototype two goals will probably take a while; if there's anything else anybody wants to talk about, feel free to toss it on there. But Stefan, who I think is here—
A
Great. Stefan wanted to talk about prototype two goals and what we are hoping to get done before—I think we said the end of this month. So yeah, I think the current source of truth on that is this project board. If that is not the case, speak now.
A
I don't know how we want to go through these—top to bottom or whatever. This one's me.
B
A
One was a source for bikeshedding: structured logging. Oh sure, sure, yeah. I also think that one feels optional to me—definitely nice to have, but not something we need for a prototype. The people using the prototype won't be aware of what logging we're using anyway. So maybe as we go through them, we can take note of which ones don't necessarily fit.
A
I know that, for the first two at least, we will need to be able to register a physical cluster with kcp and install the syncer. I would guess this means doing better than the current thing we do today, because the current thing works but kind of sucks: we just ask you to give us a kubeconfig, with all the powers that enjoys, and then use that to install our agent.
A
This might be one, Stefan, to your point: if we start running out of time, this might be one that gets pushed off, because it works. It works okay, but it's not ideal. It's not what we want to go to prod with.
C
Of all of the things—giving someone root to a whole bunch of things and then going and doing your own stuff is great for user experience, but it's really bad for the mindset of running something as a service. So of all of them, we should probably say: figure out what the minimal set of permissions you need is. Even if that minimal set of permissions is the exact same permissions we have today, that would probably be an acceptable minimum path, which is: someone needs to be able to install the agent.
A
Yeah, I think items one and two are related, in that the correct way to register your cluster is not to give us a kubeconfig and let us play around in it and do whatever we want; it's to install the agent, and that agent subscribes or registers itself.
C
Well, someone needs to define the permissions you need to run that agent on a given cluster, and then be able to clearly document that so someone could do it. Having a simplified, easy flow is great, but that's an add-on on top of that basic thing, and I think that would probably be the acceptable minimum.
A
Yeah, so I will at least write down better, in that multi-cluster doc, what I think the registration process should be, and then, if we all agree on that, we can start writing code to flesh it out.
A
But I think we've even made progress in terms of the syncer not being responsible for creating namespaces, for instance—the discussion we had probably about a month ago about the cluster controller, or something in kcp outside of the physical cluster, creating namespaces so the syncer won't have to. That's a reasonable—
A
—a good, positive change in the permission structure that we're defining. So anyway, those are still on my list, and I will write better docs for that and then pass them around, and if everybody agrees, we can start writing code. Next: sketch a design idea for identity unification between kcp and physical clusters. Does anyone know more context than that?
A
D
B
There's a document about shared critical data; there is a section in it where I sketched some ideas, but I agree, this is probably post-prototype 2. The technology is basically what you developed—proved it works—and now we have to model something on top to make use of it. But this is for the next prototype.
A
Great, great. I'll let Andy, or whoever wants to, fill that out—add that context to the issue and close it, because it seems like we're done for now. Minimal RBAC: is this related to minimal physical cluster RBAC, or minimal RBAC for logical clusters/workspaces?
E
There's a—I don't know if some of you know me; I'm with the OpenShift auth team, previously monitoring.
E
So Stefan pulled me in to look at the auth bits—thanks for that—so I'm actively looking into that one. We have already had one design session with Stefan on this, so I have a couple of slides, and I'm literally getting my feet wet inside the code base. I think I at least found the right spots where I could hook in. Where I'm currently at is discussing with Stefan what the correct abstractions are that we should use to implement the RBAC logic.
E
In this one document that Stefan shared with me there is, I believe, a current active discussion around how we want informers to be implemented and, at the same time, have knowledge of logical clusters without exposing the information, and that's probably something we want to inherit also in the minimal RBAC implementation. There's also a dangling PR currently out there which I'm looking at, which falls into that category. So yeah, actively working on that one.
A
Awesome, great. Thank you for taking a look at that, and let us know how it goes. Cross-workspace list/watch controllers—Steve.
F
"A user can add a second location. Application moves between locations. Ingress follows application." Which doesn't, to me, refer to any of this. We have a number of controllers that list and watch across workspaces right now.
A
B
F
A
I don't know if you were asking me or if I have the answer, but I can give you one, which is: I don't think I care how the prototype works so long as it works, right—people consuming the prototype aren't aware of the piles.
F
C
So the attempt was to get the doc in place for some of the basics of that, to cover the high level, but I think it probably is the right time, Steve, to go back from an experience point. Maybe Rob and I can take a stab at some parts of it, and get a couple of other folks involved on the team, to go through and ask: what's the—because we talked about this, and Stefan has some examples of, like—
C
What's the oc experience, what's the kubectl experience? The mindset for prototype 2 was very much, as Jason was saying, to put the best foot forward for showing the big ideas, and then at that point we have something we can point at and say: hey, we talked about it last year in May, we made this big pitch that we're going to change the future—prototype 2 is a good enough realization of it that we can say, yeah—
C
—prototype 2 is the foot forward for saying: here's the future of kube, here's exactly what it does and why, and here's why we think it's important. And that pitch will be the thing we talk to customers about, or take to community meetings—like SIG Multicluster—and say, hey, we really want to pitch this; like Jason did the original pitch of kcp, prototype 2 is out, baby, we've got this great unified pitch. So maybe it starts with that deck, as you're saying, Steve.
H
Okay, one way to think of this is: imagine there's some big, advanced kube user and they want to go, hey, I think kcp solves a lot of the problems my teams are running into—can I take prototype 2 and host an instance for my team to just poke around with? Does it even work, minimally, for that use case? I think that's kind of what I'm aiming for.
G
So am I right in saying that it's mainly gathering all the basic concepts we've been working on? You know: control of logical clusters through workspaces, which you can create; plugging RBAC on top of this, so that each user only has access to what is theirs; and then being able to have a minimal experience of sharding—saying, okay, I have one workspace on this shard and another one over there, and I can query something on both. I mean, just—
C
—showing the single instance as if it were a shard; there's a hypothetical place behind the curtains where someone can use kube-like things to share stuff. So, more of Andy's stuff: the CRD stuff and the inheritance—how do you operationally scale rolling out a CRD to tens of thousands of applications?
C
How do you keep control across clouds? Some of these points are like: well, how would you roll out a cloud load balancer API change to 55 different cloud clusters? Stuff like that. And then maybe other parts of it are the workload movement—just enough of the workload movement, which is something we had some basic examples of, but we're building up to the more general case.
C
So I think this is, as Rob said, we need to go show it. Even stuff like the identity—we don't have to have the right identity unification between the control plane and a physical cluster, but we want to be able to say, well, Red Hat's going to make some recommendations based on where we're going, and that's going to involve Keycloak and the CIAM effort, but it really is just about getting the right connections at each kube cluster, and here's how this whole thing fits together.
C
So there's a kcp version of this, and then, as Rob said, when we go to customers we'll have the Red Hat stuff around it, and then we want to be able to potentially show how someone could come and plug in a different view around it. Right—this is, you know, a large customer or large deployer of open source kcp, and they want to build their own integrations: where would they plug in?
A
But I think the missing thing from Steve's question was: we would like to have something we could record, or take, or show to the potential people we would inflict this on, but there is not a specific event—like a date, minutes left until a user is looking at it. Or is there?
C
So I'd like to—I mean, there are a couple of different sticks in the ground. We talked at KubeCon EU about this last year, and we spent a lot of the year—there have been a lot of people who've kind of poked at it. This will be something that's even more real to poke at, so having it in place significantly ahead of KubeCon EU is actually a really good goal, because then we can actually do something much more concentrated—
C
—we made a lot of progress. Some other ones probably would be: prototype 2 gives us the foundation that allows us to figure out the concrete places within the kube ecosystem where we'd want to make changes, and start integrating into those—so, discussions with some of the SIGs and all that. So post-prototype 2 is kind of the "let's get serious" about making this not just this one idea that you could see, but putting the parts in the right places.
F
E
F
A
Yeah, and I think we all agree—please let me know if you don't—that how gross it is to set up these controllers is out of scope for the prototype. It might be a little gross, we can probably do better, but something is way better than nothing in this case.
H
A
B
It's basically just a proof of concept—it works. There's a patched client to do that. We have a big work item after prototype 2, which is basically API imports/exports, and that will include a virtual workspace to offer this view onto the right workspaces, and that will probably be the real basis for those controllers. It will be similar—I mean, the basic technology is the same—but it will look much different, and that one will be something we can show and maybe even mock up in some slides before there's a call.
A
I
Yeah, we have this use case that says image pull secrets are copied, so I guess we have something that has been copy-pasted everywhere. I didn't really understand that.
I
So from the list of objectives, I would say everything is done. I just want to clarify: does multi-location mean there will be one active location and a passive location, or are we talking about being able to send traffic to two different physical clusters right now?
A
—is nothing. So I don't know if Rob or Clayton or others disagree with that.
C
Silence equals agreement, yeah. We should probably write down that objective and get it in a formal sense of what we are willing to accept, and maybe the act of getting the user flow through it will really help, because we kind of left it at saying we really want to at least be able to show getting to the right—
C
Ingress and failover is good, but we know the syncer will be behind on the failover side, so we're going to be a little bit further behind on the underlying infrastructure. So maybe we focus on things like: let's do a failure that the syncer doesn't react to, and focus on those kinds of things, because it's still a good demo to have two clusters and show ingress working on both of them—one of the clusters you just shoot, and ingress still roughly does something, right? That papers over—
C
That's ultimately what—I mean, everybody who has an AWS East region found this out, right: at the end of the day, you're just papering over what happens. You're not reacting to a big failure like that; you have to have your reactions in place—if you're reacting to it, it's too late. So a lot of the automation we want to add to the syncer for moving things and bringing up new replicas, that's a happy path.
C
So having a good active/passive mindset around the prototype 2 demo would be: hey, look, the region goes down, the clusters in those regions are also down—how do we show the world what we're going to do? You're going to have all this crud in all these other places; you need to be able to test that and build yourself into the mindset of it—stuff will fail all over.
C
A
Yeah—we've talked about this before, but it would be great to write it down better somewhere: a lot of this is useful—like you were saying, I'm going to paraphrase—not just for when disasters happen, but to be able to simulate disasters better, and just have a chaos monkey constantly killing one of your clusters forever. That way, when one goes down for real, you don't care, because you've been testing it forever. Yeah.
A
Yeah, but to Joaquim's question: the requirement for this prototype, I just want to make sure, is not that ingress will always send traffic to both locations, but that when one location dies, the ingress will pick up the traffic and move it to the passive backup.
C
We probably need to talk through it, because probably the traffic should go to whatever the active is, but yeah, it could be that we just say if the active isn't responsive it goes to the passive, and it might also be that you can put traffic on the passive.
C
I
C
Since we're still building out any cross-workspace machinery, I'd say that's acceptable, and if we have other examples of controllers and experiences we want to show, we should feel free. We are going to work on getting cross-workspace controllers narrowly working and then broaden it, versus lumping it all together. Is everybody okay with that? I saw some nods.
I
A
David, you are next with "switch to a logical cluster." Do you—yes.
G
Understood. So the main part is nearly finished; I should have the pull request merged this week, I hope, which is mainly the virtual workspace that manages personal and organizational workspaces.
G
Mainly, you point to this API server path, and then you do a get on workspaces and you get the workspaces you have the right to access. So the overall mechanism and machinery is there for this now. What would still be required to have the whole scenario working would be integrating with the workspace and workspace shard controllers, so that when I get the workspaces from there, I also get the kubeconfig and all the secrets that are related to this workspace, and then we would be able to build—
G
—for example, a kubectl plug-in: very simple, client-side stuff that would get this and be able to switch directly to the underlying kube context that corresponds to this workspace. So the client part of it, and mainly also the integration with the other components—mainly kcp instances with shards and the workspace and workspace shard controllers—this I still have to do, but it would probably be the next step, just after merging the PR for the virtual workspace.
A
A sketch—you mentioned a kubectl plug-in. Do you have a sketch of what those commands would be and what they do, while we're discussing?
G
I don't know if there is a dedicated document for that, but typically: change workspace. By default you're connected with a kubeconfig which just gives you your list of workspaces, and then you do kubectl get workspaces.
G
That would get all the related information to build a kubeconfig and switch to it. So it's mainly just pointing to the virtual workspace, with a REST subresource, and getting the right information. I would still have to implement this REST subresource, but in fact it's mainly just grabbing some information from the workspace shard and the associated config maps and secrets that Steve already built.
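A minimal sketch of what the client-side flow described here might look like in Go, assuming a plugin that, given a workspace name and the server URL returned by the virtual workspace, writes a kubeconfig context and switches to it. The URL shape, token handling, and naming are assumptions for illustration, not the actual kcp API.

```go
// Hypothetical sketch of a "switch to workspace" plugin step: write a
// kubeconfig context pointing at the logical cluster behind a workspace.
package main

import (
	"fmt"
	"os"

	"k8s.io/client-go/tools/clientcmd"
	clientcmdapi "k8s.io/client-go/tools/clientcmd/api"
)

// switchToWorkspace adds (or overwrites) a context for the workspace and makes
// it the current context in the given kubeconfig file.
func switchToWorkspace(kubeconfigPath, workspace, serverURL, token string) error {
	cfg, err := clientcmd.LoadFromFile(kubeconfigPath)
	if err != nil {
		cfg = clientcmdapi.NewConfig() // start a fresh kubeconfig if none exists
	}
	name := "workspace-" + workspace
	cfg.Clusters[name] = &clientcmdapi.Cluster{Server: serverURL, InsecureSkipTLSVerify: true} // CA wiring omitted in this sketch
	cfg.AuthInfos[name] = &clientcmdapi.AuthInfo{Token: token}
	cfg.Contexts[name] = &clientcmdapi.Context{Cluster: name, AuthInfo: name}
	cfg.CurrentContext = name
	return clientcmd.WriteToFile(*cfg, kubeconfigPath)
}

func main() {
	// Example invocation; in a real plugin these values would come from a
	// "kubectl get workspaces" call against the virtual workspace API path.
	err := switchToWorkspace(os.Getenv("KUBECONFIG"), "my-team",
		"https://kcp.example.com/clusters/my-team", "<token>")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```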
G
A
All right, I am now realizing that there was a whole label for prototype 2 that I have not been using, that I should have used instead of the project board, but that's okay—we've gone through most of these already in a different order. How exciting. Physical cluster health checks: I think that falls under the same installing-an-agent flow. Right now we do something—we currently check—
A
—if the agent we've installed is ready, and if it's not, or we can't connect to it, then we consider that cluster unhealthy, and that will trigger stuff to move away. I think what we have works well enough, but we should definitely at least have a design, and if not an implementation, for what to do better.
A
That will probably look a lot like, or have a lot of overlap with, the registration process, because the registration process is probably going to look like what ACM does, where they take out a lease and health-check on that lease.
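For reference, a minimal sketch of the lease-style heartbeat mentioned here, assuming an agent that renews a coordination.k8s.io Lease and a controller that treats a stale RenewTime as "cluster unhealthy." The namespace and naming are hypothetical, not what ACM or kcp actually use.

```go
package heartbeat

import (
	"context"
	"time"

	coordinationv1 "k8s.io/api/coordination/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// RenewLease would be run periodically by the registered agent, bumping the
// lease's RenewTime to prove the cluster is still reachable.
func RenewLease(ctx context.Context, client kubernetes.Interface, clusterName string) error {
	leases := client.CoordinationV1().Leases("kcp-cluster-heartbeats") // hypothetical namespace
	lease, err := leases.Get(ctx, clusterName, metav1.GetOptions{})
	if err != nil {
		return err
	}
	now := metav1.NewMicroTime(time.Now())
	lease.Spec.RenewTime = &now
	_, err = leases.Update(ctx, lease, metav1.UpdateOptions{})
	return err
}

// Healthy is how a kcp-side controller might interpret the lease: a RenewTime
// older than maxAge means the cluster is considered unhealthy.
func Healthy(lease *coordinationv1.Lease, maxAge time.Duration) bool {
	if lease.Spec.RenewTime == nil {
		return false
	}
	return time.Since(lease.Spec.RenewTime.Time) < maxAge
}
```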
A
So I think this one will also be related to the doc and design for registration, health checks, and permissions.
D
Wasn't that about making sure that, if you've got two namespaces that are identical but in different logical clusters, we can map them without, you know—
A
Yes—without issues in a physical cluster, yeah. Transforming—I might rename this to be more clear—but transforming the namespace name on the way down in the syncer.
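A minimal sketch of the kind of downstream namespace-name transformation being discussed: the syncer maps a (logical cluster, upstream namespace) pair to a unique namespace on the physical cluster. The exact scheme shown here (a hash suffix with a readable prefix) is an assumption for illustration, not kcp's actual rule.

```go
package transform

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// PhysicalNamespace returns a deterministic, collision-resistant name for the
// downstream namespace backing upstreamNamespace in logicalCluster, so that
// identically named namespaces from different logical clusters don't collide.
func PhysicalNamespace(logicalCluster, upstreamNamespace string) string {
	sum := sha256.Sum256([]byte(logicalCluster + "/" + upstreamNamespace))
	prefix := upstreamNamespace
	if len(prefix) > 40 {
		prefix = prefix[:40] // keep the result within DNS-1123 length limits
	}
	// 12 hex chars of the hash is enough to disambiguate in practice.
	return fmt.Sprintf("kcp-%s-%s", prefix, hex.EncodeToString(sum[:])[:12])
}
```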
G
—install through kcp some workloads that will finally live on the physical cluster, where these workloads expect a given namespace name. Because there are a number of things—controllers or other workloads—that, for some reason, expect to be in a given namespace: openshift-something, or anything else. So, in terms of keeping the door open for backward compatibility of things we would like to run through kcp without completely rewriting them.
A
G
A
G
A
That namespace right there. Maybe there are other ways we could lie to them—make them think that they are; we fiddle with the downward API, if that's what they're using, or do something else to encourage them to believe they live in the right namespace, even if they aren't. Yeah, because—
A
Yeah, short-term, while I'm building this, I don't mind putting in a back door that says, hey, disable this—or even just hard-coding specific namespaces we know cause problems for now, and then getting rid of that over time. That seems fine.
G
Because it seems to me that's also dependent on the way consumers of kcp want to use it—on the topology, the plan; how they see the association between logical clusters and physical clusters, how they would spread the workloads across physical clusters. I mean, it's mainly related to the topology of this, so it might be that, short-term, some consumers would—
A
Image pull secrets: I was working on this yesterday. I think it's going to be the simplest possible case of a transformation that the syncer will apply. It won't be simple, but it will be the simplest possible one I've found so far, and so I'm going to sketch that out, share something—I think today—and start hacking on it.
A
Kubeconfigs point to kcp: this one is, when a workload has a service account that requires talking back to an API server, point that API server back up to kcp instead of to its local physical cluster API server. That one should also be a fairly simple transformation on the resources that the syncer applies to the physical cluster, but I think it's slightly more complex than the image pull secrets one, so I'm going to do it after, based on the experience I gained from the first one.
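A hedged sketch of the "point workloads back at kcp" idea: on the way down, the syncer could rewrite the pod spec so in-cluster clients resolve the API server to kcp rather than the physical cluster. Overriding the KUBERNETES_SERVICE_HOST/PORT environment variables is just one possible mechanism, assumed here for illustration; the real design may instead swap the mounted service account credentials.

```go
package transform

import corev1 "k8s.io/api/core/v1"

// PointAtKCP appends env vars so client-go's in-cluster config resolves to the
// given kcp host and port. Explicit env entries take precedence over the
// service variables the kubelet injects automatically.
func PointAtKCP(spec *corev1.PodSpec, kcpHost, kcpPort string) {
	override := []corev1.EnvVar{
		{Name: "KUBERNETES_SERVICE_HOST", Value: kcpHost},
		{Name: "KUBERNETES_SERVICE_PORT", Value: kcpPort},
	}
	for i := range spec.Containers {
		spec.Containers[i].Env = append(spec.Containers[i].Env, override...)
	}
}
```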
G
So those transformations would be directly implemented in the syncer code, since those are really systematic and generic transformations, right?
A
Yeah, so they will be applied by the syncer, because we don't want it to be apparent to users talking to kcp that we're making those changes. Where that code lives—and, well, where the syncer runs at all is sometimes also appreciated—but where that code lives is probably in syncer code, and how to extend it arbitrarily is still, I think, an open-ended question.
G
And it might be that some transformations that we know we will always apply would still live directly in the code of the syncer, while some other ones—optional or additional ones—might be provided from some external—
A
—way. Yeah, I think there's a ton of open-endedness to this general problem space, like when do we want users to be able to disable this behavior, or how do they, whatever. But starting with one or two concrete cases that we need in order to do basic things, hard-coding them, and then generalizing—that seems like a good path forward. And I think that is everything we have listed as a prototype 2 task.
A
I will go back through and make sure that they are up to date; I think we found a couple that might have incorrect bodies. But with that—if there's anything else related to prototype 2, any burning questions in anyone's mind, please speak now, or whenever.
A
Otherwise, I will kick it to Andy, who specifically did not promise a demo of this.
D
A
D
So the topic is trying to make it so that controller authors and developers don't have to do very much work to make their controllers workspace- or logical-cluster-aware, and it's probably going to be a bit of a large mountain to overcome to get these changes upstreamed and approved. But I think I have something that's workable, so I can walk through the code a little bit and show you all what I've got. Give me just a second to share.
D
Go in here—all righty. So, you know, if you've ever written a controller before—let me get the one that I've been working with. I've been playing around with the deployment splitter, because that was pretty small in terms of scope.
D
So what you typically see is that controllers have one or more listers that they work with—this one works with a cluster lister and a deployment lister—and typically you'll have some sort of add-event-handler call for whatever informers you care about. In this case we're looking at the deployment informer: any time a deployment is added or updated, we're going to call this enqueue function, and in enqueue—
D
Normally, what you would see is a call to something like cache.MetaNamespaceKeyFunc—this is kind of typical code. You'll call MetaNamespaceKeyFunc, which usually gives you a string for the key, where the format is usually something like my-namespace/my-name, and that string just gets added to a queue.
D
So what I've done is—the first change you'll see here is that there's this object key func; I'll show what that is in a second. And then the other big change is: you get this key out of the queue when the controller is processing a work item, and we're going to take this string k and convert it, or decode it, into a queue key, which is an interface that knows about namespaces and names, and once you have a key, we'll set up a sync context.
D
This was something that Stefan had suggested a few weeks ago: that maybe we could have some way to have a context that's controller-related or sync-related, where, if you're in a kcp environment, in a kcp-aware code base, it can make syncing work with logical clusters without your controller having to be aware of it. I'll show how that works in a second. And then, when you look at the process function, given a key, you don't see anything here that talks about logical clusters or cluster names.
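A minimal sketch of the pattern being described here (not the actual code from the demo): encode the logical cluster into the work-queue key alongside namespace/name, decode it when processing a work item, and stash it in the sync context so the controller body never has to mention clusters. All names are illustrative assumptions.

```go
package controller

import (
	"context"
	"fmt"
	"strings"
)

// queueKey is what the enqueue side produces and the process side consumes.
type queueKey struct {
	ClusterName string
	Namespace   string
	Name        string
}

// encodeKey replaces cache.MetaNamespaceKeyFunc's "namespace/name" string with
// "cluster|namespace/name".
func encodeKey(k queueKey) string {
	return k.ClusterName + "|" + k.Namespace + "/" + k.Name
}

// decodeKey turns the queue string back into its parts.
func decodeKey(s string) (queueKey, error) {
	cluster, rest, ok := strings.Cut(s, "|")
	if !ok {
		return queueKey{}, fmt.Errorf("invalid key %q", s)
	}
	ns, name, ok := strings.Cut(rest, "/")
	if !ok {
		return queueKey{}, fmt.Errorf("invalid key %q", s)
	}
	return queueKey{ClusterName: cluster, Namespace: ns, Name: name}, nil
}

type clusterContextKey struct{}

// withCluster is the "sync context" idea: the framework puts the logical
// cluster into the context before calling the controller's process func, and
// cluster-aware clients/listers read it back out, so the process func itself
// stays cluster-agnostic.
func withCluster(ctx context.Context, cluster string) context.Context {
	return context.WithValue(ctx, clusterContextKey{}, cluster)
}

// clusterFrom recovers the logical cluster, if any, from the context.
func clusterFrom(ctx context.Context) (string, bool) {
	c, ok := ctx.Value(clusterContextKey{}).(string)
	return c, ok
}
```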
D
The rest of the code in here is all standard in terms of interacting with resources, and you'll even see that the deployment client itself is just saying "give me deployments for the current namespace" and calling update on it. There's no logical cluster in here anywhere, and the magic is that—I know this is just hacky, prototypey stuff right now—there is a controllers.Config struct that lets you specify a whole bunch of things.
D
What is the key for a cluster-scoped resource, or for a namespaced resource with a namespace and a name—and when you fill out all of these things, the shared informer code, the lister code, everything related to the DeltaFIFO, all of that is going to make use of what you pass in here, and then everything is transparently available whenever you're saying list-with-context or get-with-context from a lister.
D
So I have a whole bunch of these functions which, assuming this gets approved—or if we think it has a chance to be approved upstream—would be something that we would make available, exported from kcp, and we'd give you a simple, hopefully one-liner, where you could just say "enable multi-cluster"—you know, kcp.EnableMultiCluster—and you wouldn't have to code any of this yourselves.
D
But basically all of these things are trying to generate keys that are aware of the logical cluster name—for indexes, for keys, everything gets to be logical-cluster-aware—so that when you are using your controllers, you don't have to worry about any of this stuff. The one other piece that is important is the clients themselves: there's now a kcp HTTP client that implements a couple of methods that the clients need.
D
And so I have a custom HTTP client—this, again, would be exported by kcp and not something you would have to implement—and it's able to basically figure out—
D
—what you can use for your controller. It's currently global—you can only set it once; it was easiest to do it this way for a prototype.
D
I think it may be more palatable upstream, rather than having a global variable or a package-scope variable, to potentially pass this in to shared informer factories and controllers and anywhere it's used, but that is a much more invasive change, so for this prototyping I didn't go with that approach.
D
I haven't thought about the sales pitch yet. I think the hardest part about the sales pitch is going to be around the fact that we need a custom function to do an index for listing everything, whereas with Kubernetes today, when you want to list everything, you don't need a custom function for that—you just go through everything in the store and list it all, because there are not multiple tenants, there are not multiple logical clusters.
C
D
Right, so you have to pass in the index func for indexing by logical cluster. The code today in upstream Kubernetes, when you say list against a lister, just iterates through everything in the store; it does not go to an indexer or an index.
C
I guess what I'm saying is: the default behavior of a cache is "return everything in the cache"; all subsetting on properties is an indexer. Namespace is technically a little special, but arguably namespace should also be implemented with an indexer. So it's a little bit of a midpoint, right—all subdivision is an indexer. So maybe one way to make this argument would just be:
C
Maybe the real problem is that indexers aren't actually well wired for someone being able to say, yeah, I'm creating a cache, here are my indexers, and this is how I want to mentally map it. I agree, though, we're going to have to be really careful about changing the semantics of list, which is: list does mean everything in the cache, not everything on a cluster. So that's a potential risk point—if we try to change that, it could be—
D
I mean, it would work the same: if you go with the defaults, the defaults should just be no changes to current behavior. So if you write a controller and you're saying list everything from a lister, you should just get everything in the cache—it's current behavior. So this is the sort of thing where we're enabling subdividing the cache—
G
D
—to a certain scope. But yeah, we're going to have to work on the arguments here, but—
C
Maybe that's actually the point at which we say: can we go back and reswizzle it so any subdivision within list always goes through an indexer? Therefore, when you think about a lister, you're always talking—you know, the lister is connected to an indexer, and the generation logic is about turning indexers into public methods, for instance.
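A hedged sketch of the indexer idea being discussed: register a client-go index keyed by logical cluster so that "list everything in workspace X" is an indexer lookup rather than a scan of the whole cache. How the logical cluster name is carried on objects (here, a hypothetical annotation) is an assumption for illustration.

```go
package indexing

import (
	"k8s.io/apimachinery/pkg/api/meta"
	"k8s.io/client-go/tools/cache"
)

const ByLogicalCluster = "byLogicalCluster"

// logicalClusterIndexFunc extracts the logical cluster an object belongs to,
// reading a hypothetical annotation for illustration.
func logicalClusterIndexFunc(obj interface{}) ([]string, error) {
	accessor, err := meta.Accessor(obj)
	if err != nil {
		return nil, err
	}
	if c, ok := accessor.GetAnnotations()["kcp.dev/cluster"]; ok {
		return []string{c}, nil
	}
	return nil, nil
}

// NewClusterAwareIndexer builds a store with the extra index registered,
// alongside the standard namespace index.
func NewClusterAwareIndexer() cache.Indexer {
	return cache.NewIndexer(cache.MetaNamespaceKeyFunc, cache.Indexers{
		cache.NamespaceIndex: cache.MetaNamespaceIndexFunc,
		ByLogicalCluster:     logicalClusterIndexFunc,
	})
}

// ListInCluster is what a cluster-scoped "list" could look like: an indexer
// lookup instead of iterating the whole cache.
func ListInCluster(indexer cache.Indexer, cluster string) ([]interface{}, error) {
	return indexer.ByIndex(ByLogicalCluster, cluster)
}
```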
F
D
I would prefer to do that, but it's a breaking change—though I think it's consistent with the client-go change from a couple of years ago that added context to all of the methods. So that would be my preference.
F
C
F
C
The lister interface wasn't truly intended to be guaranteed to be from a cache. It is optimized in a weird way—the lister kind of was, if Go were a little more flexible about the structures we could return—and the fact that a lister returns an array of pointers is pretty key; that was really the key reason that separate interface existed.
C
So you could potentially make an argument that maybe the real problem here is just unifying the client interfaces, and this is more a Go peculiarity, right: we try to return a contiguous block of memory for client calls, because that's how we get it—we don't want to go do post-processing—but when we have a cache, we're trying to pull references out of a map. So it could be, Steve, that maybe the argument here is that the lister interface, actually—
C
Maybe there is a path here to say: well, why don't we just go fix parts of the client interface and look at generators and listers, maybe in a dynamic sense first—generated clients versus dynamic clients. There's a couple of places where we actually made the dynamic client look a little bit more like the lister interface—Unstructured as an example, the way that we deal with items.
B
I have a question, Andy. We have the second approach of filtering informers. Have you looked into doing that at the moment you do the list call—putting a wrapper which applies a filter, basically filtering to the cluster you are interested in, and then you get back a list again and you can do whatever you want? Not adding any different new methods, just one method called filter which has a clever input that allows us to pass the context in.
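A minimal sketch of the filter-wrapper idea raised here (an illustration, not an agreed design): wrap a stock generated lister and filter its results down to the logical cluster carried in the context, so generated clients and listers can be reused unchanged. The context key and the annotation used to tag objects with their cluster are hypothetical.

```go
package filtering

import (
	"context"

	appsv1 "k8s.io/api/apps/v1"
	"k8s.io/apimachinery/pkg/labels"
	appslisters "k8s.io/client-go/listers/apps/v1"
)

type clusterKey struct{}

// WithCluster stores the logical cluster name on the context.
func WithCluster(ctx context.Context, cluster string) context.Context {
	return context.WithValue(ctx, clusterKey{}, cluster)
}

// FilteredDeployments lists deployments from the delegate lister and keeps
// only those belonging to the cluster named in ctx.
func FilteredDeployments(ctx context.Context, delegate appslisters.DeploymentLister, selector labels.Selector) ([]*appsv1.Deployment, error) {
	cluster, _ := ctx.Value(clusterKey{}).(string)
	all, err := delegate.List(selector)
	if err != nil {
		return nil, err
	}
	out := make([]*appsv1.Deployment, 0, len(all))
	for _, d := range all {
		if d.Annotations["kcp.dev/cluster"] == cluster { // hypothetical cluster annotation
			out = append(out, d)
		}
	}
	return out, nil
}
```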
D
That's a cool idea; why don't we talk about that offline and see what it might look like.
B
D
F
I was just worried, because the follow-up here—unless we expect to have a fork forever—would be changing some of the core controllers, right? One of the reasons we're doing this is that we can't touch them as they are.
B
I don't think that is a goal here. We have different personas and use cases, and I think this case here is about writing cluster-aware controllers, not our case where we want to port existing ones. The handful of those which we have to maintain—I don't think they are the problem, because they are not—
C
No, I think there are only a couple that are useful, like the scheduler. I think this kind of gets into garbage—the garbage collection controller is the best example we have today of a truly generic controller that runs across workspaces, where I could see a clear reason for wanting to amortize the cost of it across hundreds, like a shard or a large chunk, just because running the garbage collector on sets of, like, five resources is going to—
C
—maybe we just have a few of them. But yeah, focusing on the controller needs of someone writing a multi-cluster-aware controller: being able to reuse as much of the tooling as possible, having it feel familiar, not having to rewrite, for instance, DeltaFIFO or informers. And then, Stefan, maybe to build on that point: the mindset after this is, okay, good, now I want to go—
C
—to Python, .NET, Java—everybody who's created some crazy copy of the kube ecosystem. And maybe this is another place where we'd be incentivized to invest in things that align that tooling where possible, like the generators and all that: putting extra time and effort into improving them, if the improvements come with a little bit of payback for our use case. That's what open source is: you scratch my back, I scratch yours.
B
One last thought. I think there's one blocker for kcp—it would be a big roadblock if we didn't solve this: you want to reuse client-go and Kubernetes clients, and you want to reuse any other CRD client generated by anybody. If we ask everybody to regenerate because they want to run against kcp, that would be a big problem. So I think the main goal must be: reuse generated informers and generated clients from anywhere you want, and then implement something cluster-aware on top with custom code, like passing those contexts around.
D
Yeah, I mean, you're saying you don't want people to have to regenerate to take advantage of logical clusters—I don't think that's possible. Well—
C
No, no—not regenerating to be able to use a controller against a single workspace. The minimum delta—the use case we have is: I've written a controller that works on one cluster; can we get you to say, oh, let's go make that multi-cluster-aware, as easily as possible? I might argue that it doesn't have to be a zero-cost operation, because you have to add it, but you want to—
C
—we want to minimize the energy required to get from a perfectly working ingress controller written against one cluster to an ingress controller running across 50 workspaces or 10,000 workspaces. That's what we're going for: conceptual effort, code effort, library dependencies.
C
If you have to fork your code, that may be acceptable, but thinking about porting single-cluster to multi-cluster controllers: is there a way that you don't have to actually fork it? Having a new version of a library, regenerating—these are all going to be big friction points. So looking for ways to move that below where that person has to make the decision—because it's already in client-go, or it's just an add-on on top of client-go—it becomes very—
B
A
The thing that Andy's demo was hinting at was that, in order to get multi-cluster controllers, the docs would say something like: regenerate your informer code with Kubernetes version X or later and add kcp.EnableMultiCluster, and that's all you need. That's like the smallest possible diff—that's okay, if you can make that pitch.
C
And honestly, like the key function, a lot of these are any place where we can find an example in kube where it's just dumb and improve it. So, listers that back real clients—and context is an example of a place where context might be justified. Additional filter functions: field selectors and label selectors are woefully under-implemented.
C
There's an angle on field selectors, which is improving field selectors, or looking for places in the standard library where we're doing filters after the fact—where, if we can change the signature of listers and clients to make those things upfront and useful to people, where you'd say, yeah, I want to filter out 90% of these, there are probably examples where even the core kube libraries would be able to exploit those kinds of advantages. Don't miss that opportunity.
D
Said that in the past, yeah—Jason's the only one who saw them. Also, just real quick: I have a rebase on kube 1.23; there's a PR open, and the code I was showing is built on top of that. So if you all have time to review the rebase, the PR is in our kcp-dev fork of Kubernetes.
A
Cool, all right, and with that I'm going to adjourn. Thank you, everyone, and we'll see you all next week.