From YouTube: Community Meeting, November 2, 2021
A: There's a ton on the agenda that I think we'll get to, but I wanted to propose, in this meeting, moving this meeting an hour earlier to be more EU-friendly. I think I could even do earlier in the day, but that starts to get hard for Pacific US folks. Would an hour earlier on Tuesdays work for people? I'm seeing light head nodding, thumbs-ups, and no screaming, so it passes with unanimous (or something like it) approval. All right, I'll move this an hour earlier and we'll see if that works; if it doesn't, we'll move it again and figure it out. Oh, that should be fun: next week is also time zone shenanigans, so it'll be perfect. No one will have any idea when the meeting is.
A: There have been a lot of discussions in the Slack and in various documents, and I wanted to bring them all here to make sure that we all had visibility on all of them. I think there are a lot of good discussions, and we probably will continue to have discussions before next week. I'll just go in the order they are listed here.
A: How we handle workspaces is an open question, or, I think, could be an open question. How we handle auth, both authentication and authorization, for the kcp layer and the physical cluster layer is something that people have brought up as a potential cause of concern; that was Stefan and Steve, and I don't know if Steve is here. Steve?
A
Or
is
not
here
so
maybe
we'll
table
that
until
he
gets
back,
but
there
was
a
discussion
about
the
resource
version,
the
opaque
resource
version
that
we
talked
about.
I
think
last
week
carrying
information
about
which
shards
we
want,
which
shards
need
to
be
asked
for
data.
The way that is currently being prototyped might have a hard limit on the number of shards we can possibly describe in that information, and that limit is lower than the stated goal of being able to run a thousand shards. So either we need to relax that limitation, or understand it better, or come up with a different way of expressing that information than what we have been prototyping, so that we can get to those thousand shards.
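To make the ceiling concrete, here is a speculative sketch; the encoding and names are invented for illustration and are not the actual kcp prototype. It shows why an opaque resourceVersion that embeds one entry per shard grows linearly with shard count:

```go
// Speculative illustration of the shard-count ceiling: a resourceVersion
// token that embeds per-shard positions grows linearly with shard count.
package main

import (
	"encoding/base64"
	"encoding/json"
	"fmt"
)

// encodeShardedRV packs each shard's local resourceVersion into one token.
func encodeShardedRV(perShard map[string]uint64) (string, error) {
	raw, err := json.Marshal(perShard)
	if err != nil {
		return "", err
	}
	return base64.RawURLEncoding.EncodeToString(raw), nil
}

func main() {
	perShard := make(map[string]uint64, 1000)
	for i := 0; i < 1000; i++ {
		perShard[fmt.Sprintf("shard-%04d", i)] = uint64(i) * 17
	}
	rv, _ := encodeShardedRV(perShard)
	// At 1000 shards the token is tens of kilobytes, far more than a
	// resourceVersion string is normally expected to carry.
	fmt.Printf("encoded resourceVersion: %d bytes\n", len(rv))
}
```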
A: I don't know if any of these are things people want to talk about first; otherwise I will dig into workspaces first. But is there anybody or anything else that is not included in this list? Because I know there have been a lot of topics in a lot of areas floating around.
A: All right, so the workspaces thing. One thing we want to be able to do is say: create me a workspace and install these services, install these APIs, run these controllers, or register this workspace as caring about these APIs, so that some external controller, a multi-cluster controller, can watch them. This is relatively easy to do if we don't have on-demand logical clusters. The only reason this is difficult is that logical clusters come into existence the first time something is put in them. If creating a workspace were an explicit request, then something could watch for those requests, assign some policy, and say: oh, you're a workspace that cares about these APIs, and it would install those APIs or register you for them, or whatever. But logical clusters come into existence on demand, which I think is a good design; I don't think we should undo that.
A: Gorkham specifically wanted to be able to have workspaces come pre-installed with some things: to say, this is a workspace that should have, you know, Tekton installed on it, or Argo CD, or, name your add-on. In that case, "pre-installed" doesn't mean it literally comes into existence with those things, but that upon creation it gets these things installed into it. There are a few different ways we can implement that; there are a few different mechanisms.
We can watch for those and install those things, and I think it's just a matter of picking which one we like best. Stefan, I want to make sure we get to you here. Hello.
C: Maybe do that first, then? Well, we should be pretty clear: are we asking what we want, or what it is now? What are we trying to get to? Because we have a pretty extensive doc that lays out a lot of these.

It's just not organized particularly well. I feel like maybe we're at a point where we shouldn't have to ask this question; the fact that we're having to ask it means we don't have a good enough summarization of a key concept, where we could say: this is a proposed concept, here are the counter-arguments and the justifications, versus having to do it here.

I'm kind of surprised that Gorkham would ask that, or that we didn't feel like we had a pat answer, which may just mean that we're not encoding enough information in, like, a decision point; or it's there, but it's not understandable, and someone can't say: oh, this is where we define what a logical cluster is, and this is what workspaces are. That's kind of what I'm sensing. Is that fair or unfair?
A: I think that's a reasonable read. If this is something you think is a settled debate, then if people are asking questions, it's because they don't know that the debate is settled.
C: Accelerate the process of getting to: is this settled? What are the open questions? How do we move past them, and where would we go to record that? This one I felt we were in a pretty good spot on, but it probably isn't so.

I expected the grouping-concepts ADR doc to lay enough of this out, but arguably it's probably not in that doc right now; it's probably in the one that's shared with the community, the broad one, the sharding/listing/indexing one, and it probably should get moved there. Someone should take a stab at it, and I'm happy to take a stab at the sections that I put in the other doc and move them there, unless someone else would like to be that person: doing it, for the sake of argument, and then saying, hey, does this explain it enough that we can argue about it?
B: I'm happy to go through and make an attempt at that, because I know I have a lot of questions about the terminology, and, as you've seen in my little private doc, I've been trying to get some of this stuff explained in my own words. So if I can lift that and put it into a public-facing doc, I'd be happy to do that.
C: Yeah, because I felt like we were kind of in the breadth section, and then we were trying to boil it down as more people were getting involved. So this is a great opportunity to say: this should be something we can either definitively answer, or say what the remaining questions are. Gorkham's question would then effectively be a test of: does that explain it enough that he agrees or disagrees, or does he have a place where he can register "oh, I thought of another use case"?
A: Yeah, it definitely might be, right? If we know the answer but cannot communicate the answer, or know the terms but cannot describe those terms clearly, then it's like having no answer. I'll let Eddie go.
D: David used the wording, I think, that a logical cluster is just another string next to the namespace and name of objects. Is this the same understanding?
C: Prefixing, I think, is accurate, but maybe we would just say it's adding another tuple to how an object is stored in the storage engine. It's not the resource; that's what I mean. Correct. A logical cluster is like the registry storage in kube: it might be useful in implementing a higher-level concept, but it is not a concept in and of itself. And then workspaces are a use-case-driven API type that bind a set of APIs.
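One way to picture "just another tuple in the storage engine" is a cluster segment in the etcd key. The layout below is purely illustrative, not kcp's real key schema:

```go
// Illustrative only: a cluster-prefixed variant of the familiar
// /registry/<resource>/<namespace>/<name> etcd key layout.
package main

import "fmt"

func storageKey(logicalCluster, resource, namespace, name string) string {
	return fmt.Sprintf("/registry/%s/%s/%s/%s", resource, logicalCluster, namespace, name)
}

func main() {
	// The logical cluster is "just another string" alongside namespace and name.
	fmt.Println(storageKey("workspace-ab12", "configmaps", "default", "app-config"))
	// Output: /registry/configmaps/workspace-ab12/default/app-config
}
```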
A: So I have a few clarifying questions that I think I already know the answers to, but it will be instructive to me if my perceived answers are incorrect. A logical cluster is a low-level implementation detail; users never actually care about them. They are mainly a prefix in a key in storage. A user creates a workspace and deals only with that workspace, correct? That workspace maps one-to-one to a logical cluster, but they don't know and they don't care. They say: give me a workspace. To your point, Clayton, they define in that workspace some quotas, some APIs, some things about that workspace, and that's what they care about. They don't care about the logical cluster; logical clusters are an implementation detail on our side.
C: There exists an API that allows someone to request a workspace. There are a set of use cases and requirements around what that API has to accomplish, and at the end they get the ability to access that workspace like a cluster: a kube client can make kube calls inside that workspace context. The overlap might be... maybe there are concepts in kube, or the client-go changes that Steve has talked about, whatever is necessary for a client to target that.

The next question (this, Andy, sounds like what you were bringing up) is: is the interface for creating a workspace kube-like, or is it completely arbitrary? What are the use cases that constrain that workspace interface, such that we would ask the next question, which is: are there cross-workspace operations that allow you to deal in bulk with applications that span clusters?

Is that a requirement for us to solve in this initial phase? Maybe not, but we haven't explored it, so that would be one of those open questions. Today in kube, you can apply two namespaces and then create two deployments in those two namespaces. Is there an equivalent use case for "I want to apply two workspaces and then apply into them"? Because of the decision that a cluster is the fundamental, concrete scope (we're not trying to redefine what clusters are, and we're not requiring clients to change everything), the answer right now is: no, you cannot do those operations. And so the opacity of the workspace does not require that a single kubectl apply allow you to create a cluster and then immediately put resources into it.
B: I think it lines up. As I've expressed a couple of times in Slack today, I think the terminology problem is something that is hindering me: I'm struggling with logical cluster, workspace, virtual workspace, org workspace, et cetera. So part of what you were suggesting earlier, about trying to have a rallying doc around definitions and concepts, I think would be really helpful. I know David had previously proposed, or questioned, whether logical cluster was the right term.
C: So the analogy I draw today is: a Kubernetes cluster has a very strong namespace concept that is not in any way something the storage is aware of. It is use-case driven; it evolved. We took an approach that had trade-offs, where we implemented namespace lifecycle via admission control and via controllers. However, deleting a namespace has guarantees because of the other design choices we made, like supporting the idea that a namespace is not physically co-located in one etcd, and that you can have aggregated APIs. That requires us to support the idea of a controller that, instead of saying "hey, delete this namespace and all the stuff in it," has to talk to each resource, enumerate the resources, delete all of them, and wait for all of them to act.

I do not think we have enumerated the use cases around the lifecycle of workspaces sufficiently to make a choice about an implementation, about whether the physical cluster or the storage engine is aware of the lifecycle of logical clusters. We have not enumerated the use cases we're actually trying to solve sufficiently to make that design trade-off.

And I think all of them are relevant. We need to make the determination: are there specific benefits we would be going for, based on the use cases we have? Arguably, deleting a whole single cluster is never a problem we solved in kube; it is absolutely a problem we have to solve inside of a logical cluster. Deleting a logical cluster has to work. It may need finalizers; it will need to communicate. We will learn from namespaces.

The difference, I think, is that we may be demanding a slightly different access pattern, use case, and consistency guarantee. We also have some "watch everything in a workspace" use cases that arguably kube does not have, except for the garbage collection controller, which doesn't actually watch everything, because it's possible to say "well, I don't want you to watch this," and then we have virtual resources and aggregated APIs that it doesn't watch. We never dealt with the implications of those; we never really came back around.

So arguably the storage implementation needs to depend on the use cases for the lifecycle of workspaces, and specifying what the lifecycle of a workspace is, is effectively what Gorkham is asking: can we clarify the interface expectations and lifecycle? What are the trade-offs? What do we have to go research, like garbage collection within a workspace? Do we allow cross-cluster garbage collection? Probably not. What other constraints can we place on the problem? Work through those. What I was trying to prevent was us going too far down defining those, so that the folks here had the opportunity to say: here's a meaningful trade-off, here's a meaningful requirement we believe is achievable.
A: Okay, I just forgot the question I was going to ask; no, it's gone. Does anybody have anything else they want to talk about on workspaces? I think we'll probably keep talking about workspaces here and elsewhere anyway.
C: Andy's putting the hat on for this, right? Andy, I saw you assume the hat for terminology, better clarification of the existing art for workspaces, and the enumeration of the remaining questions, the open design questions; we just now brought up a couple of them. But you'll own that going forward? Yes? Okay.
A: Regarding terminology: if the term "logical cluster" is confusing, and is a low-level detail, and should not be something users or clients care about anyway, can we just call it an etcd key prefix? If that's what it is, that sounds sufficiently implementation-y and scary that hopefully people will stop thinking of those as cluster-like.
C: We could call it, and I think that's an example: storage shard identifier, storage unit, storage tenancy unit, or something like that. Those are all options, because, while they're...
C: And again, workspace is kind of an opinionated approach, whereas logical cluster was intended to be presented as a capability that different users, and I think you could say different technology, could find value in. Anybody who's capable of forking a minimal kube API server and adding in Go interfaces is going to want, you know, clear concepts.

We want to make sure that there are other use cases besides workspace, at least; but if there are none... Workspace was intended to be a little bit more opinionated. Cluster and namespace are opinionated, right? We made specific trade-offs with namespace to accomplish a set of use cases. Seven years in, we've learned things about what namespaces are and are not good for, and we dealt with the implications. We don't want to throw that out. So I think it's totally reasonable.

It would be "logical workspace", yeah. Workspace couples API types, API definition, tenancy, and the pattern for how you access lots of these little clusters. It is probably a kcp project-ism, with mechanisms in kube. Now, it may be that we actually do want to propose it for kube; I think we're pretty far from that until we can show a working, valid system. And we do not need workspaces in kube for kube to be successful, or for kcp to be successful, or for minimal API servers to be successful, I do not think.

Historically, all of these storage-related approaches to subdividing a single kube cluster lacked generality: you could come up with a way to subset certain classes of resources or API objects or namespaces or whatever, but none of them had enough heft to have clear value over all the trade-offs. We talked about various approaches in SIG API Machinery a couple of years ago, about breaking the tenancy model up so that you could have stuff that only a core set of controllers sees.
A: This is a big ask: is there any TL;DR of the last week's worth of thinking on where we think we're going with complex resource versions and potential client-go changes? I saw some stuff in the Slack that, frankly, scared me a little bit about changing the interfaces.
E: Yeah. So I think, unless we come up with a different mechanism for handling requests when logical clusters move between kcp shards, we need the complex resource version even for clients that are talking just within one logical cluster; but that doesn't require any changes.
A: Okay, so I think this is information I was missing before: the changes you were proposing, to add the cluster to the API client, are specifically for kcp multi-cluster-aware (multi-workspace-aware, yeah) clients.
D: We had the naming controller for the CRDs today as an example; Andy had found a problem. It had to list all other CRDs, and for that request you have to know which cluster you target. So you get an event from the informer which has somehow encoded the cluster's logical cluster string, and you have to direct your client to the right one. So this is always needed in non-trivial cases.
C: I feel like you've done a pretty good job of looking at some of the examples of ways that you could maybe theoretically hide it from a client. But do you have enough info now that you could probably write it down? There's a world in which, to do a multi-cluster-aware client, you have to change the client, and your controller has to select which context it's talking to; and there's a counter-position, which is the options we talked about that you could use to pretend.
E: I tried to; I don't think... yeah, if you just have one client, there's just no way. If you have a client that's capable of talking to multiple logical clusters and you get an event on your queue that says something happened, there's just no way to encode the information from that event into the delete call, for instance, without something explicit.
C: It would be good to record that somewhere, either in trade-offs or alternatives: something that says we made a choice for controllers that there's this minimal set of changes, which is the selection of the target; we looked at the alternatives; no alternative provided magic; here's what we considered. And then we could leave it at that. Maybe someone comes back later on and says, "I've thought of a completely different approach," but we've effectively eliminated it.
C: Okay, so Jason stepped away. So then the next question from that would be: are there any other implications of that client stuff, or do you want to move on to the next part of it?
E: The only... there was one. Andy, I don't know how much you ended up looking into this today, but is it reasonable to opt into a different key function in a multi-cluster queue without having to change everything?
C: I think the informer constructor currently hard-codes MetaNamespaceKeyFunc. Okay, but at least... I don't know if, Andy, you and I talked about this; I feel like we had one conversation on it, and somebody else and I chatted about it. I don't actually know that the meta key function couldn't also break down on cluster as well.
B: To say it out loud: the prototype code right now modified the meta namespace key func to encode the cluster name, but it didn't modify the splitter func to do the reverse for the decoding. So we encode cluster, namespace, and name, and we decode what's supposed to be namespace and name. But we need to make that breaking change in the Go API too. Is that reasonable? I mean, I...
C: I don't see why that isn't reasonable. We originally put the key functions in there because we imagined other types of key functions; you know, I think even, Andy, you and I may have had some discussions really early on about various aspects of this. But I don't know of a reason why we wouldn't expect the cache key function to be symmetric: if you have one half of it doing the encoding, the other half should do the reverse, right?
C: Yeah, arguably, too. And there are some subtle performance trade-offs in that key accessor: cache access is a critical-path function for some controller types in the core code base. Generally, allocating a new heap string to concatenate two strings together is more expensive than creating a struct and then doing the indirection into the struct member, because most of the time your strings are already going to be in the L1 cache. I was wondering why they did that originally.

If you passed in, like, a struct with two members, string and string: string keys are special, so hashing is going to be a little bit faster, and there are some other trade-offs. I do not remember any concrete reason we did anything there. At the time, honestly, a lot of people were still learning Go, so if you can't show in a performance test that the struct is worse, the struct is probably more correct. We use it in other places.
We actually use it in the kubelet in quite a few places now. Touching all caches in all controllers, you know, we'd want to be able to do a performance regression on get/watch time to look at it, but I don't know of a real performance reason why it would break us.
So in general, it's easy to write informers against arbitrary third-party resources. Not a lot of people do it, but if you wanted to stitch together a bunch of controllers, it's probably easier to use the cache store and Reflector machinery. All you need is a list call that has the forward-progress, sequential-consistency guarantee: if you have an API call that hits Bugzilla and you do lists, you don't need watch to use an informer, and so you can actually build out that sort of infrastructure pretty easily.
Those kinds of objects are arguments for the informer taking the cache key function and that key function being symmetric; and if it is symmetric, we might as well go in there. The cache key being symmetric is probably also an argument for a new constructor for informers. I actually probably have a PR open somewhere from, like, four years ago that added that informer constructor, and I just never merged it. So it does seem reasonable to at least take a poke at it, if we can come up with the justification.
C: I think if we had actually caught it, we wouldn't have passed a func that wasn't symmetric; it was just, you know, how we reviewed it at the time. So, other than the blast radius... Andy, I'm concerned about the performance blast radius. I would definitely say, unless there is a strong reason, I would not change the signature of core kube client methods. Yeah. We have, in fact, had other reviews of changes where people were trying to improve, like, the delta queues or the work queue.
E: I'm assuming yes: for multi-cluster controllers, we're expecting them to hold a different set of abstractions than they do now, a different set of clients at a higher level that can then be filtered down to cluster-specific ones.
C: Do we have a working, basic, minimal multi-cluster example that could be used as the sounding board for that? Because we want the magic to be easy. The requirement, I don't think, is that a controller just magically works in a multi-cluster context; it's that it's easy to adapt the existing kube controller pattern, from the low level through controller-runtime through the Operator SDK. But easy is not the same as automatic and magic; it's just low friction.
C: Yeah, things like the work queue: the work queue is much better at taking a key index, right? It explicitly calls that out. We're not near the point in kube where we'd go templatize, but I would probably say: let's get all the reasons why you want good interfaces and good abstractions. The cache store is not specific to kube; it just happens to default to a kube thing. The informer defaults to kube things. Informers, like, that's one of the reasons we did the delta and undelta stores.
I don't know what the current state of the art is in the cloud controllers, to contrast what's in the AWS, Google, whatever APIs with the core thing, other than just taking two lists and iterating over them periodically. But that would be a good thing to explore: go find the existing things that are synchronizing a kube API with another API, and double-check the way they're doing it. Are they using a store for the other side, or is it brute force?
I don't know if the VMware one actually ended up supporting multiple accounts, but there are aspects of cloud controllers that touch on the problem of multiplicity, and touch informers and cache stores. But you're right, Jason: it is primarily about what additional things you would use. And maybe there's a flip side of this, which is: we're talking about sharding by cluster name; is there another spot where you would actually want to index by something? Is anybody indexing by, like, namespace name?
Another field... the scheduler probably is, in a couple of places, using the indexer infrastructure. Any place where you would want to use an indexer, or have a different key, or have some uniqueness guarantees. The store was definitely not intended to just be namespaced kube objects; it just happens to have atrophied that way.
A: It happens to have been widely used that way; "atrophied" sounds bad, and wide usage is good. Okay, right. So the other issue that's come up over the last week or so is around auth. Currently, I think we can get by with the current auth solution, which is that you auth once with kcp, and kcp syncs. This is for the multi-cluster scheduling of stuff, not just the minimal API server; kcp, I mean, I'm trying to be better about using the right term.
For these things, the syncer basically has complete control over the physical clusters. So when you talk to kcp and say "I'd like to create a deployment," it says: okay, who are you, and what are you allowed to do? Then it applies it, and the syncer syncs it down with its superpowers. This works fine, but it's kind of bad.
I talked to some folks who wanted to use Keycloak, a single, centralized Keycloak for both the kcp layer and the physical cluster layer to talk to, to agree about what users are allowed to do. The user's auth would get passed in to kcp and then impersonated down in the syncer, so that only what the user is allowed to do on that physical cluster is what they are allowed to do in kcp and out on the physical cluster.
But I think, if we're going to make this more generic and swappable, we probably need to be able to swap out Keycloak for something else, if somebody wants something else. I wanted to bring that here in case people have thoughts in this area or already had answers to this problem. I don't think we've talked much about auth so far.
C: So which doc are we going to record this in: your transparent multi-cluster design, or the syncer design? At this point? Yeah, sure. Okay, then I'll lay out a requirement: the syncer is not allowed to be root, and ideally there are certain things the syncer cannot do on the underlying cluster. That allows us to have a security boundary, defense in depth, where certain operations can be reasoned about as: unless the syncer is changed from its default config, or forked, or whatever, certain transformations cannot happen.
That is a good example of a good boundary: adding and removing CRDs, adding cluster-scoped resources, removing cluster-scoped resources, modifying cluster-scoped resources. At least at a first approximation, based on what we have said transparent multi-cluster is intended to accomplish, and unless we have a counterexample, no cluster-scoped resource can be modified by default.
A: If the syncer has permission to create and do everything to namespaces, it also has to have RBAC permissions on those namespaces, to give itself permission to do stuff in the namespace it just created; and in order to do that, it has to have cluster-wide RBAC permissions. It can't give itself permission only to namespaces it created.
C: Or is that like... no, like projects in OpenShift? Yeah, that's one way we could do it. Another way is that there is a rule that potentially allows it to create namespaces with a prefix: any prefix given to that syncer, it can create underneath, and the choice of the prefix provides that isolation guarantee. That's one, but...
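The prefix rule is not an existing Kubernetes or kcp feature; as a sketch, a validating admission check on the physical cluster could express it like this:

```go
// Hypothetical admission-style check: confine a syncer to namespaces under
// its assigned prefix (e.g. "kcp-east1-"). Illustrative only.
package nsguard

import (
	"fmt"
	"strings"
)

func allowNamespaceCreate(syncerPrefix, namespace string) error {
	if !strings.HasPrefix(namespace, syncerPrefix) {
		return fmt.Errorf("syncer may only create namespaces under %q, got %q",
			syncerPrefix, namespace)
	}
	return nil
}
```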
C: Something there has to be a grant of permission on a prefix, or on the things in that prefix, or a controller that runs on that cluster outside of the syncer that says: when this namespace is created, the syncer is allowed to use it because of the prefix. That system, I think, is an under-designed thing. But does anybody disagree that the security boundaries it would provide would potentially be valuable in and of themselves? We're not trying to recreate the klusterlet in this. The advantage is, if you have those guarantees in place, then you can leverage controls against the syncer like a regular user, versus just assuming that it's root on all the clusters.
A: All right, and that's another... yeah. So I think that provides useful security boundaries beyond what we have now, which is the wild west: cluster syncers are basically god mode on the cluster. And that is also a level between full god mode and Keycloak, where everything talks to the same auth server to decide whether the syncer is allowed to apply this deployment that it was asked to apply, or whatever. It lives in a bubble that it created, or that some other system created for it.
C: Today, the closest analog to this is that a VM on a node that runs a kubelet is not root on the bare-metal machine the VM is inside of. Is there any analog like that? At every layer, any time anybody has done anything multi-cluster in kube, they started with a security assumption that they're just root on everything. That is a different set of problems. It is totally valid that there might be a syncer somewhere that is root on the cluster; you can imagine the klusterlet, and the syncer, and other types of things like Argo agents, converging on "yep, I want to do admin-level things." But that is not the same problem as trying to schedule workloads onto clusters.
B: Thanks. Yeah, just to restate what I think I was hearing: you'd give a syncer permission to a subset of namespaces on a cluster, especially not kube-system and things like that, and that is the level of grant you give it, and you don't have to worry about another authorization check.
C: Yeah, there's a failure-domain aspect here. With the Keycloak approach, when you say "I will delegate all authorization decisions (not authentication; authorization decisions) to a third-party system," you are now no more reliable than that third-party system. So part of our quote-unquote mandate slash goal is to make failure domains practical and understandable, and to offer a recommendation for how you design multi-cluster to clearly, safely, and meaningfully tolerate certain failure zones where possible, delegating responsibilities such that the cluster can perform in isolation. Keeping the decisions local has advantages, whereas coupling to a third-party system means you're basically only as reliable as the authorization system. That's a trade-off we could choose in either direction, but we need to ask ourselves: is that trade-off worthwhile?
C: When you say that, do you mean things like security context constraints, or cloud security?
D: Yeah, and also just role bindings for certain APIs from...
C: So this is definitely called out somewhere, and this is a Jason thing. In some design somewhere we explicitly describe (we should, and if we don't, it should be in the transparent multi-cluster document) the assumption that when you get privileges on a service account at the high level, that does not guarantee you service account privileges in the physical cluster. In fact, it explicitly does not do that, by design. It's probably in the syncing doc, but, Jason...
A: I think we'd actually even talked about this: if your workload says it needs a service account, that indicates you care about the API server you're talking to, and so you should be talking to the kcp API server and not your physical cluster's API server. So that's where we inject the repointing, so that that pod talks back up to kcp.
C: Yeah, let's break down the use case of today: when you grant a service account secret to a pod on a cluster, there are two use cases. One is: I'm choosing to get access to this cluster for the purposes of access. The other is: I'm choosing to talk to the source of truth for the controller or application that I'm trying to run. We cannot tell those apart today; in the kcp world, that is very distinct.
The case of "I'm trying to talk to the physical cluster I'm on, to program it or to understand it" is a different use case from "I'm running a controller talking to APIs." Today, all kube controllers roughly assume that the control plane they're talking to is the one that's hosting them; we're going to take advantage of that. But we are breaking that assumption, and if you want to, say, schedule a controller that talks to physical clusters, in the short run the answer is: use something else, like ACM or Argo or SyncSets or GitOps or magic. In the future we might bring back a use case of "I would like to run a controller that orchestrates a physical cluster," but that will be through something that is not the default transparent multi-cluster experience.
When you access those resources, you are getting access to them at the kcp level, not the downstream one. But when you're talking about other resources (service accounts are a bad example because that one overlaps), like config maps: in the cluster that is exposed to you, if the syncer does not have access to config maps, then the syncer does not support config maps.
We would probably define a set of resources. The end goal here is to deliver homogeneous chunks of API workload capacity that is stable over long periods from physical clusters, with all workload-specific APIs and fast-moving things moving at the higher level. So it is a two-level system where all workload extension happens at the higher level and all infrastructure extension happens at the lower level.
To get to the point, though, Stefan: I think what we're talking about is that we would define one or two sets of API resources at the physical cluster level that are explicit. We have a default behavior; everything else is opt-in, and opt-in may mean you have to ask the cluster administrator to start syncing that resource type. It is an engineering decision.
We should clarify what you're doing there in the transparent multi-cluster docs a little bit more; I think we've had a few discussions, but they may not have made it in. The world is divided into the application/workload level and the infrastructure level. A PVC is infrastructure. Why? Because we said so. A config map, a secret: those are generic application infrastructure.
An etcd instance is by default a workload concept. If you wanted to expose an etcd instance to all of your cluster users, you could add it to the default set, but an administrator of the physical cluster would make the choice to install it on that cluster, guarantee its lifecycle, and give RBAC access to the syncer; and then the syncer would announce that and begin syncing it.
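Sketching that "default set plus per-cluster opt-in" idea; the default list and helper are invented for illustration, not kcp's actual configuration:

```go
// Illustrative synced-resource configuration: a stable default set every
// physical cluster offers, plus admin-approved opt-ins per cluster.
package syncconfig

import "k8s.io/apimachinery/pkg/runtime/schema"

// defaultSynced is the homogeneous set every physical cluster offers.
var defaultSynced = []schema.GroupVersionResource{
	{Version: "v1", Resource: "configmaps"},
	{Version: "v1", Resource: "secrets"},
	{Version: "v1", Resource: "services"},
	{Group: "apps", Version: "v1", Resource: "deployments"},
}

// syncedFor merges the defaults with whatever a cluster admin opted in,
// e.g. an etcd CRD installed and RBAC-granted on that specific cluster.
func syncedFor(optIn ...schema.GroupVersionResource) []schema.GroupVersionResource {
	out := append([]schema.GroupVersionResource{}, defaultSynced...)
	return append(out, optIn...)
}
```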
A: And if that infrastructure controller, or the cluster it is running on, has a problem or disappears or whatever, and it gets scheduled to something else, that other cluster must also have opted into the things it needs to work, right.
C: Right. And there is a point, and I do not know that we've said it in this group, but it's really, really important as part of this: if the simple things are what get synced down, and the simple things are resilient, that does mean that if you need a construct that's not satisfied by config maps, secrets, DNS, services, and so on, those slow APIs that are stable and predictable everywhere, you probably should write your workload, or your lower-level thing, to deal with those interfaces, versus inventing new CRDs that you then orchestrate to. That is a conceptual difference.

So, stateful sets: the etcd team, the HyperShift guys, are saying, hey, maybe the etcd operator is actually a bad architecture. They picked up some learnings from how others were doing it, and they're saying: maybe the StatefulSet should just depend on basic kube, and all changes should be injected via a higher-level operation, which could be done by those things checking in; somebody drives a config map, and that says what truth is. Then you have a nice, clean separation between the high level and what's delegated, and what's delegated is very mechanical: run a sidecar. That's kind of breaking the whole "run a bunch of operators that are super deeply coupled to everything" into thinking about whether you're providing a basic capability or you're doing your really complex stuff up here.

That means those clusters should coast, but it may complicate things for people who want to do complex recovery of workloads, like if your etcd operator needs to go do stuff. Maybe that's something you install at the infrastructure level, and you accept the fact that you have to install it everywhere. We're giving you a choice; not all those choices are going to be straightforward.
A: Yeah, okay, well, we're at time. Thank you, everybody, and we'll see you next week, one hour earlier. Steve, we decided to start next week one hour earlier, just for maximum confusion with time zones.