From YouTube: Community Meeting September 21, 2021
A: All right, hello everyone. This is the kcp community meeting, September 21st, 2021, just like the Earth, Wind & Fire song. There's a packed agenda today, so I'm going to try to go fast. The demo 2 outline, which is here, has a lot more information in a lot more detail, but I think the main distillation of those items is roughly this: be able to do namespace-granularity scheduling and moving.

A: I have a PR in the works that is attempting to do that: both scheduling whole namespaces and picking up new items in namespaces and assigning them to that namespace's cluster, plus reacting to clusters becoming unavailable, unassigning, and then reassigning that namespace and everything in that namespace.
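A minimal sketch of what that namespace scheduler could look like, assuming namespaces are pinned to a physical cluster via a kcp.dev/cluster label; the label name, the random placement, and the client wiring are assumptions for illustration, not the contents of the actual PR:

```go
package scheduling

import (
	"context"
	"math/rand"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// clusterLabel is the label assumed here to record which physical
// cluster a namespace (and everything in it) is scheduled to.
const clusterLabel = "kcp.dev/cluster"

// scheduleNamespace assigns an unscheduled namespace to one of the
// currently available clusters, or reassigns it if its cluster has
// gone away. `available` is the set of ready cluster names.
func scheduleNamespace(ctx context.Context, kube kubernetes.Interface, ns *corev1.Namespace, available []string) error {
	current := ns.Labels[clusterLabel]
	for _, c := range available {
		if c == current {
			return nil // already scheduled to a healthy cluster
		}
	}
	if len(available) == 0 {
		return nil // nothing to schedule to; leave it unassigned for now
	}
	// Pick a new cluster; the real logic would also relabel the
	// namespace's contents so the syncer moves them along with it.
	if ns.Labels == nil {
		ns.Labels = map[string]string{}
	}
	ns.Labels[clusterLabel] = available[rand.Intn(len(available))]
	_, err := kube.CoreV1().Namespaces().Update(ctx, ns, metav1.UpdateOptions{})
	return err
}
```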
A: I should have something, I think, hopefully this week to share, possibly a demo before next week, but no promises. So that's sort of the big scheduling change in demo 2. The demo 2 notes also call out a distinction between physical clusters and locations.

A: The distinction, just for review: the difference between a physical cluster and a location is that one physical cluster can represent multiple locations. So you could have one GKE cluster and say: this is one GKE cluster, it represents location A with, you know, five CPUs, 100 gigs of RAM and a GPU, and it also slices into another location with whatever ARM nodes, or a different set of CPUs and RAM and disk.
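A hypothetical sketch of what such a Location resource could look like as a Go API type. The type name, group, and fields here are assumptions made for illustration, not a committed kcp API:

```go
package v1alpha1

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// Location is a named slice of a physical cluster's capacity. One
// physical cluster can back several Locations, and each Location's
// capacity can be dialed up or down independently.
type Location struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	Spec LocationSpec `json:"spec"`
}

type LocationSpec struct {
	// PhysicalCluster names the real cluster this Location is carved from.
	PhysicalCluster string `json:"physicalCluster"`
	// Capacity is the share of the physical cluster's resources offered
	// through this Location, e.g. cpu: "5", memory: "100Gi".
	Capacity corev1.ResourceList `json:"capacity,omitempty"`
	// NodeSelector restricts the Location to a subset of nodes,
	// e.g. kubernetes.io/arch: arm64.
	NodeSelector map[string]string `json:"nodeSelector,omitempty"`
}
```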
A: The benefit of this is that it lets you dial up one location in a physical cluster while you dial the other one down. If you want to change the split of resources that are available in that cluster, you can say: okay, now location B has 10 more gigs of RAM and location A has 10 fewer gigs of RAM. That lets you shift resources within that cluster. I'm not sure whether we need to do this.

A: It is something we want to be able to show eventually, but I don't know how critically important it is beyond, you know, being inside the demo. It might be something we do quickly after. Something I do think we'd want to show is multi-cluster ingress.

A: The way I see it, what we want to be able to do, probably at the namespace granularity, is: when a namespace moves from a physical cluster or location A to B, we need to update its ingress to say where to find it now.

A: This is distinct and different from a single multi-cluster ingress that might route traffic to two different clusters behind the scenes, which is something we want in the fullness of time, but not something that would have to come with namespace-granularity scheduling and moving. What do you think, does that seem doable? I mean, it seems easier, actually, than the thing I think you've been targeting, which is full multi-cluster routing of traffic.

B: Not sure, honestly. Yeah, I will have to review what those actually mean, you know.

A: Okay, yeah. As we progress on this I think we'll obviously keep having these meetings, and we'll talk whenever we need to align. The other thing that is included in the demo bullet points in 163 is some sort of workspace resource.

A: Workspace is the new name, or a new concept, for a logical cluster, so that we can also attach things like policy to it and say: this logical cluster is allowed to schedule to these physical clusters or locations, or whatever other policies get attached to those things. We have some vague ideas about what that resource looks like.

A: I think we don't have anything set in stone that we can commit to for very long-term support, but at least having something there that we can start to hang policy off of will let us iterate on it and come up with something we like better and better, until eventually we come up with the perfect thing.
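As a strawman only, here is a hypothetical Go shape for such a Workspace resource with scheduling policy hung off of it. Every name and field below is an assumption for illustration, not the eventual kcp API:

```go
package v1alpha1

import metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

// Workspace names a logical cluster and carries the policy attached
// to it, such as where its namespaces may be scheduled.
type Workspace struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	Spec   WorkspaceSpec   `json:"spec,omitempty"`
	Status WorkspaceStatus `json:"status,omitempty"`
}

type WorkspaceSpec struct {
	// AllowedLocations lists the physical clusters or locations this
	// logical cluster is allowed to schedule workloads to.
	AllowedLocations []string `json:"allowedLocations,omitempty"`
	// APIBindings lists the API groups/resources made available (or
	// made the default) inside this workspace.
	APIBindings []string `json:"apiBindings,omitempty"`
}

type WorkspaceStatus struct {
	// Phase is a coarse lifecycle indicator, e.g. Initializing, Ready.
	Phase string `json:"phase,omitempty"`
}
```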
C: Yeah, and sorry, that might be interesting as well to start quite soon, with at least a first draft definition of workspaces, also in the context of API management, API negotiation, API binding. Because now we have to tackle the topic of more seriously defining how a set of APIs is enforced, or, you know, made the default, in a given logical cluster, in a given workspace. For now, since logical clusters were not defined, they were not constrained by any policy.

C: So APIs were only added by adding a CRD directly on the logical cluster. But yeah, I think we have to streamline this, and for that we have to have the workspace object as well.

C: There is that aspect, probably, but also the fact that inside a workspace we have to define more declaratively the set of APIs that would be made available in a given workspace, and then, for that, we have to define how this would be expressed declaratively.

E: There has to be a, transactionally... transaction is not the right word... a consistent API evolution of types within the namespace, because the namespace is a bucket for the type instances, and so you have to manage the evolution of that. That was actually, I think, one of the good parts of that discussion yesterday: really highlighting the attributes. I've added some of those things to the ADR.

E: I do think we could pick a small subset of it and say we can skip over some parts of the problem, but it's great if we can at least frame where we're going in terms of what we know we need to do. The logical end goal would be the version of a schema of a resource, which is completely unmodeled in kube today, because we didn't need to, and we were like: oh, we'll just throw some stuff around it.

E: It is modeled as part of our API evolution story, but actually going that step further and concretely modeling the evolution of a type's schema would then open the door for how we move it, how you think about lifecycle. There's a whole bunch of problems that people have that just don't go away; we basically papered over the lifecycle strategy of a kube cluster until somebody takes away CRD v1beta1 and it all falls down. Yeah.

C: Because, to take another analogy: for logical clusters, the implementation is, and was, very complicated at the beginning, because you have no constraints, you can have as many logical clusters as you want. It's really on demand, according to the URL you point at, and that makes things completely dynamic. But of course, as soon as you have workspace objects and policies that clearly identify a logical cluster, then things are, you know, easier: you can name the various logical clusters.

C: You have a clear number of logical clusters that you can clearly identify, and it seems to me that for APIs we are a bit in the same situation currently, because nothing was clearly defined and declaratively added. So we have something that is completely automatic: mainly, you just join a physical cluster to the logical cluster and then everything is imported automatically, and we try to define heuristics for how this should work.

E: So, David, I was going to ask, do you think... I captured some of yesterday's discussion at the end of the ADR doc on sharding, just because it's the most convenient place to talk about a bunch of the problems, and you can split it out into separate docs over time. Maybe by the end of next week I'd like to get to a point where we have a strawman for a workspace object; the strawman, which is kind of captured in some of the docs and stuff, is the organization policy, or organization workspace, which would be the virtual one: because you're foo, you can come in and create a workspace object. And then Steve's currently looking at the workspace part of it and the API part of it.

E: So the three of us at a minimum, and ideally we can break it down into smaller chunks that we're each mostly solo on, to get to a point where we have a pre-alpha v1 workspace object that gets at enough of the problem that for demo 2 we have enough pieces, and then we identify which parts of the API evolution we want to model versus leave unmodeled. Is that a good goal for the end of next week?

C: Yes, I think so. At least, I think we have to have the workspace, and I probably also need to answer some questions: what would be the way for a user to define, to enforce, an API? So do we need a distinct object? What is the current way, today, to import an API?

A: Could you, for folks that weren't in whatever forum this came up in, summarize the discussion you had yesterday? What incited it, anything that came out of it: what is "the discussion yesterday" referring to?

E: Steve, who was playing around, asked a couple of good questions that David had the flip side of. So it was: what are the similarities between, when you do negotiation...

E: We have to take a specific point on that line and put that into a workspace, and then someone has to be able to see all of the workspaces that are on that line, and we talked about how a workspace evolves down that line. So a workspace has to know what the minimum schema is that it accepts, and the schema evolves with each version on that line.

E: So let's say that there's a CRD, just for the sake of argument, or an API type. It's got a UID, which might be some unique identifier of its arc over its history, and then it's got a generation, where each generation is a set of additive changes down the lifecycle, the same way a table in a database might have a logical set of schema changes.
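A rough sketch of that idea as data, purely hypothetical and only meant to make the "line" concrete: an API type identified by a stable UID, with an ordered, append-only list of additive schema generations, much like a database table's migration history. The names and fields are assumptions, not a kcp design:

```go
package apilines

// APILine identifies one API type's history: a stable UID for the
// whole arc, plus an ordered list of generations, where each
// generation is a set of additive, compatible schema changes.
type APILine struct {
	// UID uniquely identifies this type across its entire history,
	// independent of group/version/kind renames.
	UID string
	// Group and Kind name the API type as served.
	Group, Kind string
	// Generations is append-only; breaking the schema means starting a
	// new line (new UID) rather than appending here.
	Generations []Generation
}

// Generation is one additive step in the schema's evolution.
type Generation struct {
	// Number increases by one per step, like a database migration.
	Number int64
	// AddedFields records what this step added, e.g. "spec.maxRetries".
	// Every added field must carry a default so old writers stay valid.
	AddedFields []string
}

// SameLine reports whether two served types are points on the same
// line (compatible by construction because evolution is additive) or
// belong to different lines, where no compatibility is implied and any
// conversion must be explicit.
func SameLine(a, b APILine) bool {
	return a.UID == b.UID
}
```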
E: The line, or rather a whole bunch of lines, is the set of APIs that go into a workspace. A syncer sets up and makes those lines happen: a syncer adds things to the end of each of those lines as the API evolves on a cluster, or some other component that sits alongside the syncer does, when a cluster is no longer able to fit onto one of those lines because it has broken the schema.

E: The workspace that referenced it is still on the line; the cluster is no longer on the line. So basically we were working through that. To summarize, the hour really needs to happen in the form of a doc. I think that's kind of what we're dancing around: get the minimal concepts pulled out, draw a picture.

E: Conversely, we need to talk about what happens when somebody breaks the line or removes the line. So we're basically talking about that thing that sits underneath negotiation, underneath the syncer, and underneath APIs, which is basically the API design for large chunks of reused schema-evolution APIs over time.

A: Yeah, I think the inciting question for all of that was: if I am a multi-cluster controller trying to list foos across workspaces, how do I make sure I don't get wildly incompatible foos from two different workspaces? Right, yeah.

E: Yeah, and that is actually... the syncer is one type of multi-cluster controller, and the novel part of the syncer is that the syncer has something that's driving a line. For most normal controllers, you would be offering your controller implementation and you would maintain the line yourself. So let's say you're adding the etcd operator and you want to expose the etcd API object: you are responsible for adding new versions of the schema that are compatible, and the moment you want to break compatibility, you may need... so...

E: We went through a little bit of that design as well. That doesn't quite overlap with the syncer: the syncer isn't the source of truth for the API, the same way that a controller that's exposing the etcd object might be the owner of that API. So the question is what happens when the syncer is downstream of the source of truth, which is the cluster, because someone could add or remove APIs on that cluster, and the syncer has to communicate and update those lines and then flag...

E: ...when there's something missing, when the source of truth cannot be reconciled. Because workspaces are the source of truth for what instances are being created, the definitions are sourced from the cluster, and the sync, the negotiation, sits in the middle of that. So it's a different loop than a controller, and we need to come up with a name for it, and we were working through the use case. It was a great discussion because it actually surfaced most of the problems that we're attempting to solve.

E: From the perspective of: if I could run a controller over long periods of time and replace implementations, or evolve the API and have two incompatible versions of the same API available at the same time, how would I do it? What are the common elements? We're talking about stuff that's been talked around in kube, but kube has had no need to solve that problem except by writing an API compatibility doc, and even things like OLM or some of the add-on management stuff...

E: Even Helm basically threw up its hands by not allowing you to make incompatible changes, not allowing you to update CRDs. It's indicative of the underlying gap in: I want to offer an API to thousands of consumers, and I want to manage the lifecycle of that API over deep time, you know, years, or over multiple different implementations, multiple different schemas.

E: What are the tools I would need? So it was probably the most important hour-long discussion we've had so far, and we need to get it into paper form so that we can talk about it.

A: Yeah, one question I have from that is: so you're saying a controller runs uninterrupted for a year, and in the meantime the API that it's operating on could change, add fields or move fields, all compatible changes over that year, but the controller never needs to renegotiate (negotiate is the wrong word), re-discover that type, because it just knows what that type is.

A: I think a missing piece of that is that there's no way for the controller to report what fields it cares about on that object, on that type, and so there's no way to know whether the change you're about to make to the API type will become incompatible with the controller. Or are you saying the controller is in charge, that the controller is part of what's in charge of that API lifecycle, and so you can't make that change?

E: Both of those are valid use cases. There's the one where a human sits down and makes a decision to add or remove a field; a human has an implementation tied to versions of an API. With a lot of forward- and backward-compatible APIs, there's really only one API strategy, which is: never break anything, and you can only add stuff. And that's actually true in schemas, right, in database schemas: the only operation that is safe is add, and you always have to provide a default.
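A tiny illustration of that add-only rule with a hypothetical API type: the new field is optional, old writers omit it, and a default keeps every existing object and reader valid. The type and field names below are invented for the example:

```go
package v1

// WidgetSpec shows the only safe kind of schema change: adding a field.
type WidgetSpec struct {
	Replicas int32 `json:"replicas"`

	// MaxRetries was added in a later generation. It is optional and
	// defaulted, so objects written before the field existed, and
	// controllers that have never heard of it, remain valid.
	// +optional
	MaxRetries *int32 `json:"maxRetries,omitempty"`
}

// defaultWidgetSpec fills in the default for the added field, which is
// what makes the addition non-breaking for old readers and writers.
func defaultWidgetSpec(s *WidgetSpec) {
	if s.MaxRetries == nil {
		three := int32(3)
		s.MaxRetries = &three
	}
}
```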
E: Every other problem is a distributed-coordination problem, because you effectively have to make sure that all readers, all consumers, stop using the old field before you change its meaning. So what we're kind of talking about here is how we would build schema evolution in, and there are a couple of ways you could do it. When you have to change the meaning of a field, there has to be some way to signal which of the meanings you prefer.

E: That mechanism, that concept, has no solution or option today, but you would need it to be able to do long-term evolution. Whether that's something that's actually materialized on the person consuming that API, like a feature flag: feature flags are great examples of "I want a different behavior."

E: Another example would be which implementation; we'll talk about how we model this and all that, we just kind of surfaced the questions. But, Jason, I think your point is: if you only add fields and you never change behavior or remove anything, your controller just keeps trucking. At some point you either introduce a new API version that's fully different, or it changes in a non-compatible way, which most people do all the time, accidentally. In those scenarios there's a different implementation; you might actually just fork the controller code.

E: ...doing two different lists, one for all the people that expect the old behavior and one for all the people that expect the new behavior, and if your code in the controller knows the difference between those two, it just says: oh, if you expected the old implementation, here's the behavior I offer. But even a canary, or a canary rollout of a deployment: how would you safely roll out a change to a controller implementation across tens of thousands of different consumers without being certain that you didn't regress them?

E: So canary is: there are two implementations for the same API, and you need a way to move and manage who is responsible. So think about what the equivalent of a controller Deployment object would be in kube. There is none today, but it would be something like: this instance handles these consumers, and you move them over until you've reached 100 percent, and then the old implementation is no longer necessary. Now, some problems won't break down that way, so you might have things like blue-green: you have two controllers, one...

E: What is the mechanism that would exist that would allow that transition to happen? It comes roughly down to: are you exposing something that could break or not, how do you test, and what are the strategies you use? Those look very similar to existing patterns in kube today. What would be the things that...?

A: Interesting. I still have questions, but there's a lot more on the agenda, and yeah.
E: Let me paste the link into the chat; I'll paste the link into the agenda. I recommend folks watch the hour if you are interested in this problem, because we did kind of go back and forth. If folks don't have access to it, Steve probably needs to share it, and I will do that now. No, no, it is shared with kcp-dev. So, okay, perfect, I wanted to be sure. Thanks.

A: Yeah, because I think that's going to be... we're discovering, as we go along, things that multi-cluster controllers need to care about. I mean, multi-cluster controllers are just a subset of all multi-cluster users, so multi-cluster users will have this problem whether they are a controller implementation or a human client typing out YAML; they'll have the same problems as the computers, which is good and bad. Does that... Clayton, are there any other documents shared on the mailing list in that state that you want to do a quick summary on?

A: A quick summary... this discussion actually covered the vast majority of my summary.
E: So we've actually moved through it. Steve and I were iterating on the distributed list-watch; there hasn't been a ton since the last meeting. What we were kind of doing is gathering inputs about these kinds of problems, like: a given workspace has this resource, this API, at this schema version, with this UID, for whatever we call it, the API UID. This is the same thing that will be a fundamental input to listing across, because you actually need to list all of the things that are in your schema history.

E: How do you have a consistent definition of a resource across shards, across multiple instances?

E: How you would do it, and what we would need for clients. So we haven't made a lot of changes, but after this discussion we'll go back and put some more thought into what you just described, Jason, which is: we need to understand that, if I'm a client and I'm asking for ingress, and I say ingress v1, I need to know which ingress v1 I mean, and all of the instances responding to that request to return v1 have to give me a v1 that I can understand. And what does "understand" mean?

A: And the same v1alpha1 object can be completely different from another v1alpha1 object, because humans suck.
E: One wrinkle, a short sidebar in the discussion: we could imagine a scenario where two different people... it wasn't CRDs we were using as the example; we were talking about someone who needs to move from one API version to another where there's any kind of assumption about breakage.

E: It may be that CRDs, the way they are implemented in kube, actually do not work for us in the ways we would need. It might actually be that the right behavior for us would be to allow a controller to say: oh, I don't actually care whether a workspace has two versions of this; they're completely different things as far as I'm concerned.

E: Yeah, but instead of doing conversion into storage, we do conversion into schema lines, and then at read time the things that have a conversion from a particular schema line to another get infilled. So imagine you had ingress v1beta1 and... I'm trying to think of one that's actually incompatible out there; let's just say, for the sake of argument, v1beta1 Ingress and Gateway, or maybe v1 Ingress and Gateway.

E: It may actually be the case that someone might want to leave Ingress v1 and Gateway alone, but have someone be able to read Ingress as Gateway if those are convertible, or a controller that just reads both. Thinking through the use cases, it might actually be that some of what we allow in CRDs in a single workspace needs to change. That was just a little sidebar, but it was starting to open that door for: what does the version string mean? And the answer is, as you said, Jason: nothing.
A: Nothing. It's another string to append to the other string, and it means anything you want. All of this talk also makes me want to, per the comment in the chat, package any new requirement we have on controllers to be able to do smarts into a framework that we build controllers on, so that this is not something each controller has to care about, because writing controllers is already massively painful. We need to simplify that, and then add complexity while simplifying it.

E: And there's the problem David found this week, where the protobuf negotiation... some of the client libraries hard-code sending protobuf for core types. There are actually two other issues that came up, just coincidentally, this week in kube: not allowing nested maps, and not allowing floats by default, in controller-runtime generation.

E: Those are opinions that came from the kube API conventions, which were guidelines about kube-like APIs that fit into core. There's actually been some discussion among the API reviewers where we were like: no, no, those are guidelines; they aren't defining the full set of APIs that are allowed under CRDs. So we made some quick rulings there, but it did highlight that we're probably going to have to formalize a little bit what it means to have a...

E: ...what do CRDs support, and what do the kube conventions mean. There's an interesting space there: it's very easy to misinterpret those. There'll be a lot of work in the ecosystem that we'll probably want to drive and lead, which is: can we actually get some of those definitions firmed up in kube, so that it's easy for controller-runtime to interpret, so that people don't get broken.
C: Yeah, because for now, even in kcp, in all the hacks on the kubernetes branch, we have a number of places where we make assumptions to convert OpenAPI v2 to v3: the old Kubernetes extensions like patch-merge-key and stuff like that to the new extensions like list-map-keys and so on. And all this currently somehow works, but it's based on...

C: ...you know, assumptions about how it happens to work currently. Obviously there would be a number of things to clearly define and explicitly agree upon, to be sure that the schemas are not mixed up...
E: ...or messed up. And that's actually my next bullet, on the CEL stuff. Jordan has been looking for a couple of months at CEL; it turns out Joe Betz had actually been working on it, and he has a KEP that will go into alpha, which is using CEL to do extended validation in the OpenAPI in our CRD specs, so that you get validation beyond types. Can we break it down?
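For readers who haven't seen CEL: the idea in that KEP is to attach small expressions to a CRD schema so the API server can evaluate them without a webhook. Below is a minimal, hedged illustration of evaluating such an expression with the cel-go library; the rule and variable names are invented, and this shows only the expression language, not the KEP's exact CRD syntax:

```go
package main

import (
	"fmt"

	"github.com/google/cel-go/cel"
	"github.com/google/cel-go/checker/decls"
)

func main() {
	// Declare the variables the rule is allowed to reference.
	env, err := cel.NewEnv(cel.Declarations(
		decls.NewVar("replicas", decls.Int),
		decls.NewVar("maxReplicas", decls.Int),
	))
	if err != nil {
		panic(err)
	}

	// The kind of rule that would live next to the schema field.
	ast, iss := env.Compile(`replicas <= maxReplicas && replicas < 10`)
	if iss != nil && iss.Err() != nil {
		panic(iss.Err())
	}
	prg, err := env.Program(ast)
	if err != nil {
		panic(err)
	}

	// Evaluate against a candidate object's values.
	out, _, err := prg.Eval(map[string]interface{}{
		"replicas":    int64(7),
		"maxReplicas": int64(8),
	})
	if err != nil {
		panic(err)
	}
	fmt.Println("valid:", out.Value()) // valid: true
}
```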
E: It's better than webhooks. Webhooks have a whole bunch of problems; if you read through the challenges with webhooks in the CEL doc, it's part of the reason why webhooks aren't turned on right now in kcp. But the general topic is: are there ways to simplify our validation rules for API objects, or to look for patterns that would take the complex rules we have in kube and boil them down to things that could be represented in a way that's truly tied to the schema?

E: That, then, would allow us to potentially not need full webhook validation in some cases, even though admission may still want it. But that also gets back, David, to what you were just saying, which is the rules of what's allowed and the rules of how things behave: no one has really done that for core types. One of my thoughts as we go down the CEL path is...

E: ...is there something that's missing in kube, whether that's expression validation rules in the OpenAPI doc or other unfilled gaps, that would allow us to have more reusable chunks of important validations? That would let us ask things like: if you wanted to embed a pod template in your workload controller, what would you have to do to validate it correctly?

E: That kind of thing ends up helping to solve the problem we have right now with those hacks in kcp, but it also drives the ecosystem in the right direction, which is: someone who has a CRD that wants to use pod templates doesn't want to hand-write a CEL expression doc that's... well, both my hands don't fit on my screen, it's this...

C: Big. Yeah, and that reminds me as well: on the kube native types, there is somewhere in Kubernetes a very big list of exceptions.

A: So, two things. One is: I'm all for CEL for basic, simple validation, conversion, everything; anything that can be expressed in CEL should be, because it's easier to do that than in Go plus setting up webhooks. But I think there will still always be cases where you need webhooks. Tekton's webhook validation is a monster, and three-fourths of it could probably be rewritten into fairly simple CEL, but the other quarter of it is things like: is this a DAG?
E: That's the only reason API conventions exist, which was to encode a few examples of hard-won knowledge. We certainly have failed to reify those patterns. So, to get back to Adel's point as well: controllers exist to solve problems, currently, today, to map "I have a problem" to "how I should implement it in a distributed system that involves kube."

E: That involves certain classes of problems, and that's a gap. One of the hopeful things I have about kcp and the larger ecosystem is: can we actually build patterns, like a pattern library, and get controller-runtime and kube on board? Kube is trying to be a pattern library for distributed systems, but practically what that means is...

E: If you're building cloud-native apps, you have a distributed-systems problem, and it's ultimately our ecosystem's responsibility to try to solidify patterns into logic. And what you just described, Jason, is the guidance about what should be validation, what should be admission, and what should be a controller.

E: Maybe we're actually missing concepts. So that's another thing I'd like: could DAG validation be modeled in a different way than what kube has provided up to this point, because it's close to validation? Are webhooks the best way? Do we have other gaps that we could fill? That's the other type of extension which we haven't really explored yet: what are the lifecycle guarantees on an API for those, and how do you compare those to implementations?

E: That's another branch of the webhook question, which is: can we actually remove the need for a webhook by providing a better alternative to a webhook? We're not going to do that this year, or within a year's time, but can we set things in motion that call that out? That's the great part of forcing ourselves to turn these over.

C: So that's very interesting, because, well, I don't want to go too far too quickly, but regarding the demo of running the DevWorkspace controller on top of kcp...

C: ...the main blocking point, apart from the small things that I could fix, is really webhooks. To get the demo working I just disabled them in the DevWorkspace controller, but in reality that's not an option. Here we are not mainly using webhooks for validation (a bit, but that's not the main point), and also for CRD conversion, but that would be replaced by some sort of... by the API negotiation.

C: The main point is security: the fact that you run pods in which people will exec, and in those pods there might be cases where Kubernetes credentials are stored.

C: So finally there is a very hard security requirement, which is that only the creator, the initial creator of the workspace, of the DevWorkspace custom resource, should be allowed to exec inside the generated pod. That's typically a completely different use case than just, you know, high-level validation of the schema or something like that. That seems to me to be really another class of webhook requirements that could possibly be addressed another way.
E: That's issue 675 in kube; it's almost seven and a half years old. We talked about it and then rejected it in kube for a couple of reasons, and we've periodically gone back, but yeah, that is a well-known class of problem that is hard to do in kube, but is fundamental to a set of problems that kube does not solve well, which is delegated authority, which is... by updating the object.

E: I'd probably say: until we need it as part of a concrete deliverable, which I don't know that the demo needs... I'd probably say we're getting closer to the point, and we should come up with the criteria for why webhooks should be included and formalize that. Like, we need webhooks: first off, webhooks are already broken and won't work with logical clusters, because you have to identify the cluster the request is coming from in a meaningful way.

E: So even just getting the basics of "we should turn webhooks on for these reasons, and here's the implication" teed up... let's just go get that teed up, and maybe that's not demo 2, but, you know... yeah, sure, either demo 3, or it could be part of resetting the kcp prototype into, you know, libraries, or a real project, when it becomes a real boy.
A: Clayton, could you start by creating an issue that is just that sentence you said: webhooks are off, here's why they're off today, here's the kind of thing we would need to get them turned back on, and here's how that would look. It's possible that it takes long enough to justify turning them on that CEL expressions take over the world and we never have to do it.

A: But I think, to David's point and to my point earlier, there's always going to be a case where webhooks are needed; we won't be able to live forever without webhooks, but punting as long as we can will be good for our community, and keeping webhooks optional.

E: And that may be a thing that we would say: someone may choose never to run them, and they would say all problems that could be solved by webhooks must be solved by forking the library project and building your own version of kcp, including turning on webhooks. That might be another outcome, but we should tee that up with the series of whys and what the criteria are. Are there other things that benefit from forcing that intentionality? So yeah, I'll create that.
A: Are CEL expressions, CEL validations, going to make API negotiation effectively impossible? Like, if the validation in version X is "this field has to be less than 10", and then in the next, not version, but the next iteration of that version of that validation, it has to be less than nine: can we detect that that is a narrowing definition, such that the value now has to be under nine for it to be valid in all clusters that are attached to me?
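As a toy illustration of what "detect that it's a narrowing change" could mean for the simplest possible case (a single upper-bound rule, as in the "less than 10" then "less than 9" example), here is a hypothetical check; it is purely illustrative and far short of general CEL analysis:

```go
package main

import "fmt"

// upperBoundRule represents a validation of the form "field < bound",
// the shape used in the example above.
type upperBoundRule struct {
	Field string
	Bound int64
}

// narrows reports whether replacing old with updated tightens validation:
// every value allowed by updated is allowed by old, but not vice versa.
func narrows(old, updated upperBoundRule) bool {
	return old.Field == updated.Field && updated.Bound < old.Bound
}

func main() {
	old := upperBoundRule{Field: "replicas", Bound: 10}
	upd := upperBoundRule{Field: "replicas", Bound: 9}
	fmt.Println("narrowing change:", narrows(old, upd)) // narrowing change: true
}
```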
E: That kind of problem, too. So then we get into... I mean, you could prove it, but is there a different way to respond to validation rules, where a tightening of validations could be flagged or identified through some other mechanism, or does just the change of a validation rule itself require an assessment by a human? Certainly in our API evolution rules in kube we have occasionally allowed tightening of validation, but we consider it a breaking change and we force it to go through that process.

E: We generally tried not to accept it except in cases where it was so likely that no one would be impacted that it was worth it for a security tradeoff, but that kind of gets into human judgment calls, and there's nothing in automation for it today. Probably at some point the question is: what does it mean when a cluster... like today, a new code version of a cluster could tighten validation and you wouldn't know anyway, but when we get these better rules, how do we think through that? So yeah, I think it's the...

E: What are the human factors involved in determining schema evolution, and what are the machine factors, is another way of phrasing what you've asked, Jason. Yeah.
A: I think it means, as a dumb, easy thing: any time a CEL validation rule changes in any way, bump it up to a human to approve it. And it's also a good point that these kinds of validation rule changes are already happening, undetectable by us. This means they might become detectable by us: some classes of validation changes will be, or a much more expensive class of validation changes will be, detectable by us where they weren't before.

C: No, no, no, sorry, I mean that's okay, in the meantime.

E: Yeah, and feature gates are another example of that. You know, if you change a feature gate, if we change the behavior... is there a community of users who would benefit from a way of articulating what is compatible, versus just reading the kube release notes?
E: Would there be an incentive for someone who's running a kcp-like thing to actually contribute effort that talks through when evolution... So there's the "who guards the guards" question in kube, and the answer is: nobody, really. We all guard each other, and we do an okay job of it; I'm not going to sell us short, we're maybe sixty percent good. Changes in kube break production users all the time; well-meaning changes have consequences.

E: Is there a way for us to line up the incentive of lots of people in the kube ecosystem to know when things change, through automation and human effort, and put it into a thing? We might not even have to wait until the kcp cluster comes up; you could very easily say: oh well, there are some things that you have to act on...

E: ...we know that you're using this field whose behavior changed. I'm not saying we would actually go do it, but this is another vein of it.

E: Is there a separate, orthogonal thing to a kcp-like ecosystem project, which understands what breaks in kube over time and can bring to bear large amounts of human and machine effort in a way that creates value? So, maybe, imagine a human-contributed list of all schema versions published by any kube cluster, of feature gates and how they impact things.

E: Imagine that as a separate module that runs alongside, or that you run in large multi-cluster environments, which performs that analysis but can also distribute analysis questions like "is this field in use across your kcp fleet?"; that is a kind of question that you can then turn into value. In OpenShift we've done a little bit of this, looking at the data; I know Google's done some of this, and a couple of other people in the ecosystem have talked about it, where you look at the workloads.
A: Yeah, I'm not sure I completely understand how we can detect whether a field is used or not. I think we can tell whether it's ever written by a controller. Why wouldn't we... oh sorry, go ahead.

E: If you break the zero value or the default value, that's a behavior break that might fall into that stricter bucket. Maybe we're breaking the world into: there are the super things you should never break, there are things that you're probably going to break and nobody cares, and then there's a line in the middle, or a fat area in the middle, that you want to squeeze and push things onto either side of. The hard failures are stuff like: you break default values, you remove a field, you take APIs away.

A: Yeah, I think this also goes back to packaging the API negotiation stuff as a CI check, as something where, you know, a little paperclip shows up and says: it looks like you're changing the default value from three to five, this will break someone; or: you're adding an enum value.

A: This is safe, but be careful. And if you think about, like...
E: Oh, shoot... it is certainly possible to find some way of making this collaborative. Like, you know, the best...

E: The best open-source systems are the ones where everybody has a small incentive to make small changes that are easy to understand; those are the most successful communities by far. And then the things that are so fundamental that everybody needs to get their fixes in, like Linux and Kubernetes and other big projects, all have that. We might want to look at how we can create that community of people who create APIs and use them.

E: So I think about Terraform, right; we talked about this before. Terraform is taking upstream cloud APIs that have this level of diligence applied to them, writing an adapter layer, and having almost as much diligence for the big stuff, but then there's a long tail where it's just impossible to bring that effort to bear. Can we reduce the friction between API definition, API evolution and API modeling, so that people are like...

E: ...well, obviously you just go create a CRD, you do the quick Terraform adapter into kube, and then it just works in kube, and then someone can manage the evolution of that, and you catch these problems because there are more standard CI tools for testing API schema evolution. Because the moment anybody fires up one of these systems in the real world, we're like: oh well, that broke. Well...
A: Yeah, David, I'm sorry, we have two minutes for your demo of the DevWorkspace controller on kcp.

C: In 10 minutes anyone interested can have a look, and I think we already discussed the main points of the demo, related to webhooks and the other fixes. So I...

C: And one of the good pieces of news is that it worked even without implementing the stuff like the Kubernetes API server URL and the service account changes that we should do so that the pods see the context of kcp. All of that is not there, and still quite a real use case somehow already worked, so that's quite encouraging, right?

C: The pods are on the kcp layer, and yeah, I think config maps, service accounts, secrets, ingresses, services, and pods and deployments, sorry. And no, no, it's nice, cool.

A: Yeah, great. I'm going to go read this CEL KEP, because I have questions, but...
C: Yeah, and just one point about CEL: at some point we mentioned Yaegi, and that's something completely different; it's just one way, among others, to write and interpret Go code. So that's really Go code, and I was thinking about it for non-validating use cases, for example the one I mentioned about checking the creator, or other stuff where there would be some mutating behavior: maybe having something that can allow executing Go code at some well-known entry points or, you know, hook points.
E: Like, think about it: the only reason that we're having to do any of this work in Kubernetes is because Linux's plug-in mechanisms suck and nobody ever wrote a good package manager or actually figured out how to solve that problem, partially because it's an almost intractable problem. So one of the interesting things would be: if you look at Cloudflare Workers with their V8 isolates, they've added wasm support; if you look at Krustlet, which is trying to do this for wasm; it's all...

E: ...basically variations on: assumptions between calling boundaries are really hard, and there are like five boundaries. There are calling conventions and data types, then there are security boundaries within processes (and any sharing of processes is already terrifying because of Spectre), then you've got serialization and wire protocols for your inter-process or network communication, and then you've got the orchestration level, where you have to actually have the same meaning of two different concepts at a super high level, and then you have to do API evolution of them.

E: One of the interesting things in this ecosystem part would be... it does behoove us to go ask some of these questions, like when we're thinking about running wasm: what is fundamentally the difference between us running wasm and something like the Cloudflare Worker or Krustlet? If I'm running CEL, you know, CEL doesn't have to do a data-structure type transformation.

E: Wasm does, for strings: you actually have to have a shim on the wasm receiving side to handle a Go string, which means overhead, and you kind of pick the implementation... a lot of people instinctively pick the implementation that hits the performance boundary, like serializing to JSON is the worst thing in the universe.

E: You know, part of kcp is to open the door for someone to be able to say: I just want this little chunk of code to run and access these APIs, whether they're local process, remote process, or remote network. That's not really our lane, but we want to open up some of those lanes for other people to be able to innovate, without us having to completely redefine everything we're doing for them to succeed. So yeah, I've just been thinking about that a lot recently as we go through some of these things.

A: Nice, all right, more reading to do. Thank you very much, we'll see you all next week, and we'll see you on the Slack and the internet, wherever you can find us. All right, bye. Thank you. Bye.