From YouTube: Community Meeting, November 16, 2021
A
Hello, and welcome to the kcp community meeting, November 16, 2021. We have some items on the agenda, and I think we could add more if anyone is interested. There's actually quite a lot in both of these. For the first one: some of us had a conversation on Friday about how it could, or should, look to import and export APIs. That is, for a workspace to say, "I would like this API," meaning the types and the controllers that handle those types, "I would like this version of it," and for API providers to say, "I provide this API: these types, with this implementation," and how the two should match up so we can roll out upgrades of those across workspaces. Andy, do you want to go into a high-level view of how that should look, or what we're currently thinking?
B
Sure. Stefan had put together some thinking on what a data model could look like for making this happen. I don't remember if we've shared that doc publicly yet; if not, we'll make sure it's cleaned up and shared. We were thinking about a concept, with provisional names of course, like an API export, and you would specify... actually, let me start over. We'd probably start with something...
B
...that's like an API resource schema. Conceptually it's like a CRD, but there's no storage associated with it, and no handling code associated with it. So it's really just a way to model an OpenAPI schema as a standalone resource. Then you would have an API export resource that can point to one or more schemas. So imagine you have something like cert-manager, which has multiple resources and kinds that are grouped together in an API group.
B
So each one of those, like Certificate, would have its own API resource schema; CertificateSigningRequest, Issuer, and so on would each have their own API resource schema resources. Then, to export them as a group, you could create a single API export instance that points to all of those schemas. On the flip side, you would have an API import, or API binding, resource, which would allow a consumer to say: please find, or otherwise make available, an exported schema, or an export (a group of schemas), in my workspace. Then we'd probably have one more concept: we need some way for a client, whether that's somebody using kubectl or a controller, to ask for a view of all instances of a particular resource that match that exported schema. And I realize this would probably be easier with a diagram, or the document.
B
I don't have that handy right now. But imagine, for example, that two of us independently both want to export cert-manager APIs to consumers. They may be identical, or maybe there are some slight variations. As an operator, I want to deploy a controller that can go look at certificate signing requests, but I only want to see the ones that are for my schema, the one I'm exporting that other people are importing and using in their workspaces. I don't want to see somebody else's bound and imported certificate signing requests, because they weren't coming from my export, even if the type is identical.
A
Because it might not be in the future, right? Yeah. Steve said in the chat: notably, right now the import is for an export, not a schema. Yes: the idea is that the owner of the export chooses which schemas are exposed, not the user. Right.
B
Yeah, and the intent there is to allow the owner who's doing the export to give the export a name: "this is my cert-manager export," or "my cert-manager APIs." The thinking is that as a consumer you're not necessarily saying "I need a CertificateSigningRequest resource," because that's too granular. I'd hesitate to make this parallel, but it's kind of like Service Catalog and going shopping for services: I'm going shopping for APIs.
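As a rough sketch of what the three provisional resources described here might look like; all kind names, groups, and fields below are hypothetical reconstructions from the discussion, not taken from the (not yet shared) doc:

```yaml
# Hypothetical shapes using the provisional names from the discussion.
apiVersion: apis.kcp.dev/v1alpha1   # assumed group/version
kind: APIResourceSchema             # like a CRD, but no storage, no handler code
metadata:
  name: certificates.cert-manager.io
spec:
  group: cert-manager.io
  names: { kind: Certificate, plural: certificates }
  versions:
  - name: v1
    schema:
      openAPIV3Schema: { type: object }  # the OpenAPI schema, modeled standalone
---
apiVersion: apis.kcp.dev/v1alpha1
kind: APIExport                     # an owner-named group of schemas
metadata:
  name: cert-manager-apis
spec:
  schemas:
  - certificates.cert-manager.io
  - certificatesigningrequests.cert-manager.io
  - issuers.cert-manager.io
---
apiVersion: apis.kcp.dev/v1alpha1
kind: APIBinding                    # a consumer binds an export, not a schema
metadata:
  name: cert-manager
spec:
  export: cert-manager-apis         # "shopping" for the export by name; the
                                    # concrete bound schemas land in status
```

The key design point from the conversation is the last one: the binding names an export, and the exact schemas are fixed at bind time and recorded in status.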
A
Actually, from hearing your description, I think I'm going to correct the thing I said before: this is entirely schema-centric, not controller-implementation-centric. The controller implementation that is operating on these objects can change, unbeknownst to either importer or exporter. This is just about saying: I have a new schema for a certificate signing request.
E
The import, basically, is this unique thing which never changes. The implementation can change, schemas can change; but schemas have a lifecycle, which sometimes you want to see and sometimes you don't. The import, as Andy said, doesn't specify the schema; but the moment you bind, some controller will basically fix the schema, which is then represented in the...
D
...status. And there are a lot of analogs here to concepts we came up with in image streams, right? Tags for image streams and Docker registries are a lot like this: very, very few people really know anything about them. They're looking at something like a name in a Docker repository: I want postgres:latest, I want postgres:9, I want postgres:9.3.4. Almost no one ever asks for 9.3.4; they start from the...
D
...majors, basically. For the sake of argument (postgres hates semantic versioning, and that's okay, we understand, we all hate semantic versioning), say 9-3, or whatever; I'm just making up numbers at this point. That's what you want to lock to, and you've made a contract: accepting 9-3 is accepting that someone doesn't break you under the covers. The person publishing the tag verifies that, and then if someone changed 9-3 out from underneath you to be 9-4, you would have a very bad day.
D
So
I
think,
like
a
bunch
of
analogs
there
we
can
draw.
We
should
basically
talk
about
like
there's
a
good
opportunity
here,
andy
to
like
talk
about
this,
like
grand
theory
of
versioning,
which
we
don't
actually
care
what
the
versioning
is.
But
when
you
bind
to
an
api
you're
accepting
the
the
default
assumption,
probably
should
be.
You
expect
not
to
get
broken.
B
Yeah, I totally agree. I've used as a parallel example clients talking to any of the cloud APIs: Google, Amazon, Azure, and whatnot. Those are documented APIs; in Amazon's case they tend to be versioned by date, and if you're writing your own super-low-level client, that should continue to work as long as you're using APIs that haven't been removed. Presumably they'd go through a deprecation period, and you would have time to update your super-low-level...
B
...client before that happens. The same thing would be true if you're using an SDK: you've got the AWS SDK, at a specific version, that presumably is going to work against the set of APIs it's coded for, until some of them fall off. If you want to take advantage of newer features, you just upgrade the SDK. I think the same approach is something we should strive for, for both exporters and importers.
D
It's an important point to note: when you invoke an API endpoint, it's not really a durable POST, a CRUD operation. There's a very important nuance about REST here, which is that Kube has really embraced the idea of durable objects: you POST it, and it's there for the long haul.
D
When you go to an AWS API, even though they typically would never do this, they've got that old schema version. This is what we're talking about with the Stripe-equivalent APIs: that schema version has a specific behavior, and it doesn't change. They may have multiple different implementations of it under the covers, mappings, etc.
D
And schema compatibility: we haven't even really talked about this, but could you register two objects that are represented by the same object somewhere else? Probably not in etcd or in our existing storage systems. I'll put that as a note in one of the docs, but that's a separate thing we can come back to.
E
But it's kind of visible: the user, as a developer of the controller, has access to that data, and if something goes wrong, like somebody messed up an API change, there will be ways to fix it. You can go workspace by workspace and fix it in an automatic way. But it is visible to a developer of a controller.
D
Is it unreasonable... and that was a good point too, because it actually prompted another question: would it be useful to go look at other generic storage frameworks? The people who help you build schemas and then come up with a database representation and store that. I'm not talking so much about ORM frameworks...
D
...as about the set of people out there who let you create your data model, like Firebase and all of those: create your data model, create your storage model, create your schema, here are your endpoints. How people think about those, how they manage change over them, and what kinds of tools they expose to users would probably also be good as a research sub-bullet.
D
And how (I don't know that it's declarative) API builder systems work; API management. What does Google offer for API management? What does Firebase offer? What do some of these startups offer? I have a couple of small project things that I can send along, so I'll put that in one of the notes. I guess that's a good point, yeah.
E
Also, for modeling, we should take a look at image streams; maybe we'll find some good ideas we can copy, yeah.
D
I can list out whatever; Ben and I had a lot of long discussions on things like: what does the latest tag mean? And we agreed on things like: when you use the latest tag and you create a copy of it, you actually want to take what the latest tag is pointing to. So the latest tag is kind of a referential tag to a stream of consistent APIs.
B
My previous self is going to come back to haunt me, I guess, for creating the image stream APIs many...
D
...many years ago, that's right, Andy. I remember we were going through all of Brian's keys-and-arrays versus maps versus arrays, and public schemas versus internal maps.
D
Well, it was interesting, because image streams specifically said latest was a mistake and tried to give an improvement. I think one of the things with API binding would be: we should ask the question, and OLM maybe has some data on this, or people doing Helm charts: what is someone's expectation? Or even in a cloud, when you start writing a CloudFormation...
D
...template. What do you default to when you start writing a Helm chart: do you assume a snapshot and then lock there? Sorry, not Helm: Terraform. When you create a Terraform file, that creates a lock file that locks from quote-unquote "latest," or "latest stable," to all of the versions. That's what I meant by what image streams tried to do: you start by grabbing the snapshot, and then that's your thing. If you grabbed Knative, you would obviously grab its natural...
D
...if you grabbed Tekton, you'd grab a snapshot, but then you would expect it to be supported. But then the next question would be: are there other choices? We talked about feature flags and feature gates and optional features; can you opt into behavior differences? Is there a config file for an API, or is that what an API is? Feature gates make sense in Kube because we don't allow you to have multiple versions of the same API.
D
This was something I was trying to wrap my head around even as we were talking about it: you may not need feature gates, because you can just expose something with all of the gates on, or a set of the gates on, and say: you want to try this? Here it is, with this field. It's a published version plus some optional changes that may not have a future. Not the graph stuff, but, you know, I can expose you an API, and...
D
If
I
want
to
add
a
field
that
controls
behavior,
I
just
add
it
to
the
api,
and
then
you
know,
then
there's
the
corresponding
question,
which
would
be
you
know.
Is
there
a
way
to
specify
behavior
in
mass
and
that's
like
the
organizational
scoping
right
like?
Would
I
expose
an
api
to
an
organizational
scope
or
to
a
smaller
scope
than
all
workspaces,
or
would
I
just
say,
like
I'm,
exposing
an
api
to
my
org,
and
only
my
org
can
see
it.
D
It's not... there it goes. Okay, so I was going to try and find something concrete. I don't know if we've talked to anybody; we were kind of going on our own experiences here, and we've got a couple of them. When will you all feel comfortable? I was going to go look to line up at least one or two people to talk to who have an API evolution problem today, to find some folks who've done this.
D
We had the Jordan thing; I don't know if I shared your doc, Jordan. He said he was looking for his old stuff, but he'd be willing to comment on a big list of API evolution problems that were more in the general Kube sense. But I was going to try and find one or two end-user-type people who would say: hey, I want to walk through exactly what API evolution looks like as a team that has deployed an operator and has this problem. Has anybody had any of those discussions?
B
Not yet. Coming from a long time working on Cluster API, I think there's good history there, so I could reach out to some of those folks if we wanted.
D
And I feel like we have some runway here; I wasn't thinking about this. I think there's still value in mining our own experiences. I was going to try and craft concrete use cases that we're going to have to go solve.
D
Actually, so what you're saying, Stefan, is... probably the folks doing the Kafka schema registry, because we're talking about API-management-type solutions, and people doing API schema registry stuff for Kafka. That's pure schemas on data that you can't change, but there are rules around it, right? You can generate clients from it. There are a lot of similarities; it's not identical. And then walking...
D
...maybe we want to talk about some examples from the API management space. So those two could be the concrete ones that I could go suss out.
D
Because the schema registry folks, you know, that's imposing a schema on messages being written onto a Kafka log, and there are a lot of analogs there. They may not have the whole experience, but they absolutely have to deal with evolution, and we may be able to tease apart the elements that are common for that evolution. And then, on the API management side, at least Joaquin, we've got some experience on that, right?
E
We discussed how to make those schema changes, and rejections of schemas, and this whole graph idea, visible to the user, to the developer. I think originally the idea was to make this an implicit relation between schemas: defined by some logical rules, but not really visible, more like a theoretical concept. The idea we had was to make a parent, or successor, relation visible in the schema, similar to an owner reference.
E
So a schema can point to its parent, and this forms a graph, obviously. And we can do validation: we can check when obvious changes are incompatible. Say, renaming a field, for example: that shouldn't be possible. Those things we can detect, and we can reject a schema change or a schema creation. So there would be validation for the developer in building this graph. We cannot see everything, but we can rule out a lot of schema changes which...
D
...we don't want. Well, and then that leads to: for the things that we can't rule out, is there an attribute, similar to the parent relationship, that allows you to flag this one as incompatible, and that then propagates? The source of truth is the schema, where someone says, "oh no, this one was bad, and here's why." Or we can associate warnings with it, or something like that. Or when we...
A
And we have quite a lot of schema-compatibility-checking code that David contributed, basically exactly for that. Yeah, it's pretty complete, actually. Well, it's complete for the syntactic changes...
E
...not for the semantic ones, right. But sometimes, like with a validation change, you know it's safe because you can reason about the data in the clusters of customers. Formally it's not safe, but you can override it.
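The parent/successor relation described above might be sketched like this: an owner-reference-style pointer from a schema to its predecessor, which admission can walk to reject obviously incompatible changes. The field and condition names here are invented for illustration:

```yaml
apiVersion: apis.kcp.dev/v1alpha1   # assumed group/version
kind: APIResourceSchema
metadata:
  name: v2.certificates.cert-manager.io
spec:
  # Hypothetical field: the schema this one evolved from, forming the graph.
  predecessor: v1.certificates.cert-manager.io
  group: cert-manager.io
  # ...schema body omitted...
status:
  conditions:
  # Set by syntactic compatibility checking; an outright-incompatible change
  # (say, a renamed field) would instead be rejected at creation time.
  - type: CompatibleWithPredecessor
    status: "True"
```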
A
Yeah, and I remember talking about the KEP to add CEL validation. I'm really excited about that, but I'm also worried about what it will do to schema compatibility checks, because now you have this whole language for validating things, and there's no way to tell whether...
D
So you've stopped hacking off your foot with a bloody hatchet, and instead you've moved on to gently banging your finger against the desk with a hammer. Unfortunately, I think we'll still have webhooks, so we'll have both the hammer and the saw. Jordan is actually on jury duty today, and we were chatting about another API review thing before code freeze, and he was saying...
D
...he thinks the CEL stuff, like, that's his personal mission, to get rid of webhooks. We were talking about the cynicism of all your mistakes living forever. It was interesting too, because the CEL stuff is really just making the indeterminability of code more obvious: we were going to have to deal with concrete built-in type validation changing on core objects in Kube no matter what, and we had already accepted that. So anything we do is just net better; it may just not be worth the cost, and all that.
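For reference, the KEP mentioned above landed in Kubernetes as CRD validation rules (`x-kubernetes-validations`, alpha in 1.23). A small example of the kind of in-schema CEL rule that can replace a validating webhook, and that, as noted, purely structural schema diffing cannot reason about:

```yaml
# Fragment of a CRD version schema (group/names omitted).
schema:
  openAPIV3Schema:
    type: object
    properties:
      spec:
        type: object
        # A transition rule: replicas may never decrease. Comparing two schema
        # versions structurally cannot tell whether such rules got stricter.
        x-kubernetes-validations:
        - rule: "self.replicas >= oldSelf.replicas"
          message: "replicas may not decrease"
        properties:
          replicas:
            type: integer
```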
D
But it's a good angle: you have to accept that physical types on a cluster can change and will be wrong, and you also have to accept that something can sneak by. If we start with those axioms, we're better off, because we're saying we're going to design for humans getting things wrong: how do they get out of those scenarios?
D
And so that's the thing: a Kube object, when it's broken, usually lets you mutate it to get back to working. You could do it through apply or whatever, but you don't necessarily have to go and redo everything in your entire environment to get back to it. What is the analog for APIs?
D
Yeah, and on the CEL side, too. One note, and Stefan, your virtual workspaces stuff triggered this: I was going through what it would take to define a transformation such that you could actually do most of it. We were saying virtual workspaces are examples of things that could be in code. There are relationships where the question is: is CEL, or something like it, actually complete enough that you could define a transformation on the underlying types, or program it? You might be able to.
D
I'm not sure it's worth it, but that was an example I hadn't thought of, and you prompted it when you were talking about defining virtual workspaces on the fly. Even if a lot of them are done in code, there might actually be scenarios where we say: oh, this is just a...
D
...an aggregated API server is hard to implement in something like CEL, but you could conceivably source another thing and then do a transform on it, which for a lot of virtual resources that are in the wild today, and for virtual sub-resources, is similar. So there's a good thread there that I want to tie back to as we get more familiar with CEL-type problems and CEL-type tradeoffs.
D
Stephon
is
there
any
migration
stuff
that,
like
we
talked
about
like,
if
you
needed
to
go
fix,
something
would
cal
be
enough
to
do
a
fix
and
distribute
it
to.
E
There are other limits it forces on you: you can never do cross-resource logic; for that you need code. But if it's really just a reduction over this one object that you have in front of you, and you can write it as a pure function, you basically can do that. It has some limits, like recursive data structures cannot be expressed, but that's not the normal case.
D
We might get rid of webhooks in kcp and say: if you want a webhook, you put it on the cluster. So we should ask; let's put that in the list of debates to have with ourselves: are webhooks truly required for the problems we are solving? And when we see one, ask: is this...
D
Does
this
require
web
hook,
because
previously
we
didn't
design
a
lot
of
core
cube
resources
with
web
hooks
in
mind,
and
so
the
answer
for
all
the
cube
resources
is
we
didn't
want
a
web
hook
when
we
designed
them,
we
put
web
hooks
into
crds
because
that
would
be
like
crds
in
web.
Hooks
was
like
the
example
of
like
well,
we
need
something
for
admission.
We
know
that
crds
are
coming
with
well
third-party
resources
at
the
time
and
we'll
just
need
something
to
deal
with
it.
What
tool
could
we
use?
D
But,
let's
maybe
we
should
tee
that
up
for
a
ask
the
opposite:
can
we
make
a
hard
line
against
web
hooks
deliberately
in
our
design
and
what
are
the
trade-offs.
A
When you say webhooks would need to run on the physical cluster, that's effectively like how we're talking about controllers running on the physical cluster, pointing back to kcp. We would just have...
D
No,
I
I
literally
mean
like
if
you
want
a
web
hook,
you
put
it
on
the
the
low
level
type
that
shows
up
on
a
cluster,
and
then
we
would
have
to
ask
the
question:
is
there
a
problem
that
we
cannot
solve
without
web
hooks
and
like
we
have
a
couple
from
like
crds
already
like
you
know,
you
could
argue
sub-resources
and
admission
but
like
well.
Maybe
we
don't
want
those,
maybe
there's
a
better
way
to
do
that
like
and
again
like.
D
Yeah, yeah, black-boxing is useful; I was talking about the other side of it, which is: having cut arbitrary validation, we're going to have arbitrary code, and maybe there are use cases for that, if you need webhook behavior at massive scale. A problem with webhooks is that a webhook dealing at the control-plane level has to be making consistent list-watch calls and doing all of that, and a webhook couples a failure...
D
...domain to being able to change something. A webhook is a line in the dependency graph of all the coupled failures on a cluster, and when you have a self-hosted cluster it gets even worse: you cut the webhook, and suddenly the whole cluster goes down. That's the most common webhook failure anybody has described, because they put it on pods; because, of course, everybody wants a webhook on pods.
D
We have a bunch of mitigations for that on the control plane: putting a webhook on a control plane should not bring down the control plane, and putting a webhook on a physical cluster should only have an impact on that physical cluster. So it might be, as we talk through it, that we say: oh, webhooks don't cost us much on a logical cluster because they're so narrow in scope. But maybe there's something like: oh, well...
D
That
actually
is
better
than
because
they're,
just
we've
completely
separated
the
failure
domain
and
we
would
just
have
to
make
sure
to
run
those
web
hooks
in
some
place.
That
is
like
you're
still
coupling
to
the
failure
domain
of
the
web
hook.
But
maybe
you
could
say,
oh
well,
then
that's
just
a
sharding
problem,
which
is
it's
very
easy
to
break
up.
Just
like
we
can
break
up
a
controller
just
like
we
could
break
up
parts
of
an
api.
We
could
also
break
up
the
web
hooks.
D
A
I think so. Let's go into the kcp work packages doc, which...
A
...is effectively the stuff we need to do for prototype 2, plus other stuff on other timelines beyond that. Andy, you are listed first on some of the last few; is there anything else to add, excuse me, to...
B
...this section? Well, I'll start by saying that this is definitely a living document; Stefan and I were just brainstorming this morning, before this meeting, fleshing out some additional things in here. I think all, or the majority, of what's in here probably needs to get translated into GitHub issues...
B
That
folks
can
go,
see
more
information
about
and
potentially
work
on,
if
they're
interested
I'm
happy
to
go
through
things
line
by
line
here,
if
that's
a
good
use
of
our
time,
I'm
also
happy
to
not-
and
let
folks
look
at
this
on
their
own
time
and
add
comments
and
questions
as
needed.
So
I'll
sort
of
turn
it
back
to
the
group,
and
you
all
can
tell
me
what
you
would
prefer.
A
Personally, I don't think we need to go through it line by line. I think the interesting parts are where there are dependencies between these. These are roughly divided work streams, chunks of code or chunks of work to do. The interesting part to me is where there are dependencies, or overlaps, or things like that. For example, the syncer: the namespace scheduler needs to become multi-logical-cluster aware, and that is a dependency that one chunk of work has on the other chunk of work.
A
How
that
is
configured
doesn't
really
matter
to
the
sinker,
but
the
syncer
needs
to
be
able
to
enforce
that
in
some
way
or
the
syncer
or
some
system
related
to
the
sinker
needs
to
be
able
to
ensure
that
things
don't
go
over
there
allotted
capacity.
Those
are
the
interesting
parts
to
me.
Otherwise,
it's
just
a
list
of
stuff
for
me
to
go,
you
know
do
but
where
it
overlaps
and
stuff
is
interesting,.
E
There are some which are more like upstream refactorings, pre-factorings: things we know we have to do, and have to start now, because it just takes months to get them upstream, and they will help us considerably in the future. When we come to prototype 3 or something and want to build certain things, we'd better have those upstream. So there are different kinds, or categories, of work, for different interests, different characters of people.
E
So
we
have
people
who
like
to
work
upstream,
who,
like
defect
doings
there
are,
there
is
work,
it
doesn't
have
to
be
this
prototyping,
hacky
kind
of
rock
style.
There's
other
work
as
well.
A
Yeah,
I
agree,
that's
also
interest,
that's
an
interesting
overlap
between
our
work
and
upstream,
which
has
the
confounding
factor
of
impedance.
Mismatches
like
like
code
will
be
much
faster
to
write
in
our
land
and
much
faster,
much
slower
to
write
in
upstream
yeah.
Thank
you
for
calling
that
out
are
there
is
any.
A
Is
anybody
seeing
anything
calling
out
to
them?
You
know
like
a
siren
song,
I'd
love
to
go
work
on
this
or
or
the
opposite,
terrible.
What's
a
gorgon
or
something
something
you
don't
want
to
look
at.
D
I'm
hearing
crickets
I
mean,
certainly
I
would
say
I
and
stefan
I
don't
know
what
your
take
on
this
is
but,
like
nobody
likes
the
master
package
and
it's
like
a
very
like
painstaking,
like
teasing
it
apart,
but
like
it's
just
like
horrifically
coupled
how
much
friction
there
like
that
one
seems
like
a
one:
that's
like
teasing
parts.
Well,
the
control
plane
package
now
teasing
that
apart,
like
so
that
more
of
that
stuff
is
available.
D
If
you
want
it
in
extensions,
api
server
or
in
another
staging
like
that's
one,
that's
like
that
sits
at
the
root
of
being
able
to
build
minimal
api
servers
and
that
one
also
has
the
clearest
benefit
for
someone
building
api
servers
like
cube.
That
was
the
one
that
I
thought
of
when
yeah
the
second
one
here
in
the
list.
Yeah.
E
And
I
think
there's
also
value
in
upstream,
if
you
don't
consider
kcp
or
anything
similar
like
we
know,
this
is
technical
depth
and
I
think
nobody
will
object
if
we
clean
that.
D
And
I
had
taken
a
stab,
for
instance,
at
like
moving
all
the
internal
apis
into
their
own
staging
repo
and
then,
like
you
know
that
that's
something
I
was
like
that
was
too
hard
to
do.
A
year
ago,
it's
crept
back
in
like
we
should
probably
look
for
places
where
we
view
that
we're
backsliding
on
long
term
objectives,
yeah
code
separation
and
be
like
hey
the
code
separation
needs
stuff.
We're
backsliding.
Is
there
a
way
that
we
can
use
that
refactor
to
accomplish
a.
E
In
absolutely,
we
have
some
tooling
to
restrict
imports
and
we
can
restrict
who
can
use
certain
packages,
so
we
can
define
rules
so
that
upstream,
even
if
this
is
a
multi-month
process,
that
this
is
usually
the
problem
like
fixing
this
in
one
pr
is
easy,
but
you
don't
get
it
merged
because
it
takes
weeks
to
prepare.
So
we
have
to
find
strategies
to
do
those
moves
in
small
pieces
but
make
sure
that
nobody
behind
you
just
destroys
what
you
just
have
done
so,
but
there
are
ways
to
do
that
so
yeah.
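The tooling referred to here is, in upstream Kubernetes, the import-boss verifier driven by `.import-restrictions` files placed in packages. A sketch of the kind of rule that keeps a decoupled package from quietly regaining a forbidden import; the exact paths below are illustrative:

```yaml
# .import-restrictions, verified by hack/verify-import-boss.sh in k/k
rules:
  # Any import matching the selector must satisfy the allow/forbid lists.
- selectorRegexp: k8s[.]io
  allowedPrefixes:
  - k8s.io/apimachinery
  - k8s.io/apiserver
  forbiddenPrefixes:
  - k8s.io/kubernetes   # no reaching back into k/k internals
```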
B
We've broken them with the prototype code, though; I ran into that yesterday when I was trying to bump gengo in our fork of Kubernetes. So we'll have to do some untangling of the broken imports as well.
B
No, no, I think for the time being it's easier to say we can have the import violations; but if we need to bump any dependencies in Kubernetes, we just can't, until we undo the hacks and turn them into real things.
D
Yeah, I mean, everybody upstream is playing chicken with splitting up k/k any further, but there's just a bunch of ugly coupling there. So that would be another group, code organization or whatever: the kubelet going into staging is a super obvious one. Internal APIs going into staging has been one of my hobby horses, but I haven't gotten past that, I mean...
D
...into staging; sorry, I missed that. The kubelet going into staging is a big one, because it keeps backsliding: the kubelet does not get to use internal APIs, and people keep adding internal APIs to the kubelet. So...
B
They're not really specced out. I think Stefan can certainly provide insight, and David when he's back. With my recent explorations into the workspace-inheritance hack, I'm starting to learn how discovery is set up for /apis at the root level, as well as CRDs; so some of the stuff in there, around untangling aggregation and the apiextensions-apiserver, I've got some of that information in my head as well.
E
Yeah,
but
the
tier
vr,
I
think,
is
cube.
Aggregator
is
nothing.
We
want
api
services,
they
don't
play
a
role
here
if
we
want
something
like
navigation.
This
is
more
like
on
the
virtual
workspace
level,
maybe
completely
different
way
to
implement
it.
So
we
want
to
get
that
out.
This
makes
our
control
plan
much
easier,
yeah.
E
So
if
you
have
somebody
somebody
on
the
call
or
somebody
in
your
team
who
wants
to
participate,
I
think
we
can
give
pointers.
There
are
many
of
those
topics
there's
one
as
well
about
resource
version,
so
people
like
controllers,
steve
investigated,
which
controllers
use
resource
version
ordering
you
might
want
to
that
one.
C
It just seems blocked because there's this really obvious use case for managing a local in-memory cache, and there's a bunch of controllers in kube that do it. And so I like the idea of a KEP to optionally garble your resourceVersion, but as soon as you do that you're gonna break your kube, so, like, using it to figure.
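The idea Steve raises above, optionally garbling the resourceVersion so that controllers which wrongly assume RVs are ordered integers fail fast, could be sketched roughly like this. The names `OpaqueRV`/`DecodeRV`, the prefix, and the encoding are purely illustrative assumptions, not an actual Kubernetes or kcp API:

```go
package main

import (
	"encoding/base64"
	"fmt"
	"strings"
)

// Hypothetical sketch: wrap a storage resourceVersion in a deliberately
// non-numeric, non-ordered string, so any client code that parses or
// numerically compares RVs breaks immediately instead of silently
// misbehaving. Only the server would ever decode it back.

const opaquePrefix = "opaque:"

// OpaqueRV encodes a resourceVersion into an opaque string.
func OpaqueRV(rv string) string {
	return opaquePrefix + base64.RawURLEncoding.EncodeToString([]byte(rv))
}

// DecodeRV recovers the original resourceVersion for storage-layer use.
func DecodeRV(opaque string) (string, error) {
	if !strings.HasPrefix(opaque, opaquePrefix) {
		return "", fmt.Errorf("not an opaque resourceVersion: %q", opaque)
	}
	b, err := base64.RawURLEncoding.DecodeString(strings.TrimPrefix(opaque, opaquePrefix))
	if err != nil {
		return "", err
	}
	return string(b), nil
}

func main() {
	rv := OpaqueRV("12345")
	fmt.Println(rv) // non-numeric, so integer comparisons break fast
	orig, _ := DecodeRV(rv)
	fmt.Println(orig == "12345") // prints "true": round trip is lossless
}
```

Running controllers against a server that did something like this is one way to "figure out" which of them depend on RV ordering, which is exactly the survey Steve did by hand.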
D
Generation, okay. And those were all ones that were fixable by spec generation in metadata; they didn't realize.
E
D
Which, with generation, is like what generation is for, yeah. Daniel didn't really like status generation, but I think I can still win the status generation argument eventually; I've got to go prove to him that his baby isn't ugly by trying server-side apply for the kubelet. Was there any case, Steve, where status generation, or something that covered status generation, was necessary today? Do you know, off the top of your head?
D
We learned a bunch. Now that we've learned a bunch, let's put up a set of principles that guide us going forward, not the idealistic version that Tim or Clayton or somebody who's an old-timer uses, but here's what someone can reference. I did that when we put in the metrics for scheduling: I went and defined the resource model in a KEP, and the scheduling SIG signed off and node signed off, so we can at least be like, hey, if you want.
A
So it seems like we have successfully volunteered Steve to do the upstream resourceVersion opaquifying, the opacification. Are there any others we want to talk about? I mean, this is a living document; we should add more context to these as we go, but.
C
Quickly, talk about your sequencing question about this, short term.
C
So, in terms of listers and informers being logical-cluster-aware: right now, if you use the right client for your lister and your informer, and you're careful about how you access the lister with the key from your queue, you're fine, like it's.
A
If it's done and it works, then I'm completely happy. First of all, I'm completely happy to have the namespace scheduler not be multi-logical-cluster-aware immediately anyway, because we can demo useful things with one workspace being scheduled. If what you're describing already works and just needs to be held carefully, I just need an example or handholding.
A
C
When I push the workspace controller PR, you'll see that. What was the other thing that you wanted, the other, the other overlapping.
A
I don't really care how it's expressed to users or how it even looks to the syncer, but the syncer will be responsible for enforcing that in some way, and so we just need to, or the scheduler will. If workspaces are never allowed to have more than 10 CPUs, then the scheduler needs to enforce that. And if workspaces are never allowed to have more than 10 CPUs per physical cluster, then the syncer needs to enforce that.
A
Yeah, I just mean that however that is designed, the syncer will be responsible for enforcing it, and I want to make sure that we design something that the syncer can enforce.
A
Like, I'm blocked on writing code because this doesn't exist, and blocked on writing code for 100 other things, so.
C
A
Yeah, I think so. When the namespace scheduler needs to become multi-logical-cluster-aware, I will ping you for handholding about how to use it correctly. But yeah, we have a few minutes left, if anybody else has anything else they want to discuss, show off, ideas they've had.
D
I was just gonna say, the explosion of documents and people chasing stuff and coming up with scenarios is awesome. We just need to make sure that as we're doing it, we keep building the interconnections between stuff, so that someone can follow what we're doing. So it's good. We should basically, at some point soon, encode whatever our agreed pattern is for communicating this stuff into something that goes into a repo, which is like: here's how we communicate design stuff; we start with investigations.
D
We started getting into Google Docs shared with kcp-dev. We've got some of what I would call the ADR-style docs, which aren't really ADRs in a community sense; they're more ADRs in the sense of folks executing on this internally, in one set of teams. So we probably need to think about that, but this is really good. So I congratulate us.
A
Nice work, everyone, please apply one back-pat. All right, great, have a great day and we'll see you next week. Bye, everyone, see you.