From YouTube: Community Meeting, April 25, 2023
A
All right, today is April 25th. This is the kcp community meeting. If you all were out at KubeCon or couldn't make last week's meeting: we did talk about how we are looking for new maintainers for the project. Let me share the note that I posted last week; basically, I'll echo what's in here. We still firmly believe in Kubernetes, the ecosystem, obviously OpenShift, and the community that is part of kcp, Kubernetes, OpenShift, and the broader CNCF. It's an amazing group of people, and we've had to do some difficult inward looking to reevaluate what we can commit to and work on upstream. Unfortunately, for those of us who have been maintaining kcp while working at Red Hat, we're being shifted to other things, because kcp is less of a priority for Red Hat in terms of what we can deliver for maximum impact and benefit for our customers and products. Having said that, we love the open source community, and we love that you all have been interested in kcp for the past couple of years. So essentially we are looking for new maintainers.
A
So if anybody is interested in taking over and wants to continue on with kcp and the awesome stuff that we have here, please reach out to me, Paul Weil, or Stefan Schimanski. We are available to help with the transition, and we would love it if anybody has capacity to take on that role. So that's all I've got for today; I'm happy to answer questions or continue this discussion. So if there's any...
D
Yeah, let me just add that for the edge-mc work, also called kcp-edge: we still retain management support and are committed to continuing to develop this, and we will deal with whatever happens with the remainder of kcp. We're still working out the plan, and as part of that, in fact in this forum here, I have some questions I'd like to explore.
D
We do find kcp core useful. We do not have the manpower available to maintain it by ourselves. If there were a remaining community of people who have found kcp core, or some fragment of it, useful, we might be able to join with that community to maintain it. But we also do believe that we need independence from kcp for the edge work, for those of you who have been following it.
D
Our dependence on kcp has mainly been on being embedded in a context that has multiple logical clusters. We use these logical clusters for a few things, and they're very useful, but the edge-mc use of the concept is not very detailed.
D
We could work with anything that supplies things that act like kube-apiservers and the associated controllers. So the thinking is to develop an abstraction like the one Vince did for controller-runtime, but for more general use, so that we can work with a variety of providers for what we need. So, a question for this audience: are others... I guess my point was, I'm trying to illustrate how little of kcp core we need.
D
If there are other people in this community who would be interested in and able to contribute to maintaining that level of functionality, we might be able to join together to accomplish something. Also, as was outlined earlier here in the discussions of upstreaming the kcp work: one thing that was agreed on with upstream is that it would be useful to upstream the generic control plane work. So I want to ask about the possible future of that. I do think that would be useful for the community.
D
For the Kubernetes community as a whole, for the reasons that have been discussed before. Also, I do think the idea of virtualizing the server, as you guys call it here, logical clusters (upstream they call it super namespaces, whatever you want to call it), has general utility if it's not oversold as multi-tenancy. If we think of it only as adding a higher level of namespacing, I think a case could maybe be made, and that's a separate track.
D
It would be great if that is, or could be, finished, and then a new release cut, because at least for a while we're going to continue to use kcp as it is. It would be great if, as it is, we were at v0.12 using Kubernetes 1.26. So I wanted to ask about that.
A
Yeah, I have all the unit tests passing and everything lints okay. I have one issue I've got to figure out, where something in resource quota is closing an already closed channel and the server panics. I think I had some flakes with some of the webhook e2es, but I'm chasing that stuff down at the moment. Everything else is essentially done, pending that.
D
Great. Oh, and I forgot one other thing, and it's not so much an ask for the community; I just need someone, perhaps Andy, perhaps one other person. As you know, we talked months ago with Stefan, and we agreed that part of the solution to our problem will be to create a new kind of view that denatures objects: certain kinds of objects that kcp is still giving an interpretation to.
D
You and I started to talk about that last week. We need to actually finish that conversation with someone and get that view built.
B
That's a very good question; I'm not sure, actually. So far we've been only users of kcp, and we've admired the development work from afar. We haven't really delved into the depths of kcp, and I can't speak for my project lead, so I can't tell you how much manpower we could put in. Unfortunately, we're a bit surprised by the decision last week that kcp was getting less of a focus at Red Hat, and so we are scrambling a little bit and seeing what we can do now.
D
Okay. Now, the complicating factor here is the APIExport and APIBinding. There is the external kube-bind, which does not have any entanglement with the logical clusters in kcp and could be used, but as has been pointed out here repeatedly, the version in kcp is much more efficient precisely because it is entangled with the logical clusters in kcp. So this begs a question. Let me ask you and MJ: are you interested in APIExport and APIBinding, as well as logical clusters?
C
Yeah, I think APIExport is not that big of a deal, all right. We could work that at a higher level, basically by just distributing the APIBindings the old-school way, by interacting with multiple virtual clusters. I understand the complexities it brings, because it's the most complicated piece of code in the code base currently, and I personally think that if we split that out now, we have a chance to keep maintaining it, whereas if it stays, it might be a bit overkill.
D
Well, yes, you're getting to the point that I wanted to ask Andy and the other people who have been doing the maintaining about. This is really the sizing question: how much work are we talking about if some remaining community were to take on maintaining? And, of course, the important question then is whether we're talking about maintaining only logical clusters, or logical clusters plus APIExport and APIBinding. So I think...
A
Well, I think that's easier to maintain than the APIExports and APIBindings. There were a lot of things that we ran into with partial metadata, cross-cluster lists, and watches, where I understand the intricacies now, but when I was first writing it there were gaps in my understanding that only showed up after really digging in and trying to figure out why some random e2e test failed once in a blue moon. So I think the maintenance burden for logical clusters is lower, even if it's more mechanical: you've got some function signature where, in kcp's fork of Kubernetes, we've added a parameter or two, and that has a ripple effect.
A
Obviously, when the function signature changes, you've got to go change all the call sites, but then upstream diverges and adds some other field to the function signature, so you just have to deal with conflicts, and that's mechanical: oh, here's what upstream did, here's what kcp did, let's go reconcile the two. It's not that bad from an effort perspective, whereas the APIExport and APIBinding side is just technically more challenging, in terms of making sure you don't mess something up going forward.
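
(For illustration only: a minimal, compilable sketch of the signature drift Andy describes. The names are hypothetical stand-ins, not actual kcp or Kubernetes code; the real fork threads a logical-cluster argument through many such functions.)

```go
package main

import "fmt"

// Widget stands in for any API object; everything here is hypothetical,
// purely to illustrate the kind of fork drift described above.
type Widget struct{ Name string }

// Upstream shape (before the fork's change), kept as a comment:
//
//	func GetWidget(ns, name string) (*Widget, error)
//
// Fork shape: a logical-cluster parameter has been threaded through, so
// every call site must be revisited on each rebase, and upstream may add
// yet another parameter of its own, producing mechanical merge conflicts.
func GetWidget(cluster, ns, name string) (*Widget, error) {
	return &Widget{Name: cluster + "/" + ns + "/" + name}, nil
}

func main() {
	w, _ := GetWidget("root:org:ws", "default", "demo") // call site updated for the extra argument
	fmt.Println(w.Name)
}
```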
D
Okay, thank you. Would you be quantitative? Let's start with the simpler one, then: just maintaining logical clusters. If we're talking about maintenance, is this merely a matter of rebasing on every Kubernetes release, or is there more work that we're talking about?
D
So,
let's
start
by
quantifying
that,
can
you
give
us
an
estimate
like
in
terms
of
man
hours
or
something
like
that,
so.
A
For the 1.26 rebase, I would say that from start to having kcp function, that is, having the server come up and be ready, it's probably about a week of my time. But where things get a little bit harder to quantify is that you never know: was there a new controller added upstream that we now have to make logical-cluster aware, because it's a new critical component in a new Kubernetes version? So, for example...
D
Api
priority
In
fairness
we
had
planned,
you
know
a
while
ago
to
to
put
in
you
know
we
had
a
guy
put
it
in
and
then
the
design
was
rejected
and
right
now
it's
not
there
at
all
this.
So
that's
you
know
an
example
of
in
some
sense
of
control.
It's
it's
not
still
not
logical.
A
I think a better example... I mean, that is a good example of things that need to be kcp-ified, so not to discount the importance there, but the CEL-based admission, ValidatingAdmissionPolicy, is another new feature.
A
Right, so that one came in, and I had to undo some changes that we had done in kcp that made it harder to pull it in. But that one, because of some of the foundation work that we already have, wasn't too bad to add, and I'd be happy to show what that looks like, and then we can try to quantify that.
D
You can't predict, right, because it depends on the changes that come down the pike, and you never know what that's going to be, yeah. Let me also follow up on "a week of your time." If I were to tell someone that something took a week of my time, that would be ambiguous: do you mean a calendar week, or a week of work outside of meetings? Because my time is highly diluted by meetings.
A
It was definitely a calendar week, and there were meetings during which I was not spending time working on the rebase, so probably somewhere between 20 and 30 hours.
A
All right, thank you. But also, and not necessarily to toot my own horn...
D
Right, okay. So let me also go into the other topics, though. The generic control plane: can you give us some idea whether Red Hat still thinks that's something they would be willing to spend time on?
A
From what I recall, the direction that we were given was that we can be advisory on that, but not lead the effort. So if folks on this call write a KEP, we can comment on the KEP.
D
So I noticed a while ago that there was already a generic control plane directory, or package if I recall, in kcp, but lately it seems to have disappeared.
A
Okay, so the origin of that package is that we take the kube-apiserver code and copy bits and pieces of it into this brand-new generic control plane package, and then we turn off things, or don't carry forward things, that we don't want to have in the generic control plane. So anything around the Kubernetes service, and trying to deal with services and endpoints, doesn't carry forward. I think we talked about this last week.
A
SIG API Machinery is more interested in having people start from nothing and propose what a generic control plane library would look like, versus starting with the kube-apiserver and trying to strip pieces of it out. The end result may be identical, or very close to it, but that's just the direction that they've given.
D
Well,
what
I
heard
last
week
was
a
little
bit
different,
which
was
a
development
plan
which
was
to
first
develop
the
generic
control
plane.
I
mentioned
this
would
be
a
repo
like
that
would
be
built
on
top
of
API
server
and
once
it
was
sufficiently
developed,
then
the
Kube
API
server
could
be
modified
to
be
built
on
the
generic
control
plane,
rather
than
built
directly
on
API
server.
D
Right, right, just like apiserver is a staging repo: it gets published as a separate repo, but it's under the staging directory. So what I heard was a development plan: get the generic control plane functional, and then cut the kube-apiserver over to use it. And of course that makes sense; you don't want to destabilize the thing you've got.
D
You want to make a move only when you can actually make a functional move, so that makes perfect sense. But it also makes sense to me that, if that's the plan, then what I would propose for the generic control plane is exactly a subset of the kube-apiserver, because I would be looking forward to the day when the kube-apiserver becomes, in addition, a build on top of it.
A
I don't disagree with you. You might want to chat with David Eads from SIG API Machinery; he might be able to articulate his expectations better than me trying to play telephone. All I recall from the conversation is: don't start with the kube-apiserver and strip things out; start with nothing, develop a generic control plane library, and then cut over.
A
Just a couple of logistical things, in terms of CI. I know we talked about this last week, but we have some of our stuff in GitHub Actions and some of it in the OpenShift Prow instance, and I think for the time being it's okay to keep having stuff in the OpenShift Prow instance to keep the lights on. There are some limitations: you have to be in the OpenShift group, sorry, the OpenShift GitHub org, to approve; you have to be a member of that org so that you can /approve PRs, and I believe only Red Hat employees can be in that GitHub org. So we'll be happy to continue to review and approve things that edit CI for kcp until, hopefully, there's some transition to a different system. But it's not something that we're just going to turn off immediately.
A
We can get you added in the OWNERS file as approvers, make sure that you have all the appropriate permissions in GitHub, and certainly talk through any logistics in terms of how repo maintenance is done, releases and whatnot; we try to have as much stuff documented as possible.
A
I
know:
MJ
asked
for
a
rebate
stock
which
I
still
have
on
my
to-do
list,
but
I
think
really
the
only
the
only
thing
where
people
might
struggle
a
bit
is
just
what
I
was
talking
about
with
prowl
and
getting
the
PRS
approved,
but
we'll
we'll
be
around
for
helping
with
that.
F
Okay,
so
question
for
you
Andy,
you
said
you
had
something
about
how
you're,
managing
or
putting
together
releases
I'd
be
interested
in
seeing.
A
There's a doc on the website and in the repo for it. The only thing where I think I sometimes vary from what's written in the doc is editing the changelog in the release notes: sometimes, actually most of the time, I'll go in and delete things that don't need to be announced in a changelog, like "oh, we cleaned up this code," stuff like that. But everything else I pretty much follow to the letter.
C
For the people on the call: you mentioned that generic server code base in the kcp repository itself. Do you know off the top of your head which packages it's in?
A
Yeah, and like I said, that's basically a copy from a couple of different packages related to the kube-apiserver, consolidating them into one and removing things that are kube-specific, or really more compute-specific. So anything that deals with pods, services, or webhooks tends to have been removed from there.
C
Yeah, cool. I think if you at some point get that bullet-pointy list of which order you did that stuff in, that would be appreciated. I suspect, if I had time, I might just start playing around with a very base process itself, just to understand the scope, basically, while people are still around. It doesn't mean we need to replace it now, but I think if we start poking around, we'll understand it.
A
Yeah, and as soon as we get the 1.26 rebase done, you could always go up to 1.27, you know.
A
All right, so we'll wrap up for now. Actually, I'll be unavailable next week, so I can't make the community meeting. We may try to have somebody else from the team cover, but most likely, I would say beyond either this week or next week, we're going to need someone else to take over the community meetings.
A
Other
thing
I
need,
if,
if
somebody
does
decide
to
continue
the
KSP
Community
meetings,
I
need
to
transition
the
it's
a
separate
YouTube
Google
account
for
the
kcp
community.
A
Oh, I mean, it's affiliated with me, but... So we had a problem when Jason left Red Hat, whenever he left last year, where we were having trouble with the YouTube account and the Google...
A
Once those changes were saved, the old Google Meet was deleted, and then I edited the meeting a second time and added a Google Meet, and that linked it to my Google account, and then we were able to continue. So we can do the same thing: assuming I'm able to, I can edit the invite to remove the Google Meet, and then somebody else could...
C
Yeah, I don't think we need to solve it now. First of all, we need to understand how we're going to take this over, like whether we establish some new governance board or another organization tries to lead it.
A
I mean, our governance is: those of us who are in charge are in charge. And I know there were requests through some back channels for more official governance, so, given that we need to transition to new maintainers, whoever takes over can decide where to take it from here and what sort of governance to put in place. Hopefully you all think that we've been really nice from a governance perspective, and I would hope...
A
Well, if you all find out that you're able to commit any amount of time to maintaining kcp, I would be thrilled; please let me know. And whoever is interested in getting approval rights on the repo and whatnot, just let me know. Right, go ahead.
D
Right. So also, as we talked about earlier, I do want to talk about building that denaturing view.
D
We don't need... I mean, yeah. Oh, and Stefan's here, that's great, because it was his idea in the first place. So I'm ready to geek out: if you could show me the main of a view and help me understand how to build the view we're talking about, that would be great.
A
I think we need to step back a little bit. So, Stefan, for context: we're talking about the denaturing, or inert, objects that they're interested in doing. I am skeptical, to be honest, that it would be achievable, at least in the current way that Kubernetes works. So, like you were talking about: creating, I think, RBAC objects or deployments or whatever that live in a workspace somewhere, but nothing happens to them.
D
So
yeah,
the
idea
is,
with
you
know,
generally
speaking,
what
we're
doing
one
of
the
things
we're
doing
with
those
workspaces
in
Edge
and
sea
is
we're
using
some
of
them
as
just
containers.
D
Some other multi-cluster management projects define a container object; we use a workspace as a container, so we need the objects to all be inert. The attractive thing about kcp workspaces is that they already make a lot of kube things inert, but they still give interpretations to some things, like service accounts and RBAC. So we want to be able to give a view to, typically, what you'd think of as left-shifted clients.
D
Clients that are delivering from a pipeline into what they think is an API server: we want them to deliver into one of these containers and have the container only contain. So we want to denature the service accounts, the RBAC, all the stuff that even kcp gives an interpretation to today.
D
And there's a fixed set of these types, because kcp interprets only a fixed set of resources. So we would define a few of these denatured API groups that have the resources that need to be denatured. And what the view does is: to this client, the objects appear under their normal group, and then, when it comes time to store them in the underlying server, they're stored in the denatured groups.
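
(As an editorial illustration of that group mapping, a minimal sketch; the denatured group name is hypothetical, not something kcp or edge-mc defines.)

```go
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/runtime/schema"
)

// denatured maps resources kcp still interprets to hypothetical "denatured"
// storage groups where they would be treated as plain, inert data.
var denatured = map[schema.GroupResource]schema.GroupResource{
	{Group: "", Resource: "serviceaccounts"}:                       {Group: "denatured.example.dev", Resource: "serviceaccounts"},
	{Group: "rbac.authorization.k8s.io", Resource: "rolebindings"}: {Group: "denatured.example.dev", Resource: "rolebindings"},
	{Group: "rbac.authorization.k8s.io", Resource: "clusterroles"}: {Group: "denatured.example.dev", Resource: "clusterroles"},
}

func main() {
	in := schema.GroupResource{Group: "", Resource: "serviceaccounts"}
	if out, ok := denatured[in]; ok {
		// The view serves the object under `in` but stores it under `out`.
		fmt.Printf("%s -> %s\n", in, out)
	}
}
```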
D
I'm sorry, maybe I'm misinformed: I thought a view was implemented by a logically independent server. It's basically like a proxy, right? It just sits between the client of the view and a regular server.
E
So
Mikey
you
want
to
translate
into
the
The
Logical
types
in
the
view
right,
but
in
the
normal
workspace
they
are
groups
just
correct.
D
Yeah
yeah
terminology
I'm,
not
quite
sure,
I
follow
your
terminology.
So,
let's
see
if
we
can
agree
on
some
terminology
right,
I,
I.
Think
of
this
as
typically
useful
in
a
situation
where
there's
a
pipeline
right
off
to
the
left.
That
thinks
is
delivering
stuff
into
a
regular
API
server,
and
so
it's
the
client
of
The
View,
and
so
the
view
is
logically
a
proxy
that
sits
between
the
clients
that
think
they're
using
regular
Cube,
API
server
and
a
kcp
workspace
where
I
want
to
store
these
modified.
A
Yeah, you can do something like that. So you would need to get those API types registered, yes.
D
I think... well, so my going-in position is, you know, the goal here is to make a container that only stores. So what happens... so, look, supposing those references are not modified: that means that the garbage collector operating on the underlying storage does not see the owner references.
D
So
if
you
multicast
owner
references,
you
know,
if
you
want
them
to
be
effective
once
you
multicast
them
to
the
edge
clusters,
they
have
to
be
translated
because
the
owner
reference
has
a
uid
of
the
owning
object.
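
(A minimal sketch of the UID translation being described, assuming some per-destination mapping from source UIDs to the UIDs of the corresponding objects in one edge cluster; the names and the drop-if-unmapped policy are assumptions.)

```go
package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
)

// translateOwnerRefs rewrites each ownerReference UID using uidMap, assumed
// to map UIDs in the source workspace to UIDs of the corresponding objects
// in one edge cluster. References with no mapping are dropped here; a real
// implementation would have to decide how to handle that case.
func translateOwnerRefs(refs []metav1.OwnerReference, uidMap map[types.UID]types.UID) []metav1.OwnerReference {
	out := make([]metav1.OwnerReference, 0, len(refs))
	for _, ref := range refs {
		if mapped, ok := uidMap[ref.UID]; ok {
			ref.UID = mapped
			out = append(out, ref)
		}
	}
	return out
}

func main() {
	refs := []metav1.OwnerReference{{Kind: "Deployment", Name: "demo", UID: "uid-in-center"}}
	uidMap := map[types.UID]types.UID{"uid-in-center": "uid-in-edge"}
	fmt.Println(translateOwnerRefs(refs, uidMap))
}
```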
D
So I think maybe my going-in position is that in the edge-mc scenario, maybe there aren't owner references in the workload prescription. For example, take our friend the deployment object: there are owner references from the ReplicaSets and the pods back to the deployment, but in the workload description the ReplicaSets and the pods don't exist, so we don't need to worry about their owner references.
A
Yeah, I would recognize that you may have some issues with owner references, and maybe write that down as a to-do, because it is a very valid concern that Stefan brings up. But...
D
Yes, if there really needed to be owner references in the workload prescriptions, then we have an issue not only in denaturing but in multicasting to the edge clusters. So yes, I agree; let's suppose for the moment that the workload prescriptions don't need owner references.
A
Yeah. So, given that we don't have a ton of time, I'm going to try to get through a little bit of this fairly quickly. I'm inside of pkg, and then I go down into virtual, and I'm in initializingworkspaces, in builder/build.go.
A
So, Mike, I know you and I talked one-on-one the other day about how we have these named virtual workspaces, and we have some dynamic virtual workspaces; I'm going to skip over some of that stuff. You'll see that if we go to the bottom of this builder, it returns this slice of three different named virtual workspaces.
A
Without going into details about why there's a slice, the one that I'm interested in looking at is the one that returns an HTTP handler, and that one, this one right here, is this workspace content one. It returns a handler virtual workspace which, in addition to having the three bits that are in the dynamic one, is a handler factory that says: given the root API server's completed config, give me back an HTTP handler that serves content.
A
So if we look at this one, you'll see there is a root path resolver, there's an authorizer and a ready checker, and then the core bit of this particular portion of the virtual workspace is this handler factory, and it can do whatever you need it to do. So you can, for example, get the logical cluster from the request; you can go list it, or get it in this case; and then this actually goes through and sets up a reverse proxy that forwards on to kcp itself.
A
By
doing
some
impersonation,
you
could
translate
here,
so
you
could
take
an
incoming
service
account
and
translate
it
to
a
denatured.
Api
Group
service
account
copy
everything
over
and
you
could
I.
Don't
know
that
you
would
do
a
reverse
proxy.
You
could
maybe-
or
you
could
just
create
a
new
request
and
send
it
on,
but
you
have
the
full
flexibility
of
implementing
your
own
HTTP
Handler
Funk
to
do
whatever
you
need
to
do
in
this
particular
example.
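
(To make that concrete, a minimal sketch of such a handler: a reverse proxy whose wrapper rewrites requests for one resource into a hypothetical denatured group before forwarding. The group name, paths, and wiring are assumptions for illustration, not the kcp virtual workspace framework API, and a real view would also have to translate response bodies and discovery.)

```go
package main

import (
	"net/http"
	"net/http/httputil"
	"net/url"
	"strings"
)

// newDenaturingHandler returns an http.Handler that forwards everything to
// the server at target, rewriting ServiceAccount request paths so that they
// land in a hypothetical denatured API group instead of core v1.
func newDenaturingHandler(target *url.URL) http.Handler {
	proxy := httputil.NewSingleHostReverseProxy(target)
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		if strings.HasPrefix(r.URL.Path, "/api/v1/") && strings.Contains(r.URL.Path, "/serviceaccounts") {
			// e.g. /api/v1/namespaces/ns/serviceaccounts/foo
			//  ->  /apis/denatured.example.dev/v1/namespaces/ns/serviceaccounts/foo
			r.URL.Path = "/apis/denatured.example.dev/v1/" + strings.TrimPrefix(r.URL.Path, "/api/v1/")
		}
		proxy.ServeHTTP(w, r)
	})
}

func main() {
	kcp, _ := url.Parse("https://localhost:6443") // placeholder endpoint for the underlying kcp server
	_ = http.ListenAndServe(":8080", newDenaturingHandler(kcp))
}
```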
D
So, a basic question. Again, I'm totally ignorant about this. I need this to work both for resources that I know about at development time and resources that... actually, no: I only need to modify resources I know about at development time, but the clients are going to bring all their resources, including ones developed by the clients that I never heard of, yeah.
A
So
the
one
piece
that
we
didn't
cover,
which
is
critical
here,
is
that
all
of
this
everything-
that's
a
virtual
workspace,
currently
is
only
handled
under
a
path
prefix
that
starts
with
Slash
services.
So
any
standard
interactions
like
slash,
API
V1
service
accounts
will
not
go
through
this
path
so
that
we
need
to
do
some
more
thinking
with
you
for
you
to
turn
this
more
into
a
proxy
that
handles
slash,
API,
V1
and
less
into
something.
That's
an
alternate
route
under
slash
services
and
I.
A
And I don't have a great answer for you, other than: if you go into the server's config... I think it's in here.
D
And
you
know
I'm
surprised,
because
for
an
API
export
View
at
least
right,
that's
under
services,
but
you
know
for
each
individual
cluster.
Let's
see
am
I
getting
confused.
D
Okay, so then that brings me back to my question, though. To the degree that I'm familiar with Kubernetes internals, there's this concept of a scheme that has to be told how to marshal and unmarshal every resource, and here we're dealing with things that are user-defined.
D
How
does
this
the
Martian
and
unmarshalling
in
this
proxy
actually
work?
Well.
A
Well, so you would probably need to set it up to delegate to the API extensions API server for handling that, and you'd also need to be able to handle discovery appropriately. So I don't think this is a super easy problem to solve, but it should be solvable.
A
We can show you how the composition is set up for the main kcp server to be able to handle /api/v1 and all the built-in types, like RBAC and whatnot, how the API extensions API server is wired in as well, and how discovery is handled, whether you're going to some of the built-in types or to CRDs. So yeah, I don't have immediate answers for the exact shape of this proxy that covers an example like Argo talking to it, but I'm fairly confident we can help you get there.
D
So let me just make sure I understand the structure of the problem here. In an APIExport view, for example, there is support for discovery, and it must be based, then, on something internal that knows the resources that are being exported.
A
It maintains a set of API domain keys mapping to API definition sets, and what's in there ends up being discovery. Just to recall what we talked about one-on-one, Mike: the API definition set is a bunch of GVRs mapping to a way that you can get storage for them. So if you want to, or if you're using this particular mechanism, the API definition set mechanism, it does discovery for you.
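
(Schematically, and only as a sketch of the idea as described, not kcp's actual types: an API definition set can be thought of as a map from GVRs to something that can produce storage for them, keyed by an API domain, and discovery falls out of the map's keys.)

```go
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/runtime/schema"
)

// storageProvider stands in for whatever can produce REST storage for a GVR.
type storageProvider interface{}

// apiDefinitionSet: each served GVR maps to its storage; iterating the keys
// is exactly what discovery for such a virtual workspace needs.
type apiDefinitionSet map[schema.GroupVersionResource]storageProvider

// byAPIDomain: one definition set per API domain key (e.g. per export).
type byAPIDomain map[string]apiDefinitionSet

func main() {
	sets := byAPIDomain{
		"root:org:ws/my-export": apiDefinitionSet{
			{Group: "widgets.example.dev", Version: "v1", Resource: "widgets"}: nil,
		},
	}
	for gvr := range sets["root:org:ws/my-export"] {
		fmt.Println("served:", gvr) // discovery comes from the keys
	}
}
```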
A
If you're building your own HTTP handler, you need to handle discovery yourself, and I can point you to how discovery is merged between built-in types and CRDs; I can show you how CRD discovery is done. But unfortunately, this type of problem is unique enough that it's going to be probably like 30 percent cobbling together stuff that already exists and probably 70 percent...
D
Rolling
around
all
right.
Well,
if
that's
what
it
is,
that's
what
it
is,
but
yeah
so
to
be
clear
because
we
want
to
handle,
you
know
crds
from
users.
We
need
the
left
shifted
stuff
to
be
able
to
submit
crds.
The
crds
do
not
get
denatured,
they
need
to
go
to
the
underlying
server
and
introduce
the
resource
to
the
underlying
server
and
then
Discovery
needs
to
I.
A
Yeah, I think kind of what you want to do is write this more like a proxy, like you've been saying, but it's kind of a modified proxy. So if a client does a discovery request, you grab discovery from the underlying kcp server, augment it with the denatured groups, and send that back to the client. And then, on incoming CRUD requests...
A
You look at the GVR, and if it's for one of your specific types that you want to denature, you modify the incoming body and then forward that on.
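
(A minimal sketch of the discovery-augmentation half of that modified proxy, using ReverseProxy's ModifyResponse to append a hypothetical denatured group to the /apis group list. It assumes an uncompressed JSON response; everything here is an assumption about shape, not kcp code.)

```go
package main

import (
	"bytes"
	"encoding/json"
	"io"
	"net/http"
	"net/http/httputil"
	"net/url"
	"strconv"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// augmentDiscovery rewrites the /apis group-list response from the underlying
// server so that clients also see a hypothetical denatured group.
func augmentDiscovery(proxy *httputil.ReverseProxy) {
	proxy.ModifyResponse = func(resp *http.Response) error {
		if resp.Request.URL.Path != "/apis" || resp.StatusCode != http.StatusOK {
			return nil // leave everything else untouched
		}
		body, err := io.ReadAll(resp.Body)
		if err != nil {
			return err
		}
		resp.Body.Close()

		var groups metav1.APIGroupList
		if err := json.Unmarshal(body, &groups); err != nil {
			return err
		}
		groups.Groups = append(groups.Groups, metav1.APIGroup{
			Name: "denatured.example.dev", // hypothetical group, for illustration
			Versions: []metav1.GroupVersionForDiscovery{
				{GroupVersion: "denatured.example.dev/v1", Version: "v1"},
			},
		})

		out, err := json.Marshal(&groups)
		if err != nil {
			return err
		}
		resp.Body = io.NopCloser(bytes.NewReader(out))
		resp.Header.Set("Content-Length", strconv.Itoa(len(out)))
		return nil
	}
}

func main() {
	target, _ := url.Parse("https://localhost:6443") // placeholder endpoint
	proxy := httputil.NewSingleHostReverseProxy(target)
	augmentDiscovery(proxy)
	_ = http.ListenAndServe(":8080", proxy)
}
```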
D
Right, I understand that. The thing that I'm less familiar with, and less clear on, is just the underlying stuff that gets assumed. In your outline you assumed that the request body can be read, but before we do that, there has to be a scheme, a local scheme, that has the definitions of the resource that's being read.
A
Not necessarily. Like, we'll go look and see how the CR handler works, but it doesn't register Go structs with a scheme for every single CRD that's out there; to my knowledge, that stuff just comes in as unstructured.
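
(For illustration: never-registered resources can indeed be decoded without a scheme by reading them as unstructured objects, a minimal sketch.)

```go
package main

import (
	"encoding/json"
	"fmt"

	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
)

func main() {
	// A resource type this program has never registered in any scheme.
	body := []byte(`{
		"apiVersion": "widgets.example.dev/v1",
		"kind": "Widget",
		"metadata": {"name": "demo", "namespace": "default"}
	}`)

	var obj unstructured.Unstructured
	if err := json.Unmarshal(body, &obj); err != nil {
		panic(err)
	}
	// GroupVersionKind and metadata are readable without compiled-in types.
	fmt.Println(obj.GroupVersionKind(), obj.GetNamespace(), obj.GetName())
}
```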
D
Stefan, can you add any information here?
D
Well, there are two parts to your answer, right? Do I have to register anything at all with the scheme? And if I don't, then that's the simple answer. So, Stefan, are you confirming? I don't think...
D
So
let
me
ask
you
query
plain
simply:
you
know
to
to
make
this
proxy
handle.
You
know
user-defined
resources
and
for
all
of
those
I
don't
need
to
do
any
denaturing,
because
they're
already
denatured
I
just
want
to
pass
them
through,
but
I
do
need
the
code.
You
know
to
be
able
to
read
them.
So
if
I,
if
the
proxy
does
not
register
anything
in
the
scheme
for
the
easier
to
find
resources,
are
they
going
to
get
successfully?
Read
it's
okay
with
me.
If
they're
delivered
in
the
code
is
unstructured,
that's
fine.
A
I
think
that
this
is
probably
something
that's
best
done
as
an
exploration
like
there's
only
so
much
you
can.
Plan
in
advance,
like
I,
would
probably
create
a
new,
create
the
scaffolding
for
a
new
virtual
workspace
and
start
to
code.
It
the
way
that
makes
sense
and
see
what
happens
and
then
react
based
on
how
many
times
it
fails.
A
All right, I'll post the recording once it's available, and I won't see you all next week. Like I said, I don't know how this is going to continue going forward, but please...
D
Yeah, I'll take a big interest. The open questions are for MJ, and I guess he dropped already, and Kristoff: what their teams can muster in terms of manpower. We need to compare that with the work that would be involved, as well as with my team. So, yeah.