From YouTube: Community Meeting, September 6, 2022
A
Awesome. Andy, you have the first subject here, for the 0.8 release.
B
Yes, thanks. Steve, you might want to zoom in a little bit, to make it just a little bit bigger. So I had two things on here. The first was an announcement and celebration: we released 0.8 last Friday, on time. I believe this was our first on-time release since we started putting milestone dates together. So congrats, everybody. I'm really excited about this release. It's a lot of hard work from a lot of folks, and we had a good group of new contributors as well.
B
I sent an email to the kcp-dev and kcp-users Google groups with some highlights of what's in the release, and the release on GitHub has a list of every single commit that went in there, or every PR. So please check those out, and, yeah, congratulations, everybody. I really appreciate all the hard work. The other topic I had was that I've got a HackMD open that I'm working on, to get our README cleaned up and more up to date.
B
So if you're interested, please take a look. Once I get through my ideas on the draft, I'll probably switch it to a pull request, and mostly ask that we try and get this in without too much back and forth on the review, just so we can get something better than what's there now. And then, if there's a lot of stuff, we can follow up with future PRs if needed.
A
Cool. I just wanted to highlight the new contributors that came in for this last release, which actually make up a pretty sizable chunk of everyone that put in work. So it's awesome to see you all.
A
Barring that, if anyone has anything else they want to bring up, please say something now, or we're going to jump into some issue triage.
D
Yeah, I just wanted to bring up the bug that I posted. Sorry, I only put it in seven minutes ago.
D
I wondered if this is something that you would notice yourselves, or was on your radar at all. I initially thought it was related to advanced scheduling, because I use that, but it seems like it isn't: it happens whether that's switched on or not. If I create a resource (I tested it with Deployments, Services, and Ingresses), put a finalizer, a regular type of finalizer, on that resource, and then delete it...
D
...kcp reconciles that resource extremely aggressively, like hundreds of times a second, adding and removing an annotation with a deletion timestamp, and it's aggressive to the point that my own controller can't actually get in and remove the finalizer that's causing the problem.
D
And I suppose, if not, then what's a good next step to move towards?
E
I had a question, in fact: what precisely is your use case for adding a regular finalizer on a resource that you want to sync? As far as I know, if I'm not mistaken, we don't sync regular finalizers downstream.
E
And the way to delay the removal of a synced resource from a downstream cluster is to use what we initially called the soft finalizers: you know, there is a finalizers annotation. So I was wondering: is that what you are trying to do? I mean, I'm not clear about it.
D
Our controller in kcp, when it sees an Ingress, creates a DNSRecord resource in kcp, and then uses that to reconcile the DNS record in a DNS provider. So when the Ingress is deleted, we need to clean up the DNSRecord in kcp before the Ingress is deleted, so that finalizer is being used as a finalizer in kcp; it's nothing to do with finalizers on the sync target.
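For reference, a minimal sketch of the finalizer pattern being described, in controller-runtime style: the controller keeps a finalizer on the Ingress so it gets a chance to clean up its DNSRecord before the Ingress disappears. The finalizer name and the cleanupDNSRecord helper are hypothetical stand-ins, not the actual controller's code.

```go
package controllers

import (
	"context"

	networkingv1 "k8s.io/api/networking/v1"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"
	"sigs.k8s.io/controller-runtime/pkg/controller/controllerutil"
)

// dnsFinalizer is a hypothetical finalizer name, used for illustration only.
const dnsFinalizer = "example.dev/dns-cleanup"

type IngressReconciler struct {
	client.Client
}

func (r *IngressReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	var ing networkingv1.Ingress
	if err := r.Get(ctx, req.NamespacedName, &ing); err != nil {
		return ctrl.Result{}, client.IgnoreNotFound(err)
	}

	if ing.DeletionTimestamp.IsZero() {
		// Not being deleted: make sure our finalizer is present so we get
		// a chance to clean up the DNSRecord later.
		if !controllerutil.ContainsFinalizer(&ing, dnsFinalizer) {
			controllerutil.AddFinalizer(&ing, dnsFinalizer)
			return ctrl.Result{}, r.Update(ctx, &ing)
		}
		return ctrl.Result{}, nil
	}

	// Being deleted: clean up the DNSRecord first, then release the object
	// by removing the finalizer.
	if err := r.cleanupDNSRecord(ctx, &ing); err != nil {
		return ctrl.Result{}, err
	}
	controllerutil.RemoveFinalizer(&ing, dnsFinalizer)
	return ctrl.Result{}, r.Update(ctx, &ing)
}

// cleanupDNSRecord stands in for deleting the DNSRecord object in kcp.
func (r *IngressReconciler) cleanupDNSRecord(ctx context.Context, ing *networkingv1.Ingress) error {
	return nil
}
```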
E
Yeah, maybe it could be interesting to dive a bit more into the whole flow here, but I'll leave it... you want to, Andy?
B
Oh yeah, I just want to say: finalizers are a standard feature of Kubernetes, so we can't have code that does this hot loop just because we're trying to introduce a different way to do finalization. So we need to support regular finalizers, or, like, reject them in some way.
A
Mac... oh, sorry, Andy: do you have anything that you wanted to say more about the Mac binaries bug?
B
The Mac versions need to be signed, either with a certificate or an identity or whatever it is, so we just have to figure out how to make that happen, hopefully using community infrastructure. I mean, maybe we could get this into Homebrew, and then Homebrew can deal with the certificates, and we don't have to generate the assets ourselves. But we'll figure it out.
B
And yes, there is a workaround, but we don't really want to tell people to do that long-term.
E
Yes, exactly. So I added a comment in the community meeting notes as well. I created two new epics, this one and the next one mainly, and moved some of the remaining tasks, the remaining action items, that were in the, you know, big transparent multi-cluster epic, because obviously it was covering more than the initial purpose.
E
So this one is mainly about the transformations, I mean the remaining work on the transformations, and also the coordination controllers. The initial epic, the issues that were inside it, was much more focused on, you know, the location workspaces: being able to define the locations and APIs on one side, and then having a user, from a user workspace, being able to bind to compute locations.
E
But then there is everything related to supporting multiple sync targets, allowing the support for coordination controllers, and it seems to me that it's, you know, really a distinct, or follow-up, piece that should have a distinct epic, so we are able to show it more precisely in the roadmap for kcp. That's the main goal of this one. It is still to be worked out: I didn't fill in everything, and we should probably still have some discussions on details.
E
I mean, I think part of it, at least, would be for the 0.9 line. Obviously, until, you know, we see differently, merging the transformations, for example. So maybe the first points would be for 0.9. So I propose: let's put that in 0.9, without a milestone blocker, so that it can slip to 0.10, at least for the last parts of it. Obviously, in the description, I would probably add some, you know, stretch goals that would go beyond 0.9.
E
Yeah, the thing is that, for example, inside a location, when you think of moving from one sync target to the other, at some point you might have the syncing state, you know, the label on individual resources that says to which syncer, to which sync target, you want to sync; you might have two labels at some point. And you might have to play with the soft finalizers, that is, to delay the removal of a workload on one cluster before everything is created correctly on the second cluster.
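As a rough sketch of the mechanism being described (the per-sync-target state label, plus the "soft finalizer" annotation that delays removal from a cluster): the key prefixes below are modeled on kcp's workload labels and annotations from this era, but treat them as assumptions rather than a stable API.

```go
package scheduling

import metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

const (
	// Assumed key prefixes, modeled on kcp's workload labels and annotations.
	stateLabelPrefix        = "state.workload.kcp.dev/"
	softFinalizerAnnoPrefix = "finalizers.workload.kcp.dev/"
)

// markForSync labels obj so the syncer for syncTargetKey picks it up, and
// optionally sets a "soft finalizer" so the syncer delays removing the
// workload from that cluster until the annotation is cleared.
func markForSync(obj metav1.Object, syncTargetKey, softFinalizer string) {
	labels := obj.GetLabels()
	if labels == nil {
		labels = map[string]string{}
	}
	labels[stateLabelPrefix+syncTargetKey] = "Sync"
	obj.SetLabels(labels)

	if softFinalizer != "" {
		annos := obj.GetAnnotations()
		if annos == nil {
			annos = map[string]string{}
		}
		annos[softFinalizerAnnoPrefix+syncTargetKey] = softFinalizer
		obj.SetAnnotations(annos)
	}
}
```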
E
So there might be some coordination to, you know, manage in all these cases, and even more if you have resources which are shared, you know, assigned to two locations explicitly, in the case, for example, of a deployment spreading across two regions.
E
Then, obviously, you have to tackle things like how to maintain one status for each syncer for the kcp resource, because each syncer will report its own status based on, you know, what has been created as a deployment on every physical cluster. And then you have to manage a coordination controller, which would have the right, you know, the ability, to see every single sync-target-related status and summarize that into the main object. Precisely, you know, the typical example is the deployment-spreading case.
E
Where, you know, you send half of the replicas onto two sync targets, you have a status for every replica, for every sync target, and then you just summarize the status, by a coordination controller, into the main object. So that's mainly setting up all the, you know, minimal framework and kcp primitives to allow external services, like the hybrid cloud gateway, to do this type of coordination.
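A sketch of the summarization step a coordination controller could perform for the deployment-spreading case: each syncer reports its own status under a per-sync-target annotation, and the controller folds those into the main object's status. The annotation key prefix is an assumption, modeled on kcp's experimental per-location status annotations; the real mechanism may differ.

```go
package coordination

import (
	"encoding/json"
	"strings"

	appsv1 "k8s.io/api/apps/v1"
)

// Assumed key prefix under which each syncer reports its own status.
const perTargetStatusPrefix = "experimental.status.workload.kcp.dev/"

// summarizeReplicas adds up the available replicas reported by each syncer
// (one annotation per sync target) into the deployment's main status.
func summarizeReplicas(d *appsv1.Deployment) error {
	var total int32
	for key, value := range d.Annotations {
		if !strings.HasPrefix(key, perTargetStatusPrefix) {
			continue
		}
		var st appsv1.DeploymentStatus
		if err := json.Unmarshal([]byte(value), &st); err != nil {
			return err
		}
		total += st.AvailableReplicas
	}
	d.Status.AvailableReplicas = total
	return nil
}
```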
A
Some of these fine-grained stories will come later; yeah, sure. Awesome, we'll put it in as 0.9 for now and split it up later.
E
And then you had the compute type management, yes. That's also a follow-up epic, based on the initial one, nine-one or ten-two if I remember correctly, and this one is related to the remaining work about placement and locations, especially the fact that currently you have, you know, all the compute APIs, Kube APIs mainly, in every...
E
In any case, they are imported from the physical cluster, and the physical cluster is the source of truth for the Kube API schemas, for example. And we want to change that, so that we have global APIExports in kcp for, you know, every Kube main release, or OpenShift release, for example, and then you would bind to this, and only check the compatibility of the schemas with the corresponding APIs in the sync targets.
E
So it's a sort of reverting of the way we do it: having APIExports for all the compute types available centrally in kcp, for users to bind to, and then, obviously, you would be able to choose the locations, and also to have placement and scheduling on sync targets based on the availability of APIs in the sync targets.
E
Part of the work has already been done by Jian, from the ACM team, and there are still, you know, some things to do here. This would also include removing the sync-targets argument from the syncer, for example, you know, removing the resources-to-sync argument, because everything would be stored on the SyncTarget.
A
Why not? Let's see, we have the issue we already talked about.
E
Yeah, by the way, I can just say, towards this: this is part of the more general work about storage, the storage epic, which includes some requirements in terms of kcp primitives.
E
Yes, it's strange that it's not inside it already, but yes, this is mainly about the storage epic: we need cluster-wide syncing, cluster-wide resource control, as well. And so Guy is working on the storage epic and all the parts to add inside the syncer, and, in the meantime...
C
So, basically, we need some kind of controller that takes care of scheduling resources that are not namespaced, as it says: cluster-wide resources. This controller will basically add to those cluster-wide resources all the possible sync targets that are available to the workspace, all as labels, without the intended state, the desired state, so an empty state. Okay? Then a third party, well, a coordination controller, will be able to switch the state of those sync targets as it wishes.
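In code, the controller being sketched here might look roughly like this: it stamps every schedulable sync target onto the cluster-wide resource as a state label with an empty value, leaving the actual "Sync" decision to a coordination controller. The key prefix is the same assumption as in the earlier sketch; purely illustrative.

```go
package scheduling

import metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

// labelWithCandidateTargets adds one state label per schedulable sync target,
// with an empty value, meaning "candidate, not yet scheduled". A coordination
// controller later flips individual labels to "Sync".
func labelWithCandidateTargets(obj metav1.Object, syncTargetKeys []string) {
	labels := obj.GetLabels()
	if labels == nil {
		labels = map[string]string{}
	}
	for _, key := range syncTargetKeys {
		if _, exists := labels["state.workload.kcp.dev/"+key]; !exists {
			labels["state.workload.kcp.dev/"+key] = ""
		}
	}
	obj.SetLabels(labels)
}
```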
E
Yeah, and this also contains, as point two, the changes inside the syncer to also support syncing cluster-wide resources that are exposed by the syncer virtual workspace, based on the requested state.
C
But for this kind of resource we don't have that. I mean, we will just ignore the namespace selector on placements, but we need to discuss what to do there, and what sync targets to expose to the coordination controller.
A
Do we have ideas on what kind of changes we'd need? I'm thinking about, like, collisions between cluster-wide resources on sync, and then how that might impact the order in which different objects get synced down.
E
I think these are open questions, and it would be great, you know, if everyone dumps all those types of questions that get raised in your head into the issue, and then we can also have an additional brainstorm, or design session, to tackle those things. The sure point is that it's obviously at least a feature we need, and we need something to start, something simple to start, to unblock the storage work and allow it to continue.
E
Yeah, that's also a question. I think we can, you know, ask, and maybe choose; it might be easier to do something storage-specific, but obviously we don't want to multiply resource-type-specific behaviors inside the syncer.
C
Yeah, I mean... I'm sorry, please. Oh no, go ahead. I was going to ask a higher-level question; please go ahead. I mean, basically, here what we are going to do is not really schedule those resources: this controller will only expose all the schedulable sync targets to a third controller, which will be the coordination controller for storage. So there will be a set of coordination controllers.
C
You know, I was going to say: I think it makes sense to have a little bit further discussion on this, because anything that is cluster-wide has the ability to kind of shape the ability of a cluster to actually support a workload. We're having some interesting conversations about policy in relation to this, and the ability to reason about a cluster when multiple people can change kind of the underlying shape of it. Maybe with storage it's something we can make a trial run with, but it'd be weird for it to stay like this.
C
Something else, related to the point that I was going to make, and actually a question: do we have explicit recommendations on whether the physical clusters should be dedicated purely to kcp, or whether they can be used both for direct workloads and as a destination for workloads that kcp also manages? This seems like the place where those kinds of conflicts would show up.
E
I think that's what we are saying today: for now, the limitations around cluster-wide syncing, and possibly some, you know, settings, or some rules, to explicitly allow only some types, or things like that. We have to define the limitations of the process.
E
Yeah, or maybe... I don't know, yeah, I don't know. Maybe, do you think it could be preferable to split this issue into two? Because there is the, you know, cluster-wide resource controller, that is, you know, adding the labels based on the placement and all this; we can start that now, because it syncs nothing on the physical cluster. Really, it just, you know, puts the right labels so that the coordination controller can start its work.
B
I would second Steve's suggestion to talk through the use cases. I think that makes more sense in my mind than proceeding initially to do the labeling work, if we don't know, or aren't confident, that we have the use cases ironed out.
E
You were mentioning... plus one, two, three... I'll run it, yeah. It's this use case.
E
Yeah, the thing is, I don't know if Guy is here; I think I saw him previously in the meeting. I mean, obviously, it's something we will need for storage in any case, but we can also decide that, if there are, you know, unknowns, too many unknowns, related to cluster-wide syncing seen as a general thing, we can also decide to come back and, you know, have only syncing of the PVs, with, you know, very limited access, which is only the features that the storage use case would need.
A
...that I've created in my workspace using an APIBinding. You know, questions like that come up when you start looking at this as a cluster-wide general concept, and they might not be questions we even need to worry about solving to make sure that storage has a good story. So let's find some time to think through this this week, I think, if everyone's had...
E
Yeah, yeah, but actually, I mean, if I read correctly in the issue, the second point you mentioned in the description still mentions the required changes to the syncer. So, you know, if we do the required changes to the syncer to handle cluster-wide resources, we would still open the door to, you know, someone manually setting, for example, the, you know, sync label; then we would still open the door to that.
B
...for jumping in; maybe this was...
A
When we discussed it, a feature gate might be a good fit in this case, I don't know.
E
Yes, yes, that helps me as well, because, obviously, a feature gate would be sufficient, in the sense that all the resources now synced by the syncer go through the virtual workspace. So, you know, just through a feature gate you can decide not to sync any cluster-wide resource, not to expose any cluster-wide resource at all in the syncer virtual workspace.
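A sketch of what such a feature gate could look like, using the standard component-base feature-gate machinery; the gate name is made up for illustration.

```go
package features

import (
	"k8s.io/component-base/featuregate"
)

// SyncClusterWideResources is a hypothetical gate guarding the experimental
// cluster-wide syncing behavior.
const SyncClusterWideResources featuregate.Feature = "SyncClusterWideResources"

var DefaultMutableFeatureGate featuregate.MutableFeatureGate = featuregate.NewFeatureGate()

func init() {
	// Off by default while the behavior is experimental; flipped to on once
	// the open questions are settled.
	_ = DefaultMutableFeatureGate.Add(map[featuregate.Feature]featuregate.FeatureSpec{
		SyncClusterWideResources: {Default: false, PreRelease: featuregate.Alpha},
	})
}

// Elsewhere, e.g. where the virtual workspace decides what to expose:
//
//	if !features.DefaultMutableFeatureGate.Enabled(features.SyncClusterWideResources) {
//	    // do not expose cluster-wide resources to the syncer
//	}
```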
E
That might also be a way to have, you know, the best of both worlds: mainly, just start working on it under the common feature gate, so that Guy can continue on this. We can start, you know, experimenting with that, experimenting with the general impacts of cluster-wide syncing, and once we are okay, we can, you know, move the feature gate to default.
A
And I think at some point, too, we might be getting into a lot of technical details about this particular one, and the broader group here might not have as much to say. But exposing sync-target-specific details also presupposes that a controller will then be able to access... like, you know, if we're talking about some coordination controller that can then do something with this information.
E
I mean, that's already something we have discussed with Stefan and Guy and others, long ago, about the coordination, and that's related to the coordination controller epic that I just created previously. So, obviously, it's work that is being started, being done in parallel, you know, from the transformation work, from the coordination controller world, but all this has already been, you know, designed, for some parts of it, quite long ago.
E
I mean, all this mechanism here, with the labels, is already working, I mean, today, and implemented for namespaced resources, in the resource controller. So it's not something new that we are setting up here; it's just going one step further than what we have today.
E
The resource controller creates those labels and directly puts the sync value in them, because for now we don't really use coordination controllers. But from the start, if you look into, let's say, the API comments, for example, the idea was that, in the final state, I mean long-term, the resource controller would set this label to empty, and this would be the signal for optional coordination controllers to decide when we should start syncing, and then add the sync value into those labels.
E
I mean, that's something that has been designed quite a long time ago, and, you know, I'm not aware of any change in the design since then. Does that answer it, Steve?
A
I think we're getting into a lot of details where even I'm getting lost. It sounds like, fundamentally, there are still some open questions about, like, broadly syncing cluster-wide resources. I think we should take some time this week to chat about this. I'm not sure that...
A
For
a
day
or
two
makes
a
significant
difference
in
velocity,
I
guess,
if
you're
worried
about
that,
we
should
chat.
E
I'm 100% okay with, I mean, everyone interested having a design discussion, to see the impacts of this. I'm not sure I'd recommend, you know, for him to completely stop what he started, I mean, but I don't think he... I'm not sure he's very far into the development anyway. So it seems to me that, you know, one doesn't prevent the other.
E
This one is part of... let me speak about this, because I think Jian is not here. It's part of the consume-compute, transparent multi-cluster... well, no, it's part of the new... I didn't change it here in the description, sorry. It's part of the new compute type management epic, exactly; that's just one step of it. For the other steps, I did not create, you know, detailed issues.
A
Any reason why we would be doing this instead of discovery, API discovery, on the virtual workspace?
A
So I guess my understanding was: by inverting the control here, the syncer is expected to sync everything that it sees in the virtual workspace, yes? So I'm wondering: in order to get the list of possible API resources that the syncer needs to sync, this issue currently says to go look in the status of the SyncTarget, and I'm wondering what the difference is between doing that and doing API discovery on the virtual workspace.
E
There are two, probably... I mean, when Jian opened these issues, he probably had two things in mind, which go hand in hand. The first one is that, to sync, the syncer just has to sync everything that is exposed from the virtual workspace. But there is another aspect of it, which is that we still want to import the schemas.
E
You know, the syncer also, at start, for now, imports the schemas of the various resources to sync into the sync target workspace, and we would still continue doing that, to check the compatibility between the global APIExport, you know, the kubernetes APIExport, and the schemas of the corresponding APIs as they are in the physical cluster. And to drive this import...
E
...we would use the content of the SyncTarget status: the SyncTarget's status.syncedResources would be used to define what we have to import.
E
You know, to check compatibility. But as for what we want to sync, we would just, you know, have to get everything which is exposed by the virtual workspace. Does that answer your question?
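A sketch of the "sync whatever the virtual workspace exposes" side of this: the syncer could drive its resource list from plain API discovery against its virtual workspace endpoint, rather than from a resources-to-sync flag. The helper below is an assumption-level illustration using client-go discovery, not kcp's actual syncer code.

```go
package syncer

import (
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/discovery"
	"k8s.io/client-go/rest"
)

// discoverSyncedResources lists every resource the syncer virtual workspace
// serves; whatever shows up here is what the syncer should sync.
func discoverSyncedResources(virtualWorkspaceConfig *rest.Config) ([]schema.GroupVersionResource, error) {
	dc, err := discovery.NewDiscoveryClientForConfig(virtualWorkspaceConfig)
	if err != nil {
		return nil, err
	}
	lists, err := dc.ServerPreferredResources()
	if err != nil {
		return nil, err
	}
	var gvrs []schema.GroupVersionResource
	for _, list := range lists {
		gv, err := schema.ParseGroupVersion(list.GroupVersion)
		if err != nil {
			continue
		}
		for _, r := range list.APIResources {
			gvrs = append(gvrs, gv.WithResource(r.Name))
		}
	}
	return gvrs, nil
}
```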
B
So the first sentence under "Proposed solution" has the discovery bit; that was what you were asking about, Steve. I am curious, David: what sets status.syncedResources? Is that just the list of everything that was discovered?
E
No, no, no. In fact, it's something which is already implemented; in fact, Jian already worked on it. Now, in the SyncTarget, in the spec, you have the list of APIExports. It's not used for now, but it will be. And based on this list of, you know, expected supported APIExports for the sync target...
E
...we look for APIResourceImports in the workspace of the SyncTarget, and we get the schemas from there, compare those to the schemas of the expected APIExports, and, for all that are present in the sync target, in the, you know, physical cluster, and that are compatible, we set a status.
E
We set a value in status.syncedResources. So syncedResources mainly contains the list of all the GVs which are expected, you know, which are part of the global APIExport, and which are present in the physical cluster with a compatible schema.
B
If we're getting rid of the resources-to-sync flag, which I'm in favor of... how can I say... well, I like that there's this global kubernetes or openshift export that's got all the schemas defined, and I want to sync all that stuff, but I also want to sync Tekton PipelineRuns. How do we do that?
E
Yeah, so that would be done quite a bit the same. That means that, on your SyncTarget, you would add, you know, in the spec, in the supported APIExports, a new APIExport, but not pointing to the, you know, root workload workspace, which contains the kubernetes and openshift APIExports; you would point to your workspace, in which you have manually created an APIExport with your schema for Knative, or anything else that exists on your physical cluster.
E
But, in fact, this is not the first step of the epic. I mean, obviously, there would be some other tasks to do before, for which I didn't, or Jian didn't, create an issue for now.
B
Yeah, so the syncer tries to patch things in the downstream, or physical, cluster at times, and if that fails, the error is logged, but the user has no idea what's going on. And so, in the examples that I was debugging...
B
...somehow, a deployment CR managed to get its spec deleted entirely in the kcp workspace, and then, when that got synced down to Kubernetes, Kubernetes said: yes, spec is required, like, various fields underneath spec are required, and there was no way to tell that that was the issue. So we need to have a condition somewhere on...
B
...ideally the deployment; like, it needs to be visible to the user. And if the user can't see the sync target, which is potentially common, then at least we need to have it on the deployment, and maybe we can put it on the sync target optionally.
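A sketch of surfacing such a failure as a condition, as suggested: record the downstream error where the user can see it with a plain kubectl describe. The condition type and reason strings are made-up names, and real Deployment conditions use appsv1.DeploymentCondition rather than metav1.Condition; this only shows the shape.

```go
package syncer

import (
	"k8s.io/apimachinery/pkg/api/meta"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// reportDownstreamError records why the syncer could not apply the object
// downstream, so the failure is visible on the object itself.
func reportDownstreamError(conditions *[]metav1.Condition, generation int64, err error) {
	meta.SetStatusCondition(conditions, metav1.Condition{
		Type:               "Synced",                 // hypothetical condition type
		Status:             metav1.ConditionFalse,
		Reason:             "DownstreamPatchFailed",  // hypothetical reason
		Message:            err.Error(),
		ObservedGeneration: generation,
	})
}
```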
A
That means that right now we're doing quite a lot of, like, large-label-selector sorts of stuff. It's unclear today what the performance implications of that are and how it scales.
A
There are some breadcrumbs here for, like, default sets of indices that get created in the watch cache. This issue is basically a question mark of, like: will this approach continue working, and to what extent, when we increase the number of objects?
A
Today, with the label selectors, the question is more on the API server side: what's the implication of that?
A
Like, we're doing half the work; it's sort of two-step, right? We're doing half the work in a controller that adds labels to something, digesting some other part of the object into a label, and then we're just using the label selector during the virtual workspace delegation. We haven't yet hit anything where the inability to do a disjoint selection on labels makes that approach impossible.
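To make the two-step pattern concrete: a controller has already digested part of the object into a label, so a consumer (here, a plain client-go list) only needs a conjunctive label selector. Kubernetes selectors are ANDed; there is no "a=b OR c=d", which is the disjoint-selection limitation mentioned. The label key is illustrative.

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/labels"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Select everything the scheduling controller marked for one sync target.
	selector := labels.Set{"state.workload.kcp.dev/target-a": "Sync"}.AsSelector()
	pods, err := cs.CoreV1().Pods("default").List(context.TODO(),
		metav1.ListOptions{LabelSelector: selector.String()})
	if err != nil {
		panic(err)
	}
	fmt.Println("matched:", len(pods.Items))
}
```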
B
Yeah, and I was chatting with Paul last week about, like: we can just submit additional commits to his PR, and, you know, if anything needs cleaning up.