From YouTube: Community Meeting, January 31, 2022
Description: No description was provided for this meeting.
A
Let's see what new topics we have. Stefan, actually — 0.11: when should we target that one? Because that's...
C
You go ahead. So there's a lot of flakes right now, and I realized that the test flakes don't necessarily need to gate when we cut a release, but it would be nice to get a lot of those resolved before we do, if we can. I'm actively working on investigating each one as I come across it. I would ask: are there large features that are still outstanding that we want to get in? Like anything that's a breaking change to storage or APIs — anything along those lines.
D
Tagging something that depends on kube 1.24 — I'm just... yeah. So as long as we move to 1.26 soon.
B
I mean, EBS will also take some days, so maybe it's parallel work, but it will also maybe destabilize things.
F
Sorry, yes — possibly some incompatibilities. The thing is that, you know, if we increase the kube version of the root compute APIExport, then that raises the release that physical clusters need to have in order to be able to sync things. So, you know, the more we move the kube version of the root compute APIExport, the more we should think about providing alternate root compute APIExports for older versions of kube, in order to, you know, increase the range of kube versions we can sync to.
G
One question on this topic, since you're talking about this. This is something that I noticed — I think it was introduced probably in 0.10.0 — that you added this compute workspace with these predefined schemas for Deployment, Ingress and Service. What was the motivation for that?
A
And one question from my side regarding the release: I literally started to understand the problem we're having with service accounts, sharding and replication a little bit better after investigating my escalation-prevention fix, Stefan. Is this something like what we started to briefly talk about today in Slack? Is this something we want to solve short term, or is the service account replication — so...
B
Something — I mean, I don't think we have to have full support for sharding for everything in this respect. We know that TMC is not ready, and this might be something else.
A
I also have another workaround in my mind. I just — I also don't want to lay out the whole problem, because I need to fully understand it myself as well, but I agree: we need some sort of solution. I think I will go ahead with the one creating a technical user, because I would like to have the privilege escalation fix at least included in 0.11. Okay.
C
I mean, the milestone right now has February 10th; seems reasonable.
A
All right, sounds good — sounds good to me. In that case, next topic: Mike, upstreaming super namespaces, redux. I also saw a comment from Andy. I will include...
D
Both of you, sure, right. And yeah, I mean, the thing is, it's known by different names in different communities, right? In this community it's called logical clusters; in API Machinery, say, more broadly, they call it super namespaces. The point here is, I don't want to argue over the names. I just wanna, you know — as I mentioned last week.
I wanted to ask you — which is, I suspect that if we upstream logical clusters, that would reduce the size of the divergence, right? It would make it easy: the fork that we carry in kcp would be a much smaller change from upstream Kubernetes. And we do have other use cases for this besides kcp's use case: TMC's use case, we have edge-mc's use case, Crossplane is a use case.
Haifa — our colleagues in Haifa have some use cases in mind. So I think it is, you know, kind of a generally interesting thing to do, and so I wanted to gauge the interest here in, you know, trying to get that work upstream for the benefit of all the use cases — including reducing the size of the divergence in the fork that's carried here.
C
Okay, thank you. So why do you think upstream would not accept it?
D
Yeah, right — I know that they have said no to things from kcp, but the thought I have is that for this logical cluster concept there are additional use cases, so there's a stronger case for it beyond just kcp. And they do say, you know, that they think API Machinery is good for more than its use in Kubernetes. So, you know, this is consistent with their broader remarks — even though the kcp-only sale has not succeeded.
H
It's kind of a — some services we are trying to... For example, we have one that I think is already up, the Fabric service — Fabric is kind of listed in IKS, I think. In the end, I think the problem was that it's a service where what bothers the customer is the fact that CRDs are cluster level, and they are fine having a single cluster, but they don't want a kind of everyone-to-everyone —
— where everyone has access to all the CRDs, right? Everyone that has access to a CRD can actually — and they like this idea of different versions and so on. So the isolation of — let me put it like that: if we look at what exactly is implemented, we simply add another bit, right? In this database we now have another kind of column, and so the fact that you can take some stuff that is not namespaced and allow isolating that too to some groups — that is very strong on many fronts.
H
Suggestions — yeah, I will send a link to the project itself, but the input is from the customers, the customers that actually want to use that. They do not want — okay, let me put it like that: we run pods and workloads on their production cluster. They are fine with that, because they are okay with the isolation that Kubernetes applies, but they do not like us putting in some CRDs that kind of pollute the whole cluster, okay? So they don't like —
D
Okay. So yes, if you could send — maybe this is more of a mailing-list kind of thing — just a brief summary of the projects that could benefit from upstreaming logical clusters, and the customer problem that that helps with.
H
One comment, before — if I can say that, Stefan. One thing that we want to make sure of is that, suppose we can upstream that: we would really like kcp to actually use whatever was upstreamed, right — even if you're required to change some of your code for that — rather than us ending up upstreaming these logical namespaces and kcp saying, "oh, it's too much work to refactor our code to use it", not using it, and using your own kind of stuff there.
D
Yeah, I think that's clear. All right — Jesse, also, maybe if you could pile on to whatever Ezra eventually does. I'd like to have a list, you know, so we can take it to API Machinery, so you can say: look, here are these use cases — it's not just kcp. Sure.
E
Yeah, I'll let Stefan go, but I'll just say that I have the same situation, where I have many customers who I want to distribute CRDs and operators to, and I would prefer not to pollute their view of their clusters. They are not concerned with the operational workloads that I put on their cluster.
B
Yeah, I'm curious about what the next steps here are and who will do what. I heard: a collection of use cases and then some doc, maybe.
H
And Mike, do you want to do that before we actually have a KEP for that, just to get the general feeling? Yes.
A
All right, thanks a lot. Any more thoughts on the upstreaming initiatives?
Cool — very much looking forward to the use cases in the doc as well. In that case, I would like to move to the next topic: Helm charts.
I
Yeah, so this is something we've discussed in a few threads on Slack recently. Several folks have been asking for an update to the Helm charts, and also asking questions about the manifests that are in the main kcp repo. It turns out that both of those are outdated — and obviously things have been moving quite quickly in terms of flags getting added and removed and things — so I went ahead and pushed an update to the Helm chart based on a fork that we've been maintaining.
It would be nice to just decide, you know, whether we're comfortable with the commitment of trying to keep the Helm charts up to date — whether we're just going to do it for main, or maybe main and the current stable release — and whether anyone else is interested in collaborating on that. I also wondered whether it makes sense to just remove the manifests from the main tree, because even if you don't want to use Helm, effectively the templates in there give you manifests you can render and then hack on.
So it seems like it might be a useful first step just to kind of have a single place where we maintain the manifests. So yeah — just wanted to kick off that discussion and see if anyone wants to collaborate, and what the next step should look like.
A
Stephen, I would like to collaborate, generally, on the Helm charts. I will leave open the question whether we want to maintain the Helm charts for the wider audience, but yeah — for the company I would also like to see kcp being easily deployed on things like OpenShift. We have been discussing this briefly as well, so I'm happy to help here.
D
Okay, yeah — so this is hopefully a relatively minor thing. You know, we agreed earlier that the documentation would be versioned, so that, you know, we could work on something with that. Well, I've come to realize, you know, we actually have a current problem.
That's really horrible, which is: the only documentation you get is served from main, but we want to document how to use the last tagged release — which means that the latest documentation is not actually for the latest code, which is a really horrible situation to be in. So there's a PR to fix this, and fix it well, using regular GitHub techniques. I was just gonna hope we could get enough review to actually merge that and get into a good place.
I
The docs update definitely sounds like a good thing, and I think it kind of ties into the, you know, more user-facing kind of experience — we want people to be able to deploy onto clusters. I think in terms of scale testing and stuff like that, this is going to be important from kind of a dev perspective as well. So, Andy — sorry, you had your hand raised.
C
Thanks. I agree we should have a way to make it easy for folks to install kcp, and Helm seems to be fairly desirable, so I'm in favor of continuing to maintain them and keeping them up to date. I would like to see some sort of CI/CD — whatever you want to call it — so that if we make a code change in main, we know fairly quickly that we've broken the main version of the Helm chart. Whether we have to, like, fix them in lockstep —
— you know, probably not, but we definitely need the signal. And I think the more we minimize the number of ways that we do things, the better. So I know that we have a manifests directory in the kcp repo that's got some manifests in it; we should probably just get rid of those.
So I guess my question would be: it sounds like, Sergius, you're interested in helping; Stephen, you obviously have done some work with your pull request. Maybe you two could start with what you've got, get it merged, and do some brainstorming on the CI aspect.
A
Great. Before we go back to your original topic again — any more thoughts on the Helm chart? Anyone else want to chime in?
All right, Mike — in that case, you were talking about versioning the documentation. I think — let's continue that discussion, if you feel it has not been answered enough.
D
Yeah, I don't know. I mean, there's this PR that's been just waiting for some attention for a while and hasn't gotten any. I was hoping I could get some commitment to actually get it done.
C
Yeah — it looked like it was pushing to the gh-pages branch in the kcp repo, which is currently empty. So I had asked a question, like: oh, I see that you're doing this — and I think Ahmad said, yeah, we haven't been using it, but we're going to start using it. I agree with you: if Mike takes a look at it and it looks good, then we can approve it. Longer term, I would like to drastically simplify the docs process — and, you know, that's not necessarily a topic for right now, but I have found the scripting in there is pretty dense, and I'd like to try and find a way to make that easier.
All right, I will look at it after the meeting again in more detail. And, I mean, it does give write permission to the action to push to the repo, but as long as we're comfortable with what git activity is in the pull request, I don't think it'll be harmful.
A
All right, okay. Any more topics that you want to bring in short term?
G
Well, going back to that question asked earlier — I added this to the list of topics. Yeah, I was curious to know why this root compute was — or rather, why those predefined schemas were added. Also because I understood that at some point there was a desire to sort of decouple the code of kcp from TMC, and so now I don't understand why, when you just create, you know, kcp, you end up with those things. And I gather that there is some problem now with maintaining, possibly, versions of Kubernetes, etc.
C
Yeah, so we are going to split it back up. It really was just a timing issue: at the time we wanted to try and provide a good experience for folks who were just going with kcp out of the box — to try and minimize the amount of extra work that was necessary to get the Deployment and Service and Ingress APIs available by default. But after we finish our journey of refactoring things, that won't be in kcp core; it'll be part of a TMC add-on, and it'll probably be optional —
— whether you want to pull it in or not. So we're on a journey to get to a combination of easy-out-of-the-box plus customizable settings. So, you know, everything here is basically iterative — it's where we are right now.
F
Yes — I mean, I think we have to decouple the question of the versioning that we discussed previously, which is mainly a general question of how we manage, you know, the compatibility of the schemas between the APIs inside kcp and the APIs on the physical clusters. It's a wider question.
As soon as we have some APIExport available for an unlimited number of user workspaces, then we have to know how we tackle the compatibility of schemas. So, you know, it's not specific to the kube APIExport — to the root compute APIExport. So we have to decouple that. And then, as to providing, you know, a default compute Kubernetes APIExport — as Andy said, it's probably something that will be part of the TMC —
— you know, extracted project. But it's not something that you need to bind; it's something that you will probably bind by default if you want to sync some workloads — something that is available by default and that avoids you having to systematically import your schemas from your physical cluster. But it's only, you know, something available by default.
G
Yeah, yeah — it was just that I was having some difficulties with that. But okay, we don't need to discuss that now — maybe offline. Daniel, you also have some comments.
J
Yeah, so it sounds like we're mostly focused on, like, the workload APIs here, and I'm curious — and this may already be something that's possible — but do we currently have a mechanism to say, basically: create this new workspace with no APIs enabled — like, if you look for the API resources, basically nothing — and then subsequently say: hey, create all workspaces with these, like, custom APIs that are not built into kcp today? Is that current functionality that's offered?
B
It was like that. I think we just went a bit too far in adding the bindings there — I guess a pull request to move back a little would be welcome. So what we have — maybe this is also helpful here — we have this battery switch, so you can say --batteries and then include or exclude them. So either we move it back, so there's no compute or no workloads by default, or we just get this annotation, or whatever it is, to bind it to a battery.
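The include/exclude semantics of a battery switch like the one described can be sketched roughly as follows; the battery names and the leading-minus exclude syntax are assumptions for illustration, not the actual kcp flag contract:

```python
# Rough sketch of "include/exclude batteries" semantics as described above.
# DEFAULT_BATTERIES and the "name"/"-name" syntax are illustrative assumptions.
DEFAULT_BATTERIES = {"admin", "user", "workloads"}

def resolve_batteries(spec: str) -> set[str]:
    """Apply a comma-separated battery spec to the default set.

    "name" switches a battery on; "-name" switches it off.
    """
    enabled = set(DEFAULT_BATTERIES)
    for item in filter(None, (part.strip() for part in spec.split(","))):
        if item.startswith("-"):
            enabled.discard(item[1:])  # exclude this battery
        else:
            enabled.add(item)          # include (or re-include) it
    return enabled

# e.g. starting without the workloads battery, so no compute APIs get bound:
print(resolve_batteries("-workloads"))
```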
F
Even now, the root compute is not included by default. It is included when you, you know, bind to compute — if you, you know, create a placement to start syncing, then by default you will also bind to the root compute. But if you don't try to sync, and if you don't create a placement, you just don't have the default kube APIExport bound.
H
Is there an existing document that, even at a very high level, tries to lay out what's going to be included in kcp core and what is not — or has that not yet started? Because we keep hearing, you know, okay, that will be in TMC — but will we end up without having that, or are there concrete plans already?
C
We have plans; we haven't written them down yet. Priority-wise it's lower than some of the other stuff we're working on.
B
It's not in the enhancements, or I can't find this one, right? I...
A
Okay — I guess, if there are no more comments on the root compute workspace questions, we can go ahead to the incoming issues.
J
What is there — yeah, yeah. Is that — I just want to verify that that is not the desired behavior.
A
Awesome. So help me out a little bit, Andy, for milestones — what should I say here: backlog, next?
C
All right — just, yeah: Sergius put it in next, and the milestone could be 0.11.
A
Next one — feature: syncer able to create self-sufficient targets. Mike, since you're on the call, maybe a little bit of context? Sure.
D
For edge, we want to support disconnected or intermittent connectivity, so the edge cluster needs to be self-sufficient — and not only for disconnection, but maybe for data sovereignty or other regulatory sorts of requirements, we will create self-sufficient edge clusters.
So that means, when the syncer, you know, creates stuff that has containers, those containers get connected to the local API server rather than the one back in the kcp workspace — and, you know, I mean, I think that's maybe the primary consequence. We can adjust the heartbeating to, you know, basically a huge time, so that we don't have a problem with bogus health problems being detected.
But, you know — so the goal is to support this self-sufficient construction, and, you know, talking with David Fessler, he suggested that, you know, that's a pretty small change — or generalization — to the syncer. You know, we could either do some kind of common core code or just generalize the existing thing; I think he suggested the simplest thing to do would be to generalize the existing thing.
F
Yeah, maybe just to add one point here: the specific case of, you know, not pointing back to kcp but still pointing to the physical cluster — yes, I mean, technically at least, that mainly boils down to disabling some code that was explicitly added to, you know, change the API server endpoint. So disabling code, or having an option to disable that — it's technically completely feasible. Now the question is — and that's more a question for the community, I assume:
do we want to, feature-wise, enable this — enable users to have this choice in a single syncer command line — or do we want to still have two syncers with those two different behaviors? Okay.
H
That would even allow me, when I do a deployment which includes controllers and so on, to specify in the YAML whether that controller maybe should talk back to the — because there are some use cases we can see in which we use kcp as the way to, you know, deploy the stuff to multiple kinds of clusters, but the controllers actually interact completely with the local API server, right — maybe listening to nodes, doing stuff on the local cluster itself, not going back at all to resources on the kcp. And so it would be very nice to have a feature like that.
C
Mike, can you help me understand the feature request here? What would the disconnected syncer do differently — what are you trying to do here?
D
So the idea is that, when it's given this choice — when it's told to behave this way, right — the current syncer, when it creates a deployment in the p-cluster, for example, modifies the, you know, deployment object so that the containers in it, when they go to use the kube API, right, get directed to the kube API back in the origin workspace, not to the API server in the local p-cluster, right.
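Purely as an illustration of the kind of rewrite being described: one common way to redirect in-cluster clients is to override the well-known environment variables that client-go's in-cluster config reads. Whether kcp's syncer uses exactly this mechanism, and the host value shown, are assumptions here, not taken from the discussion:

```yaml
# Illustrative fragment of a synced Deployment's pod spec. The env var names
# are the standard ones client-go's in-cluster config consults; the values
# are hypothetical.
spec:
  containers:
    - name: controller
      env:
        - name: KUBERNETES_SERVICE_HOST
          value: kcp.example.com   # hypothetical kcp endpoint
        - name: KUBERNETES_SERVICE_PORT
          value: "6443"
```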
C
I think that opens the door for, like: oh, we'll add another boolean, and it'll control something else, and then you have this weird matrix of things that may or may not work. So I think that if we can refactor the syncer as needed to have a core that both the non-edge and edge syncers can use — but, you know, wherever things are different, the differences are in the individual repos, as opposed to part of the core — that would be my recommendation.
B
Yeah, I was basically going to say the same thing, maybe more concretely: the moment we add a flag, we make this syncer binary usable in two completely different use cases, and then what Andy just said happens. I think what would be a much better first step: make another binary and try to make the syncer core adaptable. This can be as simple as adding a boolean to the options of the syncer core, I guess, if there is such a thing. Yeah.
D
Right. So I'll work with David to — I think, if this is the approach we're going to take, let's just start there from the beginning. So I'll work with David on coming up with a separate binary that is 99.9% common code. Yeah, sure.
C
Go ahead — yeah, this one's a bug. It occasionally pops up in CI, and I have a thing that tries to fix CI to not randomly hit this bug, but that doesn't fix the bug — that just fixes the CI test. This is a purposeful test, Stefan, in the e2e, that says: we're going to delete the root shard and then create a workspace, and we want to see that the workspace is unschedulable.
B
And there was a discussion in Slack also about disruptive tests. Maybe we should start thinking about those — like having another suite of tests, which —
C
But this is an actual bug that needs a solution. I don't think it's super high priority, but it will need to be fixed when we expect shards to be coming and going.
D
Yeah, this is really simple. You know, I tried experimenting with creating workspaces with different names and discovered, in my first experiment, that I get a compound error message that says three things: it makes a claim in English about what's allowed for workspace names, then it gives two different regexes — and all three claims describe different sets of possibly allowed workspace names. And then I tried another experiment and got only one of the three things out.
B
Go ahead — I guess this is just how kube works. It's a custom resource; there's a name validation function, and we had, I guess, opened it up, and that's what you see in the first one — probably an embedded value. So there's the regex we define, you know — my API is a CRD, or maybe it's the other way around, maybe; I don't know. But it's probably just the output from the validation of CRDs, and this is just...
What we can do is: CEL can have nicer messages. So if you encode it in CEL — and if that's possible — that would be a way. There's an idea.
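For illustration, Kubernetes object names are commonly validated against the DNS-1123 label rules; whether workspace names use exactly this pattern is an assumption here, but it shows how the English claim and a regex in such an error message relate:

```python
import re

# DNS-1123 label: lowercase alphanumerics and '-', must start and end with an
# alphanumeric, at most 63 characters. This is the standard Kubernetes name
# regex; whether kcp workspace names use exactly it is an assumption here.
DNS1123_LABEL = re.compile(r"^[a-z0-9]([-a-z0-9]*[a-z0-9])?$")

def valid_workspace_name(name: str) -> bool:
    return len(name) <= 63 and DNS1123_LABEL.match(name) is not None

print(valid_workspace_name("my-workspace"))  # True
print(valid_workspace_name("My.Workspace"))  # False: uppercase and '.'
print(valid_workspace_name("-leading"))      # False: starts with '-'
```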
A
All right — feature, Daniel: do not require an APIConversion to exist for an APIResourceSchema where no conversion is required.
J
Yeah, we had a good chat in Slack about this. I guess, just for context for folks who haven't seen the issue already: if you try to bind to an APIExport in a workspace, it's trying to, you know, get the APIResourceSchema — and if it has multiple versions supported and there's no APIConversion with the same name, then it's gonna fail to do so. And so, for instance, I ran into this one:
we had a resource with two identical versions, so there's obviously no conversion that needs to take place there. You can create an APIConversion that just has an empty spec, and it works just fine, but it would be kind of nice to not have to create this. So, essentially — if you scroll down a little bit, there's a little bit of the direction for moving forward here — it seems like we have a pretty good grasp on what needs to happen there.
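The empty-spec workaround mentioned above might look roughly like this; the API group/version string and the resource name are illustrative assumptions, not copied from a kcp release:

```yaml
# Hypothetical sketch of the workaround: an APIConversion whose name matches
# the APIResourceSchema it accompanies, with an empty spec because the two
# versions are identical and nothing actually needs converting.
apiVersion: apis.kcp.dev/v1alpha1            # illustrative group/version
kind: APIConversion
metadata:
  name: v230101-abcdef.widgets.example.dev   # example; must match the APIResourceSchema name
spec: {}
```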
C
I think it's fine to default to either one. "None" makes sense, because it's less effort if you, you know, don't want to think about it — and if folks decide that they don't like "none" as a default, we'll change it later.
Is that one that you want to work on, Dan?
A
Next one: "doc website offers just one release, and it's another release." I think this refers also to what we've discussed today, right, Mike?
C
Yeah — just put it in "in progress", right. I did get Avenal's PR merged, but I need to see how the GitHub action is going. So — gotcha.
A
"Deployments couldn't go back to the original replicas count in kcp by the syncer — consider after manually changing the deployment in the p-cluster." I don't know — David, you raised your hand?
F
Yeah, yeah — so this one I discussed with Rama, you know, from QE, and this was obviously based on the wrong assumption that, since kcp should be the single source of truth, always, for things synced to physical clusters, there was the assumption that, you know —
if someone modifies, for example, the replicas of a deployment on the downstream side, then automatically it should be overwritten by the version of the deployment on the kcp side. And the truth is that that's not completely the case. I mean, as soon as you have something that changes upstream, it will be overwritten, obviously, and that all works today. But the only thing that triggers an update of the downstream object based on the upstream object is when you delete the downstream —
— the synced object downstream. And we do not expect, you know, to trigger automatic overwriting of downstream objects when they are modified downstream. That would, you know, on one side involve many events — watching many events — and on the other side, we should think about this more precisely, because there might be some cases where it makes sense that the downstream cluster modifies some spec fields of synced resources.
K
Yes, sorry — yeah, I remember this. It was not kind of an assumption; I remember, way back, this was how it was working. I think, after speaking to David, I understand that that's not the case. We will try — I will try to go back and check the chat with Andy, because I remember we discussed this with Andy last time, and see what needs to be done here. But for now, I think, let's keep it open, and maybe I will update the bug with my comments tomorrow.
F
Maybe — I'm not saying that it's not something that we should, you know, enhance in the future. Maybe ensuring more consistency and avoiding, you know, manual changes on the downstream side is a good thing, but I would mostly see that as a feature and not as a bug, because obviously it has to be, you know, included in a wider thinking and design about what are the fields on which we allow changes and the fields on which we don't allow changes.
Yeah — well, yes. To me, I think that we have to at least answer this issue to explain why we close it, and then we would be able to open, you know, a feature issue about: do we want to enhance the consistency maintenance in the future.
A
Cool, okay: "release workflow should play nice with GitHub Packages."
C
The other day we didn't resolve who was going to take it, but I would love to see one way to build images — which generally would probably be Prow — and we have another issue about the discrepancy between how we build the kcp image versus the syncer image, because one uses buildah and one uses ko. So, I think, in general —
I
Yeah, that sounds fine. Obviously we were discussing that on the Helm charts PR that I mentioned earlier on, so it'd be good to kind of reach consensus; I'm happy to help and drive that forward. I was not entirely sure, you know, which approach we would prefer. One issue we do need to resolve is that I think, currently, the Prow images are not using the Dockerfile from the kcp repo, so we'll need to fix that first, I think, before removing any of the other upstream images.
So yeah — I'll take a look at what's involved to try and make progress on that.
C
I guess I'll repeat my call, which I think I had last week, which was: if you all have time — we have a lot of flakes, a lot of new flakes that have recently cropped up now that we have multiple shards, and scheduling to multiple shards is merged. So it's to be expected, and we need as much help as we can get in reviewing flakes and filing flakes.
If you have a pull request and one of your tests fails, and it looks like it's unrelated to anything that you changed in your pull request, my ask would be that, first, you just go to the issues link in kcp and search for the test that failed. If you find one: if it's open, just include a link to it; if it's closed, see if it's the same and reopen it if it is; and if not, please file a new flake. We may have this documented —
— if not, I will document it and put that up there. But the two things are: if you have a PR and it's flaking, we need to know what the flakes are and make sure that it's either filed or linked; and if you have some spare time, we'd love help in trying to deflake what is flaky right now.
A
All right, okay — anything else? For the milestone, is there anything we should mention from here: "basic API priority and fairness for kcp"?
C
I need to check in on that with Jamie later. And there's — I think we probably need to have a separate time to go through the issues that are currently in the 0.11 milestone. I mean, I'm happy to do that async, or anybody can do it as well. So —
D
Yeah, you'll probably need to talk to me more than Jamie — he's been reassigned. Okay, but yeah. We need to, you know — basically, the status is: Jamie basically implemented what I outlined with Stefan at the beginning, but then Andy said, well, we don't actually want that. So we're in the process of redefining what we want, and we need to bring that to some conclusion.